# renom.layers.activation

class renom.layers.activation.elu.Elu(alpha=0.01)

The Exponential Linear Unit (ELU) [elu] activation function is described by the following formula:

f(x)=max(x, 0) + alpha*min(exp(x)-1, 0)
Parameters:
- x (ndarray, Variable) – Input numpy array or instance of Variable.
- alpha (float) – Coefficient multiplied by exponentiated values.

Example

>>> import renom as rm
>>> import numpy as np
>>> x = np.array([[1, -1]])
>>> x
array([[ 1, -1]])
>>> rm.elu(x)
elu([[ 1.  , -0.00632121]])

>>> # instantiation
>>> activation = rm.Elu()
>>> activation(x)
elu([[ 1.  , -0.00632121]])
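For reference, the formula can be checked directly with NumPy (an illustrative sketch only; np_elu is a hypothetical helper written here, not part of ReNom):

import numpy as np

def np_elu(x, alpha=0.01):
    # f(x) = max(x, 0) + alpha * min(exp(x) - 1, 0)
    return np.maximum(x, 0) + alpha * np.minimum(np.exp(x) - 1, 0)

x = np.array([[1., -1.]])
print(np_elu(x))  # approximately [[ 1.  -0.00632121]], matching rm.elu(x) above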

[elu] Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter (2015). Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). Published as a conference paper at ICLR 2016.
class renom.layers.activation.leaky_relu.LeakyRelu(slope=0.01)

The Leaky ReLU [leaky_relu] activation function is described by the following formula:

f(x)=max(x, 0)+min(slope*x, 0)
Parameters:
- x (ndarray, Variable) – Input numpy array or instance of Variable.
- slope (float) – Coefficient multiplied by negative values.

Example

>>> import renom as rm
>>> import numpy as np
>>> x = np.array([[1, -1]])
>>> x
array([[ 1, -1]])
>>> rm.leaky_relu(x, slope=0.01)
leaky_relu([[ 1.  , -0.01]])

>>> # instantiation
>>> activation = rm.LeakyRelu(slope=0.01)
>>> activation(x)
leaky_relu([[ 1.  , -0.01]])
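As with Elu, the formula can be reproduced in plain NumPy for comparison (illustrative sketch; np_leaky_relu is a hypothetical helper, not part of ReNom):

import numpy as np

def np_leaky_relu(x, slope=0.01):
    # f(x) = max(x, 0) + min(slope * x, 0)
    return np.maximum(x, 0) + np.minimum(slope * x, 0)

x = np.array([[1., -1.]])
print(np_leaky_relu(x))  # [[ 1.   -0.01]], matching rm.leaky_relu(x, slope=0.01) above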

 [leaky_relu] Andrew L. Maas, Awni Y. Hannun, Andrew Y. Ng (2013). Rectifier Nonlinearities Improve Neural Network Acoustic Models.
class renom.layers.activation.relu.Relu

The Rectified Linear Unit activation function is described by the following formula:

f(x)=max(x, 0)
Parameters: x (ndarray, Node) – Input numpy array or Node instance.

Example

>>> import renom as rm
>>> import numpy as np
>>> x = np.array([[1, -1]])
>>> x
array([[ 1, -1]])
>>> rm.relu(x)
relu([[ 1.  , 0.]])

>>> # instantiation
>>> activation = rm.Relu()
>>> activation(x)
relu([[ 1.  , 0.]])

class renom.layers.activation.relu6.Relu6

The Rectified Linear Unit 6 (ReLU6) activation function is described by the following formula:

f(x)=min(6,max(x, 0))
Parameters: x (ndarray, Node) – Input numpy array or Node instance.

Example

>>> import renom as rm
>>> import numpy as np
>>> x = np.array([[7, 1, -1]])
>>> x
array([[ 7,  1, -1]])
>>> rm.relu6(x)
relu6([[ 6.,  1.,  0.]])

>>> # instantiation
>>> activation = rm.Relu6()
>>> activation(x)
relu6([[ 6.,  1.,  0.]])
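The capping at 6 can be verified with a plain NumPy version of the formula (illustrative sketch; np_relu6 is a hypothetical helper, not part of ReNom):

import numpy as np

def np_relu6(x):
    # f(x) = min(6, max(x, 0))
    return np.minimum(6, np.maximum(x, 0))

x = np.array([[7., 1., -1.]])
print(np_relu6(x))  # [[ 6.  1.  0.]] -- the value 7 is clipped to 6, matching rm.relu6(x) above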

class renom.layers.activation.selu.Selu

The scaled exponential linear unit [selu] activation function is described by the following formula:

a = 1.6732632423543772848170429916717
b = 1.0507009873554804934193349852946
f(x) = b*(max(x, 0) + min(a*(exp(x) - 1), 0))
 Parameters: x (ndarray, Node) – Input numpy array or Node instance.

Example

>>> import renom as rm
>>> import numpy as np
>>> x = np.array([[1, -1]])
>>> x
array([[ 1, -1]])
>>> rm.selu(x)
selu([[ 1.05070102, -1.11133075]])

>>> # instantiation
>>> activation = rm.Selu()
>>> activation(x)
selu([[ 1.05070102, -1.11133075]])
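The two constants can be checked against the example output with a plain NumPy version of the formula (illustrative sketch; np_selu is a hypothetical helper, not part of ReNom):

import numpy as np

a = 1.6732632423543772848170429916717
b = 1.0507009873554804934193349852946

def np_selu(x):
    # f(x) = b * (max(x, 0) + min(a * (exp(x) - 1), 0))
    return b * (np.maximum(x, 0) + np.minimum(a * (np.exp(x) - 1), 0))

x = np.array([[1., -1.]])
print(np_selu(x))  # approximately [[ 1.0507  -1.11133]], matching rm.selu(x) above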

 [selu] Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter (2017). Self-Normalizing Neural Networks.
class renom.layers.activation.sigmoid.Sigmoid

The sigmoid activation function is described by the following formula:

f(x) = 1/(1 + exp(-x))
 Parameters: x (ndarray, Node) – Input numpy array or Node instance.

Example

>>> import numpy as np
>>> import renom as rm
>>> x = np.array([1., -1.])
>>> rm.sigmoid(x)
sigmoid([ 0.7310586 ,  0.26894143])

>>> # instantiation
>>> activation = rm.Sigmoid()
>>> activation(x)
sigmoid([ 0.7310586 ,  0.26894143])

class renom.layers.activation.softmax.Softmax

The softmax activation function is described by the following formula:

f(x_j)=\frac{\exp(x_j)}{\sum_{i}\exp(x_i)}
 Parameters: x (ndarray, Variable) – Input numpy array or instance of Variable.

Example

>>> import renom as rm
>>> import numpy as np
>>> x = np.random.rand(1, 3)
>>> x
array([[ 0.11871966,  0.48498547,  0.7406374 ]])
>>> z = rm.softmax(x)
>>> z
softmax([[ 0.23229694,  0.33505085,  0.43265226]])
>>> np.sum(z, axis=1)
array([ 1.])

>>> # instantiation
>>> activation = rm.Softmax()
>>> activation(x)
softmax([[ 0.23229694,  0.33505085,  0.43265226]])
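In practice, softmax is usually computed in a numerically stable way by subtracting the row-wise maximum before exponentiating; this does not change the result because the common factor cancels in the ratio. A plain NumPy sketch (illustrative only; np_softmax is a hypothetical helper, not ReNom's implementation):

import numpy as np

def np_softmax(x):
    # f(x_j) = exp(x_j) / sum_i exp(x_i), computed with a max shift for stability
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

x = np.array([[0.11871966, 0.48498547, 0.7406374]])
print(np_softmax(x))              # approximately [[ 0.2323  0.3351  0.4327]]
print(np_softmax(x).sum(axis=1))  # [ 1.]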

class renom.layers.activation.tanh.Tanh

The hyperbolic tangent activation function is described by the following formula:

f(x) = tanh(x)
Parameters: x (ndarray, Node) – Input numpy array or Node instance.

Example

>>> import numpy as np
>>> import renom as rm
>>> x = np.array([1., -1.])
>>> rm.tanh(x)
tanh([ 0.76159418, -0.76159418])

>>> # instantiation
>>> activation = rm.Tanh()
>>> activation(x)
tanh([ 0.76159418, -0.76159418])
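A quick numerical check of the identity tanh(x) = 2*sigmoid(2x) - 1, which relates the Tanh and Sigmoid activations documented above (plain NumPy, for illustration only):

import numpy as np

x = np.array([1., -1.])
print(np.tanh(x))                            # [ 0.76159416 -0.76159416]
print(2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0)  # same values: tanh(x) = 2*sigmoid(2x) - 1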