renom.layers.activation

class renom.layers.activation.elu.Elu(alpha=0.01)

A class that defines the Exponential Linear Units activation function [elu], given by the following formula.

f(x)=max(x, 0) + alpha*min(exp(x)-1, 0)
Parameters: alpha (float) -- Coefficient applied to the negative part of the input

Example

>>> import renom as rm
>>> import numpy as np
>>> x = np.array([[1, -1]])
>>> x
array([[ 1, -1]])
>>> rm.elu(x)
elu([[ 1.  , -0.00632121]])
>>> # instantiation
>>> activation = rm.Elu()
>>> activation(x)
elu([[ 1.  , -0.00632121]])
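For reference, the formula above can be evaluated directly with plain NumPy (a minimal sketch, independent of the ReNom API) and reproduces the values in the example:

>>> import numpy as np
>>> x = np.array([[1., -1.]])
>>> alpha = 0.01
>>> np.maximum(x, 0) + alpha * np.minimum(np.exp(x) - 1, 0)
array([[ 1.        , -0.00632121]])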
[elu] Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter (2015). Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). Published as a conference paper at ICLR 2016
class renom.layers.activation.leaky_relu.LeakyRelu(slope=0.01)

A class that defines the leaky relu activation function [leaky_relu], given by the following formula.

f(x)=max(x, 0)+min(slope*x, 0)
Parameters: slope (float) -- Coefficient applied to the negative part of the input

Example

>>> import renom as rm
>>> import numpy as np
>>> x = np.array([[1, -1]])
>>> x
array([[ 1, -1]])
>>> rm.leaky_relu(x, slope=0.01)
leaky_relu([[ 1.  , -0.01]])
>>> # instantiation
>>> activation = rm.LeakyRelu(slope=0.01)
>>> activation(x)
leaky_relu([[ 1.  , -0.01]])
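The leaky relu formula can likewise be checked with plain NumPy (a minimal sketch, independent of the ReNom API):

>>> import numpy as np
>>> x = np.array([[1., -1.]])
>>> slope = 0.01
>>> np.maximum(x, 0) + np.minimum(slope * x, 0)
array([[ 1.  , -0.01]])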
[leaky_relu] Andrew L. Maas, Awni Y. Hannun, Andrew Y. Ng (2013). Rectifier Nonlinearities Improve Neural Network Acoustic Models
class renom.layers.activation.relu.Relu

A class that defines the relu activation function, given by the following formula.

f(x)=max(x, 0)
Parameters: x (ndarray, Node) -- Input data

Example

>>> import renom as rm
>>> import numpy as np
>>> x = np.array([[1, -1]])
>>> x
array([[ 1, -1]])
>>> rm.relu(x)
relu([[ 1.  , 0.]])
>>> # instantiation
>>> activation = rm.Relu()
>>> activation(x)
relu([[ 1.  , 0.]])
class renom.layers.activation.relu6.Relu6

A class that defines the relu6 activation function, given by the following formula.

f(x)=min(6,max(x, 0))
Parameters: x (ndarray, Node) -- Input data

Example

>>> import renom as rm
>>> import numpy as np
>>> x = np.array([[7, 1, -1]])
>>> x
array([[7, 1, -1]])
>>> rm.relu6(x)
relu6([[ 6.,  1.,  0.]])
>>> # instantiation
>>> activation = rm.Relu6()
>>> activation(x)
relu6([[ 6.,  1.,  0.]])
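The clamping behaviour of relu6 corresponds to the following plain NumPy expression (a minimal sketch, independent of the ReNom API):

>>> import numpy as np
>>> x = np.array([[7., 1., -1.]])
>>> np.minimum(6, np.maximum(x, 0))
array([[ 6.,  1.,  0.]])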
class renom.layers.activation.selu.Selu

A class that defines the scaled exponential linear unit activation function [selu], given by the following formula.

a = 1.6732632423543772848170429916717
b = 1.0507009873554804934193349852946
f(x) = b*(max(x, 0) + min(0, a*(exp(x) - 1)))
Parameters: x (ndarray, Node) -- Input data

Example

>>> import renom as rm
>>> import numpy as np
>>> x = np.array([[1, -1]])
>>> x
array([[ 1, -1]])
>>> rm.selu(x)
selu([[ 1.05070102, -1.11133075]])
>>> # instantiation
>>> activation = rm.Selu()
>>> activation(x)
selu([[ 1.05070102, -1.11133075]])
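As a check on the formula above, evaluating it directly with plain NumPy (a minimal sketch, independent of the ReNom API; small last-digit differences from the example output are expected from floating point precision) gives:

>>> import numpy as np
>>> a = 1.6732632423543772848170429916717
>>> b = 1.0507009873554804934193349852946
>>> x = np.array([[1., -1.]])
>>> b * (np.maximum(x, 0) + np.minimum(0, a * (np.exp(x) - 1)))
array([[ 1.05070099, -1.11133074]])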
[selu] Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter (2017). Self-Normalizing Neural Networks
class renom.layers.activation.sigmoid.Sigmoid

A class that defines the sigmoid activation function, given by the following formula.

f(x) = 1/(1 + exp(-x))
Parameters: x (ndarray, Node) -- Input data

Example

>>> import numpy as np
>>> import renom as rm
>>> x = np.array([1., -1.])
>>> rm.sigmoid(x)
sigmoid([ 0.7310586 ,  0.26894143])
>>> # instantiation
>>> activation = rm.Sigmoid()
>>> activation(x)
sigmoid([ 0.7310586 ,  0.26894143])
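The same values follow from the formula with plain NumPy (a minimal sketch, independent of the ReNom API):

>>> import numpy as np
>>> x = np.array([1., -1.])
>>> 1 / (1 + np.exp(-x))
array([ 0.73105858,  0.26894142])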
class renom.layers.activation.softmax.Softmax

A class that defines the softmax activation function, given by the following formula.

f(x_j) = exp(x_j) / sum_i(exp(x_i))
Parameters: x (ndarray, Variable) -- Input data

Example

>>> import renom as rm
>>> import numpy as np
>>> x = np.random.rand(1, 3)
>>> x
array([[ 0.11871966,  0.48498547,  0.7406374 ]])
>>> z = rm.softmax(x)
>>> z
softmax([[ 0.23229694,  0.33505085,  0.43265226]])
>>> np.sum(z, axis=1)
array([ 1.])
>>> # instantiation
>>> activation = rm.Softmax()
>>> activation(x)
softmax([[ 0.23229694,  0.33505085,  0.43265226]])
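The normalization in the formula can be reproduced with plain NumPy (a minimal sketch, independent of the ReNom API), which also makes explicit that each row sums to one:

>>> import numpy as np
>>> x = np.array([[0.11871966, 0.48498547, 0.7406374]])
>>> e = np.exp(x)
>>> z = e / e.sum(axis=1, keepdims=True)
>>> np.round(z, 4)
array([[ 0.2323,  0.3351,  0.4327]])
>>> np.allclose(z.sum(axis=1), 1.0)
True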
class renom.layers.activation.tanh.Tanh

A class that defines the tanh activation function, given by the following formula.

f(x) = tanh(x)
Parameters: x (ndarray, Node) -- Input data

Example

>>> import numpy as np
>>> import renom as rm
>>> x = np.array([1., -1.])
>>> rm.tanh(x)
tanh([ 0.76159418, -0.76159418])
>>> # instantiation
>>> activation = rm.Tanh()
>>> activation(x)
tanh([ 0.76159418, -0.76159418])