renom
class renom.core.Grads(root=None, weight_decay=None)
Bases: object
Grads class. This class contains gradients of each Node object.
When the grad method of a Node object is called, an instance of the Grads class is returned. To get the gradient with respect to any Variable object 'x' on the computational graph, call the get method of the Grads object.
Example
>>> import numpy as np
>>> import renom as rm
>>> a = rm.Variable(np.random.rand(2, 3))
>>> b = rm.Variable(np.random.rand(2, 3))
>>> c = rm.sum(a + 2*b)
>>> grad = c.grad()
>>> grad.get(a)  # Getting gradient of a.
Mul([[ 1.,  1.,  1.],
     [ 1.,  1.,  1.]], dtype=float32)
>>> grad.get(b)
RMul([[ 2.,  2.,  2.],
      [ 2.,  2.,  2.]], dtype=float32)
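The gradients in the example above follow from the chain rule: for c = sum(a + 2*b), the gradient with respect to a is a matrix of ones and the gradient with respect to b is a matrix of twos. As a sanity check that does not require renom, the same values can be recovered with a central finite difference in plain NumPy:

```python
import numpy as np

def f(a, b):
    # Same computation as the renom example: c = sum(a + 2*b)
    return np.sum(a + 2 * b)

rng = np.random.default_rng(0)
a = rng.random((2, 3))
b = rng.random((2, 3))

# Central finite differences for the gradient of f with respect to a.
eps = 1e-6
grad_a = np.zeros_like(a)
for idx in np.ndindex(a.shape):
    a_hi = a.copy(); a_hi[idx] += eps
    a_lo = a.copy(); a_lo[idx] -= eps
    grad_a[idx] = (f(a_hi, b) - f(a_lo, b)) / (2 * eps)

print(grad_a)  # numerically close to a matrix of ones, matching grad.get(a)
```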
get(node, default=<object object>)
This function returns the gradient with respect to the given node. If there is no gradient for the given node, this function returns the object passed to the argument default.
Parameters:
- node (Node) – Node whose gradient is requested.
- default – Object returned when no gradient exists for the given node.
Returns: Gradient of the given node object, or the object passed to the argument default.
update(opt=None, models=())
This function updates the Variable objects on the computational graph using the obtained gradients.
If an optimizer instance is given, gradients are rescaled according to the optimization algorithm before the update.
Parameters:
- opt (Optimizer) – Algorithm used to rescale gradients.
- models – List of models whose variables are updated. When specified, variables that do not belong to one of the models are not updated.
Example
>>> import numpy as np
>>> import renom as rm
>>> a = rm.Variable(np.arange(4).reshape(2, 2))
>>> b = rm.Variable(np.arange(4).reshape(2, 2))
>>> print("Before", a)
Before [[ 0.  1.]
 [ 2.  3.]]
>>> out = rm.sum(2*a + 3*b)
>>> grad = out.grad(models=(a, ))
>>> print("Gradient", grad.get(a))
Gradient [[ 2.  2.]
 [ 2.  2.]]
>>> grad.update()
>>> print("Updated", a)
Updated [[-2. -1.]
 [ 0.  1.]]
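Judging from the output above, when no optimizer is given the update simply subtracts each gradient from its variable (a gradient-descent step with step size 1). A plain-NumPy sketch of that arithmetic (an illustration, not renom's actual implementation):

```python
import numpy as np

a = np.arange(4, dtype=float).reshape(2, 2)  # "Before": [[0. 1.] [2. 3.]]
grad_a = np.full((2, 2), 2.0)                # d(sum(2*a + 3*b))/da = 2 everywhere

a -= grad_a   # the bare update step, with no optimizer rescaling
print(a)      # [[-2. -1.] [ 0.  1.]], matching "Updated" above
```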
class renom.core.Node(*args, **kwargs)
Bases: numpy.ndarray
This is the base class of all operation functions. The Node class inherits from the numpy ndarray class.
Example
>>> import numpy as np
>>> import renom as rm
>>> vx = rm.Variable(np.random.rand(3, 2))
>>> isinstance(vx, rm.Node)
True
to_cpu()
Sends data from the GPU device to the CPU.
to_gpu()
Sends data from the CPU to the GPU device. This method is only available when CUDA is activated; otherwise it raises a ValueError.
Example
>>> import numpy as np
>>> import renom as rm
>>> from renom.cuda import set_cuda_active
>>> set_cuda_active(True)
>>> a = rm.Variable(np.arange(4).reshape(2, 2))
>>> a.to_gpu()  # Sending array to gpu device.
copy()
Returns a copy of itself. If the node object does not have data on the GPU, this returns an ndarray.
Returns: Copy of the node object.
Return type: (Node, ndarray)
as_ndarray()
This method returns itself as an ndarray object.
release_gpu()
This method releases the array data on the GPU.
detach_graph()
This method destroys the computational graph.
reshape(*shape)
Returns a reshaped array.
Parameters: shape (list, int) – The array is reshaped according to the given shape.
Returns: Reshaped array.
Return type: (Node)
Example
>>> import numpy as np
>>> import renom as rm
>>> a = rm.Variable(np.arange(4).reshape(2, 2))
>>> print(a)
[[ 0.  1.]
 [ 2.  3.]]
>>> print(a.reshape(-1))
[ 0.  1.  2.  3.]
>>> print(a.reshape(1, 4))
[[ 0.  1.  2.  3.]]
grad(initial=None, detach_graph=True, weight_decay=None, **kwargs)
This method traverses the computational graph and returns a Grads object containing the gradients of each Variable object.
Parameters:
transpose(*axis)
Returns an array with axes transposed.
Parameters: axis (list of ints) – Permute the axes according to the values given.
Returns: Transposed array.
Return type: (Node)
Example
>>> import numpy as np
>>> import renom as rm
>>> a = rm.Variable(np.arange(4).reshape(2, 2))
>>> print(a)
[[ 0.  1.]
 [ 2.  3.]]
>>> print(a.transpose(1, 0))
[[ 0.  2.]
 [ 1.  3.]]
class renom.core.Variable(*args, **kwargs)
Bases: renom.core.basic_node.Node
Variable class.
The gradient of this object will be calculated. A Variable object is created from an ndarray object or a Number object.
Parameters:
Weight decay lets the user choose whether weight decay is applied to each of their variables. If weight decay is not defined in the Variable (i.e. it defaults to None), no weight decay is performed.
For convenience, one can define a variable with a weight decay of 0 and provide the weight decay argument when building the gradients, defaulting all such weights to the same λ.
Individually assigned weight decay takes precedence over this default value, allowing users to customize the weight decay in the network.
In summary, weight decay updates according to the following table.
Variable  Grad    Result
None      <Any>   No update
0.3       <Any>   0.3
0         None/0  No update
0         0.3     0.3
Example
>>> import numpy as np
>>> import renom as rm
>>> x = np.array([1., -1.])
>>> rm.Variable(x)
Variable([ 1., -1.], dtype=float32)
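As a sketch of what a weight-decay coefficient λ contributes to an update, assuming the common L2 formulation in which λ·w is added to the gradient before the step (renom's exact update rule may differ):

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # a variable's current weights
g = np.array([0.1, 0.1, 0.1])    # gradient from the loss alone
lam = 0.3                        # weight decay coefficient (the λ above)
lr = 0.01                        # learning rate

# L2 weight decay adds lam * w to the gradient, pulling weights toward zero.
g_decayed = g + lam * w
w_new = w - lr * g_decayed
print(w_new)
```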
class renom.operation.Abase(*args, **kwargs)
class renom.operation.Amax(*args, **kwargs)
This function performs a max calculation.
Parameters:
Example
>>> import numpy as np
>>> import renom as rm
>>> # Forward Calculation
>>> a = np.arange(4).reshape(2, 2)
>>> a
[[0 1]
 [2 3]]
>>> rm.amax(a, axis=1)
[ 1.  3.]
>>>
>>> rm.amax(a, axis=0)
[ 2.  3.]
>>> rm.amax(a, axis=0, keepdims=True)
[[ 2.  3.]]
>>>
>>> # Calculation of differentiation
>>> va = rm.Variable(a)
>>> out = rm.amax(va)
>>> grad = out.grad()
>>> grad.get(va)  # Getting the gradient of 'va'.
[[ 0.,  0.],
 [ 0.,  1.]]
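The differentiation result above shows the characteristic gradient of max: the incoming gradient is routed entirely to the position of the maximum element, and every other entry is zero. In plain NumPy, that one-hot mask can be built as:

```python
import numpy as np

a = np.arange(4).reshape(2, 2).astype(float)

# Gradient of amax over the whole array: a one-hot mask at the arg-max position.
mask = np.zeros_like(a)
mask[np.unravel_index(np.argmax(a), a.shape)] = 1.0
print(mask)   # [[0. 0.] [0. 1.]], matching grad.get(va)
```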
class renom.operation.Amin(*args, **kwargs)
This function performs a min calculation.
Parameters:
Example
>>> import numpy as np
>>> import renom as rm
>>> # Forward Calculation
>>> a = np.arange(4).reshape(2, 2)
>>> a
[[0 1]
 [2 3]]
>>> rm.amin(a, axis=1)
[ 0.  2.]
>>>
>>> rm.amin(a, axis=0)
[ 0.  1.]
>>> rm.amin(a, axis=0, keepdims=True)
[[ 0.  1.]]
>>>
>>> # Calculation of differentiation
>>> va = rm.Variable(a)
>>> out = rm.amin(va)
>>> grad = out.grad()
>>> grad.get(va)  # Getting the gradient of 'va'.
[[ 1.,  0.],
 [ 0.,  0.]]
renom.operation.reshape(array, shape)
This function reshapes an array.
Parameters:
Returns: Reshaped array.
Return type: (Node)
Example
>>> import renom as rm
>>> import numpy as np
>>> x = rm.Variable(np.arange(6))
>>> x.shape
(6,)
>>> y = rm.reshape(x, (2, 3))
>>> y.shape
(2, 3)
class renom.operation.sum(*args, **kwargs)
This function sums up matrix elements. If the argument 'axis' is passed, the sum is performed along the specified axis.
Parameters:
Returns: Summed array.
Return type: (Node)
Example
>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> z = rm.sum(x)
>>> z
sum(3.21392822265625, dtype=float32)
class renom.operation.dot(*args, **kwargs)
This function executes the dot product of two matrices.
Parameters:
Returns: Multiplied array.
Return type: (Node)
Example
>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> y = np.random.rand(2, 2)
>>> z = rm.dot(y, x)
>>> z
dot([[ 0.10709135,  0.15022227,  0.12853521],
     [ 0.30557284,  0.32320538,  0.26753256]], dtype=float32)
class renom.operation.concat(*args, **kwargs)
Joins a sequence of arrays along the specified axis.
Parameters:
Returns: Concatenated array.
Return type: (Node)
Example
>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> y = np.random.rand(2, 2)
>>> z = rm.concat(x, y)
>>> z.shape
(2, 5)
>>> z
concat([[ 0.56989014,  0.50372809,  0.40573129,  0.17601326,  0.07233092],
        [ 0.09377897,  0.8510806 ,  0.78971916,  0.52481949,  0.06913455]], dtype=float32)
class renom.operation.where(*args, **kwargs)
Returns elements chosen from either a or b, depending on condition.
Parameters:
Returns: Conditioned array.
Return type: (Node)
Example
>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> x
array([[ 0.56989017,  0.50372811,  0.4057313 ],
       [ 0.09377897,  0.85108059,  0.78971919]])
>>> z = rm.where(x > 0.5, x, 0)
>>> z
where([[ 0.56989014,  0.50372809,  0.        ],
       [ 0.        ,  0.8510806 ,  0.78971916]], dtype=float32)
class renom.operation.sqrt(*args, **kwargs)
Square root operation.
Parameters: arg (Node, ndarray) – Input array.
Returns: Square root of the input array.
Return type: (Node)
Example
>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> x
array([[ 0.56989017,  0.50372811,  0.4057313 ],
       [ 0.09377897,  0.85108059,  0.78971919]])
>>> z = rm.sqrt(x)
>>> z
sqrt([[ 0.75491071,  0.70973808,  0.6369704 ],
      [ 0.30623353,  0.92254031,  0.88866144]], dtype=float32)
class renom.operation.square(*args, **kwargs)
Square operation.
Parameters: arg (Node, ndarray) – Input array.
Returns: Squared array.
Return type: (Node)
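square has no example above; a plain-NumPy sketch of the elementwise operation and its analytic derivative, 2·x, which a backward pass through this node would apply:

```python
import numpy as np

x = np.array([[1.0, 2.0], [3.0, 4.0]])
y = x ** 2     # what the square operation computes elementwise
print(y)       # [[ 1.  4.] [ 9. 16.]]

# Analytic derivative of the square operation: d(x**2)/dx = 2*x
dx = 2 * x
print(dx)      # [[2. 4.] [6. 8.]]
```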
class renom.operation.log(*args, **kwargs)
Log operation.
Parameters: arg (Node, ndarray) – Input array.
Returns: Logarithm of the input array.
Return type: (Node)
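The derivative of the elementwise log, 1/x, can be checked against a central finite difference using NumPy alone:

```python
import numpy as np

x = np.array([0.5, 1.0, 2.0])
y = np.log(x)   # what the log operation computes elementwise

# d(log x)/dx = 1/x, verified numerically with a central finite difference.
eps = 1e-6
numeric = (np.log(x + eps) - np.log(x - eps)) / (2 * eps)
print(numeric)  # close to 1/x = [2.  1.  0.5]
```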
class renom.operation.exp(*args, **kwargs)
Exponential operation.
Parameters: arg (Node, ndarray) – Input array.
Returns: Exponential of the input array.
Return type: (Node)
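exp is its own derivative, d(exp x)/dx = exp(x), which is what the backward pass of this node relies on; a quick NumPy finite-difference check:

```python
import numpy as np

x = np.array([-1.0, 0.0, 1.0])
y = np.exp(x)   # what the exp operation computes elementwise

# Numerical derivative via central finite difference; should match y itself.
eps = 1e-6
numeric = (np.exp(x + eps) - np.exp(x - eps)) / (2 * eps)
print(numeric)
```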
class renom.operation.amin(*args, **kwargs)
Returns the minimum value or array of the given array. You can specify the axis along which the operation is performed.
Parameters:
Example
>>> import numpy as np
>>> import renom as rm
>>> # Forward Calculation
>>> a = np.arange(4).reshape(2, 2)
>>> a
[[0 1]
 [2 3]]
>>> rm.amin(a, axis=1)
[ 0.  2.]
>>>
>>> rm.amin(a, axis=0)
[ 0.  1.]
>>> rm.amin(a, axis=0, keepdims=True)
[[ 0.  1.]]
>>>
>>> # Calculation of differentiation
>>> va = rm.Variable(a)
>>> out = rm.amin(va)
>>> grad = out.grad()
>>> grad.get(va)  # Getting the gradient of 'va'.
[[ 1.,  0.],
 [ 0.,  0.]]
class renom.operation.amax(*args, **kwargs)
Returns the maximum value or array of the given array. You can specify the axis along which the operation is performed.
Parameters:
Example
>>> import numpy as np
>>> import renom as rm
>>> # Forward Calculation
>>> a = np.arange(4).reshape(2, 2)
>>> a
[[0 1]
 [2 3]]
>>> rm.amax(a, axis=1)
[ 1.  3.]
>>>
>>> rm.amax(a, axis=0)
[ 2.  3.]
>>> rm.amax(a, axis=0, keepdims=True)
[[ 2.  3.]]
>>>
>>> # Calculation of differentiation
>>> va = rm.Variable(a)
>>> out = rm.amax(va)
>>> grad = out.grad()
>>> grad.get(va)  # Getting the gradient of 'va'.
[[ 0.,  0.],
 [ 0.,  1.]]
class renom.operation.mean(*args, **kwargs)
This function calculates the mean of the matrix elements. If the argument 'axis' is passed, the mean is calculated along the specified axis.
Parameters:
Returns: Mean array.
Return type: (Node)
Example
>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> z = rm.mean(x)
>>> z