renom.utility

renom.utility.initializer

class renom.utility.initializer.Initializer(gain=1.0)

Base class of initializer.

When initializing a parameterized layer class (Dense, Conv2d, Lstm, …), you can select the initialization method by passing an initializer instance, as in the following example.

Example

>>> import renom as rm
>>> from renom.utility.initializer import GlorotUniform
>>>
>>> layer = rm.Dense(output_size=2, input_size=2, initializer=GlorotUniform())
>>> print(layer.params.w)
[[-0.55490332 -0.14323548]
 [ 0.00059367 -0.28777076]]
class renom.utility.initializer.Constant(value)

Constant initializer. Initializes all parameters to the given constant value.

class renom.utility.initializer.GlorotUniform(gain=1.0)

Glorot uniform initializer [GlorotRef] initializes parameters with samples drawn from the uniform distribution U(min, max) defined below.

\begin{split}
&U(min, max) \\
&max = \sqrt{\frac{6}{input\_size + output\_size}} \\
&min = -\sqrt{\frac{6}{input\_size + output\_size}}
\end{split}
class renom.utility.initializer.GlorotNormal(gain=1.0)

Glorot normal initializer [GlorotRef] initializes parameters with samples drawn from the normal distribution N(0, std) defined below.

\begin{split}
&N(0, std) \\
&std = \sqrt{\frac{2}{input\_size + output\_size}}
\end{split}
[GlorotRef] Xavier Glorot, Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks.
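
As a reference, the sampling rules above can be written directly with NumPy (a minimal illustrative sketch, not the library implementation; input_size and output_size here are assumed example dimensions):

>>> import numpy as np
>>> input_size, output_size = 64, 32
>>> # Glorot uniform bound: sqrt(6 / (input_size + output_size))
>>> bound = np.sqrt(6.0 / (input_size + output_size))
>>> w_uniform = np.random.uniform(-bound, bound, (input_size, output_size))
>>> # Glorot normal std: sqrt(2 / (input_size + output_size))
>>> std = np.sqrt(2.0 / (input_size + output_size))
>>> w_normal = np.random.normal(0.0, std, (input_size, output_size))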
class renom.utility.initializer.HeNormal(gain=1.0)

He normal initializer. Initializes parameters with samples drawn from the normal distribution N(0, std) defined below [HeNormRef].

\begin{split}
&N(0, std) \\
&std = \sqrt{\frac{2}{input\_size}}
\end{split}
[HeNormRef] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. https://arxiv.org/abs/1502.01852
class renom.utility.initializer.HeUniform(gain=1.0)

He uniform initializer. Initializes parameters with samples drawn from the uniform distribution U(min, max) defined below [HeUniformRef].

\begin{split}
&U(min, max) \\
&max = \sqrt{\frac{6}{input\_size}} \\
&min = -\sqrt{\frac{6}{input\_size}}
\end{split}
[HeUniformRef] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. https://arxiv.org/abs/1502.01852
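
The He rules above use only input_size (the fan-in). A minimal NumPy sketch under the same assumptions as the Glorot sketch above (illustrative only, not the library implementation):

>>> import numpy as np
>>> input_size, output_size = 64, 32
>>> # He uniform bound and He normal std both depend only on the fan-in.
>>> bound = np.sqrt(6.0 / input_size)
>>> w_he_uniform = np.random.uniform(-bound, bound, (input_size, output_size))
>>> w_he_normal = np.random.normal(0.0, np.sqrt(2.0 / input_size), (input_size, output_size))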
class renom.utility.initializer.Gaussian(mean=0.0, std=0.1, gain=1.0)

Gaussian initializer. Initializes parameters with samples drawn from the normal distribution N(mean, std).

Parameters:
  • mean ( float ) – Mean value of normal distribution.
  • std ( float ) – Standard deviation value of normal distribution.
class renom.utility.initializer.Uniform(min=-1.0, max=1.0, gain=1.0)

Uniform initializer. Initializes parameters with samples drawn from the uniform distribution U(min, max).

Parameters:
  • min ( float ) – Minimum limit of uniform distribution.
  • max ( float ) – Maximum limit of uniform distribution.
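
Both Gaussian and Uniform are passed to a layer through its initializer argument, in the same way as the GlorotUniform example at the top of this page (a usage sketch based on that example):

>>> import renom as rm
>>> from renom.utility.initializer import Gaussian, Uniform
>>>
>>> layer_g = rm.Dense(output_size=2, input_size=2, initializer=Gaussian(mean=0.0, std=0.1))
>>> layer_u = rm.Dense(output_size=2, input_size=2, initializer=Uniform(min=-1.0, max=1.0))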
class renom.utility.initializer.Orthogonal(gain=1.0)

Orthogonal initializer. Initializes parameters with an orthogonal matrix [1].

[1] Andrew M. Saxe, James L. McClelland, Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. https://arxiv.org/abs/1312.6120
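
The idea of orthogonal initialization can be sketched with NumPy's SVD: draw a random matrix, factorize it, and keep the orthogonal factor that matches the parameter shape (an illustrative sketch only, not the library's implementation):

>>> import numpy as np
>>> shape = (64, 32)
>>> a = np.random.normal(0.0, 1.0, shape)
>>> u, _, vt = np.linalg.svd(a, full_matrices=False)
>>> w = u if u.shape == shape else vt   # pick the factor matching the parameter shape
>>> bool(np.allclose(w.T.dot(w), np.eye(shape[1])))
True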

renom.utility.searcher

class renom.utility.searcher.Searcher(parameters)

Base class of searcher.

Searcher classes search for the hyper parameters that yield the lowest result value.

Parameters: parameters ( dict ) – Dictionary which contains the parameter name as a key and each parameter space as a value.

Example

>>> import renom as rm
>>> from renom.utility.searcher import GridSearcher
>>> params = {
...     "p1":[1, 2, 3],
...     "p2":[4, 5, 6],
... }
...
>>> searcher = GridSearcher(params)
>>>
>>> for p in searcher.suggest():
...     searcher.set_result(p["p1"] + p["p2"])
...
>>> bests = searcher.best()
>>> for i in range(len(bests)):
...     print("{}: parameter {} value {}".format(i+1, bests[i][0], bests[i][1]))
...
1: parameter {'p2': 4, 'p1': 1} value 5
2: parameter {'p2': 4, 'p1': 2} value 6
3: parameter {'p2': 5, 'p1': 1} value 6
set_result(result, params=None)

Sets the result obtained with the yielded hyper parameters on the searcher object.

Parameters:
  • result ( float ) – The result obtained with the yielded hyper parameters.
  • params ( dict ) – The hyper parameters used in the model. If None is given, the result is associated with the last yielded hyper parameters.
suggest(max_iter)

This method yields the next hyper parameters.

Parameters: max_iter ( int ) – Maximum iteration number of parameter search.
Yields: dict – Dictionary of hyper parameters.
best(num=3)

Returns the best hyper parameters. By default, this method returns the top 3 hyper parameters found during the search.

Parameters: num ( int ) – The number of hyper parameter sets to return.
Returns: A list of dictionaries of hyper parameters.
Return type: list
class renom.utility.searcher.GridSearcher(parameters)

Grid searcher class.

This class searches for better hyper parameters in the parameter space using grid search.

Parameters: parameters ( dict ) – Dictionary which contains the parameter name as a key and each parameter space as a value.
suggest()

This method yields the next hyper parameters. Grid search iterates over every combination in the given parameter space.

Yields: dict – Dictionary of hyper parameters.
class renom.utility.searcher.RandomSearcher(parameters)

Random searcher class.

This class searches the parameter space by random sampling for the hyper parameters that yield the lowest loss.

Parameters: parameters ( dict ) – Dictionary which contains the parameter name as a key and each parameter space as a value.
suggest(max_iter=10)

This method yields the next hyper parameters.

Parameters: max_iter ( int ) – Maximum iteration number of parameter search.
Yields: dict – Dictionary of hyper parameters.
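
Example

Usage mirrors the GridSearcher example above; the sketch below reuses the same toy parameter space, with max_iter bounding the number of randomly sampled candidates:

>>> from renom.utility.searcher import RandomSearcher
>>> params = {"p1": [1, 2, 3], "p2": [4, 5, 6]}
>>> searcher = RandomSearcher(params)
>>> for p in searcher.suggest(max_iter=5):
...     searcher.set_result(p["p1"] + p["p2"])
...
>>> best_params = searcher.best(num=1)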
class renom.utility.searcher.BayesSearcher(parameters)

Bayes searcher class.

This class performs hyper parameter search based on Bayesian optimization.

Parameters: parameters ( dict ) – Dictionary which contains the parameter name as a key and each parameter space as a value.

Note

This class requires the module GPy [1]. You can install it with pip: pip install gpy

[1] GPy - Gaussian Process framework http://sheffieldml.github.io/GPy/
suggest(max_iter=10, random_iter=3)
Parameters:
  • max_iter ( int ) – Maximum iteration number of parameter search.
  • random_iter ( int ) – Number of initial random search iterations.
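
Example

A usage sketch assuming GPy is installed; the interface matches the other searchers, with the first random_iter trials sampled randomly before the Bayesian model takes over:

>>> from renom.utility.searcher import BayesSearcher
>>> params = {"p1": [1, 2, 3], "p2": [4, 5, 6]}
>>> searcher = BayesSearcher(params)
>>> for p in searcher.suggest(max_iter=10, random_iter=3):
...     searcher.set_result(p["p1"] + p["p2"])
...
>>> best_params = searcher.best(num=1)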

renom.utility.trainer

class renom.utility.trainer.Trainer(model, num_epoch, loss_func, batch_size, optimizer=None, shuffle=True, events=None, num_gpu=1, regularization=None)

Trainer class.

This class owns the training loop. It executes forward propagation, back propagation and weight updates for the specified number of epochs.

Parameters:
  • model ( Model ) – Model to be trained.
  • num_epoch ( int ) – Number of epochs.
  • loss_func ( Node ) – Loss function.
  • batch_size ( int ) – Batch size.
  • optimizer ( Optimizer ) – Gradient descent algorithm.
  • shuffle ( bool ) – If True, mini-batches are created from shuffled data.
  • events ( dict ) – Dictionary of functions.

Example

>>> import numpy as np
>>> import renom as rm
>>> from renom.utility.trainer import Trainer
>>> from renom.utility.distributor import NdarrayDistributor
>>> x = np.random.rand(300, 50)
>>> y = np.random.rand(300, 1)
>>> model = rm.Dense(1)
>>> trainer = Trainer(model, 10, rm.mean_squared_error, 3, rm.Sgd(0.1))
>>> trainer.train(NdarrayDistributor(x, y))
epoch  0: avg loss 0.1597: 100%|██████████| 100/100.0 [00:00<00:00, 1167.85it/s]
epoch  1: avg loss 0.1131: 100%|██████████| 100/100.0 [00:00<00:00, 1439.25it/s]
epoch  2: avg loss 0.1053: 100%|██████████| 100/100.0 [00:00<00:00, 1413.42it/s]
epoch  3: avg loss 0.0965: 100%|██████████| 100/100.0 [00:00<00:00, 1388.67it/s]
epoch  4: avg loss 0.0812: 100%|██████████| 100/100.0 [00:00<00:00, 1445.61it/s]
epoch  5: avg loss 0.0937: 100%|██████████| 100/100.0 [00:00<00:00, 1432.99it/s]
epoch  6: avg loss 0.0891: 100%|██████████| 100/100.0 [00:00<00:00, 1454.68it/s]
epoch  7: avg loss 0.0992: 100%|██████████| 100/100.0 [00:00<00:00, 1405.73it/s]
epoch  8: avg loss 0.0933: 100%|██████████| 100/100.0 [00:00<00:00, 1401.55it/s]
epoch  9: avg loss 0.1090: 100%|██████████| 100/100.0 [00:00<00:00, 1343.97it/s]
train(train_distributor, test_distributor=None)

Train method. This method executes the training loop. If test_distributor is given, the validation loss is also calculated.

Parameters:
  • train_distributor ( Distributor ) – Distributor for yielding train data.
  • test_distributor ( Distributor ) – Distributor for yielding test data.
test(data)

Test method. This method executes forward propagation for the given data.

Parameters: data ( ndarray ) – Input data.
Returns: ndarray
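
For example, continuing the Trainer example above, predictions for new data can be obtained as follows (a sketch; x_new is a hypothetical input array with the same feature size as the training data):

>>> x_new = np.random.rand(10, 50)
>>> predictions = trainer.test(x_new)   # forward propagation only, returns an ndarray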

renom.utility.gradient_clipping

class renom.utility.gradient_clipping.GradientClipping(threshold=0.5, norm=2)

This class is used to clip gradients.

The calculation is done as shown below:

\begin{split}\begin{gather}
\hat{g} \leftarrow \frac{\partial L}{\partial \omega} \\
\text{if } \|\hat{g}\|_n \geq threshold \text{ then} \\
\hat{g} \leftarrow \frac{threshold}{\|\hat{g}\|_n}\, \hat{g}
\end{gather}\end{split}
  • L : Loss
  • \omega : weight
  • n : norm
Parameters:
  • threshold ( float ) – If the gradient norm exceeds this threshold, the gradient is rescaled.
  • norm ( int ) – Norm value.
Returns: Total gradient norm.
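
A minimal NumPy sketch of the clipping rule above, applied to a single gradient array (for illustration only; in practice use the GradientClipping class as in the example below):

>>> import numpy as np
>>> def clip_gradient(g, threshold=0.5, norm=2):
...     """Rescale g when its n-norm exceeds the threshold."""
...     g_norm = np.linalg.norm(g.ravel(), ord=norm)
...     if g_norm >= threshold:
...         g = (threshold / g_norm) * g
...     return g
...
>>> g_clipped = clip_gradient(np.array([3.0, 4.0]), threshold=0.5)   # norm 5.0 -> rescaled to [0.3, 0.4]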

Examples

>>> import renom as rm
>>> from renom.utility.gradient_clipping import GradientClipping
>>> grad_clip = GradientClipping(threshold=0.5, norm=2)
>>>
>>> grad = loss.grad()
>>> grad_clip(grad)
>>>
>>> grad.update(rm.Sgd(lr=0.01))

References

Razvan Pascanu, Tomas Mikolov, Yoshua Bengio
On the difficulty of training Recurrent Neural Networks

renom.utility.distributor.distributor

class renom.utility.distributor.distributor.Distributor(x=None, y=None, path=None, data_table=None)

Distributor class. This is the base class of data distributors.

Parameters:
  • x ( ndarray ) – Input data.
  • y ( ndarray ) – Target data.
  • path ( string ) – Path to data.
Example

>>> import numpy as np
>>> from renom.utility.distributor.distributor import NdarrayDistributor
>>> x = np.random.randn(100, 100)
>>> y = np.random.randn(100, 1)
>>> distributor = NdarrayDistributor(x, y)
>>> batch_x, batch_y = next(distributor.batch(10))
>>> batch_x.shape
(10, 100)
>>> batch_y.shape
(10, 1)
batch(batch_size, shuffle=True, steps=None)

This method returns a generator that yields mini-batches.

Parameters:
  • batch_size ( int ) – Size of batch.
  • shuffle ( bool ) – If True is passed, data will be selected randomly.
split(ratio=0.8, shuffle=True)

This method splits its own data and generates two distributors from the split data.

Parameters:
  • ratio ( float ) – Ratio for dividing data.
  • shuffle ( bool ) – If True, the data is shuffled before dividing.
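
For example, split can carve a validation set out of the data prepared in the example above (a usage sketch reusing that distributor):

>>> train_dist, valid_dist = distributor.split(ratio=0.8, shuffle=True)
>>> for batch_x, batch_y in train_dist.batch(10):
...     pass   # train on each mini-batch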
class renom.utility.distributor.distributor.NdarrayDistributor(x, y, **kwargs)

Derived class of Distributor which manages ndarray data.

Parameters:
  • x ( ndarray ) – Input data.
  • y ( ndarray ) – Target data.
class renom.utility.distributor.distributor.GPUDistributor(x, y, **kwargs)

Derived class of Distributor which manages GPUValue data.

Parameters:
  • x ( ndarray ) – Input data.
  • y ( ndarray ) – Target data.
batch(batch_size, shuffle=True, steps=None)

This method returns a generator that yields mini-batches.

Parameters:
  • batch_size ( int ) – Size of batch.
  • shuffle ( bool ) – If True is passed, data will be selected randomly.
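
Construction mirrors NdarrayDistributor (a usage sketch, assuming a GPU-enabled build of ReNom):

>>> import numpy as np
>>> from renom.utility.distributor.distributor import GPUDistributor
>>> x = np.random.randn(100, 100).astype(np.float32)
>>> y = np.random.randn(100, 1).astype(np.float32)
>>> distributor = GPUDistributor(x, y)
>>> for batch_x, batch_y in distributor.batch(10):
...     pass   # batches are managed as GPUValue data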

renom.utility.distributor.imageloader

class renom.utility.distributor.imageloader.ImageLoader(batches, color='RGB')

ImageLoader is a generator that yields images in batches. Given a list of batches of image paths, ImageLoader loads the images and yields them batch by batch.

Parameters:
  • batches ( list ) – List of batches of image paths.
  • color ( str ) – Color space of the input images.

Example

>>> batches = [[('/data/file1.jpg', '/data/file2.jpg')], [('/data/file3.jpg', '/data/file4.jpg')]]
>>> loader = ImageLoader(batches)
>>> for i, x in enumerate(loader):
...     print('Batch', i)

renom.utility.distributor.threadingdistributor

class renom.utility.distributor.threadingdistributor.ImageDistributor(image_path_list, y_list=None, class_list=None, imsize=(32, 32), color='RGB', augmentation=None)

Base class for image distributors. Use the subclasses ImageClassificationDistributor, ImageDetectionDistributor or ImageSegmentationDistributor depending on the image task, or subclass it for your own image tasks.

Parameters:
  • image_path_list ( list ) – List of image paths.
  • y_list ( list ) – List of labels (bbox and class) for every image (2 dimensional array).
  • class_list ( list ) – List of class names for this dataset.
  • shuffle ( bool ) – If True, the dataset is shuffled every epoch.
  • imsize ( tuple ) – Size to which input images are resized when converted to a batch ndarray.
  • color ( str ) – Color space of the input images. [“RGB”, “GRAY”]
  • augmentation ( function ) – Augmenter for the input images.
class renom.utility.distributor.threadingdistributor.ImageDetectionDistributor(image_path_list, y_list=None, class_list=None, imsize=(360, 360), color='RGB', augmentation=None)

Distributor class for image detection tasks. Labels are expected to contain bounding boxes and classes, e.g. np.array([[center_x, center_y, width, height, 0, 0, 0, 1, 0]]), where the trailing values form a one-hot class vector.

Parameters:
  • image_path_list ( list ) – List of image paths.
  • y_list ( list ) – List of labels (bbox and class) for every image.
  • class_list ( list ) – List of class names for this dataset.
  • shuffle ( bool ) – If True, the dataset is shuffled every epoch.
  • imsize ( tuple ) – Size to which input images are resized when converted to a batch ndarray.
  • color ( str ) – Color space of the input images. [“RGB”, “GRAY”]
  • augmentation ( function ) – Augmenter for the input images.
Example

>>> from renom.utility.distributor.threadingdistributor import ImageDetectionDistributor
>>> from renom.utility.image.data_augmentation import *
>>> datagenerator = DataAugmentation([
...     Flip(1),
...     Rotate(20),
...     Crop(size=(300, 300)),
...     Resize(size=(500, 500)),
...     Shift((20, 50)),
...     Color_jitter(v=(0.5, 2.0)),
...     Zoom(zoom_rate=(1.2, 2))
...     # Rescale(option='zero'),
... ], random=True)
>>> dist = ImageDetectionDistributor(x_list, y_list=y_list,
...                                  class_list=class_list, augmentation=datagenerator,
...                                  shuffle=True, imsize=(360, 360), color='RGB')
>>> for i, (x, y) in enumerate(dist.batch(32)):
...     print('Batch', i)
batch(batch_size, shuffle)

Returns a generator of image batches.

Parameters: batch_size ( int ) – Size of a batch.
Returns: Images (4 dimensional ndarray) of input data for the network. If labels are included, the transformed labels are returned as well.
Return type: ndarray
class renom.utility.distributor.threadingdistributor.ImageClassificationDistributor(image_path_list, y_list=None, class_list=None, imsize=(360, 360), color='RGB', augmentation=None)

Distributor class for tasks of image classification.

Parameters:
  • image_path_list ( list ) – List of image paths.
  • y_list ( list ) – List of labels (bbox and class) for every image.
  • class_list ( list ) – List of class names for this dataset.
  • shuffle ( bool ) – If True, the dataset is shuffled every epoch.
  • imsize ( tuple ) – Size to which input images are resized when converted to a batch ndarray.
  • color ( str ) – Color space of the input images. [“RGB”, “GRAY”]
  • augmentation ( function ) – Augmenter for the input images.

Example

>>> from renom.utility.distributor.threadingdistributor import ImageClassificationDistributor
>>> from renom.utility.image.data_augmentation import *
>>> datagenerator = DataAugmentation([
...     Flip(1),
...     Rotate(20),
...     Crop(size=(300, 300)),
...     Resize(size=(500, 500)),
...     Shift((20, 50)),
...     Color_jitter(v=(0.5, 2.0)),
...     Zoom(zoom_rate=(1.2, 2))
...     # Rescale(option='zero'),
... ], random=True)
>>> dist = ImageClassificationDistributor(x_list, y_list=y_list,
...                                       class_list=class_list, augmentation=datagenerator,
...                                       shuffle=True, imsize=(360, 360), color='RGB')
>>> for i, (x, y) in enumerate(dist.batch(32)):
...     print('Batch', i)
batch(batch_size, shuffle)
Parameters: batch_size ( int ) – Size of a batch.
Returns: Images (4 dimensional ndarray) of input data for the network. If labels are included, the original labels are returned as well.
Return type: ndarray

renom.utility.image.data_augmentation.augmentation

class renom.utility.image.data_augmentation.augmentation.DataAugmentation(converter_list, random=False)

Applies transformations to the input x and its labels. You can choose transform functions from the following list: [“Flip”, “Resize”, “Crop”, “Color_jitter”, “Rescale”, “Rotate”, “Shift”].

Parameters:
  • converter_list ( list ) – List of converter instances.
  • random ( bool ) – If True, the transformations are applied randomly.
create(x, labels=None, num_class=0)

Performs data augmentation on NumPy images. If x is a batch, the augmentation is applied to the whole batch. If labels are given, the label transformation is applied as well.

Parameters:
  • x ( ndarray ) – 3 dimensional (single image) or 4 dimensional (batch) images. dtype is float32, values in [0.0, 255.0].
  • labels ( ndarray ) – Labels for classification, detection or segmentation (2 dimensional array).
  • num_class ( int ) – Number of classes in the dataset.
Returns: Augmented images (4 dimensional ndarray). If labels are included, the transformed labels are returned as well.
Return type: ndarray

Example

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from PIL import Image as im
>>> from renom.utility.image.data_augmentation import *
>>> image = im.open("./img_autodoc/2007_000027.jpg")
>>> image = np.array(image, dtype=np.float32)
>>> datagenerator = DataAugmentation([
...     Flip(1),
...     Rotate(20),
...     Crop(size=(300, 300)),
...     Resize(size=(500, 500)),
...     Shift((20, 50)),
...     Color_jitter(v=(0.5, 2.0)),
...     Zoom(zoom_rate=(1.2, 2))
...     # Rescale(option='zero'),
... ], random=True)
>>> augment_image = datagenerator.create(image)
>>> fig, axes = plt.subplots(2, 1)
>>> axes[0].imshow(image/255); axes[0].set_title("Original Image")
>>> axes[1].imshow(augment_image[0] / 255); axes[1].set_title("Shift One Image")
>>> plt.show()