Package mklab.JGNN.nn.optimizers
Class Regularization
java.lang.Object
mklab.JGNN.nn.optimizers.Regularization
- All Implemented Interfaces:
Optimizer
Wraps an Optimizer by applying the derivative of L2 loss on every tensor during Optimizer.update(Tensor, Tensor).
- Author:
- Emmanouil Krasanakis
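For example, a base optimizer can be wrapped so that every parameter update also applies weight decay. The sketch below assumes the library's GradientDescent optimizer and DenseTensor class, as well as the shown import paths and setToRandom() helper; treat those names and the numeric values as illustrative rather than authoritative.

    import mklab.JGNN.core.Tensor;
    import mklab.JGNN.core.tensor.DenseTensor;
    import mklab.JGNN.nn.Optimizer;
    import mklab.JGNN.nn.optimizers.GradientDescent;
    import mklab.JGNN.nn.optimizers.Regularization;

    public class RegularizationExample {
        public static void main(String[] args) {
            // Gradient descent with learning rate 0.1, wrapped so that each
            // update also applies the derivative of an L2 penalty weighted by 1e-4.
            Optimizer optimizer = new Regularization(new GradientDescent(0.1), 1e-4);

            Tensor value = new DenseTensor(3).setToRandom();  // trainable parameters
            Tensor gradient = new DenseTensor(3);             // loss gradient for one step
            optimizer.update(value, gradient);                // updates value in-place
        }
    }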
Field Summary
- protected double regularization
Constructor Summary
- Regularization(Optimizer baseOptimizer, double regularization): Initializes a Regularization.
- protected Regularization()
Method Summary
- void update(Tensor value, Tensor gradient): In-place updates the value of a tensor given its gradient.
- void reset(): Resets optimizer memory.
Field Details
- regularization
protected double regularization
Constructor Details
- Regularization
public Regularization(Optimizer baseOptimizer, double regularization)
Initializes a Regularization.
Parameters:
baseOptimizer - The base optimizer on which to apply regularization.
regularization - The weight of the regularization.
- Regularization
protected Regularization()
Method Details
- update
public void update(Tensor value, Tensor gradient)
Description copied from interface: Optimizer
In-place updates the value of a tensor given its gradient. Some optimizers (e.g. Adam) require the exact same tensor instance to be provided across calls so as to keep track of its optimization progress. The library makes sure to honor this constraint.
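Conceptually, the wrapper adds the derivative of the L2 penalty to the incoming gradient and then delegates to the wrapped optimizer. The following is a minimal sketch of an equivalent computation, not the library's actual implementation; it assumes the core Tensor exposes add(Tensor) and multiply(double), and the baseOptimizer field name mirrors the constructor parameter.

    // Sketch only: a conceptually equivalent regularized update.
    public void update(Tensor value, Tensor gradient) {
        // The L2 penalty's derivative is proportional to the parameter values themselves.
        Tensor regularized = gradient.add(value.multiply(regularization));
        baseOptimizer.update(value, regularized); // delegate to the wrapped Optimizer
    }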
- reset
public void reset()
Description copied from interface: Optimizer
Resets (and lets the garbage collector free) optimizer memory. Should be called at the beginning of training (not after each epoch).
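For example, when reusing the same optimizer instance across training runs, reset it once before the run starts. The loop below is illustrative, and GradientDescent is assumed as in the earlier sketch.

    Optimizer optimizer = new Regularization(new GradientDescent(0.1), 1e-4);
    optimizer.reset(); // free any previously accumulated optimizer state
    for (int epoch = 0; epoch < 100; epoch++) {
        // ... compute gradients and call optimizer.update(value, gradient) per parameter
    }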