Class Regularization

java.lang.Object
mklab.JGNN.nn.optimizers.Regularization
All Implemented Interfaces:
Optimizer

public class Regularization extends Object implements Optimizer
Wraps a base Optimizer and applies the derivative of an L2 penalty to every tensor's gradient during Optimizer.update(Tensor, Tensor).
Author:
Emmanouil Krasanakis
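To make the class summary concrete, the following is an illustrative sketch (not JGNN's actual code) of what adding the L2 derivative to a gradient means numerically. The helper name and the exact scaling convention (regularization * value, i.e. the derivative of regularization * ||w||^2 / 2) are assumptions for illustration.

```java
public class L2Sketch {
    // Returns the gradient adjusted by the derivative of the L2 penalty.
    // Convention assumed here: penalty = regularization * ||value||^2 / 2,
    // whose derivative w.r.t. each entry is regularization * value[i].
    public static double[] regularizedGradient(double[] value, double[] gradient, double regularization) {
        double[] adjusted = new double[gradient.length];
        for (int i = 0; i < gradient.length; i++)
            adjusted[i] = gradient[i] + regularization * value[i];
        return adjusted;
    }

    public static void main(String[] args) {
        double[] adjusted = regularizedGradient(
                new double[] {1.0, -2.0},   // parameter values
                new double[] {0.5, 0.5},    // raw gradient
                0.1);                        // regularization weight
        System.out.println(adjusted[0] + " " + adjusted[1]); // 0.6 0.3
    }
}
```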
  • Field Details

    • regularization

      protected double regularization
  • Constructor Details

    • Regularization

      public Regularization(Optimizer baseOptimizer, double regularization)
      Initializes a Regularization.
      Parameters:
      baseOptimizer - The base optimizer on which to apply regularization.
      regularization - The weight of the regularization term.
    • Regularization

      protected Regularization()
  • Method Details

    • update

      public void update(Tensor value, Tensor gradient)
      Description copied from interface: Optimizer
      In-place updates the value of a tensor given its gradient. Some optimizers (e.g. Adam) require the exact same tensor instance to be provided across calls so that they can track its optimization progress; the library ensures this constraint is met.
      Specified by:
      update in interface Optimizer
      Parameters:
      value - The tensor to update.
      gradient - The tensor's gradient.
    • reset

      public void reset()
      Description copied from interface: Optimizer
      Resets optimizer memory (letting the garbage collector free it). This should be called at the beginning of training, not after each epoch.
      Specified by:
      reset in interface Optimizer
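The wrapper pattern described above can be sketched end to end with stand-in classes. SimpleOptimizer, GradientDescent, and SimpleRegularization below are hypothetical stand-ins for the library's Optimizer interface, a base optimizer, and this class; they mimic the documented behavior (adjust the gradient by the L2 derivative, then delegate update and reset to the base optimizer) without depending on JGNN.

```java
// Stand-in for the Optimizer interface, using double[] instead of Tensor.
interface SimpleOptimizer {
    void update(double[] value, double[] gradient);
    default void reset() {}
}

// Stand-in base optimizer: plain gradient descent.
class GradientDescent implements SimpleOptimizer {
    private final double learningRate;
    GradientDescent(double learningRate) { this.learningRate = learningRate; }
    public void update(double[] value, double[] gradient) {
        for (int i = 0; i < value.length; i++)
            value[i] -= learningRate * gradient[i];
    }
}

// Stand-in for Regularization: adds the L2 derivative to each gradient
// before delegating to the base optimizer; reset is also delegated.
class SimpleRegularization implements SimpleOptimizer {
    private final SimpleOptimizer baseOptimizer;
    private final double regularization;
    SimpleRegularization(SimpleOptimizer baseOptimizer, double regularization) {
        this.baseOptimizer = baseOptimizer;
        this.regularization = regularization;
    }
    public void update(double[] value, double[] gradient) {
        double[] adjusted = new double[gradient.length];
        for (int i = 0; i < gradient.length; i++)
            adjusted[i] = gradient[i] + regularization * value[i];
        baseOptimizer.update(value, adjusted);
    }
    public void reset() { baseOptimizer.reset(); }
}

public class RegularizationDemo {
    public static void main(String[] args) {
        double[] w = {1.0};
        SimpleOptimizer opt = new SimpleRegularization(new GradientDescent(0.1), 0.5);
        opt.update(w, new double[] {0.0}); // zero raw gradient: pure shrinkage
        System.out.println(w[0]); // 1.0 - 0.1 * (0.0 + 0.5 * 1.0) = 0.95
        opt.reset(); // delegated to the base optimizer
    }
}
```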