All Classes and Interfaces

Class
Description
Accesses a column of a Matrix as if it were a dense Tensor.
Accesses a row of a Matrix as if it were a dense Tensor.
Wraps a base Tensor by traversing only its elements in a specified range (from begin, up to end-1).
Implements an accuracy Loss of row-by-row comparisons.
This class implements an Adam Optimizer as explained in the paper: Kingma, Diederik P., and Jimmy Ba. "Adam: A Method for Stochastic Optimization." (2014).
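As an illustrative sketch of the update rule from the cited paper, one Adam step on plain double arrays looks like the following (this is not the library's Optimizer API; class and parameter names here are hypothetical):

```java
// Sketch of the Adam update step (Kingma & Ba) on plain double arrays;
// illustrative only, not the library's Optimizer implementation.
public class AdamSketch {
    private final double lr, b1, b2, eps;
    private double[] m, v;  // running first/second moment estimates
    private int t = 0;      // timestep, used for bias correction

    public AdamSketch(double lr, double b1, double b2, double eps) {
        this.lr = lr; this.b1 = b1; this.b2 = b2; this.eps = eps;
    }

    public void update(double[] param, double[] grad) {
        if (m == null) { m = new double[param.length]; v = new double[param.length]; }
        t++;
        for (int i = 0; i < param.length; i++) {
            m[i] = b1 * m[i] + (1 - b1) * grad[i];
            v[i] = b2 * v[i] + (1 - b2) * grad[i] * grad[i];
            double mHat = m[i] / (1 - Math.pow(b1, t));  // bias-corrected moments
            double vHat = v[i] / (1 - Math.pow(b2, t));
            param[i] -= lr * mHat / (Math.sqrt(vHat) + eps);
        }
    }
}
```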
Implements a NNOperation that adds its two inputs.
Extends the ModelTraining class to be able to train Model instances for attributed graph functions (AGFs).
Demonstrates classification with an APPNP GNN.
Implements a NNOperation that creates a version of adjacency matrices with column-wise attention involving neighbor similarity.
Wraps an Optimizer by accumulating derivatives and calling Optimizer.update(Tensor, Tensor) with the average derivative after a fixed number of accumulations.
Implements a binary cross-entropy Loss.
For more than one output dimension, use CategoricalCrossEntropy.
Implements a categorical cross-entropy Loss.
For binary classification with one output, use BinaryCrossEntropy.
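As an illustrative sketch, the binary cross-entropy value such a loss computes for a single prediction in (0, 1) and a 0/1 label can be written in plain Java (this is not the library's Loss API):

```java
// Plain-Java sketch of binary cross-entropy for one prediction/label pair;
// illustrative only, not the library's Loss implementation.
public class BceSketch {
    public static double bce(double prediction, double label) {
        double eps = 1e-12;  // clip predictions to avoid log(0)
        double p = Math.min(Math.max(prediction, eps), 1 - eps);
        return -label * Math.log(p) - (1 - label) * Math.log(1 - p);
    }
}
```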
Downloads and constructs the Citeseer node classification Dataset.
Defines a matrix whose columns are all a copy of a Tensor.
Implements a NNOperation that performs the operation 1-x for its simple input x.
Implements a NNOperation that concatenates its two matrix inputs.
Implements a NNOperation that holds a constant tensor.
Downloads and constructs the Cora node classification Dataset.
This class provides the backbone with which to define datasets.
Implements a dense Matrix where all elements are stored in memory.
This class provides a dense Tensor that wraps an array of doubles.
Implements a square matrix whose diagonal elements are determined by the corresponding values of an underlying tensor, and whose off-diagonal elements are zero.
This interface abstracts a probability distribution that can be passed to Tensor.setToRandom(Distribution) for random tensor initialization.
Implements a NNOperation that converts its first argument to a ColumnRepetition matrix with a number of columns equal to the second argument.
A Matrix without data that contains only the correct dimension names and sizes.
A Tensor without data that contains only the correct dimension names and sizes.
Implements a NNOperation that performs an element-by-element exponential transformation of its one input tensor.
Extends the capabilities of LayeredBuilder to use for node classification.
Implements a NNOperation that lists the first element of the 2D matrix element iterator.
Implements a NNOperation that performs the equivalent of TensorFlow's gather operation.
Demonstrates classification with the GCN architecture.
Implements a gradient descent Optimizer.
Converts back-and-forth between objects and unique ids.
Implements a NNOperation that just transfers its single input.
This class defines an abstract interface for applying initializers to models.
Implements a NNOperation that performs a L1 transformation of its one input tensor by row or by column.
Extends the capabilities of the ModelBuilder with the ability to define multilayer (e.g. deep) architectures.
This implementation covers code of the Learning tutorial.
Implements a NNOperation that outputs the natural logarithm of its single input.
Demonstrates classification with logistic regression.
Provides computation and (partial) differentiation of popular activation functions and cross-entropy loss functions.
This class provides an abstract implementation of loss functions to be used during Model training.
Implements a NNOperation that performs a leaky relu operation, where the first argument is the tensor it is applied on and the second is a tensor wrapping a single double value that indicates the negative region's slope (consider initializing the latter as a Constant holding a tensor generated with Tensor.fromDouble(double)).
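The computation this operation performs can be sketched in plain Java as follows (illustrative only; not the library's NNOperation implementation):

```java
// Sketch of the leaky relu computation: identity for non-negative inputs,
// scaling by a configurable slope for negative inputs.
public class LReluSketch {
    public static double leakyRelu(double x, double negativeSlope) {
        return x >= 0 ? x : negativeSlope * x;  // scale only the negative region
    }
}
```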
Implements a NNOperation that multiplies its two matrix inputs.
This class provides an abstract implementation of Matrix functionalities.
Implements a NNOperation that performs row-wise or column-wise maximum reduction on vector tensors or matrices.
Implements a NNOperation that performs row-wise or column-wise mean reduction on vector tensors or matrices.
A memory management system for thread-safe allocation and release of arrays of doubles.
Demonstrates classification with a message passing architecture.
Demonstrates classification with a two-layer perceptron.
This class is a way to organize NNOperation trees into trainable machine learning models.
This class and subclasses can be used to create Model instances by automatically creating and managing NNOperation instances based on textual descriptions.
Demonstrates model builder internal node access that allows training with a symbolically defined loss function.
This is a helper class that automates the definition of training processes of Model instances by defining the number of epochs, loss functions, number of batches and the ability to use ThreadPool for parallelized batch computations.
Implements a NNOperation that multiplies its two inputs element-by-element.
Extends the base ModelBuilder with the full capabilities of the Neuralang scripting language.
Implements a NNOperation that performs an exponential transformation of its single input, but only on the non-zero elements.
This implementation covers code of the Neural Networks tutorial.
This class defines an abstract neural network operation with forward and backpropagation capabilities.
Implements a Normal Distribution of given mean and standard deviation.
Provides an interface for training tensors.
Implements a NNOperation that holds and returns a parameter tensor.
Downloads and constructs the Pubmed node classification Dataset.
Demonstrates classification with the GCN architecture.
Implements an iterator that traverses a range [min, max) where the right side is non-inclusive.
Implements an iterator that traverses a two-dimensional range (min, max) x (min2, max2).
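The one-dimensional traversal described above, with the right side excluded, can be sketched with a plain Java Iterator (illustrative only; not the library's Range implementation):

```java
import java.util.Iterator;

// Minimal sketch of an iterator over [min, max), where max is non-inclusive.
public class RangeSketch implements Iterator<Integer> {
    private int pos;
    private final int max;

    public RangeSketch(int min, int max) {
        this.pos = min;
        this.max = max;
    }

    public boolean hasNext() { return pos < max; }
    public Integer next() { return pos++; }
}
```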
Wraps an Optimizer by applying the derivative of L2 loss on every tensor during Optimizer.update(Tensor, Tensor).
Implements a NNOperation that performs a relu transformation of its one input tensor.
Implements a NNOperation that converts its first argument to a ColumnRepetition matrix with a number of columns equal to the second argument.
Implements a Matrix whose elements are all equal.
This class provides Tensor whose elements are all equal.
Implements a NNOperation that reshapes a matrix.
Defines a matrix whose rows are all a copy of a Tensor.
Extends the ModelTraining class to train Model instances from feature and label matrices.
Demonstrates classification with an architecture defined through the scripting engine.
Implements a NNOperation that performs a sigmoid transformation of its single input.
Demonstrates custom initialization of parameters.
This class provides an interface with which to define data slices, for instance to sample labels.
Implements a NNOperation that performs row-wise or column-wise softmax on vector tensors or matrices.
A sparse Matrix that allocates memory only for non-zero elements.
Deprecated.
Under development.
This class provides a sparse Tensor with many zero elements.
Implements a NNOperation that performs row-wise or column-wise sum reduction on vector tensors or matrices.
Implements a NNOperation that performs a tanh transformation of its single input.
This class provides a native java implementation of Tensor functionalities.
This class provides thread execution pool utilities while keeping track of thread identifiers for use by thread-specific NNOperation.
Implements a NNOperation that lists the second element of the 2D matrix element iterator.
This class generates trajectory graph labels.
Implements a NNOperation that performs matrix transposition.
Generates a transposed version of a base matrix, with which it shares elements.
Implements a Uniform Distribution of given bounds.
Implements a NNOperation that represents Model inputs.
This class describes a broad class of Initializer strategies in which dense neural layer initialization is controlled so that variance is mostly preserved from inputs to outputs, avoiding vanishing or exploding gradients in the first training runs.
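One common variance-preserving scheme of this kind is Xavier/Glorot-style uniform initialization, sketched below on a plain double array (illustrative only; the class and method names are hypothetical, not the library's Initializer API):

```java
import java.util.Random;

// Sketch of Xavier/Glorot-style initialization for a dense layer:
// uniform samples in [-a, a] with a = sqrt(6 / (fanIn + fanOut)),
// which keeps activation variance roughly constant across layers.
public class XavierSketch {
    public static double[][] init(int fanIn, int fanOut, long seed) {
        double a = Math.sqrt(6.0 / (fanIn + fanOut));
        Random rng = new Random(seed);
        double[][] weights = new double[fanIn][fanOut];
        for (int i = 0; i < fanIn; i++)
            for (int j = 0; j < fanOut; j++)
                weights[i][j] = (rng.nextDouble() * 2 - 1) * a;  // uniform in [-a, a]
        return weights;
    }
}
```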
Implements a dense Matrix where all elements are stored in memory.
This class provides a dense Tensor that wraps an array of doubles.
Implements a Loss that wraps other losses and outputs their value during training to an output stream (to System.out by default).
Wraps a list of tensors into a matrix with the tensors as columns.
Wraps a list of tensors into a matrix with the tensors as rows.