Package com.imsl.datamining.neural

Class LeastSquaresTrainer

java.lang.Object
    com.imsl.datamining.neural.LeastSquaresTrainer

All Implemented Interfaces:
    Trainer, Serializable
Trains a FeedForwardNetwork using a Levenberg-Marquardt algorithm for minimizing a sum of squares error.
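The Levenberg-Marquardt step this trainer applies to the full weight vector can be illustrated on a one-weight model. The sketch below is plain Java, not IMSL code; the data, damping constant, and class name are made up for the example.

```java
// Minimal sketch of a Levenberg-Marquardt step for the one-weight
// least-squares model y ~ w * x.  LeastSquaresTrainer applies the same
// damped Gauss-Newton update in full matrix form; this is NOT IMSL code.
public class LmSketch {

    // Sum-of-squares error E(w) = 0.5 * sum_i (w*x[i] - y[i])^2
    static double error(double w, double[] x, double[] y) {
        double e = 0.0;
        for (int i = 0; i < x.length; i++) {
            double r = w * x[i] - y[i];
            e += 0.5 * r * r;
        }
        return e;
    }

    // One LM step: w <- w - (J^T r) / (J^T J + lambda), where the
    // Jacobian of the residual r_i = w*x[i] - y[i] is J_i = x[i].
    static double lmStep(double w, double[] x, double[] y, double lambda) {
        double jtr = 0.0, jtj = 0.0;
        for (int i = 0; i < x.length; i++) {
            double r = w * x[i] - y[i];
            jtr += x[i] * r;    // J^T r, the gradient of E
            jtj += x[i] * x[i]; // J^T J, the Gauss-Newton Hessian approximation
        }
        return w - jtr / (jtj + lambda);
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4};
        double[] y = {2, 4, 6, 8};     // exact solution is w = 2
        double w = 0.0;
        for (int k = 0; k < 20; k++) {
            w = lmStep(w, x, y, 1e-3); // small damping: near Gauss-Newton
        }
        System.out.println(w);         // converges toward 2.0
    }
}
```

Large `lambda` shrinks the step toward gradient descent; small `lambda` approaches the Gauss-Newton step, which is the usual LM trade-off between robustness far from the solution and fast local convergence.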
Constructor Summary

Constructors
    LeastSquaresTrainer()
        Creates a LeastSquaresTrainer.

Method Summary

protected Object clone()
    Clones a copy of the trainer.
double[] getErrorGradient()
    Returns the value of the gradient of the error function with respect to the weights.
int getErrorStatus()
    Returns the error status from the trainer.
double getErrorValue()
    Returns the final value of the error function.
protected void setEpochNumber(int num)
    Sets the epoch number for the trainer.
void setFalseConvergenceTolerance(double falseConvergenceTolerance)
    Sets the false convergence tolerance.
void setGradientTolerance(double gradientTolerance)
    Sets the gradient tolerance.
void setInitialTrustRegion(double initialTrustRegion)
    Sets the initial trust region.
void setMaximumStepsize(double maximumStepsize)
    Sets the maximum step size.
void setMaximumTrainingIterations(int maximumSolverIterations)
    Sets the maximum number of iterations used by the nonlinear least squares solver.
protected void setParallelMode(ArrayList[] allLogRecords)
    Sets the trainer to be used in the multi-threaded EpochTrainer.
void setRelativeTolerance(double relativeTolerance)
    Sets the relative tolerance.
void setStepTolerance(double stepTolerance)
    Sets the step tolerance used to step between weights.
void train(Network network, double[][] xData, double[][] yData)
    Trains the neural network using supplied training patterns.
Constructor Details

LeastSquaresTrainer

public LeastSquaresTrainer()

Creates a LeastSquaresTrainer.
Method Details

clone

protected Object clone()

Clones a copy of the trainer.
setParallelMode

protected void setParallelMode(ArrayList[] allLogRecords)

Sets the trainer to be used in the multi-threaded EpochTrainer.

Parameters:
    allLogRecords - An ArrayList array containing the log records.
setEpochNumber

protected void setEpochNumber(int num)

Sets the epoch number for the trainer.

Parameters:
    num - An int specifying the epoch number.
setMaximumStepsize

public void setMaximumStepsize(double maximumStepsize)

Sets the maximum step size.

Parameters:
    maximumStepsize - A nonnegative double value specifying the maximum allowable step size in the optimizer. Default: \(10^3 \|w\|_2\), where w are the values of the weights in the network when training starts.
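The documented default, \(10^3 \|w\|_2\), can be computed directly from the starting weights. The helper below is a plain-Java illustration of that formula; the class and method names are hypothetical, not part of the IMSL API.

```java
// Computes the documented default maximum stepsize, 10^3 * ||w||_2,
// from a flat array of starting network weights.  Illustrative only.
public class DefaultStepsize {

    static double defaultMaxStepsize(double[] weights) {
        double sumSq = 0.0;
        for (double w : weights) {
            sumSq += w * w;        // accumulate squared weights
        }
        return 1e3 * Math.sqrt(sumSq); // 10^3 times the Euclidean norm
    }

    public static void main(String[] args) {
        // ||{3, 4}||_2 = 5, so the default would be 5000.
        System.out.println(defaultMaxStepsize(new double[]{3.0, 4.0}));
    }
}
```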
setInitialTrustRegion

public void setInitialTrustRegion(double initialTrustRegion)

Sets the initial trust region.

Parameters:
    initialTrustRegion - A double which specifies the initial trust region radius. Default: unlimited trust region.
setMaximumTrainingIterations

public void setMaximumTrainingIterations(int maximumSolverIterations)

Sets the maximum number of iterations used by the nonlinear least squares solver.

Parameters:
    maximumSolverIterations - An int which specifies the maximum number of iterations to be used by the nonlinear least squares solver. Its default value is 1000.
setRelativeTolerance

public void setRelativeTolerance(double relativeTolerance)

Sets the relative tolerance.

Parameters:
    relativeTolerance - A double which specifies the relative error tolerance. It must be in the interval [0,1]. Its default value is 1.0e-20.
setFalseConvergenceTolerance

public void setFalseConvergenceTolerance(double falseConvergenceTolerance)

Sets the false convergence tolerance.

Parameters:
    falseConvergenceTolerance - A double specifying the false convergence tolerance. Default: 1.0e-14.
setGradientTolerance

public void setGradientTolerance(double gradientTolerance)

Sets the gradient tolerance.

Parameters:
    gradientTolerance - A double specifying the gradient tolerance. Default: 2.0e-5.
setStepTolerance

public void setStepTolerance(double stepTolerance)

Sets the step tolerance used to step between weights.

Parameters:
    stepTolerance - A double which specifies the scaled step tolerance to use when changing the weights. Default: 1.0e-5.
train

public void train(Network network, double[][] xData, double[][] yData)

Trains the neural network using supplied training patterns. Each row of xData and yData contains a training pattern. The number of rows in the two arrays must be equal.

Specified by:
    train in interface Trainer

Parameters:
    network - The Network to be trained.
    xData - A double matrix which contains the input training patterns. The number of columns in xData must equal the number of Nodes in the InputLayer.
    yData - A double matrix which contains the output training patterns. The number of columns in yData must equal the number of Perceptrons in the OutputLayer.
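The shape contract stated above (equal row counts; column counts matching the input and output layer sizes) can be sketched as a standalone check. The `ShapeCheck` class and its `validate` method are hypothetical names for illustration, not part of the IMSL API.

```java
// Illustrates the documented shape contract for train(network, xData, yData):
// one training pattern per row, equal row counts, xData columns equal to the
// number of input Nodes, yData columns equal to the number of output Perceptrons.
public class ShapeCheck {

    static void validate(double[][] xData, double[][] yData,
                         int numInputNodes, int numOutputPerceptrons) {
        if (xData.length != yData.length)
            throw new IllegalArgumentException("xData and yData row counts differ");
        if (xData[0].length != numInputNodes)
            throw new IllegalArgumentException("xData columns != number of input Nodes");
        if (yData[0].length != numOutputPerceptrons)
            throw new IllegalArgumentException("yData columns != number of output Perceptrons");
    }

    public static void main(String[] args) {
        // 2 training patterns for a network with 3 inputs and 1 output.
        double[][] xData = { {0.1, 0.2, 0.3}, {0.4, 0.5, 0.6} };
        double[][] yData = { {1.0}, {0.0} };
        validate(xData, yData, 3, 1);  // passes silently
        System.out.println("shapes ok");
    }
}
```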
getErrorValue

public double getErrorValue()

Returns the final value of the error function.

Specified by:
    getErrorValue in interface Trainer

Returns:
    A double containing the final value of the error function from the last training. Before training, NaN is returned.
getErrorGradient

public double[] getErrorGradient()

Returns the value of the gradient of the error function with respect to the weights.

Specified by:
    getErrorGradient in interface Trainer

Returns:
    A double array whose length is equal to the number of network weights, containing the value of the gradient of the error function with respect to the weights. Before training, null is returned.
getErrorStatus

public int getErrorStatus()

Returns the error status from the trainer.

Specified by:
    getErrorStatus in interface Trainer

Returns:
    An int which contains the error status. Zero indicates that no errors were encountered during training. Any non-zero value indicates that some error condition arose during training. In many cases the trainer is able to recover from these conditions and produce a well-trained network.

    Value   Meaning
    0       All convergence tests were met.
    1       Scaled step tolerance was satisfied. The current point may be an approximate local solution, or the algorithm is making very slow progress and is not near a solution, or StepTolerance is too big.
    2       Scaled actual and predicted reductions in the function are less than or equal to the relative function convergence tolerance RelativeTolerance.
    3       Iterates appear to be converging to a noncritical point. Incorrect gradient information, a discontinuous function, or stopping tolerances being too tight may be the cause.
    4       Five consecutive steps with the maximum stepsize have been taken. Either the function is unbounded below, or has a finite asymptote in some direction, or the maximum stepsize is too small.
    5       Too many iterations were required.
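The status codes in the table can be mapped to readable messages when logging training runs. The `describe` helper below is hypothetical (not part of the IMSL API); its messages paraphrase the table above.

```java
// Translates the documented getErrorStatus() codes into short log messages.
// The codes come from the status table above; the helper itself is illustrative.
public class ErrorStatus {

    static String describe(int status) {
        switch (status) {
            case 0: return "All convergence tests were met.";
            case 1: return "Scaled step tolerance satisfied; possible local solution, "
                         + "slow progress, or StepTolerance too big.";
            case 2: return "Function reductions fell below RelativeTolerance.";
            case 3: return "Iterates appear to be converging to a noncritical point.";
            case 4: return "Five consecutive steps at the maximum stepsize were taken.";
            case 5: return "Too many iterations were required.";
            default: return "Unknown status code: " + status;
        }
    }

    public static void main(String[] args) {
        // Typical use after training: int status = trainer.getErrorStatus();
        for (int status = 0; status <= 5; status++) {
            System.out.println(status + ": " + describe(status));
        }
    }
}
```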