Namespace: Imsl.Stat
Assembly: ImslCS (in ImslCS.dll) Version: 6.5.0.0
Syntax

C#
[SerializableAttribute]
public class NonlinearRegression

Visual Basic (Declaration)
<SerializableAttribute> _
Public Class NonlinearRegression

Visual C++
[SerializableAttribute]
public ref class NonlinearRegression
Remarks
The nonlinear regression model is

$$y_i = f(x_i;\,\theta) + \varepsilon_i \qquad i = 1, 2, \ldots, n$$

where the observed values of the $y_i$ constitute the responses or values of the dependent variable, the known $x_i$ are vectors of values of the independent (explanatory) variables, $\theta$ is the vector of $p$ regression parameters, and the $\varepsilon_i$ are independently distributed normal errors each with mean zero and variance $\sigma^2$. For this model, a least squares estimate of $\theta$ is also a maximum likelihood estimate of $\theta$.
The residuals for the model are

$$e_i(\theta) = y_i - f(x_i;\,\theta) \qquad i = 1, 2, \ldots, n$$

A value of $\theta$ that minimizes

$$\sum\limits_{i=1}^n[e_i(\theta)]^2$$

is a least squares estimate of $\theta$.
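As a small numeric illustration of these quantities (the model $f$ and the data below are invented for the example and are not part of the IMSL API):

```python
import numpy as np

# Hypothetical model f(x; theta) = theta_1 * exp(theta_2 * x), chosen only
# to make the residual formulas concrete.
def f(x, theta):
    return theta[0] * np.exp(theta[1] * x)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.1, 5.4, 15.2, 40.1])       # observed responses
theta = np.array([2.0, 1.0])               # a candidate parameter vector

e = y - f(x, theta)                        # e_i(theta) = y_i - f(x_i; theta)
sse = np.sum(e**2)                         # the sum of squares to be minimized
```

A least squares estimate is any $\theta$ at which `sse` attains its minimum.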
NonlinearRegression is based on MINPACK routines LMDIF and LMDER by Moré et al. (1980). NonlinearRegression uses a modified Levenberg-Marquardt method to generate a sequence of approximations to the solution. Let $\theta_c$ be the current estimate of $\theta$. A new estimate is given by $\theta_c + s_c$, where $s_c$ is a solution to

$$\left(J(\theta_c)^T J(\theta_c) + \mu_c I\right) s_c = J(\theta_c)^T e(\theta_c)$$

Here, $J(\theta_c)$ is the Jacobian of the residuals evaluated at $\theta_c$.
The algorithm uses a "trust region" approach with a step bound of $\delta_c$. A solution of the equations is first obtained for $\mu_c = 0$. If $\|s_c\|_2 < \delta_c$, this update is accepted; otherwise, $\mu_c$ is set to a positive value and another solution is obtained. The method is discussed by Levenberg (1944), Marquardt (1963), and Dennis and Schnabel (1983, pages 129-147, 218-338).
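The update rule above can be sketched in a few lines. This is only an illustration of the equations, not the MINPACK LMDIF/LMDER implementation (which uses a more careful trust-region and scaling strategy); the exponential model and all data here are invented:

```python
import numpy as np

# Illustrative model f(x; theta) = theta_1 * exp(theta_2 * x).
def f(x, theta):
    return theta[0] * np.exp(theta[1] * x)

def jac(x, theta):
    # Columns: df/dtheta_1 and df/dtheta_2 for each observation.
    return np.column_stack([np.exp(theta[1] * x),
                            theta[0] * x * np.exp(theta[1] * x)])

def lm_step(x, y, theta, delta):
    """Solve (J'J + mu*I) s = J'e with mu = 0 first; if ||s|| exceeds the
    step bound delta, raise mu and re-solve, as described in the text."""
    e = y - f(x, theta)
    J = jac(x, theta)
    mu = 0.0
    while True:
        s = np.linalg.solve(J.T @ J + mu * np.eye(len(theta)), J.T @ e)
        if np.linalg.norm(s) < delta:
            return theta + s
        mu = 10.0 * mu if mu > 0.0 else 1e-3   # inflate mu to shorten the step

x = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(0.8 * x)                  # noise-free synthetic data
theta = np.array([1.5, 0.7])               # starting estimate
for _ in range(25):
    theta = lm_step(x, y, theta, delta=2.0)
```

On this noise-free problem the iteration recovers the generating parameters $(2.0,\ 0.8)$.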
Forward finite differences are used to estimate the Jacobian numerically unless the user-supplied function computes the derivatives, in which case the Jacobian is computed analytically via the user-supplied function.
NonlinearRegression does not actually store the Jacobian but uses fast Givens transformations to construct an orthogonal reduction of the Jacobian to upper triangular form (see Golub and Van Loan 1983, pages 156-162; Gentleman 1974). This method has two main advantages:
- The loss of accuracy resulting from forming the crossproduct matrix $J(\theta_c)^T J(\theta_c)$ used in the equations for $s_c$ is avoided.
- The $n \times p$ Jacobian need not be stored, saving space when $n > p$.
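The row-streaming idea can be sketched with plain (unscaled) Givens rotations; the class itself uses the "fast", square-root-free variant, and the Jacobian and residual vector below are random stand-ins:

```python
import numpy as np

# Each Jacobian row is rotated into a p x p triangular factor R, so the full
# n x p Jacobian is never stored and the crossproduct J'J is never formed.
def givens_update(R, d, row, rhs):
    """Fold one Jacobian row (and its residual entry rhs) into R and d."""
    row = row.astype(float).copy()
    for j in range(len(row)):
        if row[j] != 0.0:
            r = np.hypot(R[j, j], row[j])
            c, s = R[j, j] / r, row[j] / r
            Rj = R[j, j:].copy()
            R[j, j:] = c * Rj + s * row[j:]    # rotate (R row, new row) pair
            row[j:] = c * row[j:] - s * Rj     # zero out row[j]
            dj = d[j]
            d[j] = c * dj + s * rhs            # same rotation on the RHS
            rhs = c * rhs - s * dj
    return rhs

rng = np.random.default_rng(0)
n, p = 100, 3
J = rng.normal(size=(n, p))                # stand-in Jacobian
e = rng.normal(size=n)                     # stand-in residual vector

R, d = np.zeros((p, p)), np.zeros(p)
for i in range(n):                         # rows arrive one at a time
    givens_update(R, d, J[i], e[i])

s = np.linalg.solve(R, d)                  # triangular solve for the step
```

The triangular solve reproduces the least squares solution of $\min_s \|J s - e\|_2$ while only ever holding $p(p+1)/2$ nonzeros of $R$.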
A weighted least squares fit can also be performed. This is appropriate when the variance of $\varepsilon_i$ in the nonlinear regression model is not constant but instead is $\sigma^2 / w_i$. Here, the $w_i$ are weights input via the user-supplied function. For the weighted case, NonlinearRegression finds the estimate by minimizing a weighted sum of squares error.
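The weighted criterion is equivalent to an ordinary fit of $\sqrt{w_i}$-scaled data; this sketch verifies that for a linear model with made-up weights (the linear case is used only so the answer can be checked in closed form):

```python
import numpy as np

# Minimizing sum_i w_i * e_i^2 equals ordinary least squares after scaling
# both the design matrix and the responses by sqrt(w_i).
rng = np.random.default_rng(1)
n = 50
x = np.linspace(1.0, 5.0, n)
w = 1.0 / x                                # hypothetical weights: Var = sigma^2 / w_i
X = np.column_stack([np.ones(n), x])
y = 3.0 + 2.0 * x + rng.normal(0.0, np.sqrt(x))   # heteroscedastic noise

sw = np.sqrt(w)
beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
```

`beta` matches the solution of the weighted normal equations $X^T W X \beta = X^T W y$.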
Nonlinear regression allows users to specify the model's functional form. This added flexibility can cause unexpected convergence problems for users who are unaware of the limitations of the algorithm, and in many cases the possible remedies are not immediately obvious. The following is a list of possible convergence problems and some remedies. No one-to-one correspondence exists between the problems and the remedies; a remedy for one problem may also be relevant for others.
- A local minimum is found. Try a different starting value. Good starting values can often be obtained by fitting simpler models. For example, for the nonlinear function $f(x;\,\theta) = \theta_1 x^{\theta_2}$, good starting values can be obtained from the estimated linear regression coefficients $\hat{\beta}_0$ and $\hat{\beta}_1$ from a simple linear regression of ln y on ln x. The starting values for the nonlinear regression in this case would be $\theta_1 = e^{\hat{\beta}_0}$ and $\theta_2 = \hat{\beta}_1$. If an approximate linear model is unclear, then simplify the model by reducing the number of nonlinear regression parameters. For example, some nonlinear parameters for which good starting values are known could be set to these values. This simplifies the approach to computing starting values for the remaining parameters.
- The estimate of $\theta$ is incorrectly returned as the same or very close to the initial estimate.
- The scale of the problem may be orders of magnitude smaller than the assumed default of 1, causing premature stopping. For example, if the sums of squares for error is less than approximately $\varepsilon^2$ (where $\varepsilon$ is the machine precision), the routine stops. See Example 3, which shows how to shut down some of the stopping criteria that may not be relevant for your particular problem and which also shows how to improve the speed of convergence by the input of the scale of the model parameters.
- The scale of the problem may be orders of magnitude larger than the assumed default causing premature stopping. The information with regard to the input of the scale of the model parameters in Example 3 is also relevant here. In addition, the maximum allowable step size MaxStepsize in Example 3 may need to be increased.
- The residuals are input with accuracy much less than machine accuracy, causing premature stopping because a local minimum is found. Again, see Example 3 for how to change some default tolerances. If you cannot improve the precision of the computation of the residuals, use method Digits to indicate the actual number of good digits in the residuals.
- The model is discontinuous as a function of $\theta$. There may be a mistake in the user-supplied function. Note that the function $f(x;\,\theta)$ can be a discontinuous function of $x$.
- The R matrix value given by R is inaccurate. If only a function is supplied, try providing the NonlinearRegression.IDerivative. If the derivative is supplied, try providing only NonlinearRegression.IFunction.
- Overflow occurs during the computations. Make sure the user-supplied functions do not overflow at some value of $\theta$.
- The estimate of $\theta$ is going to infinity. A parameterization of the problem in terms of reciprocals may help.
- Some components of $\theta$ are outside known bounds. This can sometimes be handled by making a function that produces artificially large residuals outside of the bounds (even though this introduces a discontinuity in the model function).
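The log-log starting-value remedy from the first item in the list above can be sketched numerically; the power-law data here are synthetic:

```python
import numpy as np

# Starting values for the power model f(x; theta) = theta_1 * x**theta_2
# via a simple linear regression of ln y on ln x.
rng = np.random.default_rng(2)
x = np.linspace(1.0, 10.0, 30)
y = 2.5 * x**1.7 * np.exp(rng.normal(0.0, 0.05, size=30))   # noisy power law

b1, b0 = np.polyfit(np.log(x), np.log(y), 1)   # ln y ~ b0 + b1 * ln x
theta1_start = np.exp(b0)                      # theta_1 = e^{b0}
theta2_start = b1                              # theta_2 = b1
```

The resulting pair lands close to the generating values $(2.5,\ 1.7)$, which is all a starting value needs to do.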
Note that the Solve(NonlinearRegression.IFunction) method must be called before using any property as a right operand; otherwise the value is null.
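The artificial-residual remedy for bound constraints (the last item in the list above) might look like the following; the bound [0, 5] on $\theta_2$, the power model, and the penalty magnitude are all hypothetical:

```python
import numpy as np

# Hypothetical residual function that keeps theta_2 inside [0, 5] by
# returning huge residuals outside the bounds; note this makes the
# model discontinuous at the boundary, as the text warns.
LOW, HIGH = 0.0, 5.0

def residuals(theta, x, y):
    if not (LOW <= theta[1] <= HIGH):
        return np.full_like(y, 1e6)        # artificially large residuals
    return y - theta[0] * x**theta[1]

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 8.0, 18.0])
inside = residuals(np.array([2.0, 2.0]), x, y)    # ordinary residuals
outside = residuals(np.array([2.0, 6.0]), x, y)   # penalized
```

Any minimizer of the sum of squared residuals is then pushed back inside the bounds, at the cost of the discontinuity noted in the text.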