
nonlinear_regression

Fits a multivariate nonlinear regression model.

Synopsis

#include <imsls.h>

float *imsls_f_nonlinear_regression (float fcn(), int n_parameters, int n_observations, int n_independent, float x[], float y[], ..., 0)

The type double function is imsls_d_nonlinear_regression.

Required Arguments

float fcn (int n_independent, float xi[], int n_parameters, float theta[])
User-supplied function to evaluate the function that defines the nonlinear regression problem, where xi is an array of length n_independent containing the point at which the function is evaluated and theta is an array of length n_parameters containing the current values of the regression coefficients. Function fcn returns the predicted value at the point xi. In the following, f(xi; θ), or just fi, denotes the value of this function at the point xi for a given value of θ. (Both xi and θ are arrays.)

int n_parameters   (Input)
Number of parameters to be estimated.

int n_observations   (Input)
Number of observations.

int n_independent   (Input)
Number of independent variables.

float x[]   (Input)
Array of size n_observations by n_independent containing the matrix of independent (explanatory) variables.

float y[]   (Input)
Array of length n_observations containing the dependent (response) variable.

Return Value

A pointer to an array of length n_parameters containing a solution, θ̂, for the nonlinear regression coefficients. To release this space, use imsls_free. If no solution can be computed, then NULL is returned.
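For instance, a minimal sketch of the basic call and cleanup; my_fcn and the data arrays are placeholders assumed to be declared by the caller and are not part of this documentation:

float *theta_hat;

/* Fit the model; the required arguments are the model function,
   the problem dimensions, and the data. */
theta_hat = imsls_f_nonlinear_regression(my_fcn, n_parameters,
    n_observations, n_independent, x, y, 0);

if (theta_hat != NULL) {
    /* ... use the estimated coefficients ... */
    imsls_free(theta_hat);          /* release the returned array */
}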

Synopsis with Optional Arguments

#include <imsls.h>

float *imsls_f_nonlinear_regression (float fcn(), int n_parameters, int n_observations, int n_independent, float x[], float y[],
IMSLS_THETA_GUESS, float theta_guess[],
IMSLS_JACOBIAN, void jacobian(),
IMSLS_THETA_SCALE, float theta_scale[],
IMSLS_GRADIENT_EPS, float gradient_eps,
IMSLS_STEP_EPS, float step_eps,
IMSLS_SSE_REL_EPS, float sse_rel_eps,
IMSLS_SSE_ABS_EPS, float sse_abs_eps,
IMSLS_MAX_STEP, float max_step,
IMSLS_INITIAL_TRUST_REGION, float trust_region,
IMSLS_GOOD_DIGIT, int ndigit,
IMSLS_MAX_ITERATIONS, int max_itn,
IMSLS_MAX_SSE_EVALUATIONS, int max_sse_eval,
IMSLS_MAX_JACOBIAN_EVALUATIONS, int max_jacobian,
IMSLS_TOLERANCE, float tolerance,
IMSLS_PREDICTED, float **predicted,
IMSLS_PREDICTED_USER, float predicted[],
IMSLS_RESIDUAL, float **residual,
IMSLS_RESIDUAL_USER, float residual[],
IMSLS_R, float **r,
IMSLS_R_USER, float r[],
IMSLS_R_COL_DIM, int r_col_dim,
IMSLS_R_RANK, int *rank,
IMSLS_X_COL_DIM, int x_col_dim,
IMSLS_DF, int *df,
IMSLS_SSE, float *sse,
IMSLS_VARIANCE_COVARIANCE_MATRIX, float **var_covar,
IMSLS_VARIANCE_COVARIANCE_MATRIX_USER, float var_covar[],
IMSLS_RETURN_USER, float theta_hat[],
IMSLS_FCN_W_DATA, float fcn(), void *data,
IMSLS_JACOBIAN_W_DATA, void jacobian(), void *data,
0)

Optional Arguments

IMSLS_THETA_GUESS, float theta_guess[]   (Input)
Array with n_parameters components containing an initial guess.
Default: theta_guess[] = 0.

IMSLS_JACOBIAN, void jacobian (int n_independent, float xi[], int n_parameters, float theta[], float fjac[])   (Input/Output)
User-supplied function to compute the i-th row of the Jacobian, where the n_independent data values corresponding to the i-th row are input in xi. Argument theta is an array of length n_parameters containing the regression coefficients for which the Jacobian is evaluated, and fjac is the computed row of length n_parameters of the Jacobian for observation i at theta. Note that each derivative ∂f(xi)/∂θj should be returned in fjac[j − 1] for j = 1, 2, ..., n_parameters.
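As an illustration (not tied to the examples below), a hedged sketch of a Jacobian function for the hypothetical three-parameter model f(x; θ) = θ1 + θ2x^θ3; note the 0-based indexing of fjac:

#include <math.h>

/* Hypothetical model: f(x;theta) = theta[0] + theta[1]*pow(x, theta[2]) */
void jacobian(int n_independent, float xi[], int n_parameters,
              float theta[], float fjac[])
{
    float xp = pow(xi[0], theta[2]);
    fjac[0] = 1.0;                            /* df/dtheta_1 */
    fjac[1] = xp;                             /* df/dtheta_2 */
    fjac[2] = theta[1] * xp * log(xi[0]);     /* df/dtheta_3 (x > 0 assumed) */
}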

IMSLS_THETA_SCALE, float theta_scale[]   (Input)
Array with n_parameters components containing the scaling array for θ. Array theta_scale is used mainly in scaling the gradient and the distance between two points. See keywords IMSLS_GRADIENT_EPS and IMSLS_STEP_EPS for more details.
Default: theta_scale[] = 1.

IMSLS_GRADIENT_EPS, float gradient_eps   (Input)
Scaled gradient tolerance. The j-th component of the scaled gradient at θ is calculated as

            |gj| * max{|θj|, 1.0/tj} / F(θ)

            where g = ∇F(θ), t = theta_scale, and

            F(θ) = Σi [yi − f(xi; θ)]²

            The value F(θ) is the sum of the squared residuals, SSE, at the point θ.

            Convergence is declared if gi * max{|theta(i)|, 1.0/scale(i)}/SSE is less than gradient_eps for i = 0, 1, 2, …, n_parameters, where gi is the i-th component of an internal intermediate gradient vector.
Default: gradient_eps = √ɛ (∛ɛ in double), where ɛ is the machine precision

IMSLS_STEP_EPS, float step_eps   (Input)
Scaled step tolerance. The j-th component of the scaled step from points θ and θʹ is computed as

            |θj − θʹj| / max{|θj|, 1.0/tj}

            where t = theta_scale

Convergence is declared if |gn+i| / max{|theta(i)|, 1.0/scale(i)} is less than step_eps for i = 0, 1, 2, …, n, where gn+i is the i-th component of the last step and n = n_parameters.

Default: step_eps = ɛ^(2/3), where ɛ is the machine precision

IMSLS_SSE_REL_EPS, float sse_rel_eps   (Input)
Relative SSE function tolerance.

     Convergence is declared if the change in SSE is less than or equal to sse_rel_eps * SSE in absolute value.

Default: sse_rel_eps = max(10^-10, ɛ^(2/3)) (max(10^-20, ɛ^(2/3)) in double), where ɛ is the machine precision

IMSLS_SSE_ABS_EPS, float sse_abs_eps   (Input)
Absolute SSE function tolerance.

Convergence is declared if SSE is less than sse_abs_eps.

Default: sse_abs_eps = max(10^-20, ɛ^2) (max(10^-40, ɛ^2) in double), where ɛ is the machine precision

IMSLS_MAX_STEP, float max_step   (Input)
Maximum allowable step size.


Default: max_step = 1000 * max(ɛ1, ɛ2), where ɛ1 = (t^T θ0)^(1/2), ɛ2 = ||t||2, t = theta_scale, and θ0 = theta_guess

IMSLS_INITIAL_TRUST_REGION, float trust_region   (Input)
Size of initial trust region radius. The default is based on the initial scaled Cauchy step.

IMSLS_GOOD_DIGIT, int ndigit   (Input)
Number of good digits in the function.
Default: machine dependent

IMSLS_MAX_ITERATIONS, int max_itn   (Input)
Maximum number of iterations.
Default: max_itn = 100

IMSLS_MAX_SSE_EVALUATIONS, int max_sse_eval   (Input)
Maximum number of SSE function evaluations.
Default: max_sse_eval = 400

IMSLS_MAX_JACOBIAN_EVALUATIONS, int max_jacobian   (Input)
Maximum number of Jacobian evaluations.
Default: max_jacobian = 400

IMSLS_TOLERANCE, float tolerance   (Input)
False convergence tolerance.
Default: tolerance = 100 * eps, where eps = imsls_f_machine(4) if single precision and eps = imsls_d_machine(4) if double precision

IMSLS_PREDICTED, float **predicted   (Output)
Address of a pointer to a real internally allocated array of length n_observations containing the predicted values at the approximate solution.

IMSLS_PREDICTED_USER, float predicted[]   (Output)
Storage for array predicted is provided by the user. See IMSLS_PREDICTED.

IMSLS_RESIDUAL, float **residual   (Output)
Address of a pointer to a real internally allocated array of length n_observations containing the residuals at the approximate solution.

IMSLS_RESIDUAL_USER, float residual[]   (Output)
Storage for array residual is provided by the user. See IMSLS_RESIDUAL.
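For example, a brief sketch of supplying user storage for both the predicted values and the residuals; N_OBS, fcn, and the data arrays are placeholders assumed to be defined by the caller:

float predicted[N_OBS], residual[N_OBS];

/* Both output arrays are filled in place at the approximate solution. */
theta_hat = imsls_f_nonlinear_regression(fcn, n_parameters,
    N_OBS, n_independent, x, y,
    IMSLS_PREDICTED_USER, predicted,
    IMSLS_RESIDUAL_USER, residual,
    0);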

IMSLS_R, float **r   (Output)
Address of a pointer to an internally allocated array of size  n_parameters × n_parameters containing the R matrix from a QR decomposition of the Jacobian.

IMSLS_R_USER, float r[]   (Output)
Storage for array r is provided by the user. See IMSLS_R.

IMSLS_R_COL_DIM, int r_col_dim   (Input)
Column dimension of array r.
Default: r_col_dim = n_parameters

IMSLS_R_RANK, int *rank   (Output)
Rank of r. Argument rank less than n_parameters may indicate the model is overparameterized.

IMSLS_X_COL_DIM, int x_col_dim   (Input)
Column dimension of x.
Default: x_col_dim = n_independent

IMSLS_DF, int *df   (Output)
Degrees of freedom.

IMSLS_SSE, float *sse   (Output)
Residual sum of squares.

IMSLS_RETURN_USER, float theta_hat[]   (Output)
User-allocated array of length n_parameters containing the estimated regression coefficients.

IMSLS_VARIANCE_COVARIANCE_MATRIX, float **var_covar   (Output)
Address of a pointer to an internally allocated array of size n_parameters × n_parameters containing the variance/covariance matrix of the coefficients.

IMSLS_VARIANCE_COVARIANCE_MATRIX_USER, float var_covar[]   (Output)
Storage for array var_covar is provided by the user. See IMSLS_VARIANCE_COVARIANCE_MATRIX.

IMSLS_FCN_W_DATA, float fcn (int n_independent, float xi[], int n_parameters, float theta[]), void *data, (Input)
User-supplied function to evaluate the function that defines the nonlinear regression problem, which also accepts a pointer to data that is supplied by the user. data is a pointer to the data to be passed to the user-supplied function. See the Introduction, “Passing Data to User-Supplied Functions,” at the beginning of this manual for more details.
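A sketch of this pattern is shown below. It assumes, per the library's usual _W_DATA convention, that the callback receives the data pointer as a trailing void * argument; the model and variable names are illustrative only:

#include <math.h>

/* Hypothetical callback for IMSLS_FCN_W_DATA; the trailing void *data
   parameter is an assumption based on the _W_DATA convention. */
float fcn_w_data(int n_independent, float xi[], int n_parameters,
                 float theta[], void *data)
{
    float upper = *(float *) data;      /* value supplied by the caller */
    return theta[0] + (upper - theta[0]) * exp(theta[1] * xi[0]);
}

/* In the call, the callback and its data pointer are supplied as the
   optional-argument pair:
       ..., IMSLS_FCN_W_DATA, fcn_w_data, &upper, 0);                 */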

IMSLS_JACOBIAN_W_DATA, void jacobian (int n_independent, float xi[], int n_parameters, float theta[], float fjac[]), void *data, (Input)
User-supplied function to compute the i-th row of the Jacobian, which also accepts a pointer to data that is supplied by the user. data is a pointer to the data to be passed to the user-supplied function. See the Introduction, “Passing Data to User-Supplied Functions,” at the beginning of this manual for more details.

Description

Function imsls_f_nonlinear_regression fits a nonlinear regression model using least squares. The nonlinear regression model is

yi = f(xi; θ) + ɛi                      i = 1, 2, ..., n

where the observed values of the yi’s constitute the responses or values of the dependent variable, the known xi’s are the vectors of the values of the independent (explanatory) variables, θ is the vector of p regression parameters, and the ɛi’s are independently distributed normal errors with mean 0 and variance σ2. For this model, a least-squares estimate of θ is also a maximum likelihood estimate of θ.

The residuals for the model are as follows:

ei(θ) = yi − f(xi; θ)                      i = 1, 2, ..., n

A value of θ that minimizes the sum of squared residuals,

Σi [ei(θ)]²

is a least-squares estimate of θ. Function imsls_f_nonlinear_regression is designed so that the values of the function f(xi; θ) are computed one at a time by a user-supplied function.

Function imsls_f_nonlinear_regression is based on MINPACK routines LMDIF and LMDER by Moré et al. (1980) that use a modified Levenberg-Marquardt method to generate a sequence of approximations to a minimum point. Let θ̂c be the current estimate of θ. A new estimate is given by θ̂c + sc, where sc is a solution to the following:

(J(θ̂c)ᵀJ(θ̂c) + μcI) sc = J(θ̂c)ᵀ e(θ̂c)

Here, J(θ̂c) is the Jacobian evaluated at θ̂c.

The algorithm uses a “trust region” approach with a step bound of δc. A solution of the equations is first obtained for μc = 0. If ||sc||2 < δc, this update is accepted; otherwise, μc is set to a positive value and another solution is obtained. The method is discussed by Levenberg (1944), Marquardt (1963), and Dennis and Schnabel (1983, pp. 129–147, 218–338).

If a user-supplied function is specified in IMSLS_JACOBIAN, the Jacobian is computed analytically; otherwise, forward finite differences are used to estimate the Jacobian numerically. In the latter case, especially if type float is used, the estimate of the Jacobian may be so poor that the algorithm terminates at a noncritical point. In such instances, the user should either supply a Jacobian function, use type double, or do both.
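For example, a sketch of switching to the type double variant noted earlier; my_fcn is a placeholder, and x, y, and the problem dimensions are assumed to be declared elsewhere with x and y of type double:

#include <imsls.h>

/* Type double model function: same argument pattern as the float version. */
double my_fcn(int n_independent, double xi[], int n_parameters,
              double theta[]);

double *theta_hat;

theta_hat = imsls_d_nonlinear_regression(my_fcn, n_parameters,
    n_observations, n_independent, x, y, 0);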

The first stopping criterion for imsls_f_nonlinear_regression occurs when SSE is less than the absolute function tolerance. The second stopping criterion occurs when the norm of the scaled gradient is less than the given gradient tolerance. The third stopping criterion occurs when the scaled distance between the last two steps is less than the step tolerance. The third stopping criterion also generates error IMSLS_LITTLE_FCN_CHANGE. The fourth stopping criterion occurs when the relative change in SSE is less than sse_rel_eps. The fourth stopping criterion also generates error code IMSLS_FALSE_CONVERGENCE. See Dennis and Schnabel (1983, pages 159−161, 278−280, and 347−348) for a discussion of stopping criteria and choice of tolerances.

On some platforms, imsls_f_nonlinear_regression can evaluate the user-supplied functions fcn and jacobian in parallel. This is done only if the function imsls_omp_options is called to flag user-defined functions as thread-safe. A function is thread-safe if there are no dependencies between calls. Such dependencies are usually the result of writing to global or static variables.
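For example, the flag is set with a single call before the fit (as in Example 1 below):

/* Declare the user-supplied functions thread-safe so fcn (and jacobian,
   if supplied) may be evaluated in parallel. */
imsls_omp_options(IMSLS_SET_FUNCTIONS_THREAD_SAFE, 1, 0);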

Programming Notes

Nonlinear regression allows substantial flexibility over linear regression because the user can specify the functional form of the model. This added flexibility can cause unexpected convergence problems for users who are unaware of the limitations of the software. Also, in many cases there are possible remedies that may not be immediately obvious. The following is a list of possible convergence problems and some remedies. There is not a one-to-one correspondence between the problems and the remedies; remedies for some problems may also be relevant for other problems.

 

1.     A local minimum is found. Try a different starting value. Good starting values often can be obtained by fitting simpler models. For example, for the nonlinear function

f(x; θ) = θ1x^θ2

good starting values can be obtained from the estimated linear regression coefficients β̂0 and β̂1 from a simple linear regression of ln y on ln x. The starting values for the nonlinear regression in this case would be

θ1 = e^β̂0 and θ2 = β̂1

(A brief code sketch of this approach follows the list below.) If an approximate linear model is not clear, then simplify the model by reducing the number of nonlinear regression parameters. For example, some nonlinear parameters for which good starting values are known could be set to these values in order to simplify the model for computing starting values for the remaining parameters.

2.     The estimate of θ is incorrectly returned as the same or very close to the initial estimate. This occurs often because of poor scaling of the problem, which might result in the residual sum of squares being either very large or very small relative to the precision of the computer. The optional arguments allow control of the scaling.

3.     The model is discontinuous as a function of θ. (The function f(x;θ) can be a discontinuous function of x.)

4.     Overflow occurs during the computations. Make sure the user-supplied functions do not overflow at some value of θ.

5.     The estimate of θ is going to infinity. A parameterization of the problem in terms of reciprocals may help.

6.     Some components of θ are outside known bounds. This can sometimes be handled by making a function that produces artificially large residuals outside of the bounds (even though this introduces a discontinuity in the model function).
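As referenced in problem 1, a brief sketch of obtaining starting values for the power model f(x; θ) = θ1x^θ2 by a log-log simple linear regression. The data here are illustrative only; the snippet assumes positive x and y values and uses imsls_f_regression, which returns the intercept and slope:

#include <imsls.h>
#include <math.h>

#define N 4                               /* number of observations (placeholder) */

float x[N] = {1.0, 2.0, 3.0, 4.0};        /* illustrative data only */
float y[N] = {2.1, 3.9, 6.2, 7.8};
float lnx[N], lny[N], theta_guess[2], *beta;
int i;

for (i = 0; i < N; i++) {                 /* x[i] > 0 and y[i] > 0 required */
    lnx[i] = log(x[i]);
    lny[i] = log(y[i]);
}

/* Simple linear regression of ln y on ln x: ln y = beta_0 + beta_1 * ln x */
beta = imsls_f_regression(N, 1, lnx, lny, 0);

theta_guess[0] = exp(beta[0]);            /* theta_1 = e^(beta_0) */
theta_guess[1] = beta[1];                 /* theta_2 = beta_1 */
imsls_free(beta);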

Examples

Example 1

In this example (Draper and Smith 1981, p. 518), the following nonlinear model is fit:

y = θ1 + (0.49 − θ1)e^(θ2(x − 8)) + ɛ

#include <math.h>
#include <imsls.h>

float fcn(int, float[], int, float[]);

int main ()
{
#define N_OBSERVATIONS 4
    int         n_independent  = 1;
    int         n_parameters   = 2;
    float       *theta_hat;
    float       x[N_OBSERVATIONS][1] = {10.0, 20.0, 30.0, 40.0};
    float       y[N_OBSERVATIONS] = {0.48, 0.42, 0.40, 0.39};

    imsls_omp_options(IMSLS_SET_FUNCTIONS_THREAD_SAFE, 1, 0);

                                /* Nonlinear regression */
    theta_hat = imsls_f_nonlinear_regression(fcn, n_parameters,
        N_OBSERVATIONS, n_independent, (float *)x, y, 0);

                                /* Print estimates */
    imsls_f_write_matrix("estimated coefficients", 1, n_parameters,
        theta_hat, 0);
}                               /* End of main */

float fcn(int n_independent, float x[], int n_parameters, float theta[])
{
    return (theta[0] + (0.49 - theta[0])*exp(theta[1]*(x[0] - 8)));
}                               /* End of fcn */

Output

estimated coefficients
         1           2
    0.3807     -0.0794

Example 2

Consider the nonlinear regression model and data set discussed by Neter et al. (1983, pp. 475–478):

yi = θ1e^(θ2xi) + ɛi                      i = 1, 2, ..., 15

There are two parameters and one independent variable. The data set considered consists of 15 observations.

 

#include <imsls.h>
#include <imsl.h>
#include <stdio.h>
#include <math.h>

static float fcn(int, float[], int, float[]);
static void jacobian(int, float[], int, float[], float[]);

int main()
{
   int df;
   int n_independent = 1;
   int n_parameters  = 2;
   int n_obs         = 15;
   float *theta_hat, *r, *y_hat, *var_covar;
   float grad_eps = 1.0e-9;
   float theta_guess[2] = {60.0, -0.03};
   float a, dfe, normalValue;
   float y[15] =
   {
      54.0, 50.0, 45.0, 37.0, 35.0,
      25.0, 20.0, 16.0, 18.0, 13.0,
      8.0, 11.0, 8.0, 4.0, 6.0
   };
   float x[15] =
   {
      2.0, 5.0, 7.0, 10.0, 14.0,
      19.0, 26.0, 31.0, 34.0, 38.0,
      45.0, 52.0, 53.0, 60.0, 65.0
   };
   char *fmt = "%15.3f";
   char *dashes =
      "-------------------------------------------------------------";

   /* Nonlinear regression */
   theta_hat = imsls_f_nonlinear_regression(fcn, n_parameters,
      n_obs, n_independent, x, y,
      IMSLS_THETA_GUESS, theta_guess,
      IMSLS_GRADIENT_EPS, grad_eps,
      IMSLS_DF, &df,
      IMSLS_R, &r,
      IMSLS_PREDICTED, &y_hat,
      IMSLS_VARIANCE_COVARIANCE_MATRIX, &var_covar,
      IMSLS_JACOBIAN, jacobian, 0);

   /* Print results */
   /* Calculate and print coefficients & their 95% confidence limits */
   printf(" \n                   ESTIMATED COEFFICIENTS \n");
   printf("%s\n", dashes);
   printf(" Coefficient | Lower 95%% Limit |");
   printf(" Estimate | Upper 95%% Limit \n");
   dfe = (float) df;
   normalValue = imsls_f_t_inverse_cdf(0.975, dfe);
   a = normalValue * sqrt(var_covar[0]);
   printf("     Theta_1 | %10.3f      | %7.3f  | %12.3f\n",
         theta_hat[0] - a, theta_hat[0], theta_hat[0] + a);
   a = normalValue * sqrt(var_covar[3]);
   printf("     Theta_2 | %10.3f      | %7.3f  | %12.3f\n",
         theta_hat[1] - a, theta_hat[1], theta_hat[1] + a);
   printf("%s\n", dashes);
   imsls_f_write_matrix("Var/Covar matrix", n_parameters, n_parameters,
      var_covar, IMSLS_WRITE_FORMAT, fmt, 0);
   imsls_f_write_matrix("Predicted values", 1, n_obs, y_hat,
      IMSLS_WRITE_FORMAT, "%7.2f", 0);

   return 0;
}

static float fcn(int n_independent, float x[],
                 int n_parameters, float theta[])
{
   return (theta[0]*exp(x[0]*theta[1]));
} /* End of fcn */

static void jacobian(int n_independent, float x[],
                     int n_parameters,  float theta[],
                     float fjac[])
{
   fjac[0] = exp(theta[1]*x[0]);
   fjac[1] = theta[0]*x[0]*exp(theta[1]*x[0]);
} /* End of jacobian */

Output

 

                  ESTIMATED COEFFICIENTS
-------------------------------------------------------------
 Coefficient | Lower 95% Limit | Estimate | Upper 95% Limit
     Theta_1 |     55.426      |  58.607  |       61.787
     Theta_2 |     -0.043      |  -0.040  |       -0.036
-------------------------------------------------------------

         Var/Covar matrix
                 1                2
1            2.167           -0.002
2           -0.002            0.000

                           Predicted values
      1        2        3        4        5        6        7        8
  54.15    48.08    44.42    39.45    33.67    27.62    20.94    17.18

      9       10       11       12       13       14       15
  15.26    13.02     9.87     7.48     7.19     5.45     4.40

Informational Errors

IMSLS_STEP_TOLERANCE   Scaled step tolerance satisfied. The current point may be an approximate local solution, but it is also possible that the algorithm is making very slow progress and is not near a solution or that “step_eps” is too big.

Warning Errors

IMSLS_LITTLE_FCN_CHANGE   Both the actual and predicted relative reductions in the function are less than or equal to the relative function tolerance.

IMSLS_TOO_MANY_ITN   Maximum number of iterations exceeded.

IMSLS_TOO_MANY_JACOBIAN_EVAL   Maximum number of Jacobian evaluations exceeded.

IMSLS_UNBOUNDED   Five consecutive steps have been taken with the maximum step length.

IMSLS_FALSE_CONVERGENCE   The iterates appear to be converging to a noncritical point.

Fatal Errors

IMSLS_TOO_MANY_FCN_EVAL   Maximum number of function evaluations exceeded.

