Usage Notes

Unconstrained Minimization

The unconstrained minimization problem can be stated as follows:

\[\min_{x \in R^n} f(x)\]

where \(f : R^n \rightarrow R\) is continuous and has derivatives of all orders required by the algorithms. The functions for unconstrained minimization are grouped into three categories: univariate functions, multivariate functions, and nonlinear least-squares functions.

For the univariate functions, it is assumed that the function is unimodal within the specified interval. For discussion on unimodality, see Brent (1973).
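Unimodality on the interval guarantees that a bracketing method can shrink the interval without losing the minimizer. As a minimal sketch of this idea (not the algorithm the univariate functions actually implement; see Brent (1973) for the methods they are based on), a plain golden-section search looks like:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Minimize a unimodal f on [a, b] by golden-section search.

    Illustrative sketch only; production univariate minimizers
    (e.g. Brent's method) also use parabolic interpolation, and
    this version re-evaluates f at cached points for clarity.
    """
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):      # minimizer lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                # minimizer lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Example: the minimum of (x - 2)^2 on [0, 5] is at x = 2.
x_min = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

Because unimodality is assumed rather than checked, a function with several minima on the interval may cause any such bracketing method to return the wrong one.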

A quasi-Newton method is used for the multivariate function minUnconMultivar. The default is to use a finite-difference approximation of the gradient of \(f(x)\). Here, the gradient is defined to be the vector

\[\nabla f(x) = \left[ \frac{\partial f(x)}{\partial x_1}, \frac{\partial f(x)}{\partial x_2}, \ldots, \frac{\partial f(x)}{\partial x_n} \right]\]

However, when the exact gradient can be easily provided, the keyword grad should be used.
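The default gradient approximation can be sketched as a forward-difference formula; the differencing scheme and step-size selection actually used by the library may differ:

```python
import math

def fd_gradient(f, x, eps=1e-7):
    """Forward-difference approximation of the gradient of f at x.

    Illustrative only; production codes choose the step more
    carefully (typically scaled by the square root of machine
    epsilon) and may use central differences near a solution.
    """
    n = len(x)
    f0 = f(x)
    g = [0.0] * n
    for i in range(n):
        xi = list(x)
        h = eps * max(1.0, abs(x[i]))  # scale step to the variable
        xi[i] += h
        g[i] = (f(xi) - f0) / h
    return g

# The Rosenbrock function; its gradient vanishes at (1, 1).
def rosen(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

g = fd_gradient(rosen, [1.0, 1.0])
```

Each forward difference costs one extra function evaluation per variable, which is why supplying the exact gradient, when it is available, is usually both faster and more accurate.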

The nonlinear least-squares function uses a modified Levenberg-Marquardt algorithm. The most common application of the function is the nonlinear data-fitting problem where the user is trying to fit the data with a nonlinear model.
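In the data-fitting setting, the objective is the sum of squared residuals \(r_i(\theta) = y_i - m(x_i; \theta)\) for a model \(m\). The following bare-bones Levenberg-Marquardt loop for a hypothetical two-parameter exponential model illustrates the damped normal-equations step; it omits the safeguards of the modified algorithm the function uses:

```python
import math

def model(x, a, b):
    return a * math.exp(b * x)

def cost(xs, ys, a, b):
    return sum((y - model(x, a, b)) ** 2 for x, y in zip(xs, ys))

def lm_fit(xs, ys, a=1.0, b=0.0, lam=1e-2, iters=200):
    """Fit y ~ a * exp(b * x) with a minimal Levenberg-Marquardt loop.

    Illustrative sketch only: a real implementation uses robust
    linear algebra and more sophisticated damping control.
    """
    c = cost(xs, ys, a, b)
    for _ in range(iters):
        # Residuals and Jacobian of r_i = y_i - a*exp(b*x_i).
        r = [y - model(x, a, b) for x, y in zip(xs, ys)]
        Ja = [-math.exp(b * x) for x in xs]           # dr_i/da
        Jb = [-a * x * math.exp(b * x) for x in xs]   # dr_i/db
        # Damped normal equations (J^T J + lam*I) delta = -J^T r,
        # solved directly for the 2x2 case by Cramer's rule.
        A11 = sum(j * j for j in Ja) + lam
        A22 = sum(j * j for j in Jb) + lam
        A12 = sum(p * q for p, q in zip(Ja, Jb))
        g1 = -sum(j * ri for j, ri in zip(Ja, r))
        g2 = -sum(j * ri for j, ri in zip(Jb, r))
        det = A11 * A22 - A12 * A12
        da = (A22 * g1 - A12 * g2) / det
        db = (A11 * g2 - A12 * g1) / det
        c_new = cost(xs, ys, a + da, b + db)
        if c_new < c:          # accept the step, relax damping
            a, b, c = a + da, b + db, c_new
            lam *= 0.5
        else:                  # reject the step, increase damping
            lam *= 10.0
    return a, b

# Noise-free synthetic data generated from a = 2, b = 0.5.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]
a_fit, b_fit = lm_fit(xs, ys)
```

The damping parameter interpolates between a Gauss-Newton step (small lam) and a short steepest-descent step (large lam), which is what makes the method robust far from the solution.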

These functions are designed to find only a local minimum point, and a function may have many local minima. To improve the chance of locating the desired solution, try several different initial points (or, for the univariate functions, intervals).
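The dependence on the starting point is easy to see even in one dimension. With a function that has two local minima, a descent method converges to whichever minimum's basin contains the initial point (plain gradient descent is used here purely for illustration):

```python
def grad_descent(f_prime, x, step=0.01, iters=2000):
    """Plain gradient descent on a one-dimensional function;
    it converges to the local minimum whose basin contains x."""
    for _ in range(iters):
        x -= step * f_prime(x)
    return x

# f(x) = (x^2 - 1)^2 has two local minima, at x = -1 and x = +1.
fp = lambda x: 4.0 * x * (x * x - 1.0)   # derivative of f
left = grad_descent(fp, -2.0)    # starts left, finds x = -1
right = grad_descent(fp, 2.0)    # starts right, finds x = +1
```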

Double-precision arithmetic is recommended for the functions when the user provides only the function values.

Linearly Constrained Minimization

The linearly constrained minimization problem can be stated as follows:

\[\begin{split}\begin{array}{l} \min\limits_{x \in R^n} f(x) \\ \text{subject to } \begin{array}{l} A_1x = b_1 \\ A_2x \leq b_2 \\ \end{array} \end{array}\end{split}\]

where \(f : R^n\rightarrow R\), \(A_1\) and \(A_2\) are coefficient matrices, and \(b_1\) and \(b_2\) are vectors. If \(f(x)\) is linear, then the problem is a linear programming problem. If \(f(x)\) is quadratic, the problem is a quadratic programming problem.

The function linearProgramming uses an active set strategy to solve linear programming problems and is intended as a replacement for the function linProg. The two functions have similar interfaces, which should ease migration from linProg to linearProgramming. In general, linearProgramming can be expected to perform more efficiently than linProg. Both functions are intended for small- to medium-sized linear programming problems; no sparsity is assumed, since the coefficients are stored in full matrix form.

Function sparseLinProg uses an infeasible primal-dual interior-point method to solve sparse linear programming problems of all sizes. The constraint matrix is stored in sparse coordinate storage format.
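Sparse coordinate storage keeps only the nonzero entries of a matrix as (row, column, value) triples. The small illustration below shows the idea with a matrix-vector product; the index base and exact argument layout expected by sparseLinProg are not shown here and should be taken from that function's reference documentation:

```python
def coo_matvec(rows, cols, vals, x, m):
    """Multiply an m-row sparse matrix, stored as coordinate
    (row, col, value) triples, by a dense vector x."""
    y = [0.0] * m
    for r, c, v in zip(rows, cols, vals):
        y[r] += v * x[c]
    return y

# The 3x3 matrix [[2, 0, 0], [0, 0, 3], [0, 4, 0]] has only
# three nonzeros, so only three triples need to be stored.
rows = [0, 1, 2]
cols = [0, 2, 1]
vals = [2.0, 3.0, 4.0]
y = coo_matvec(rows, cols, vals, [1.0, 1.0, 1.0], 3)
```

For large constraint matrices that are mostly zero, this format reduces both memory use and the cost of the matrix operations inside the interior-point iterations.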

The function quadraticProg is designed to solve convex quadratic programming problems using a dual quadratic programming algorithm. If the given Hessian is not positive definite, then quadraticProg modifies it to be positive definite. In this case, output should be interpreted with care because the problem has been changed slightly. Here, the Hessian of \(f(x)\) is defined to be the \(n \times n\) matrix

\[\nabla^2 f(x) = \left[\frac{\partial^2}{\partial x_i \partial x_j} f(x)\right]\]
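One common way to modify an indefinite Hessian is to add a multiple of the identity until a Cholesky factorization succeeds. The sketch below shows that idea; it is not necessarily the modification quadraticProg applies internally:

```python
import math

def cholesky(A):
    """Attempt a Cholesky factorization A = L L^T.
    Returns L, or None if A is not positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = A[i][i] - s
                if d <= 0.0:           # pivot fails: not PD
                    return None
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def make_positive_definite(A, tau=1e-3):
    """Shift A by a growing multiple of the identity until the
    Cholesky factorization succeeds (one simple strategy only)."""
    n = len(A)
    shift = 0.0
    def shifted(s):
        return [[A[i][j] + (s if i == j else 0.0)
                 for j in range(n)] for i in range(n)]
    while cholesky(shifted(shift)) is None:
        shift = max(tau, 2.0 * shift)  # start at tau, then double
    return shifted(shift)

# An indefinite Hessian: eigenvalues +1 and -1.
H = [[0.0, 1.0], [1.0, 0.0]]
H_pd = make_positive_definite(H)
```

The caveat in the text applies equally here: the shifted matrix defines a slightly different quadratic program, so the computed solution solves the modified problem, not the original one.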

Function sparseQuadraticProg uses an infeasible primal-dual interior-point method to solve sparse convex quadratic programming problems of all sizes. The constraint matrix and the Hessian are stored in sparse coordinate storage format.

Nonlinearly Constrained Minimization

The nonlinearly constrained minimization problem can be stated as follows:

\[\begin{split}\begin{array}{l} \min\limits_{x \in R^n} f(x) \\ \text{subject to } g_i(x) = 0 \text{ for } i = 1, 2, \ldots, m_1 \\ g_i(x) \geq 0 \text{ for } i = m_1 + 1, \ldots, m \end{array}\end{split}\]

where \(f : R^n\rightarrow R\) and \(g_i : R^n\rightarrow R\), for \(i = 1, 2, \ldots, m\).
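In this formulation the first \(m_1\) constraints are equalities and the remaining \(m - m_1\) are inequalities. The small helper below (a hypothetical illustration, not part of the library API) makes that split concrete:

```python
def feasible(x, gs, m1, tol=1e-8):
    """Check x against constraint functions gs: the first m1 must
    be zero (equalities), the rest nonnegative (inequalities)."""
    for i, g in enumerate(gs):
        gi = g(x)
        if i < m1:
            if abs(gi) > tol:     # equality violated
                return False
        elif gi < -tol:           # inequality violated
            return False
    return True

# Example: points on the unit circle (equality) with x[0] >= 0.
gs = [lambda x: x[0] ** 2 + x[1] ** 2 - 1.0,  # g_1(x) = 0
      lambda x: x[0]]                          # g_2(x) >= 0
m1 = 1
on_circle = feasible([1.0, 0.0], gs, m1)    # satisfies both
off_circle = feasible([2.0, 0.0], gs, m1)   # violates g_1
```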

The function constrainedNlp uses a sequential equality constrained quadratic programming algorithm to solve this problem. A more complete discussion of this algorithm can be found in the documentation.

Return Values from User-Supplied Functions

All values returned by user-supplied functions must be valid real numbers. It is the user’s responsibility to ensure that the values returned by a user-supplied function are never NaN, positive infinity, or negative infinity.
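One way to meet this requirement is to wrap the user-supplied function so that a non-finite return value is caught at the point where it occurs, rather than silently corrupting the solver's iterates. The wrapper below is a hypothetical helper, not part of the library:

```python
import math

def checked(f):
    """Wrap a user-supplied objective so that any NaN or infinite
    return value raises immediately with the offending argument."""
    def wrapper(x):
        val = f(x)
        if not math.isfinite(val):
            raise ValueError(
                f"objective returned non-finite value {val!r} at {x!r}")
        return val
    return wrapper

# A model objective that goes non-finite for negative arguments.
safe = checked(lambda x: float("nan") if x < 0 else x * x)
ok = safe(3.0)            # finite value passes through unchanged
try:
    safe(-1.0)            # NaN result is intercepted
    caught = False
except ValueError:
    caught = True
```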