IMSL C# Numerical Library

KalmanFilter Class

Performs Kalman filtering and evaluates the likelihood function for the state-space model.

For a list of all members of this type, see KalmanFilter Members.

System.Object
   Imsl.Stat.KalmanFilter

public class KalmanFilter

Thread Safety

Public static (Shared in Visual Basic) members of this type are safe for multithreaded operations. Instance members are not guaranteed to be thread-safe.

Remarks

Class KalmanFilter is based on a recursive algorithm given by Kalman (1960), which has come to be known as the Kalman filter. The underlying model is known as the state-space model. The model is specified stage by stage, where the stages generally correspond to time points at which the observations become available. KalmanFilter avoids many of the computations and storage requirements that would be necessary if one were to process all the data at the end of each stage in order to estimate the state vector. This is accomplished by using previous computations and retaining in storage only those items essential for processing of future observations.

The notation used here follows that of Sallas and Harville (1981). Let y_k (input in y using method Update) be the n_k \times 1 vector of observations that become available at time k. The subscript k is used here rather than t, which is more customary in time series, to emphasize that the model is expressed in stages k = 1, 2, \ldots and that these stages need not correspond to equally spaced time points. In fact, they need not correspond to time points of any kind. The observation equation for the state-space model is

y_k = Z_kb_k + e_k\,\,\,\,\,k = 1, 2, \ldots

Here, Z_k (input in z using method Update) is an n_k \times q known matrix and b_k is the q \times 1 state vector. The state vector b_k is allowed to change with time in accordance with the state equation

b_{k+1} = T_{k+1}b_k + w_{k+1} \,\,\,\,\, 
            k = 1, 2, \ldots

starting with b_1 = \mu_1 + w_1.

The change in the state vector from time k to k + 1 is explained in part by the transition matrix T_{k+1} (the identity matrix by default, or set optionally using method SetTransitionMatrix), which is assumed known. It is assumed that the q-dimensional w_ks (k = 1, 2, \ldots) are independently distributed multivariate normal with mean vector 0 and variance-covariance matrix \sigma^2Q_k, that the n_k-dimensional e_ks (k = 1, 2, \ldots) are independently distributed multivariate normal with mean vector 0 and variance-covariance matrix \sigma^2R_k, and that the w_ks and e_ks are independent of each other. Here, \mu_1 is the mean of b_1 and is assumed known, and \sigma^2 is an unknown positive scalar. Q_{k+1} (input in Q) and R_k (input in R) are assumed known.
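As a concrete illustration of the model, the following pure-Python sketch simulates a scalar special case (q = 1, n_k = 1, \sigma^2 = 1) with Z, T, Q, and R held constant over stages. The function name and defaults are illustrative only, not part of the IMSL API.

```python
import random

def simulate(m, mu1=0.0, Z=1.0, T=0.9, Q=0.5, R=0.25, seed=7):
    """Simulate m observations from a scalar state-space model."""
    rng = random.Random(seed)
    b = mu1 + rng.gauss(0.0, Q ** 0.5)   # b_1 = mu_1 + w_1, w_1 ~ N(0, Q)
    ys = []
    for _ in range(m):
        ys.append(Z * b + rng.gauss(0.0, R ** 0.5))   # y_k = Z b_k + e_k
        b = T * b + rng.gauss(0.0, Q ** 0.5)          # b_{k+1} = T b_k + w_{k+1}
    return ys

ys = simulate(5)
print(len(ys))   # 5 observations, one per stage
```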

Denote the estimator of the realization of the state vector b_k given the observations y_1, y_2, \ldots, y_j by

\hat \beta _{k|j}

By definition, the mean squared error matrix for

\hat \beta _{k|j}

is

\sigma^2 C_{k|j} = E(\hat \beta _{k|j} - b_k)(\hat \beta _{k|j} - b_k)^T

At the time of the k-th invocation, we have

\hat \beta _{k|k-1}

and

C_{k|k-1}, which were computed from the (k-1)-st invocation and are input in b and covb, respectively. During the k-th invocation, KalmanFilter computes the filtered estimate

\hat \beta _{k|k}

along with C_{k|k}. These quantities are given by the update equations:

\hat \beta _{k|k} = \hat \beta _{k|k-1} + C_{k|k-1} Z_k^T H_k^{-1} v_k

C_{k|k} = C_{k|k-1} - C_{k|k-1} Z_k^T H_k^{-1} Z_k C_{k|k-1}

where

v_k = y_k - Z_k \hat \beta _{k|k-1}

and where

H_k = R_k + Z_k C_{k|k-1} Z_k^T
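In the scalar case (q = n_k = 1), the update equations reduce to ordinary arithmetic, since H_k^{-1} is plain division. A minimal pure-Python sketch (illustrative only, not the IMSL API):

```python
def update(b_pred, C_pred, y, Z, R):
    """One scalar update step: from beta_{k|k-1}, C_{k|k-1} to beta_{k|k}, C_{k|k}."""
    v = y - Z * b_pred                   # one-step-ahead prediction error v_k
    H = R + Z * C_pred * Z               # H_k = R_k + Z_k C_{k|k-1} Z_k^T
    gain = C_pred * Z / H                # C_{k|k-1} Z_k^T H_k^{-1}
    b_filt = b_pred + gain * v           # filtered estimate beta_{k|k}
    C_filt = C_pred - gain * Z * C_pred  # C_{k|k}
    return b_filt, C_filt, v, H

print(update(b_pred=0.0, C_pred=1.0, y=2.0, Z=1.0, R=1.0))
# (1.0, 0.5, 2.0, 2.0)
```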

Here, v_k (stored in v) is the one-step-ahead prediction error, and \sigma^2 H_k is the variance-covariance matrix of v_k. H_k is stored in covv. The "start-up values" needed on the first invocation of KalmanFilter are

\hat \beta _{1|0} = \mu_1

and C_{1|0} = Q_1, input via b and covb, respectively. Computations for the k-th invocation are completed by KalmanFilter computing the one-step-ahead estimate

\hat \beta _{k+1|k}

along with C_{k+1|k}, given by the prediction equations:

\hat \beta _{k+1|k} = T_{k+1} \hat \beta _{k|k}

C_{k+1|k} = T_{k+1} C_{k|k} T_{k+1}^T + Q_{k+1}
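The prediction equations are equally simple in the scalar case. A minimal sketch (illustrative only, not the IMSL API):

```python
def predict(b_filt, C_filt, T, Q):
    """One scalar prediction step: from beta_{k|k}, C_{k|k} to beta_{k+1|k}, C_{k+1|k}."""
    b_pred = T * b_filt            # beta_{k+1|k} = T_{k+1} beta_{k|k}
    C_pred = T * C_filt * T + Q    # C_{k+1|k} = T C_{k|k} T^T + Q_{k+1}
    return b_pred, C_pred

print(predict(b_filt=1.0, C_filt=0.5, T=2.0, Q=0.25))   # (2.0, 2.25)
```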

If both the filtered estimates and one-step-ahead estimates are needed by the user at each time point, KalmanFilter can be used twice for each time point: first without methods SetTransitionMatrix and SetQ to produce

\hat \beta _{k|k}

and C_{k|k}, and second without method Update to produce

\hat \beta _{k+1|k}

and C_{k+1|k}. (Without methods SetTransitionMatrix and SetQ, the prediction equations are skipped; without method Update, the update equations are skipped.)
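The alternation of update and prediction described above can be sketched as a single loop over the observations, again for the scalar case in plain Python (illustrative only, not the IMSL API):

```python
def filter_series(ys, mu1, Q1, Z, T, Q, R):
    """Return (filtered, predicted) pairs (beta, C) for each stage."""
    b, C = mu1, Q1                 # start-up values beta_{1|0} = mu_1, C_{1|0} = Q_1
    filtered, predicted = [], []
    for y in ys:
        v = y - Z * b              # update equations -> beta_{k|k}, C_{k|k}
        H = R + Z * C * Z
        b = b + C * Z * v / H
        C = C - C * Z * Z * C / H
        filtered.append((b, C))
        b = T * b                  # prediction equations -> beta_{k+1|k}, C_{k+1|k}
        C = T * C * T + Q
        predicted.append((b, C))
    return filtered, predicted

f, p = filter_series([2.0, 1.0], mu1=0.0, Q1=1.0, Z=1.0, T=1.0, Q=0.0, R=1.0)
print(f[0])   # (1.0, 0.5)
```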

Often, an estimate of the state vector more than one step ahead is desired, i.e., an estimate of

\hat \beta _{k|j}

is needed, where k > j + 1. At time j, KalmanFilter is invoked with method Update to compute

\hat \beta _{j+1|j}

Subsequent invocations of KalmanFilter without method Update can compute

\hat \beta _{j+2|j}, \, \hat \beta _{j+3|j}, \, \ldots, \, \hat \beta _{k|j}
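This prediction-only recursion can be sketched for the scalar case in plain Python (illustrative only, not the IMSL API): each invocation without Update simply advances the estimate one more stage.

```python
def predict_ahead(b, C, T, Q, steps):
    """From beta_{j+1|j}, C_{j+1|j}, take prediction-only steps to beta_{j+1+steps|j}."""
    out = []
    for _ in range(steps):
        b = T * b              # beta advances through the transition matrix
        C = T * C * T + Q      # uncertainty grows by Q at every stage
        out.append((b, C))
    return out

# Two prediction-only steps from beta_{j+1|j} = 1, C_{j+1|j} = 0.5:
print(predict_ahead(1.0, 0.5, T=2.0, Q=0.25, steps=2))
# [(2.0, 2.25), (4.0, 9.25)]
```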

Computations for

\hat \beta _{k|j}

and C_{k|j} assume that the variance-covariance matrices of the errors in the observation equation and state equation are known up to an unknown positive scalar multiplier, \sigma^2. The maximum likelihood estimate of \sigma^2 based on the observations y_1, y_2, \ldots, y_m is given by

\hat \sigma^2  = SS/N

where

N = \sum_{k = 1}^m n_k \quad {\rm and} \quad SS = \sum_{k = 1}^m v_k^T H_k^{-1} v_k

N and SS are the input/output arguments n and sumOfSquares.
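These accumulations can be sketched for the scalar case in plain Python (illustrative only, not the IMSL API); each stage contributes n_k = 1 to N and v_k^2 / H_k to SS.

```python
def sigma2_hat(vs, Hs):
    """Maximum likelihood estimate of sigma^2 from per-stage v_k and H_k (scalar case)."""
    N = len(vs)                                  # N = sum of n_k, with n_k = 1 here
    SS = sum(v * v / H for v, H in zip(vs, Hs))  # SS = sum of v_k^T H_k^{-1} v_k
    return SS / N

print(sigma2_hat(vs=[2.0, 0.0], Hs=[2.0, 1.5]))   # SS = 2.0, N = 2 -> 1.0
```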

If \sigma^2 is known, the R_ks and Q_ks can be input as the variance-covariance matrices exactly. The earlier discussion is then simplified by letting \sigma^2 = 1.

In practice, the matrices T_k, Q_k, and R_k are generally not completely known. They may be known functions of an unknown parameter vector \theta. In this case, KalmanFilter can be used in conjunction with an optimization class (see class MinUnconMultiVar in the IMSL C# Library Math namespace) to obtain a maximum likelihood estimate of \theta. The natural logarithm of the likelihood function for y_1, y_2, \ldots, y_m differs by no more than an additive constant from

L(\theta, \sigma^2; y_1, y_2, \ldots, y_m) = -\frac{1}{2} N \, {\rm ln} \, \sigma^2 - \frac{1}{2} \sum_{k = 1}^m {\rm ln}[{\rm det}(H_k)] - \frac{1}{2} \sigma^{-2} \sum_{k = 1}^m v_k^T H_k^{-1} v_k

(Harvey 1981, page 14, equation 2.21).

Here,

\sum_{k = 1}^m {\rm ln}[{\rm det}(H_k)]

(stored in logDeterminant) is the natural logarithm of the determinant of V, where \sigma^2 V is the variance-covariance matrix of the observations.
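For the scalar case, det(H_k) = H_k, so the log-likelihood above can be evaluated directly from the per-stage v_k and H_k. A hedged pure-Python sketch (illustrative only, not the IMSL API):

```python
import math

def log_likelihood(sigma2, vs, Hs):
    """Evaluate L(theta, sigma^2) up to an additive constant (scalar case)."""
    N = len(vs)
    log_det = sum(math.log(H) for H in Hs)       # plays the role of logDeterminant
    SS = sum(v * v / H for v, H in zip(vs, Hs))  # sum of v_k^T H_k^{-1} v_k
    return -0.5 * N * math.log(sigma2) - 0.5 * log_det - 0.5 * SS / sigma2

print(round(log_likelihood(1.0, vs=[2.0, 0.0], Hs=[2.0, 1.5]), 6))   # -1.549306
```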

Minimization of -2L(\theta, \sigma^2; y_1, y_2, \ldots, y_m) over all \theta and \sigma^2 produces maximum likelihood estimates. Equivalently, minimization of -2L_c(\theta; y_1, y_2, \ldots, y_m) where

L_c(\theta; y_1, y_2, \ldots, y_m) = -\frac{1}{2} N \, {\rm ln} \left( \frac{SS}{N} \right) - \frac{1}{2} \sum_{k = 1}^m {\rm ln}[{\rm det}(H_k)]

produces maximum likelihood estimates

\hat \theta \; {\rm and} \; \hat \sigma^2 = SS/N

Minimization of -2L_c(\theta; y_1, y_2, \ldots, y_m) instead of -2L(\theta, \sigma^2; y_1, y_2, \ldots, y_m) reduces the dimension of the minimization problem by one. The two optimization problems are equivalent, since

\hat \sigma^2 (\theta) = SS(\theta)/N

minimizes -2L(\theta, \sigma^2; y_1, y_2, \ldots, y_m) for all \theta, consequently,

\hat \sigma^2 (\theta)

can be substituted for \sigma^2 in L(\theta, \sigma^2; y_1, y_2, \ldots, y_m) to give a function that differs by no more than an additive constant from L_c(\theta; y_1, y_2, \ldots, y_m).
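This equivalence can be checked numerically: substituting \sigma^2 = SS/N into L recovers L_c up to the additive constant -N/2. A hedged pure-Python check for the scalar case (illustrative only, not the IMSL API):

```python
import math

def terms(vs, Hs):
    """Accumulate N, log-determinant term, and SS from per-stage v_k and H_k."""
    N = len(vs)
    log_det = sum(math.log(H) for H in Hs)
    SS = sum(v * v / H for v, H in zip(vs, Hs))
    return N, log_det, SS

def L(sigma2, vs, Hs):
    N, log_det, SS = terms(vs, Hs)
    return -0.5 * N * math.log(sigma2) - 0.5 * log_det - 0.5 * SS / sigma2

def Lc(vs, Hs):
    N, log_det, SS = terms(vs, Hs)
    return -0.5 * N * math.log(SS / N) - 0.5 * log_det

vs, Hs = [2.0, 0.5, -1.0], [2.0, 1.5, 1.0]
N, _, SS = terms(vs, Hs)
# L evaluated at sigma^2 = SS/N differs from L_c only by the constant -N/2:
print(abs(L(SS / N, vs, Hs) - (Lc(vs, Hs) - N / 2)) < 1e-12)   # True
```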

The earlier discussion assumed H_k to be nonsingular. If H_k is singular, a modification for singular distributions described by Rao (1973, pages 527-528) is used. The necessary changes in the preceding discussion are as follows:

  1. Replace H_k^{ - 1} by a generalized inverse.
  2. Replace det(H_k) by the product of the nonzero eigenvalues of H_k.
  3. Replace N by \sum_{k = 1}^m {\rm rank}(H_k).

Maximum likelihood estimation of parameters in the Kalman filter is discussed by Sallas and Harville (1988) and Harvey (1981, pages 111-113).

Requirements

Namespace: Imsl.Stat

Assembly: ImslCS (in ImslCS.dll)

See Also

KalmanFilter Members | Imsl.Stat Namespace | Example