KALMN
Performs Kalman filtering and evaluates the likelihood function for the state‑space model.
Required Arguments
Y — Vector of length NY containing the observations. (Input)
Z — NY by NB matrix relating the observations to the state vector in the observation equation. (Input)
R — NY by NY matrix such that R * σ² is the variance‑covariance matrix of errors in the observation equation. (Input)
σ² is a positive unknown scalar. Only elements in the upper triangle of R are referenced.
B — Estimated state vector of length NB. (Input/Output)
The input is the estimated state vector at time k given the observations through time k - 1. The output is the estimated state vector at time k + 1 given the observations through time k. On the first call to KALMN, the input B must be the prior mean of the state vector at time 1.
COVB — NB by NB matrix such that COVB * σ² is the mean squared error matrix for B. (Input/Output)
Before the first call to KALMN, COVB * σ² must equal the variance‑covariance matrix of the state vector.
N — Rank of the variance‑covariance matrix for all the observations. (Input/Output)
N must be initialized to zero before the first call to KALMN. In the usual case when the variance‑covariance matrix is nonsingular, N equals the sum of the NY values from the invocations of KALMN.
SS — Generalized sum of squares. (Input/Output)
SS must be initialized to zero before the first call to KALMN. The estimate of σ² is given by SS/N.
ALNDET — Natural log of the product of the nonzero eigenvalues of P where P * σ² is the variance‑covariance matrix of the observations. (Input/Output)
Although ALNDET is computed, KALMN avoids the explicit computation of P. ALNDET must be initialized to zero before the first call to KALMN. In the usual case when P is nonsingular, ALNDET is the natural log of the determinant of P.
Optional Arguments
NY — Number of observations for current update. (Input)
If NY = 0, no update is performed.
Default: NY = size (Y,1).
NB — Number of elements in the state vector. (Input)
Default: NB = size (Z,2).
LDZ — Leading dimension of Z exactly as specified in the dimension statement in the calling program. (Input)
Default: LDZ = size (Z,1).
LDR — Leading dimension of R exactly as specified in the dimension statement in the calling program. (Input)
Default: LDR = size (R,1).
IT — Transition matrix option. (Input)
Default: IT = 1.
| IT | Action |
|----|--------|
| 0 | T is the transition matrix in the state equation. |
| 1 | The identity is the transition matrix in the state equation. |
T — NB by NB transition matrix in the state equation. (Input, if IT = 0)
If IT = 1, then T is not referenced and can be a vector of length one.
LDT — Leading dimension of T exactly as specified in the dimension statement in the calling program. (Input)
Default: LDT = size (T,1).
IQ — State equation error option. (Input)
Default: IQ = 1.
| IQ | Action |
|----|--------|
| 0 | There is an error term in the state equation. |
| 1 | There is no error term in the state equation. |
Q — NB by NB matrix such that Q * σ² is the variance‑covariance matrix of the error vector in the state equation. (Input, if IQ = 0)
σ² is a positive unknown scalar. If IQ = 1, then Q is not referenced and can be a 1x1 array. If IQ = 0, only the elements in the upper triangle of Q are referenced.
LDQ — Leading dimension of Q exactly as specified in the dimension statement in the calling program. (Input)
Default: LDQ = size (Q,1).
TOL — Tolerance used in determining linear dependence. (Input)
TOL = 100.0 * AMACH(4) is a common choice. See the documentation for routine AMACH in the Reference Material.
Default: TOL = 1.e-5 for single precision and 2.d-14 for double precision.
LDCOVB — Leading dimension of COVB exactly as specified in the dimension statement in the calling program. (Input)
Default: LDCOVB = size (COVB,1).
V — Vector of length NY containing the one‑step‑ahead prediction error. (Output)
If Y is not needed, then V and Y can occupy the same storage locations.
COVV — NY by NY matrix such that COVV * σ² is the variance‑covariance matrix of V. (Output)
If R is not needed, then COVV and R can occupy the same storage locations.
LDCOVV — Leading dimension of COVV exactly as specified in the dimension statement in the calling program. (Input)
Default: LDCOVV = size (COVV,1).
FORTRAN 90 Interface
Generic: CALL KALMN (Y, Z, R, B, COVB, N, SS, ALNDET [, …])
Specific: The specific interface names are S_KALMN and D_KALMN.
FORTRAN 77 Interface
Single: CALL KALMN (NY, Y, NB, Z, LDZ, R, LDR, IT, T, LDT, IQ, Q, LDQ, TOL, B, COVB, LDCOVB, N, SS, ALNDET, V, COVV, LDCOVV)
Double: The double precision name is DKALMN.
Description
Routine KALMN is based on a recursive algorithm given by Kalman (1960), which has come to be known as the Kalman filter. The underlying model is known as the state‑space model. The model is specified stage by stage where the stages generally correspond to time points at which the observations become available. The routine KALMN avoids many of the computations and storage requirements that would be necessary if one were to process all the data at the end of each stage in order to estimate the state vector. This is accomplished by using previous computations and retaining in storage only those items essential for processing of future observations.
The notation used here follows that of Sallas and Harville (1981). Let yk (input in Y) be the nk × 1 vector of observations that become available at time k. The subscript k is used here rather than t, which is more customary in time series, to emphasize that the model is expressed in stages k = 1, 2, … and that these stages need not correspond to equally spaced time points. In fact, they need not correspond to time points of any kind. The observation equation for the state‑space model is
$$y_k = Z_k b_k + e_k, \qquad k = 1, 2, \ldots$$
Here, Zk (input in Z) is an nk × q known matrix and bk is the q × 1 state vector. The state vector bk is allowed to change with time in accordance with the state equation
$$b_{k+1} = T_{k+1} b_k + w_{k+1}, \qquad k = 1, 2, \ldots$$
starting with $b_1 = \mu_1 + w_1$.
The change in the state vector from time k to k + 1 is explained in part by the transition matrix Tk+1 (input in T), which is assumed known. It is assumed that the q‑dimensional wk’s (k = 1, 2, …) are independently distributed multivariate normal with mean vector 0 and variance‑covariance matrix σ²Qk, that the nk‑dimensional ek’s (k = 1, 2, …) are independently distributed multivariate normal with mean vector 0 and variance‑covariance matrix σ²Rk, and that the wk’s and ek’s are independent of each other. Here, μ1 is the mean of b1 and is assumed known, and σ² is an unknown positive scalar. Qk+1 (input in Q) and Rk (input in R) are assumed known.
Denote the estimator of the realization of the state vector bk given the observations y1, y2, …, yj by $\hat{b}_{k|j}$. By definition, the mean squared error matrix for $\hat{b}_{k|j}$ is
$$\sigma^2 C_{k|j} = E\left[(\hat{b}_{k|j} - b_k)(\hat{b}_{k|j} - b_k)^T\right].$$
At the time of the k‑th invocation, we have $\hat{b}_{k|k-1}$ and $C_{k|k-1}$, which were computed from the (k−1)-st invocation, input in B and COVB, respectively. During the k‑th invocation, routine KALMN computes the filtered estimate $\hat{b}_{k|k}$ along with $C_{k|k}$. These quantities are given by the update equations:
$$\hat{b}_{k|k} = \hat{b}_{k|k-1} + C_{k|k-1} Z_k^T H_k^{-1} v_k$$
$$C_{k|k} = C_{k|k-1} - C_{k|k-1} Z_k^T H_k^{-1} Z_k C_{k|k-1}$$
where
$$v_k = y_k - Z_k \hat{b}_{k|k-1}$$
and where
$$H_k = R_k + Z_k C_{k|k-1} Z_k^T.$$
Here, vk (stored in V) is the one‑step‑ahead prediction error, and σ²Hk is the variance‑covariance matrix for vk. Hk is stored in COVV. The “start‑up values” needed on the first invocation of KALMN are $\hat{b}_{1|0} = \mu_1$ and $C_{1|0} = Q_1$, input via B and COVB, respectively. Computations for the k‑th invocation are completed by KALMN computing the one‑step‑ahead estimate $\hat{b}_{k+1|k}$ along with $C_{k+1|k}$, given by the prediction equations:
$$\hat{b}_{k+1|k} = T_{k+1} \hat{b}_{k|k}$$
$$C_{k+1|k} = T_{k+1} C_{k|k} T_{k+1}^T + Q_{k+1}.$$
If both the filtered estimates and one‑step‑ahead estimates are needed by the user at each time point, KALMN can be invoked twice for each time point—first with IT = 1 and IQ = 1 to produce $\hat{b}_{k|k}$ and $C_{k|k}$, and second with NY = 0 to produce $\hat{b}_{k+1|k}$ and $C_{k+1|k}$. (With IT = 1 and IQ = 1, the prediction equations are skipped. With NY = 0, the update equations are skipped.)
Often, one desires the estimate of the state vector more than one‑step‑ahead, i.e., an estimate of $\hat{b}_{k|j}$ is needed where k > j + 1. At time j, KALMN is invoked to compute $\hat{b}_{j+1|j}$. Subsequent invocations of KALMN with NY = 0 can compute $\hat{b}_{j+2|j}, \hat{b}_{j+3|j}, \ldots$
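For illustration, a minimal sketch of this prediction‑only usage, built on the scalar model of Example 1 below (Z = 1, R = 1, T = 1, Q = 4, prior mean 4 and prior variance 16σ²), is given here; the step count NAHEAD is arbitrary, and the calls use the keyword form of the Fortran 90 generic interface. One updating call produces the filtered estimate at time 1, and each subsequent call with NY = 0 advances the estimate one additional step ahead.
USE UMACH_INT
USE KALMN_INT
IMPLICIT NONE
INTEGER NAHEAD, NB, NY
PARAMETER (NAHEAD=3, NB=1, NY=1)
!
INTEGER K, N, NOUT
REAL ALNDET, B(NB), COVB(NB,NB), Q(NB,NB), R(NY,NY), SS, &
T(NB,NB), Y(NY), Z(NY,NB)
!
DATA Z/1.0/, R/1.0/, Q/4.0/, T/1.0/
!
CALL UMACH (2, NOUT)
! Prior mean and scaled mean squared
! error matrix of the state at time 1;
! initialize N, SS, and ALNDET.
B(1) = 4.0
COVB(1,1) = 16.0
N = 0
SS = 0.0
ALNDET = 0.0
! Update with the observation at time 1
! (defaults IT=1, IQ=1), giving the
! filtered estimate b(1|1) and C(1|1).
Y(1) = 4.4
CALL KALMN (Y, Z, R, B, COVB, N, SS, ALNDET)
! Prediction-only calls (NY=0 skips the
! update equations) advance the estimate
! to b(1+K|1) and C(1+K|1).
DO 10 K=1, NAHEAD
CALL KALMN (Y, Z, R, B, COVB, N, SS, ALNDET, NY=0, IT=0, T=T, &
IQ=0, Q=Q)
WRITE (NOUT,99999) 1+K, B(1), COVB(1,1)
10 CONTINUE
99999 FORMAT (' k =', I2, '   B =', F8.3, '   COVB =', F8.3)
END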
Computations for $\hat{b}_{k|j}$ and $C_{k|j}$ assume the variance‑covariance matrices of the errors in the observation equation and state equation are known up to an unknown positive scalar multiplier, σ². The maximum likelihood estimate of σ² based on the observations y1, y2, …, ym is given by
$$\hat{\sigma}^2 = \frac{SS}{N}, \qquad SS = \sum_{k=1}^{m} v_k^T H_k^{-1} v_k,$$
where
$$N = \sum_{k=1}^{m} n_k.$$
If σ² is known, the Rk’s and Qk’s can be input as the variance‑covariance matrices exactly. The earlier discussion is then simplified by letting σ² = 1.
In practice, the matrices Tk, Qk, and Rk are generally not completely known. They may be known functions of an unknown parameter vector θ. In this case, KALMN can be used in conjunction with an optimization program (see routine UMINF, IMSL MATH/LIBRARY) to obtain a maximum likelihood estimate of θ. The natural logarithm of the likelihood function for y1, y2, …, ym differs by no more than an additive constant from
$$L(\theta, \sigma^2; y_1, y_2, \ldots, y_m) = -\frac{1}{2} N \ln \sigma^2 - \frac{1}{2} \sum_{k=1}^{m} \ln[\det(H_k)] - \frac{1}{2\sigma^2} \sum_{k=1}^{m} v_k^T H_k^{-1} v_k$$
(Harvey 1981, page 14, equation 2.21). Here,
$$\sum_{k=1}^{m} \ln[\det(H_k)]$$
(stored in ALNDET) is the natural logarithm of the determinant of V, where σ²V is the variance‑covariance matrix of the observations.
Minimization of -2L(θ, σ²; y1, y2, …, ym) over all θ and σ² produces maximum likelihood estimates. Equivalently, minimization of -2Lc(θ; y1, y2, …, ym), where
$$L_c(\theta; y_1, y_2, \ldots, y_m) = -\frac{N}{2} \ln\!\left(\frac{\sum_{k=1}^{m} v_k^T H_k^{-1} v_k}{N}\right) - \frac{1}{2} \sum_{k=1}^{m} \ln[\det(H_k)]$$
produces maximum likelihood estimates $\hat{\theta}$ and
$$\hat{\sigma}^2 = \frac{\sum_{k=1}^{m} v_k^T H_k^{-1} v_k}{N}.$$
The minimization of -2Lc(θ; y1, y2, …, ym) instead of -2L(θ, σ²; y1, y2, …, ym) reduces the dimension of the minimization problem by one. The two optimization problems are equivalent, since
$$\hat{\sigma}^2(\theta) = \frac{\sum_{k=1}^{m} v_k^T H_k^{-1} v_k}{N}$$
minimizes -2L(θ, σ²; y1, y2, …, ym) for all θ; consequently, $\hat{\sigma}^2(\theta)$ can be substituted for σ² in L(θ, σ²; y1, y2, …, ym) to give a function that differs by no more than an additive constant from Lc(θ; y1, y2, …, ym).
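Expressed in terms of the quantities returned by KALMN, the function to be minimized is therefore, up to an additive constant,
$$-2 L_c(\theta; y_1, y_2, \ldots, y_m) = N \ln\!\left(\frac{SS}{N}\right) + \mathrm{ALNDET},$$
which is the objective function given in Comment 3 and minimized (via UMINF) in Example 2.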
The earlier discussion assumed Hk to be nonsingular. If Hk is singular, a modification for singular distributions described by Rao (1973, pages 527–528) is used. The necessary changes in the preceding discussion are as follows:
1. Replace $H_k^{-1}$ by a generalized inverse of $H_k$.
2. Replace $\det(H_k)$ by the product of the nonzero eigenvalues of $H_k$.
3. Replace N by $\sum_{k=1}^{m} \operatorname{rank}(H_k)$.
Maximum likelihood estimation of parameters in the Kalman filter is discussed by Sallas and Harville (1988) and Harvey (1981, pages 111–113).
Comments
1. Workspace may be explicitly provided, if desired, by use of K2LMN/DK2LMN. The reference is:
CALL K2LMN (NY, Y, NB, Z, LDZ, R, LDR, IT, T, LDT, IQ, Q, LDQ, TOL, B, COVB, LDCOVB, N, SS, ALNDET, V, COVV, LDCOVV, COVVCH, WK1, WK2)
The additional arguments are as follows.
COVVCH — Work vector of length NY * NY containing the Cholesky factor of the COVV matrix. If R and COVV are not needed, COVVCH, R, and COVV can occupy the same storage locations and LDR must equal LDCOVV.
WK1 — Work vector of length NB * NB.
WK2 — Work vector of length NB * NY + max(NB, NY).
2. Informational errors
| Type | Code | Description |
|------|------|-------------|
| 4 | 1 | R + Z * COVB * Z^T is not nonnegative definite within the tolerance defined by TOL. Either TOL is too small, or R or COVB is not nonnegative definite. |
| 4 | 2 | The system of equations COVVCH^T * x = V is inconsistent. The variance‑covariance matrix of the observations is inconsistent with the observations input in Y. |
| 4 | 3 | The system of equations COVVCH^T * x = Z * COVB is inconsistent. The Cholesky factorization to compute COVVCH may be based on too large a value for TOL. The input of a smaller value for TOL may be appropriate. |
3. If R, Q, and T are known functions of unknown parameters, KALMN can be used in conjunction with routine UMINF (IMSL MATH/LIBRARY) to perform maximum likelihood estimation of these unknown parameters. UMINF should be used to minimize the function
N * ALOG(SS/N) + ALNDET
4. In order to maintain acceptable numerical accuracy, the double precision version of KALMN is usually required.
Examples
Example 1
Routine KALMN is used to compute the filtered estimates and one‑step‑ahead estimates for a scalar problem discussed by Harvey (1981, pages 116–117). The observation equation and state equation are given by
$$y_k = b_k + e_k, \qquad b_{k+1} = b_k + w_{k+1}, \qquad k = 1, 2, 3, 4$$
where the ek’s are identically and independently distributed normal with mean 0 and variance σ², the wk’s are identically and independently distributed normal with mean 0 and variance 4σ², and b1 is distributed normal with mean 4 and variance 16σ². Two invocations of KALMN are needed for each time point in order to compute the filtered estimate and the one‑step‑ahead estimate. The first invocation uses the default values IQ = 1 and IT = 1 so that the prediction equations are skipped in the computations. The second invocation uses NY = 0 so that the update equations are skipped in the computations.
This example also computes the one‑step‑ahead prediction errors. Harvey (1981, page 117) contains a misprint for the value v4 that he gives as 1.197. The correct value of v4 = 1.003 is computed by KALMN.
USE UMACH_INT
USE KALMN_INT
IMPLICIT NONE
INTEGER LDCOVB, LDCOVV, LDQ, LDR, LDT, LDZ, NB, NOBS, NY
PARAMETER (NB=1, NOBS=4, NY=1, LDCOVB=NB, LDCOVV=NY, LDQ=NB, &
LDR=NY, LDT=NB, LDZ=NY)
!
INTEGER I, IQ, IT, N, NOUT
REAL ALNDET, B(NB), COVB(LDCOVB,NB), &
COVV(LDCOVV,NY), Q(LDQ,NB), R(LDR,NY), SS, T(LDT,NB), &
V(NY), Y(NY), YDATA(NOBS), Z(LDZ,NB)
!
DATA YDATA/4.4, 4.0, 3.5, 4.6/, Z/1.0/, R/1.0/, Q/4.0/, T/1.0/
!
CALL UMACH (2, NOUT)
! Initial estimates for state vector
! and variance-covariance matrix.
! Initialize SS and ALNDET.
B(1) = 4.0
COVB(1,1) = 16.0
N = 0
SS = 0.0
ALNDET = 0.0
WRITE (NOUT,99998)
!
DO 10 I=1, NOBS
! Update
Y(1) = YDATA(I)
CALL KALMN (Y, Z, R, B, COVB, N, SS, ALNDET, T=T, Q=Q, V=V, &
COVV=COVV)
WRITE (NOUT,99999) I, I, B(1), COVB(1,1), N, SS, ALNDET, &
V(1), COVV(1,1)
! Prediction
IQ = 0
IT = 0
CALL KALMN (Y, Z, R, B, COVB, N, SS, ALNDET, NY=0, IT=IT, T=T, &
IQ=IQ, Q=Q, V=V, COVV=COVV)
WRITE (NOUT,99999) I + 1, I, B(1), COVB(1,1), N, SS, ALNDET, &
V(1), COVV(1,1)
10 CONTINUE
99998 FORMAT (' k/j', ' B ', ' COVB ', ' N', ' SS ', &
' ALNDET ', ' V ', ' COVV ')
99999 FORMAT (I2, '/', I1, 2F8.3, I2, 4F8.3)
END
Output
k/j B COVB N SS ALNDET V COVV
1/1 4.376 0.941 1 0.009 2.833 0.400 17.000
2/1 4.376 4.941 1 0.009 2.833 0.400 17.000
2/2 4.063 0.832 2 0.033 4.615 -0.376 5.941
3/2 4.063 4.832 2 0.033 4.615 -0.376 5.941
3/3 3.597 0.829 3 0.088 6.378 -0.563 5.832
4/3 3.597 4.829 3 0.088 6.378 -0.563 5.832
4/4 4.428 0.828 4 0.260 8.141 1.003 5.829
5/4 4.428 4.828 4 0.260 8.141 1.003 5.829
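As a check on the output, the first filtered step can be reproduced by hand from the update equations in the Description, using the start‑up values $\hat{b}_{1|0} = 4.0$ and $C_{1|0} = 16.0$:
$$H_1 = R_1 + Z_1 C_{1|0} Z_1^T = 1 + 16 = 17, \qquad v_1 = y_1 - Z_1 \hat{b}_{1|0} = 4.4 - 4.0 = 0.4,$$
$$\hat{b}_{1|1} = 4.0 + \frac{16(0.4)}{17} \approx 4.376, \qquad C_{1|1} = 16 - \frac{16^2}{17} \approx 0.941,$$
$$SS = \frac{v_1^2}{H_1} = \frac{0.16}{17} \approx 0.009, \qquad \mathrm{ALNDET} = \ln 17 \approx 2.833,$$
in agreement with the line k/j = 1/1 of the output (and with V = 0.400 and COVV = 17.000).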
Example 2
Routine KALMN is used with routine UMINF (IMSL MATH/LIBRARY) to find a maximum likelihood estimate of the parameter θ in an MA(1) time series represented by yk = εk − θεk−1. Routine RNARM (see Chapter 18, “Random Number Generation”) is used to generate 200 random observations from an MA(1) time series with θ = 0.5 and σ² = 1.
The MA(1) time series is cast as a state‑space model of the following form (see Harvey 1981, pages 103–104, 112):
$$y_k = (1 \quad 0)\, b_k, \qquad b_{k+1} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} b_k + w_{k+1}$$
where the two‑dimensional wk’s are independently distributed bivariate normal with mean 0 and variance σ²Qk and
$$Q_1 = \begin{pmatrix} 1 + \theta^2 & -\theta \\ -\theta & \theta^2 \end{pmatrix}, \qquad Q_k = \begin{pmatrix} 1 & -\theta \\ -\theta & \theta^2 \end{pmatrix}, \quad k = 2, 3, \ldots, 200.$$
The warning error that is printed as part of the output is not serious and indicates that UMINF is generally used for multi‑parameter minimization.
USE RNSET_INT
USE RNARM_INT
USE UMINF_INT
USE UMACH_INT
IMPLICIT NONE
INTEGER NOBS, NTHETA
PARAMETER (NOBS=200, NTHETA=1)
!
INTEGER IADIST, IPARAM(7), ISEED, LAGAR(1), LAGMA(1), NOUT, &
NPAR, NPMA
REAL A(NOBS+1), AVAR, CNST, FSCALE, FVALUE, PAR(1), &
PMA(1), RPARAM(7), THETA(NTHETA), WI(1), XGUESS(1), &
XSCALE(1), YDATA(NOBS)
COMMON /MA1/ YDATA
EXTERNAL FCN
!
ISEED = 123457
CALL RNSET (ISEED)
PMA(1) = 0.5
LAGMA(1) = 1
CNST = 0.0
NPAR = 0
NPMA = 1
IADIST = 0
AVAR = 1.0
CALL RNARM (CNST, PAR, LAGAR, PMA, LAGMA, &
IADIST, AVAR, A, WI, YDATA, NPAR=NPAR)
! Use UMINF to find maximum likelihood
! estimate of the MA parameter THETA.
CALL UMINF (FCN, THETA)
CALL UMACH (2, NOUT)
WRITE (NOUT,*) ' '
WRITE (NOUT,*) '* * * Final Estimate for THETA * * *'
WRITE (NOUT,*) 'Maximum likelihood estimate, THETA = ', THETA(1)
END
! Use KALMN to evaluate the likelihood.
SUBROUTINE FCN (NTHETA, THETA, FUNC)
USE KALMN_INT
INTEGER NTHETA
REAL THETA(NTHETA), FUNC
!
INTEGER LDCOVB, LDCOVV, LDQ, LDR, LDT, LDZ, NB, NOBS, NY
PARAMETER (NB=2, NOBS=200, NY=1, LDCOVB=NB, LDCOVV=NY, LDQ=NB, &
LDR=NY, LDT=NB, LDZ=NY)
!
INTEGER I, IQ, IT, N
REAL ABS, ALNDET, ALOG, B(NB), COVB(LDCOVB,NB), &
COVV(LDCOVV,NY), Q(LDQ,NB), R(LDR,NY), SS, T(LDT,NB), &
TOL, V(NY), Y(NY), YDATA(NOBS), Z(LDZ,NB)
COMMON /MA1/ YDATA
INTRINSIC ABS, ALOG
!
DATA T/0.0, 0.0, 1.0, 0.0/, Z/1.0, 0.0/
!
IF (ABS(THETA(1)) .GT. 1.0) THEN
! Estimate out of parameter space.
! Set function to a large number.
FUNC = 1.E10
RETURN
END IF
IQ = 0
Q(1,1) = 1.0
Q(1,2) = -THETA(1)
Q(2,1) = -THETA(1)
Q(2,2) = THETA(1)**2
IT = 0
! No error in the
! observation equation.
R(1,1) = 0.0
! Initial estimates for state vector
! and variance-covariance matrix.
! Initialize SS and ALNDET.
B(1) = 0.0
B(2) = 0.0
COVB(1,1) = 1.0 + THETA(1)**2
COVB(1,2) = -THETA(1)
COVB(2,1) = -THETA(1)
COVB(2,2) = THETA(1)**2
N = 0
SS = 0.0
ALNDET = 0.0
!
DO 10 I=1, NOBS
Y(1) = YDATA(I)
CALL KALMN (Y, Z, R, B, COVB, N, SS, ALNDET, IT=IT, T=T, &
IQ=IQ, Q=Q)
10 CONTINUE
FUNC = N*ALOG(SS/N) + ALNDET
RETURN
END
Output
*** WARNING ERROR 1 from U5INF. This routine may be inefficient for a
*** problem of size N = 1.
Here is a traceback of subprogram calls in reverse order:
Routine name Error type Error code
------------ ---------- ----------
U5INF 6 1 (Called internally)
U3INF 0 0 (Called internally)
U2INF 0 0 (Called internally)
UMINF 0 0
USER 0 0
* * * Final Estimate for THETA * * *
Maximum likelihood estimate, THETA = 0.452944