partialAutocorrelation

Computes the sample partial autocorrelation function of a stationary time series.

Synopsis

partialAutocorrelation (lagmax, cf)

Required Arguments

int lagmax (Input)
Maximum lag of partial autocorrelations to be computed.
float cf[] (Input)
Array of length lagmax + 1 containing the autocorrelations of the time series x.

Return Value

An array of length lagmax containing the partial autocorrelations of the time series x.

Description

Function partialAutocorrelation estimates the partial autocorrelations of a stationary time series given the K = lagmax sample autocorrelations

\[\hat{\rho}(k)\]

for \(k=0,1,\ldots,K\). Consider the AR(k) process defined by

\[X_t = \phi_{k1} X_{t-1} + \phi_{k2} X_{t-2} + \ldots + \phi_{kk} X_{t-k} + A_t\]

where \(\phi_{kj}\) denotes the j-th coefficient in the AR(k) process. The set of estimates

\[\left\{\hat{\phi}_{kk}\right\}\]

for \(k=1,\ldots,K\) is the sample partial autocorrelation function. The autoregressive parameters

\[\left\{\hat{\phi}_{kj}\right\}\]

for \(j=1,\ldots,k\) are approximated by Yule-Walker estimates for successive AR(k) models where \(k=1,\ldots,K\). Based on the sample Yule-Walker equations

\[\hat{\rho}(j) = \hat{\phi}_{k1} \hat{\rho}(j-1) + \hat{\phi}_{k2} \hat{\rho}(j-2) + \ldots + \hat{\phi}_{kk} \hat{\rho}(j-k), \quad j=1,2,\ldots,k\]

Durbin (1960) developed a recursive relationship for computing these estimates for \(k=1,\ldots,K\). The equations are given by

\[\begin{split}\hat{\phi}_{kk} = \begin{cases} \hat{\rho}(1) & k = 1 \\ \frac {\hat{\rho}(k) - \displaystyle\sum_{j=1}^{k-1}\hat{\phi}_{k-1,j}\hat{\rho}(k-j)} {1 - \displaystyle\sum_{j=1}^{k-1}\hat{\phi}_{k-1,j}\hat{\rho}(j)} & k = 2,\ldots,K \\ \end{cases}\end{split}\]

and

\[\begin{split}\hat{\phi}_{kj} = \begin{cases} \hat{\phi}_{k-1,j} - \hat{\phi}_{kk} \hat{\phi}_{k-1,k-j} & j=1,2,\ldots,k-1 \\ \hat{\phi}_{kk} & j = k \\ \end{cases}\end{split}\]
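
To make the recursion concrete, the following standalone NumPy sketch computes the partial autocorrelations from an array cf of sample autocorrelations (cf[0] = 1), mirroring the inputs of partialAutocorrelation. The function name pacf_from_acf and the dense phi workspace are illustrative choices, not part of the pyimsl API:

import numpy as np

def pacf_from_acf(cf, lagmax):
    # Durbin's recursion: cf is a NumPy array of length lagmax + 1
    # holding rho_hat(0), ..., rho_hat(lagmax), with cf[0] == 1.
    phi = np.zeros((lagmax + 1, lagmax + 1))  # phi[k, j] = phi_hat_kj
    pacf = np.zeros(lagmax)
    phi[1, 1] = cf[1]  # k = 1: phi_hat_11 = rho_hat(1)
    pacf[0] = cf[1]
    for k in range(2, lagmax + 1):
        prev = phi[k - 1, 1:k]  # phi_hat_{k-1,1}, ..., phi_hat_{k-1,k-1}
        num = cf[k] - np.dot(prev, cf[k - 1:0:-1])   # numerator above
        den = 1.0 - np.dot(prev, cf[1:k])            # denominator above
        phi[k, k] = num / den
        phi[k, 1:k] = prev - phi[k, k] * prev[::-1]  # second recurrence
        pacf[k - 1] = phi[k, k]
    return pacf

Given the same autocorrelations, this sketch should agree with partialAutocorrelation up to rounding.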

This recursion is sensitive to rounding error and should not be used if the parameters are near the nonstationarity boundary. A possible alternative would be to estimate \(\{\phi_{kk}\}\) for successive AR(k) models using least squares or maximum likelihood. Based on the hypothesis that the true process is AR(p), Box and Jenkins (1976, page 65) note

\[\mathit{var}\left\{\hat{\phi}_{kk}\right\} \simeq \tfrac{1}{n}, \quad k \geq p + 1\]

See Box and Jenkins (1976, pages 82‑84) for more information concerning the partial autocorrelation function.
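
In practice this approximation serves as a rough significance cutoff when identifying the order p: since the standard error is about \(1/\sqrt{n}\) for lags beyond the true order, partial autocorrelations rarely exceed about \(\pm 2/\sqrt{n}\) in magnitude there. A minimal sketch (the variable names are illustrative):

import numpy as np

n = 100                    # length of the observed series
cutoff = 2.0 / np.sqrt(n)  # approximate two-standard-error bound
print(cutoff)              # 0.2 for n = 100

Applied to the output below, only the lag 1 and lag 2 partial autocorrelations exceed this ±0.2 bound in magnitude.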

Example

Consider the Wolfer Sunspot Data (Anderson 1971, page 660) consisting of the number of sunspots observed each year from 1749 through 1924. The data set for this example consists of the number of sunspots observed from 1770 through 1869. Function partialAutocorrelation is used to compute the estimated partial autocorrelations.

import numpy as np
from pyimsl.stat.autocorrelation import autocorrelation
from pyimsl.stat.dataSets import dataSets
from pyimsl.stat.partialAutocorrelation import partialAutocorrelation
from pyimsl.stat.writeMatrix import writeMatrix

nobs = 100   # sunspot counts for the years 1770 through 1869
lagmax = 20  # maximum lag of the partial autocorrelations

# Data set 2 contains the Wolfer sunspot numbers; rows 21 through
# 120 of the second column hold the counts for 1770 through 1869.
data = dataSets(2)
x = np.empty(nobs)
for i in range(nobs):
    x[i] = data[21 + i, 1]

# Sample autocorrelations for lags 0 through lagmax, then the
# partial autocorrelations computed from them.
ac = autocorrelation(x, lagmax)
partial = partialAutocorrelation(lagmax, ac)

writeMatrix("Lag      PACF", partial, column=True)

Output

 
 Lag      PACF
 1       0.8063
 2      -0.6345
 3       0.0783
 4      -0.0586
 5      -0.0009
 6       0.1717
 7       0.1086
 8       0.1100
 9       0.0785
10       0.0792
11       0.0687
12      -0.0378
13       0.0811
14       0.0334
15      -0.0348
16      -0.1306
17      -0.1549
18      -0.1191
19      -0.0162
20      -0.0039