vectorAutoregression

Estimates a vector auto-regressive time series model with optional moving average components.

Synopsis

vectorAutoregression (y, p)

Required Arguments

float y[[]] (Input)
An array of size nObs by nCols containing the data.
int p (Input)
The autoregressive lag order.

Return Value

An array containing the estimated coefficients. The array has length nCols×(trend+nXvars+nCols×(p+q+1)) if a non-trivial \(A_0\) is included in the model. Otherwise, the array has length nCols×(trend+nXvars+nCols×(p+q)). trend = 1 if trend is specified, and 0 otherwise.

The array elements occur in this order:

  • If trend is specified, the trend coefficient vector (\(b_0\)) of length K (K=nCols).
  • The coefficient matrix for the deterministic variables (D), of size K by nXvars, oriented by column.
  • The p (or p+1) autoregressive coefficient matrices (A), each of size K by K, oriented by column.
  • The q moving average coefficient matrices (M), each of size K by K, oriented by column.

This arrangement corresponds to the vectorized coefficient matrix,

\[B = vec[b_0, D, A_0, A_1, …, A_p, M_1, M_2, …, M_q]\]

or

\[B = vec[b_0, D, A_1, …, A_p, M_1, M_2, …, M_q]\]

If no value can be computed, None is returned.
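
As an illustration of this layout, the following sketch splits a returned coefficient array back into its component matrices. It is a minimal sketch only: it assumes the return value is a flat NumPy array and interprets the column orientation of the K by K blocks as Fortran (column-major) order; the function name and dimensions are illustrative, not part of the pyimsl API.

from numpy import asarray, zeros

def unpackCoefficients(coef, nCols, p, q, trend=0, nXvars=0, a0=False):
    # Split the flat coefficient array into b0, D, the A matrices, and the
    # M matrices, following the ordering described above.
    coef = asarray(coef, dtype=float)
    K = nCols
    pos = 0
    b0 = coef[:K] if trend else zeros(K)
    pos += K * trend
    D = coef[pos:pos + K * nXvars].reshape(K, nXvars, order="F")
    pos += K * nXvars
    nA = p + 1 if a0 else p      # A0, A1, ..., Ap when a0 is non-trivial
    A = [coef[pos + j * K * K:pos + (j + 1) * K * K].reshape(K, K, order="F")
         for j in range(nA)]
    pos += nA * K * K
    M = [coef[pos + j * K * K:pos + (j + 1) * K * K].reshape(K, K, order="F")
         for j in range(q)]
    return b0, D, A, M

For Example 1 below (K = 2, p = 1, q = 0, no trend), this reduces to the single 2 by 2 matrix \(A_1\).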

Optional Arguments

maLag, int (Input)

Fit a moving average component of order maLag.

Default: maLag = 0.

a0 (Input)
Indicates that the model has a non-trivial, lower-triangular leading AR operator, \(A_0\). See the Description section for more details. By default, \(A_0=I_K\), where K = nCols and \(I_K\) is the K by K identity matrix.
arModel, int[] (Input)

An array used to specify restrictions on the AR coefficient matrices. If a0 is present, arModel is of length K by K by (p+1); if a0 is not present, arModel is of length K by K by p. Each element of arModel should be one of {-1, 0, 1}. If a0 is present, indicating a non-trivial \(A_0\), the ordering of arModel corresponds to the AR parameters as follows:

\[ \begin{align}\begin{aligned}\left[\alpha_{11,0},\alpha_{21,0},\ldots,\alpha_{K1,0},\ldots,\alpha_{1K,0},\alpha_{2K,0},\ldots,\alpha_{KK,0},\\\alpha_{11,1},\alpha_{21,1},\ldots,\alpha_{K1,1},\ldots,\alpha_{1K,1},\alpha_{2K,1},\ldots,\alpha_{KK,1},\ldots,\\\alpha_{1K,p},\alpha_{2K,p},\ldots,\alpha_{KK,p}\right]\end{aligned}\end{align} \]

If \(A_0\) is trivial (equal to the identity matrix), the ordering is the same except that the \(A_0\) elements \(\alpha_{ki,0}\) are omitted, so the first element is \(\alpha_{11,1}\). If arModel[i]=1 or arModel[i]=-1, the corresponding parameter \(\alpha_{ki,j}\) is estimated. If arModel[i]=0, the parameter is restricted to be equal to 0 or some other constant; constants other than 0 can be specified in the optional argument arConstants (see Example 3). By default, all parameters are estimated.

Default: arModel[i] = 1.

maModel, int[] (Input)

An array of length K by K by (q+1) if a0 is present, or K by K by q if it is not. maModel is used to specify restrictions on the moving-average coefficients and is constructed analogously to the arModel restriction array detailed above. If a0 is present, the first K by K entries of maModel are used only when arModel is not provided, because it is assumed that \(M_0=A_0\).

Default: maModel[i] = 1.

arConstants, float[] (Input)

An array of length K by K by (p+1) (or K by K by p if a0 is not present) specifying the constants to use when AR parameters are restricted. Note that a lower triangular \(A_0\) with 1's on the diagonal is always assumed; however, constants for the lower triangle can be specified using this optional argument.

Default: arConstants[i] = 0.

maConstants, float[] (Input)

An array of length K by K by (q+1) (or K by K by q if a0 is not present) specifying the constants to use when MA parameters are restricted. Note that the leading MA coefficient matrix is \(M_0=A_0\), and thus there is no specification for it.

Default: maConstants[i] = 0.

presample, int (Input)

Specifies the number of rows of y to use as a presample in the estimation procedure. presample must be strictly greater than 0.

Default: presample = maxLag.

maxLag, int (Input)

Specifies the maximum AR order or lag to use in the unrestricted VAR model. maxLag must be strictly greater than 0.

Default: maxLag = 6.

nSteps, int (Input)

Specifies the maximum number of steps ahead. When requested, forecasts are produced for times t+h where h = 1, 2, …, nSteps and t = presample, presample+1, …, nObs.

Default: nSteps = 4.

maxIterations, int (Input)

Specifies the maximum number of iterations. maxIterations > 0.

Default: maxIterations = 100.

tolerance, float (Input)

Specifies the error tolerance used in the iterations.

Default: tolerance = 100×machine(4).

trend (Input)

Indicates that a constant trend (intercept) term should be included in the model.

Default: No trend.

scale (Input)

Indicates that the data series in y should be centered and then scaled before the analysis begins.

Default: No scaling.

center (Input)

Indicates that the data series in y should be centered before the analysis begins. Note that if scale is provided, center has no effect.

Default: No centering.

xData, float[] (Input)

Input values of the deterministic variables.

The nObs by nXvars array containing the values of the deterministic variables. Note that if xData is provided when forecasts are requested, xData must have additional rows that can be used for the forecasts. That is, xData is an array of at least (nObs + nSteps) rows by nXvars.

Default: nXvars = 0.

errorCorrection, int (Input)

Estimate the error-correcting form of the VAR model assuming the series are integrated of order 1 and there are 0 <= errorCorrection <= nCols co-integrating relationships. When errorCorrection=0, the error-correcting model is not estimated.

Default: errorCorrection = 0.

causality, int s1, int s2 (Input)
Specification for a test of Granger causality.
int s1[] (Input)
An array of length sizeS1 containing the indices of the variables that are not Granger-caused by the variables specified in s2, under the null hypothesis. Each index must be between 1 and nCols inclusive and must not equal any index in s2.
int s2[] (Input)
An array of length nCols-sizeS1 containing the indices of the variables that do not Granger-cause the variables specified in s1, under the null hypothesis. Each index must be between 1 and nCols inclusive and must not equal any index in s1.
causalityStats (Output)
An array of length 2 containing the test statistic value and associated p-value resulting from the requested Granger causality test. It is an error to request this output without specifying the test using causality.
varInfo (Output)
Contains the regression information from the first stage fitting of the model, VAR(maxLag). This structure may be used as input to regressionSummary. See also regression optional argument regressionInfo.
varmaInfo (Output)
Contains the regression information from the second stage fitting of the model, VARMA(p,q). This structure may be used as input to regressionSummary. See also regression optional argument regressionInfo.
unitRoot (Output)
A complex array of length K by (p+q) containing the eigenvalues of the determinantal polynomial.
vecmCoef (Output)
An array of length K by K by (p+1) (or K by K by p) containing the estimated parameters when the model is put into the error-correcting form.
vecmEigenvalues (Output)
An array of length K containing the eigenvalues associated with each cointegrating rank r, \(0\leq r\leq K\).
vecmAlphabeta (Output)
An array of length 2×(K by errorCorrection) containing the estimates of the coefficient matrix \(\mathit{\Pi}=\alpha\beta '\) of the error-correction model. The first K by errorCorrection elements correspond to α and the second errorCorrection by K elements correspond to \(\beta'\).
forecasts (Output)
An array of length nObs by nCols by nSteps containing the 1, 2, …, nSteps ahead forecasts based on the final fitted VARMA model.
criteria (Output)
An array of size 4 containing the AIC, BIC, HQ, and FPE fit criteria for the given model.
logLikelihood (Output)
Log-likelihood of the estimated VARMA(p,q) model.

Description

This function estimates a vector autoregressive moving average (VARMA) model using one- and two-stage multivariate least squares regressions.

The general model can be written in operator notation as

\[A(L)(y_t-μ) = M(L)u_t + Dx_t\]

where \(y_t\) is a K-dimensional real-valued time series with stationary mean μ, \(u_t\) is a K-dimensional white noise series with a non-singular covariance matrix, \(\Sigma_u\), and \(x_t\) represents a matrix of deterministic components such as trend or seasonal variables. For the autoregressive lag parameter, \(p\geq 0\), and the moving average lag parameter (maLag), \(q\geq 0\), the operators are defined as

\[A(L) = A_0 - A_1L - A_2L^2 - \ldots - A_pL^p\]
\[M(L) = M_0 + M_1L + M_2L^2 + \ldots + M_qL^q\]

where L is the lag or backshift operator, defined as

\[Ly_t = y_{t-1}, L^ky_t = y_{t-k}\]

and the \(A_j\), \(M_j\) are K×K matrices of coefficients. That is,

\[A_j(k,i) = [α_{ki,j}], k,i = 1, …,K, j = 0, …, p\]
\[M_j(k,i) = [m_{ki,j}], k,i = 1, …,K, j = 0, …, q\]

The model has many equivalent forms, such as

\[A_0(y_t-\mu) = A_1(y_{t-1}-\mu) + \ldots + A_p(y_{t-p}-\mu) + A_0u_t + M_1u_{t-1} + \ldots + M_qu_{t-q}\]

A pure vector autoregression with \(q=0\) is often denoted \(VAR(p)\) and the autoregressive moving-average is \(VARMA(p,q)\).
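
For concreteness, the following minimal sketch simulates data from a bivariate \(VAR(1)\) of the form above with \(A_0=I_K\), \(\mu=0\), no moving average terms, and placeholder coefficients (the values are illustrative only, not taken from this documentation):

import numpy as np

rng = np.random.default_rng(0)
K, T = 2, 200
A1 = np.array([[0.5, 0.1],
               [-0.2, 0.3]])      # placeholder AR(1) coefficient matrix
u = rng.multivariate_normal(np.zeros(K), np.eye(K), size=T)  # white noise u_t
y = np.zeros((T, K))
for t in range(1, T):
    y[t] = A1 @ y[t - 1] + u[t]   # y_t = A1 y_{t-1} + u_t

Data generated this way could then be passed to vectorAutoregression with p = 1, as in Example 1.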

The estimation procedures for the most part assume that the underlying time series is stable and invertible. These conditions are satisfied when

\[\mathrm{det}(I_K - A_1z - \ldots - A_pz^p) \neq 0 \text{ for } |z|≤ 1 \text{ (stability condition)}\]
\[\mathrm{det}(I_K + M_1z + \ldots + M_qz^q) \neq 0 \text{ for } |z|≤ 1 \text{ (invertibility condition)}.\]

In other words, the conditions for a stable and invertible process are that the roots of the determinantal polynomial lie outside the (complex) unit circle, or equivalently, that the eigenvalues of the determinantal matrix have modulus less than 1. To provide for unit-root tests, this function returns the eigenvalues of the estimated model via the optional argument unitRoot. When some eigenvalues fall on the unit circle (\(|z|=1\)), there may still be stationary linear combinations of the variables. This behavior, known as cointegration, is discussed below.
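
For example, a minimal check of the returned eigenvalues (assuming they are available as a complex NumPy array, e.g. retrieved through the unitRoot output argument) might look like this; the sample values are placeholders:

import numpy as np

def largestModulus(eigenvalues):
    # The fitted model is stable and invertible when every eigenvalue of the
    # determinantal matrix has modulus strictly less than 1; moduli at or
    # near 1 point to unit roots and possible cointegration.
    return float(np.abs(np.asarray(eigenvalues, dtype=complex)).max())

print(largestModulus([0.95 + 0.10j, 0.40 - 0.20j]))   # illustrative values only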

Error-Correction form and Cointegration

Another form of expressing the \(VARMA(p,q)\) model is the error-correcting form:

\[A_0Δy_t = Πy_{t-1} + Γ(L)Δy_t + M(L)u_t\]

where

\[Π = -(A_0 - A_1 - \ldots - A_p),\]
\[Γ(L) = (Γ_1L + Γ_2L^2 + \ldots + Γ_pL^p), \text{ and}\]
\[M(L) = (M_0 + M_1L + \ldots + M_qL^q).\]

The error-correcting or error-correction model (ECM) was developed for economic variables that tend to track close to each other due to common trends. The concept is closely aligned with that of cointegration. A K-dimensional process is said to be integrated of order 1, denoted as \(y_t \sim I(1)\), when \(y_t\) is non-stationary while the first-differenced series, \(\Delta y_t\), is stationary. In this case, there may exist a stationary linear combination in the levels (non-differenced versions). That is, there may exist a

\[z_t = β'y_t = β_{1t}y_{1t} + β_{2t}y_{2t} + \ldots + β_{Kt}y_{Kt},\]

where at least one \(\beta_{it}\neq 0\) such that \(z_t\) is stationary.

The interpretation is that cointegrated variables have common trends that cause them to move together in some sense. Consider the price of a commodity in two different locations. At their individual levels, they are non-stationary, unpredictable processes, but market forces keep the two prices from being too far apart so the difference in their prices is stationary.

We obtain an estimate of each coefficient matrix in the ECM form of the model. Then we are interested in the decomposition \(\Pi=\alpha\beta'\), where α and β are K×errorCorrection coefficient matrices of rank errorCorrection. From the ECM form it can be deduced that \(z_t=\beta'y_t\) is the cointegrating relationship under the given assumptions. However, the cointegrating rank r is not known in practice. As discussed in Lütkepohl (2007, Ch. 7), the most common tests for the correct cointegrating rank are

\[H_0 : r = r_0 \text{ vs. } H_1 : r_0 < r \leq K\]

and

\[H_0 : r = r_0 \text{ vs. } H_1 : r = r_0 + 1\]

The likelihood ratio test statistic for testing the first hypothesis is known as the trace test, and the second is known as the maximum eigenvalue test (Johansen 1988, 1995). In particular,

\[\begin{split}\begin{array}{l} \lambda\left(r_0,K\right) = -T \sum\limits_{i=r_0+1}^{K} \ln \left(1 - \lambda_i\right) \text{ and } \\ \lambda\left(r_0,r_0+1\right) = -T \ln \left(1 - \lambda_{r_0+1}\right) \\ \end{array}\end{split}\]

where \(\lambda_1\geq\ldots \geq\lambda _K\) are eigenvalues of a particular matrix associated with the likelihood function. To allow for various tests, this function returns the eigenvalues \(\lambda_1\geq\ldots \geq\lambda _K\) through the optional argument vecmEigenvalues.
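
As an illustration of these formulas, the sketch below computes both statistics from a set of eigenvalues \(\lambda_1\geq\ldots\geq\lambda_K\) (such as those returned through vecmEigenvalues); T denotes the effective sample size and r0 the hypothesized rank, and the numeric values are placeholders. The null distributions are nonstandard, so critical values must be taken from the appropriate tables (e.g., Johansen 1995):

import numpy as np

def johansenStatistics(eigenvalues, T, r0):
    # Trace and maximum-eigenvalue statistics for H0: r = r0.
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]   # lambda_1 >= ... >= lambda_K
    traceStat = -T * np.sum(np.log(1.0 - lam[r0:]))             # lambda(r0, K)
    maxEigStat = -T * np.log(1.0 - lam[r0])                     # lambda(r0, r0 + 1)
    return traceStat, maxEigStat

traceStat, maxEigStat = johansenStatistics([0.30, 0.12], T=100, r0=0)   # placeholder values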

The Granger-Causality Test

Partition the K-dimensional time series as \(y_t=(z'_t,x'_t)'\), where \(z_t\) is of length m and \(x_t\) is of length \(K-m\). In general, \(x_t\) is said to Granger-cause \(z_t\) when predictions for \(z_t\) can be improved by taking \(x_t\) into account, and vice-versa. In the context of a stable and invertible \(VAR(p)\) or \(VARMA(p,q)\) process, a test for Granger causality amounts to testing for certain 0 restrictions on the AR or MA coefficients. Given the partition \(y_t=(z'_t,x'_t)'\), the test of

\[H_0 : x_t \text{ does not Granger-cause } z_t\]
\[H_1 : x_t \text{ does Granger-cause } z_t\]

is performed using a version of the Wald statistic which has an approximate F-distribution,

\[λ_W / N ≈ F(N,T - K(p + q) - 1)\]

where N is the number of 0 restrictions induced by the test. This function returns the test statistic and associated p-value when a test is specified through the optional argument causality.
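
The approximate p-value implied by this F statistic can be reproduced from the Wald statistic as in the sketch below. This uses SciPy (not part of pyimsl) purely for illustration, and the numbers are placeholders rather than output from this documentation:

from scipy.stats import f

def grangerPValue(waldStat, N, T, K, p, q):
    # lambda_W / N is approximately F(N, T - K(p + q) - 1) under H0.
    return float(f.sf(waldStat / N, N, T - K * (p + q) - 1))

print(grangerPValue(waldStat=9.2, N=4, T=100, K=2, p=1, q=1))   # placeholder inputs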

Comments

  1. There are different notational conventions in the literature. Box, Jenkins, and Reinsel typically use (Φ,Θ) in place of (A, M) and B instead of L for the backshift or lag operator. Because it is the main reference for this implementation, we follow the notation used by Lütkepohl (2007).
  2. The two-stage regression approach is robust in the sense that results are produced even when there are roots close to 1 or less than 1 (in modulus). The estimates can be tested for unstable roots using the optional argument unitRoot.
  3. In general a collection of K time series \(y_t=(y_{1t},y_{2t},\ldots,y_{Kt})'\) is said to be cointegrated of order \((d-b)\) when they are each individually integrated of order d, but there exists a linear combination \(z_t=\beta'y_t=\beta_{1t} y_{1t}+\beta_{2t} y_{2t}+\ldots +\beta_{Kt} y_{Kt}\), with \(\beta_{it}\neq 0\) for at least one i, which is integrated of order b. In this implementation, only the case where \(d=1\) and \(b=0\) is considered. An even more general situation allows the individual time series to have differing orders of integration. For more details, see, for example, Engle and Granger (1991).

Examples

Example 1

In this example we use a small data set with 2 time series. A VARMA(1,0) model is requested.

from numpy import *
from pyimsl.stat.vectorAutoregression import vectorAutoregression
from pyimsl.stat.writeMatrix import writeMatrix

y = [[0, 0, ],
     [0.148105526820657, 0.0507420782620461],
     [-1.13674727366735, 0.862892721126079],
     [1.54366183541037, -1.13804802266371],
     [-0.0923211737957721, 1.65649055346038],
     [0.521764564424907, -2.11208211206815],
     [0.843683397890515, 2.56656798707681],
     [-2.70330819114831, -2.83452914666041],
     [4.93683704766295, 3.95965377457266],
     [-4.78314880243807, -2.23136673998374],
     [6.24911547448071, 0.40543051970714],
     [-6.76582567372196, 0.816818897274206],
     [6.21142944093012, -4.247694573354],
     [-5.29817270483491, 5.08246614505046],
     [4.19557583927549, -5.35697380907112],
     [-3.21572784971078, 7.89716072829777],
     [0.485717553966479, -8.25930665413043],
     [2.69490292018773, 10.9017252520684],
     [-5.41090143363388, -10.400477539227],
     [8.29423327234419, 9.10321370415527]]
p = 1
coef = vectorAutoregression(y, p)
writeMatrix("A1", coef.reshape(2, 2), transpose=True)

Output

 
            A1
             1            2
1       -1.017       -0.296
2        0.273       -1.053

Example 2

In this example, we fit the unrestricted VARMA(1,1) model and request 1 and 2 step ahead forecasts for each observation.

from numpy import *
from pyimsl.stat.vectorAutoregression import vectorAutoregression
from pyimsl.stat.writeMatrix import writeMatrix

y = array([0.143280117400598, 0.691517079320758, -1.054002748442,
           1.89368500251305, -0.595435384283415, -1.53385197609914,
           -0.450087415747451, -0.522758296071267, 0.338719815948114,
           1.73890633300759, -0.0420130132503371, -0.00933890926865466,
           -0.939527210947825, -0.279211957506804, 0.985111509818102,
           -0.185903912738057, -0.779058630583496, -1.52829111438157,
           -0.0444517646926054, -0.605935345974391, 1.17870357395841,
           0.979822006409954, -1.75509211790604, -1.02617250037744,
           -2.64443752185661, -0.405444884498378, -0.558146570012133,
           1.17006139568333, -0.14877513906561, 1.67436399145195,
           1.21251151094695, -0.236411746432856, -0.319260279201159,
           1.53774676549506, -0.798919508505848, -1.25907348772775,
           -0.43488363747126, 1.18754392780486, 2.49394567166528,
           -0.505392075680617, -0.939902530453777, -1.3000118234638,
           0.308365204823071, -0.0346715133254558, 0.155821836363299,
           1.53865066350577, -0.446013548645569, 0.421382795732249,
           -0.810472750765929, -0.475790066827614, -1.21547965787564,
           0.873092852598299, 0.314687304446453, -0.166494291509063,
           1.20846773815425, -0.21319122737697, 0.885697825416125,
           1.06749862455588, -0.417811475765902, 1.81350258917296,
           1.06312903931625, -0.357098401483029, -2.54962241395723,
           1.58241298127273, -0.445333714405381, -1.54921054521057,
           0.763932954657703, -0.459132068443737, -0.135020632775658,
           0.768987806710127, -2.21000914520305, -0.416578967811937,
           -0.787803924930718, -0.381994679294779, 0.651886558080692,
           -0.296275077784937, -1.34100151116192, -0.695511844572305,
           -1.38111408367782, -0.453263483406621, 0.546836142779091,
           1.35617195687341, 1.47675672302528, -0.879275538764735,
           1.33724523036386, -1.19560541543992, -0.355724070126155,
           -1.35440645398376, 0.215557787562229, -0.0705280128055462,
           2.64174061572298, 0.622022977819918, 1.85193603585307,
           1.1425463214074, 1.47034707550646, -0.274392879035261,
           -0.0256438865664471, -1.32768501270967, 1.4002259906097,
           -1.13393234222336, 0.420685154939234, -1.14369662556446,
           -0.229525100642792, -1.14556440309372, -0.0726173507068031,
           0.537003424016424, 2.21081287812981, -0.329847605902733,
           0.658135565199115, 1.25161597712012, 0.324279854980608,
           -0.122777642651712, -0.746398274242844, -0.510384971856952,
           -1.52397549713935, -0.590932096161281, 0.351879848132452,
           -0.486815223667361, -0.582122931146207, -0.129708090826525,
           -1.83261132721672, -0.438855077878885, 0.160891671954672,
           0.130903505342111, -1.0930038744212, 0.858667240391389,
           -0.650140333935879, -0.192590383440759, -0.495902099869389,
           -0.0970274914548782, 0.271597266914655, -1.25629606008009,
           -1.82411099200514, -0.0862331652538604, -1.48736902275701,
           -0.702589236231933, 1.66371111576656, 0.260453048198016,
           0.472465295167692, 2.03959666287312, -1.47220802239244,
           0.584452376204567, 0.29538916638251, -0.424471774108761,
           1.35117053520231, 0.792966672512426, 0.559666965721712,
           1.03877148575442, 0.32764651319845, 0.792431599069095,
           1.79713629328279, 2.53306185903747, 0.382061987152509,
           -0.55974023663989, 0.261351966632211, 0.928359586004826,
           1.05805881312766, -0.448798293155081, -2.8433140252059,
           -1.29380365284521, 1.60167210548413, -0.58790657908656,
           -0.0697276516437701, 0.669259446155372, -0.756109095074059,
           -1.04262502361173, -0.689533522981508, 0.322514092974764,
           -0.62456134593389, 0.343601164613668, 0.406496690190247,
           -0.579352431691941, -0.38067184267295, 1.15818332237678,
           -1.3763494217139, 1.07842256464695, -0.607885118048254,
           -0.551750338671028, -0.688013574614753, -0.66192239892944,
           0.840882344143739, 0.501181908666563, 0.810882707408453,
           -0.373132840815414, -1.53884108045858, -0.0475950419868607,
           -1.11456391432642, -1.39312192248506, 0.374292584707849,
           0.307055843720151, 0.0883771102062163, 1.51499635303431,
           0.544284404231116, 1.62863647405725, 0.666268752934375,
           3.15259591439161, 0.535584045927088, 0.438326104669433,
           1.25375087298954, 1.2784768691421])
y = reshape(y, (100, 2))

p = 1
q = 1
maxStep = 2
forecasts = []
# Call vectorAutoregression using only the first half of y[]
coef = vectorAutoregression(y[0:50, :], p,
                            maLag=q,
                            nSteps=maxStep,
                            forecasts=forecasts)

colLabels = ["t", "t+1,Y1", "t+1,Y2 ", "t+2, Y1", "t+2, Y2"]
writeMatrix("* * * Forecasts * * *\n", reshape(forecasts, (50, 4)),
            colLabels=colLabels)

Output

 
                * * * Forecasts * * *

t        t+1,Y1      t+1,Y2       t+2, Y1      t+2, Y2
 1        0.000        0.000        0.000        0.000
 2        0.000        0.000        0.000        0.000
 3        0.000        0.000        0.000        0.000
 4        0.000        0.000        0.000        0.000
 5        0.000        0.000        0.000        0.000
 6        0.000        0.000        0.000        0.000
 7        0.009       -0.739        0.002       -0.086
 8       -0.501        0.893       -0.017       -0.050
 9        0.773       -1.779        0.024       -0.479
10       -0.935        0.042       -0.038       -0.251
11        0.164        0.959       -0.067        0.306
12        0.393       -1.377        0.009       -0.263
13       -0.672       -0.873        0.013       -0.058
14       -0.447        0.644        0.036        0.368
15        0.187        1.540        0.027        0.689
16        0.897       -0.117        0.191       -0.172
17        0.020        0.621       -0.039        0.556
18        0.008       -1.719        0.079       -0.444
19       -1.059        0.790       -0.056        0.397
20        0.193        0.264        0.035       -0.319
21        0.194       -1.050       -0.121       -0.377
22       -0.415        0.529       -0.042        0.039
23        0.433        0.658        0.048        0.495
24        0.074        0.493        0.044        0.235
25        0.402       -1.035        0.114       -0.230
26       -0.634        0.428       -0.073        0.351
27        0.109       -0.094        0.059       -0.061
28       -0.026        0.133        0.009       -0.164
29        0.067        1.258       -0.078        0.435
30        0.677        0.780        0.105        0.674
31        0.210       -0.299        0.145       -0.250
32       -0.200       -0.262       -0.086        0.671
33       -0.656       -1.024        0.134       -0.676
34       -0.351       -0.130       -0.169       -0.188
35       -0.152        0.708       -0.084        0.377
36        0.351       -1.377        0.122       -0.147
37       -0.826       -0.399       -0.026       -0.125
38       -0.256        0.717       -0.018       -0.026
39        0.716       -0.898        0.071       -0.139
40       -0.316       -1.153        0.058       -0.236
41       -0.804        1.543       -0.100        0.486
42        0.765        0.139        0.071       -0.257
43        0.443       -0.054        0.039       -0.434
44        0.345       -1.797       -0.050       -0.503
45       -1.165        0.730       -0.124       -0.028
46        0.531        1.303       -0.022        0.174
47        0.765        1.889        0.014        0.390
48        1.281        0.188        0.069       -0.089
49        0.306       -1.741        0.017       -0.569
50       -1.140        0.239       -0.172       -0.441

Example 3

In this example we fit the following restricted VARMA(1,1) model to simulated data of two dimensions:

\[\begin{split}\begin{pmatrix} 1 & 0 \\ -\alpha_{21,0} & 1 \\ \end{pmatrix} y_t - \begin{pmatrix} \alpha_{11,1} & \alpha_{12,1} \\ \alpha_{21,1} & \alpha_{22,1} \\ \end{pmatrix} y_{t-1} = \begin{pmatrix} 1 & 0 \\ -\alpha_{21,0} & 1 \\ \end{pmatrix} u_t + \begin{pmatrix} 0 & m_{12,1} \\ m_{21,1} & 0 \\ \end{pmatrix} u_{t-1}\end{split}\]

In this specification there are 7 free parameters:

\[\gamma = (α_{21,0}, α_{11,1}, α_{21,1}, α_{12,1}, α_{22,1}, m_{21,1}, m_{12,1})'\]

The first stage and the second stage results are shown. From the second stage results, we see that (to two decimals) the estimate is

\[\hat{\gamma} = (-0.10, 0.13, -0.05, -0.62, 0.12, 0.20, 0.91)'\]

from numpy import *
from pyimsl.stat.vectorAutoregression import vectorAutoregression
from pyimsl.stat.regressionSummary import regressionSummary
from pyimsl.stat.writeMatrix import writeMatrix

colLabels = ["id", "coef", "SE", "t-stat", "p-value"]
y = array([0.143280117400598, 0.691517079320758, -1.054002748442,
           1.89368500251305, -0.595435384283415, -1.53385197609914,
           -0.450087415747451, -0.522758296071267, 0.338719815948114,
           1.73890633300759, -0.0420130132503371, -0.00933890926865466,
           -0.939527210947825, -0.279211957506804, 0.985111509818102,
           -0.185903912738057, -0.779058630583496, -1.52829111438157,
           -0.0444517646926054, -0.605935345974391, 1.17870357395841,
           0.979822006409954, -1.75509211790604, -1.02617250037744,
           -2.64443752185661, -0.405444884498378, -0.558146570012133,
           1.17006139568333, -0.14877513906561, 1.67436399145195,
           1.21251151094695, -0.236411746432856, -0.319260279201159,
           1.53774676549506, -0.798919508505848, -1.25907348772775,
           -0.43488363747126, 1.18754392780486, 2.49394567166528,
           -0.505392075680617, -0.939902530453777, -1.3000118234638,
           0.308365204823071, -0.0346715133254558, 0.155821836363299,
           1.53865066350577, -0.446013548645569, 0.421382795732249,
           -0.810472750765929, -0.475790066827614, -1.21547965787564,
           0.873092852598299, 0.314687304446453, -0.166494291509063,
           1.20846773815425, -0.21319122737697, 0.885697825416125,
           1.06749862455588, -0.417811475765902, 1.81350258917296,
           1.06312903931625, -0.357098401483029, -2.54962241395723,
           1.58241298127273, -0.445333714405381, -1.54921054521057,
           0.763932954657703, -0.459132068443737, -0.135020632775658,
           0.768987806710127, -2.21000914520305, -0.416578967811937,
           -0.787803924930718, -0.381994679294779, 0.651886558080692,
           -0.296275077784937, -1.34100151116192, -0.695511844572305,
           -1.38111408367782, -0.453263483406621, 0.546836142779091,
           1.35617195687341, 1.47675672302528, -0.879275538764735,
           1.33724523036386, -1.19560541543992, -0.355724070126155,
           -1.35440645398376, 0.215557787562229, -0.0705280128055462,
           2.64174061572298, 0.622022977819918, 1.85193603585307,
           1.1425463214074, 1.47034707550646, -0.274392879035261,
           -0.0256438865664471, -1.32768501270967, 1.4002259906097,
           -1.13393234222336, 0.420685154939234, -1.14369662556446,
           -0.229525100642792, -1.14556440309372, -0.0726173507068031,
           0.537003424016424, 2.21081287812981, -0.329847605902733,
           0.658135565199115, 1.25161597712012, 0.324279854980608,
           -0.122777642651712, -0.746398274242844, -0.510384971856952,
           -1.52397549713935, -0.590932096161281, 0.351879848132452,
           -0.486815223667361, -0.582122931146207, -0.129708090826525,
           -1.83261132721672, -0.438855077878885, 0.160891671954672,
           0.130903505342111, -1.0930038744212, 0.858667240391389,
           -0.650140333935879, -0.192590383440759, -0.495902099869389,
           -0.0970274914548782, 0.271597266914655, -1.25629606008009,
           -1.82411099200514, -0.0862331652538604, -1.48736902275701,
           -0.702589236231933, 1.66371111576656, 0.260453048198016,
           0.472465295167692, 2.03959666287312, -1.47220802239244,
           0.584452376204567, 0.29538916638251, -0.424471774108761,
           1.35117053520231, 0.792966672512426, 0.559666965721712,
           1.03877148575442, 0.32764651319845, 0.792431599069095,
           1.79713629328279, 2.53306185903747, 0.382061987152509,
           -0.55974023663989, 0.261351966632211, 0.928359586004826,
           1.05805881312766, -0.448798293155081, -2.8433140252059,
           -1.29380365284521, 1.60167210548413, -0.58790657908656,
           -0.0697276516437701, 0.669259446155372, -0.756109095074059,
           -1.04262502361173, -0.689533522981508, 0.322514092974764,
           -0.62456134593389, 0.343601164613668, 0.406496690190247,
           -0.579352431691941, -0.38067184267295, 1.15818332237678,
           -1.3763494217139, 1.07842256464695, -0.607885118048254,
           -0.551750338671028, -0.688013574614753, -0.66192239892944,
           0.840882344143739, 0.501181908666563, 0.810882707408453,
           -0.373132840815414, -1.53884108045858, -0.0475950419868607,
           -1.11456391432642, -1.39312192248506, 0.374292584707849,
           0.307055843720151, 0.0883771102062163, 1.51499635303431,
           0.544284404231116, 1.62863647405725, 0.666268752934375,
           3.15259591439161, 0.535584045927088, 0.438326104669433,
           1.25375087298954, 1.2784768691421])
y = reshape(y, (100, 2))

p = 1
q = 1
maxLag = 6
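# Restriction and constant arrays use the column-oriented ordering described
# for arModel/maModel: with a0 specified, the first K*K entries refer to A0
# and the next K*K entries to A1 (likewise M0 then M1 for the MA arrays).
# Entries of 1 or -1 mark parameters to be estimated; entries of 0 fix the
# parameter at the corresponding value in arConstants/maConstants.  Here the
# diagonal of A0 is fixed at 1 and its (1,2) element at 0, alpha_{21,0} and
# all four A1 elements are estimated, and the diagonal of M1 is fixed at 0
# while its two off-diagonal elements are estimated, giving the 7 free
# parameters of the model displayed above.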
arModel = [0, -1, 0, 0, 1, 1, 1, 1]
maModel = [0, -1, 0, 0, 0, 1, 1, 0]
arConstants = [1, 0, 0, 1, 0, 0, 0, 0]
maConstants = [1, 0, 0, 1, 0, 0, 0, 0]
stage1VarInfo = []
stage2VarmaInfo = []
coef = vectorAutoregression(y, p,
                            maLag=q,
                            a0=True,
                            arModel=arModel,
                            maModel=maModel,
                            maxLag=maxLag,
                            arConstants=arConstants,
                            maConstants=maConstants,
                            varInfo=stage1VarInfo,
                            varmaInfo=stage2VarmaInfo)


nCols = 2
n_coef1 = nCols * maxLag
coefTTests = []
for k in range(nCols):
    regressionSummary(stage1VarInfo[0],
                      indexRegression=k,
                      coefTTests=coefTTests)
    writeMatrix("* * * VAR Stage 1 Coefficients * * *\n",
                coefTTests, colLabels=colLabels)

# stage 2 restricted model
# estimates 1 A0 parameter, 4 A1 parameters, and 2 M1 parameters
nCoef2 = 7
coefTTests2 = []
regressionSummary(stage2VarmaInfo[0],
                  coefTTests=coefTTests2)
writeMatrix("* * * VARMA Stage 2 Coefficients * * *\n",
            coefTTests2, colLabels=colLabels)

Output

 
         * * * VAR Stage 1 Coefficients * * *

id         coef           SE       t-stat      p-value
 1        0.229        0.110        2.075        0.041
 2        0.292        0.125        2.345        0.021
 3       -0.274        0.110       -2.501        0.014
 4       -0.242        0.127       -1.896        0.062
 5        0.372        0.116        3.208        0.002
 6        0.280        0.128        2.198        0.031
 7       -0.027        0.116       -0.232        0.817
 8       -0.349        0.128       -2.722        0.008
 9        0.240        0.109        2.208        0.030
10        0.296        0.128        2.316        0.023
11       -0.145        0.110       -1.323        0.190
12       -0.144        0.126       -1.136        0.259
 
         * * * VAR Stage 1 Coefficients * * *

id         coef           SE       t-stat      p-value
 1        0.182        0.099        1.840        0.069
 2        0.177        0.112        1.577        0.119
 3       -0.287        0.099       -2.907        0.005
 4        0.064        0.115        0.561        0.577
 5        0.075        0.104        0.721        0.473
 6        0.066        0.115        0.579        0.564
 7       -0.111        0.104       -1.066        0.290
 8       -0.193        0.115       -1.675        0.098
 9       -0.066        0.098       -0.675        0.501
10        0.018        0.115        0.160        0.873
11        0.034        0.099        0.348        0.729
12        0.092        0.114        0.813        0.418
 
        * * * VARMA Stage 2 Coefficients * * *

id         coef           SE       t-stat      p-value
 1       -0.099        0.232       -0.429        0.668
 2        0.130        0.097        1.346        0.180
 3       -0.050        0.208       -0.240        0.811
 4       -0.618        0.242       -2.559        0.011
 5        0.123        0.115        1.068        0.287
 6        0.204        0.257        0.791        0.430
 7        0.914        0.270        3.388        0.001