arma¶
Computes least-squares estimates of parameters for an ARMA model.
Synopsis¶
arma (z, p, q)
Required Arguments¶
- float z[] (Input)
  Array of length nObservations containing the observations.
- int p (Input)
  Number of autoregressive parameters.
- int q (Input)
  Number of moving average parameters.
Return Value¶
An array of length 1 + p + q with the estimated constant, AR, and MA parameters. If noConstant is specified, the 0-th element of this array is 0.0.
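As a short illustrative sketch (not part of the reference itself), the return array can be unpacked as follows, using the Wolfer sunspot series that appears in the examples later on this page:

```python
from numpy import empty
from pyimsl.stat.arma import arma
from pyimsl.stat.dataSets import dataSets

p, q = 2, 1
w = dataSets(2)                     # built-in Wolfer sunspot data, 1749-1924
z = empty(100)
for i in range(100):
    z[i] = w[21 + i][1]             # observations for 1770 through 1869

parameters = arma(z, p, q)          # array of length 1 + p + q
constant = parameters[0]            # estimated constant (0.0 under noConstant)
ar_est = parameters[1:1 + p]        # autoregressive estimates
ma_est = parameters[1 + p:]         # moving average estimates
```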
Optional Arguments¶
- noConstant or constant
  If noConstant is specified, the time series is not centered about its mean, meanEstimate. If constant, the default, is specified, the time series is centered about its mean.
- arLags, int[] (Input)
  Array of length p containing the order of the autoregressive parameters. The elements of arLags must be greater than or equal to 1.
  Default: arLags = [1, 2, …, p]
- maLags, int[] (Input)
  Array of length q containing the order of the moving average parameters. The elements of maLags must be greater than or equal to 1.
  Default: maLags = [1, 2, …, q]
- methodOfMoments or leastSquares
  If methodOfMoments is specified, the autoregressive and moving average parameters are estimated by a method of moments procedure. If leastSquares is specified, the autoregressive and moving average parameters are estimated by a least-squares procedure.
  Default: methodOfMoments is used.
- backcasting, int maxbc, float tolerance (Input)
  If backcasting is specified, maxbc is the maximum length of backcasting and must be greater than or equal to 0. Argument tolerance is the tolerance level used to determine convergence of the backcast algorithm. Typically, tolerance is set to a fraction of an estimate of the standard deviation of the time series.
  Default: maxbc = 10; tolerance = 0.01 × standard deviation of z.
- relativeError, float (Input)
  Stopping criterion for use in the nonlinear equation solver used in both the method of moments and least-squares algorithms.
  Default: relativeError = 100 × machine(4). See the documentation for function machine (Chapter 15, Utilities).
- maxIterations, int (Input)
  Maximum number of iterations allowed in the nonlinear equation solver used in both the method of moments and least-squares algorithms.
  Default: maxIterations = 200.
- meanEstimate, float (Input or Input/Output)
  On input, meanEstimate is an initial estimate of the mean of the time series z. On return, meanEstimate contains an update of the mean. If noConstant and leastSquares are specified, meanEstimate is not used in parameter estimation.
- initialEstimates, float ar[], float ma[] (Input)
  If specified, ar is an array of length p containing preliminary estimates of the autoregressive parameters, and ma is an array of length q containing preliminary estimates of the moving average parameters; otherwise, these are computed internally. initialEstimates is only applicable if leastSquares is also specified.
- residual (Output)
  An array of length na = (nObservations - max(arLags[i]) + maxbc) containing the residuals (including backcasts) at the final parameter estimate point in the first nObservations - max(arLags[i]) + nb locations, where nb is the number of values backcast.
- paramEstCov (Output)
  An array containing the variance-covariance matrix of the estimated ARMA parameters and (optionally) of the estimated mean of series z. The size of the array is np × np, where np = p + q + 1 if z is centered about z_mean, and np = p + q if z is not centered. The ordering of variables in paramEstCov is meanEstimate, ar, and ma. Argument np must be 1 or larger.
- autocov (Output)
  An array of length p + q + 2 containing the variance and autocovariances of the time series z. Argument autocov[0] contains the variance of the series z. Argument autocov[k] contains the autocovariance at lag k, where k = 0, 1, …, p + q + 1.
- ssResidual (Output)
  If specified, ssResidual contains the sum of squares of the random shock, ssResidual = residual[1]² + … + residual[na]², where na is equal to the number of residuals.
- varNoise (Output)
  If specified, varNoise contains the innovation variance of the series.
- armaInfo (Output)
  A structure that contains information necessary in the call to armaForecast.
Description¶
Function arma computes estimates of parameters for a nonseasonal ARMA model given a sample of observations, \{Z_t\}, for t = 1, 2, \ldots, n, where n = nObservations.

Two methods of parameter estimation, method of moments and least squares, are provided; the default is method of moments. The user can choose the method of moments algorithm with the optional argument methodOfMoments. The least-squares algorithm is used if the user specifies leastSquares. When the least-squares algorithm is used, the preliminary estimates are the method of moments estimates by default; alternatively, the user can supply initial estimates through the optional argument initialEstimates. The following table lists the appropriate optional arguments for the method of moments and least-squares algorithms:

| Method of Moments Only | Least Squares Only | Both Method of Moments and Least Squares |
|---|---|---|
| methodOfMoments | leastSquares | relativeError |
| | constant (or noConstant) | maxIterations |
| | arLags | meanEstimate |
| | maLags | autocov |
| | backcasting | armaInfo |
| | initialEstimates | |
| | residual | |
| | paramEstCov | |
| | ssResidual | |
Method of Moments Estimation¶
Suppose the time series \{Z_t\} is generated by an ARMA(p, q) model of the form
for t \in \{0, \pm 1, \pm 2, \ldots\}
Let \hat{\mu} = meanEstimate be the estimate of the mean μ of the time series \{Z_t\}, where \hat{\mu} equals the following:
The autocovariance function is estimated by
for k=0,1,\ldots,K, where K=p+q. Note that \hat{\sigma}(0) is an estimate of the sample variance.
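As an illustration of this step, here is a minimal NumPy sketch of the sample mean and autocovariance estimates, assuming the standard 1/n estimator described above; it is not the library's internal code.

```python
import numpy as np

def sample_autocovariances(z, K):
    """Sample mean and autocovariances sigma_hat(0), ..., sigma_hat(K).
    Illustrative sketch of the standard estimator."""
    z = np.asarray(z, dtype=float)
    n = len(z)
    mu_hat = z.mean()                       # estimate of the series mean
    d = z - mu_hat
    # sigma_hat(k) = (1/n) * sum_{t=1}^{n-k} (Z_t - mu_hat)(Z_{t+k} - mu_hat)
    acov = np.array([np.dot(d[:n - k], d[k:]) / n for k in range(K + 1)])
    return mu_hat, acov
```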
Given the sample autocovariances, the function computes the method of moments estimates of the autoregressive parameters using the extended Yule-Walker equations as follows:
where
The overall constant \theta_0 is estimated by the following:
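For illustration, a minimal sketch of the extended Yule-Walker step and of the constant estimate is given below; the matrix entries \sigma(|q + i - j|) and the relation \hat{\theta}_0 = \hat{\mu}(1 - \hat{\varphi}_1 - \ldots - \hat{\varphi}_p) follow the standard Box-Jenkins presentation and are stated here as assumptions, not as the library's implementation.

```python
import numpy as np

def extended_yule_walker(acov, p, q, mu_hat):
    """AR estimates and overall constant from sample autocovariances
    acov[0], ..., acov[p + q].  Illustrative sketch only."""
    # Sigma[i, j] = sigma_hat(|q + i - j|) for i, j = 1, ..., p
    Sigma = np.array([[acov[abs(q + i - j)] for j in range(1, p + 1)]
                      for i in range(1, p + 1)])
    rhs = np.array([acov[q + k] for k in range(1, p + 1)])   # sigma_hat(q+1 .. q+p)
    phi_hat = np.linalg.solve(Sigma, rhs)
    theta0_hat = mu_hat * (1.0 - phi_hat.sum())              # overall constant
    return phi_hat, theta0_hat
```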
The moving average parameters are estimated based on a system of nonlinear equations given K=p+q+1 autocovariances, \sigma(k) for k=1,\ldots,K, and p autoregressive parameters \varphi_i for i=1,\ldots,p.
Let Z'_t=\varphi(B)Z_t. The autocovariances of the derived moving average process Z'_t=\theta(B)A_t are estimated by the following relation:
The iterative procedure for determining the moving average parameters is based on the relation
where σ(k) denotes the autocovariance function of the original Z_t process.
Let \tau=\left( \tau_0,\tau_1,\ldots,\tau_q \right)^T and f=(f_0,f_1,\ldots,f_q)^T, where
and
Then, the value of \tau at the (i + 1)-th iteration is determined by the following:
The estimation procedure begins with the initial value
and terminates at iteration i when either \|f_i\| is less than relativeError or i equals maxIterations. The moving average parameter estimates are obtained from the final estimate of \tau by setting
The random shock variance is estimated by the following:
See Box and Jenkins (1976, pp. 498-500) for a description of a function that performs similar computations.
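The iteration just described can be sketched as a small Newton-type solver. The starting value \tau^{(0)} = (\sqrt{\sigma'(0)}, 0, \ldots, 0)^T, the sign convention \hat{\theta}_j = -\hat{\tau}_j / \hat{\tau}_0, and the finite-difference Jacobian below are assumptions chosen for the sketch, not the library's algorithm.

```python
import numpy as np

def ma_from_autocov(sigma_prime, q, rel_error=1e-10, max_iterations=200):
    """Moving average estimates and random shock variance from the
    autocovariances sigma_prime[0], ..., sigma_prime[q] of the derived
    MA(q) process.  Illustrative sketch only."""
    # f_k(tau) = sigma'(k) - sum_{j=0}^{q-k} tau_j * tau_{j+k},  k = 0, ..., q
    def f(tau):
        return np.array([sigma_prime[k] - np.dot(tau[:q + 1 - k], tau[k:])
                         for k in range(q + 1)])

    tau = np.zeros(q + 1)
    tau[0] = np.sqrt(sigma_prime[0])          # assumed starting value
    for _ in range(max_iterations):
        fi = f(tau)
        if np.linalg.norm(fi) < rel_error:
            break
        # Newton step with a finite-difference Jacobian (for brevity)
        J = np.empty((q + 1, q + 1))
        h = 1e-6
        for j in range(q + 1):
            step = np.zeros(q + 1)
            step[j] = h
            J[:, j] = (f(tau + step) - fi) / h
        tau = tau - np.linalg.solve(J, fi)
    theta_hat = -tau[1:] / tau[0]             # MA estimates (assumed sign convention)
    var_a = tau[0] ** 2                       # random shock variance
    return theta_hat, var_a
```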
Least-squares Estimation¶
Suppose the time series \{Z_t\} is generated by a nonseasonal ARMA model of the form,
where B is the backward shift operator, μ is the mean of Z_t, and
with p autoregressive and q moving average parameters. Without loss of generality, the following is assumed:
so that the nonseasonal ARMA model is of order (p',q'), where p'=l_\varphi(p) and q'=l_\theta(q). Note that the usual hierarchical model assumes the following:
Consider the sum-of-squares function
where
and T is the backward origin. The random shocks \{A_t\} are assumed to be independent and identically distributed N(0, \sigma_A^2) random variables. Hence, the log-likelihood function is given by
where f(\mu,\varphi,\theta) is a function of μ, φ, and θ.
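For reference, the unconditional log-likelihood has the standard Box-Jenkins form (stated here as an assumption, using the sum-of-squares function S introduced above):

\ell(\mu, \varphi, \theta, \sigma_A^2) = f(\mu, \varphi, \theta) - n \ln \sigma_A - \frac{S(\mu, \varphi, \theta)}{2 \sigma_A^2}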
For T=0, the log-likelihood function is conditional on the past values of both Z_t and A_t required to initialize the model. The method of selecting these initial values usually introduces transient bias into the model (Box and Jenkins 1976, pp. 210-211). For T=\infty, this dependency vanishes, and the estimation problem concerns maximization of the unconditional log-likelihood function. Box and Jenkins (1976, p. 213) argue that
dominates
The parameter estimates that minimize the sum-of-squares function are called least-squares estimates. For large n, the unconditional least-squares estimates are approximately equal to the maximum likelihood estimates.
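As a concrete illustration of the criterion being minimized, here is a minimal conditional (T = 0, no backcasting) sum-of-squares sketch; it assumes the sign convention \theta(B) = 1 - \theta_1 B - \ldots - \theta_q B^q used above and is not the library's backcasting algorithm.

```python
import numpy as np

def conditional_sum_of_squares(z, phi, theta, mu=None):
    """Conditional (T = 0) sum of squares for an ARMA(p, q) model.
    Illustrative sketch; unknown initial shocks are simply set to zero."""
    z = np.asarray(z, dtype=float)
    mu_hat = z.mean() if mu is None else mu
    zt = z - mu_hat                       # centered series
    p, q = len(phi), len(theta)
    a = np.zeros_like(zt)                 # random shocks (residuals)
    start = max(p, q)
    for t in range(start, len(zt)):
        ar_part = sum(phi[i] * zt[t - 1 - i] for i in range(p))
        ma_part = sum(theta[j] * a[t - 1 - j] for j in range(q))
        # phi(B)(Z_t - mu) = theta(B)A_t  =>  A_t = ztilde_t - AR part + MA part
        a[t] = zt[t] - ar_part + ma_part
    return np.sum(a[start:] ** 2)
```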
In practice, a finite value of T will enable sufficient approximation of the unconditional sum-of-squares function. The values of \left[A_T\right] needed to compute the unconditional sum of squares are computed iteratively, with initial values of Z_t obtained by back forecasting. The residuals (including backcasts), the estimate of the random shock variance, and the covariance matrix of the final parameter estimates are also computed. ARIMA parameters can be computed by using difference together with arma.
Examples¶
Example 1¶
Consider the Wolfer Sunspot Data (Anderson 1971, p. 660), consisting of the number of sunspots observed each year from 1749 through 1924. The data set for this example consists of the number of sunspots observed from 1770 through 1869. The method of moments estimates are computed for the ARMA(2, 1) model
Z_t = \theta_0 + \varphi_1 Z_{t-1} + \varphi_2 Z_{t-2} + A_t - \theta_1 A_{t-1}
where the errors A_t are independently normally distributed with mean zero and variance \sigma_A^2.
```python
from __future__ import print_function
from numpy import empty
from pyimsl.stat.arma import arma
from pyimsl.stat.dataSets import dataSets

p = 2
q = 1
n_observations = 100
z = empty(100)
relative_error = 0.0
max_iterations = 0

# Wolfer sunspot numbers: data set 2 holds the 1749-1924 series;
# rows 21 through 120 (column 1) give the observations for 1770-1869.
w = dataSets(2)
for i in range(0, n_observations):
    z[i] = w[21 + i][1]

parameters = arma(z, p, q,
                  relativeError=relative_error,
                  maxIterations=max_iterations)

print("AR estimates are %11.4f and %11.4f." % (parameters[1], parameters[2]))
print("MA estimate is %11.4f." % parameters[3])
```
Output¶
AR estimates are 1.2443 and -0.5751.
MA estimate is -0.1241.
Example 2¶
The data for this example are the same as those for Example 1. Preliminary method of moments estimates are computed by default, and the method of least squares is used to find the final estimates.
```python
from __future__ import print_function
from numpy import empty
from pyimsl.stat.arma import arma
from pyimsl.stat.dataSets import dataSets

p = 2
q = 1
n_observations = 100
z = empty(100)

# Same Wolfer sunspot observations (1770-1869) as in Example 1.
w = dataSets(2)
for i in range(0, n_observations):
    z[i] = w[21 + i][1]

parameters = arma(z, p, q,
                  leastSquares=True)

print("AR estimates are %11.4f and %11.4f." % (parameters[1], parameters[2]))
print("MA estimate is %11.4f." % parameters[3])
```
Output¶
AR estimates are 1.5313 and -0.8944.
MA estimate is -0.1320.
Warning Errors¶
| IMSLS_LEAST_SQUARES_FAILED | Least-squares estimation of the parameters has failed to converge. The solution from the last iteration is returned. The estimates of the parameters at the last iteration may be used as new starting values. |
Fatal Errors¶
| IMSLS_TOO_MANY_CALLS | The number of calls to the function has exceeded "itmax"("n"+1) = %(i1). The user may try a new initial guess. |
| IMSLS_INCREASE_ERRREL | The bound for the relative error, "errrel" = %(r1), is too small. No further improvement in the approximate solution is possible. The user should increase "errrel". |
| IMSLS_NEW_INITIAL_GUESS | The iteration has not made good progress. The user may try a new initial guess. |