garch¶
Computes estimates of the parameters of a GARCH(p, q) model.
Synopsis¶
garch (p, q, y, xguess)
Required Arguments¶
- int p (Input) - Number of GARCH parameters.
- int q (Input) - Number of ARCH parameters.
- float y[] (Input) - Array of length m containing the observed time series data.
- float xguess[] (Input) - Array of length p + q + 1 containing the initial values for the parameter array x[].
Return Value¶
The parameter array x[] of length p + q + 1 containing the estimated value of sigma squared, followed by the q ARCH parameters and then the p GARCH parameters.
Optional Arguments¶
- maxSigma, float (Input) - Value of the upper bound on the first element (sigma) of the array of returned estimated coefficients. Default: 10.
- a (Output) - Value of the log-likelihood function evaluated at the estimated parameter array x.
- aic (Output) - Value of the Akaike Information Criterion evaluated at the estimated parameter array x.
- var (Output) - Array of size (p + q + 1) × (p + q + 1) containing the variance-covariance matrix.
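A hedged sketch of a call requesting all of the optional outputs; it assumes p, q, y, and xguess are defined as in the Example below, and that each output argument follows the empty-list convention used there:

from pyimsl.stat.garch import garch

# Sketch only: p, q, y, xguess as in the Example below.
a = []      # will receive the log-likelihood value as a[0]
aic = []    # will receive the AIC value as aic[0]
var = []    # assumed to receive the variance-covariance matrix
x = garch(p, q, y, xguess, maxSigma=10.0, a=a, aic=aic, var=var)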
Description¶
The Generalized Autoregressive Conditional Heteroskedastic (GARCH) model for a time series \(\{w_t\}\) is defined as

\[w_t = z_t \sigma_t\]

\[\sigma_t^2 = \sigma^2 + \sum_{i=1}^{p} \beta_i \sigma_{t-i}^2 + \sum_{i=1}^{q} \alpha_i w_{t-i}^2\]

where the \(z_t\)'s are independent and identically distributed standard normal random variables, and

\[\sigma^2 > 0, \quad \beta_i \geq 0, \quad \alpha_i \geq 0, \quad \sum_{i=1}^{p} \beta_i + \sum_{i=1}^{q} \alpha_i < 1\]

The above model is denoted GARCH(p, q). The \(\beta_i\) and \(\alpha_i\) coefficients are referred to as the GARCH and ARCH coefficients, respectively. When \(\beta_i = 0\), \(i = 1, 2, \ldots, p\), the model reduces to ARCH(q), which was proposed by Engle (1982). The nonnegativity conditions on the parameters imply a nonnegative variance, and the condition on the sum of the \(\beta_i\)'s and \(\alpha_i\)'s is required for wide-sense stationarity.
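The following plain-Python sketch of the conditional variance recursion may help make the roles of the ARCH and GARCH terms concrete; it treats pre-sample terms as zero, which is an illustrative assumption rather than the library's initialization:

def garch_conditional_variances(w, sigma2, alpha, beta):
    # Sketch of the GARCH(p, q) variance recursion:
    #   sigma_t^2 = sigma^2 + sum_j beta_j * sigma_{t-j}^2
    #                       + sum_j alpha_j * w_{t-j}^2
    # Pre-sample terms are treated as zero here (illustrative assumption).
    p, q = len(beta), len(alpha)
    h = []                               # h[t] holds sigma_t^2
    for t in range(len(w)):
        s = sigma2
        for j in range(1, q + 1):        # ARCH part
            if t - j >= 0:
                s += alpha[j - 1] * w[t - j] ** 2
        for j in range(1, p + 1):        # GARCH part
            if t - j >= 0:
                s += beta[j - 1] * h[t - j]
        h.append(s)
    return h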
In the empirical analysis of observed data, GARCH(1,1) or GARCH(1,2) models have often been found to account appropriately for conditional heteroskedasticity (Palm 1996). This finding is similar to linear time series analysis based on ARMA models.
It is important to note that for the above models, positive and negative past values have a symmetric impact on the conditional variance. In practice, many series may have a strong asymmetric influence on the conditional variance. To take this phenomenon into account, Nelson (1991) put forward the exponential GARCH (EGARCH) model. Lai (1998) proposed and studied some properties of a general class of models that extend the linear relationship of the conditional variance in ARCH and GARCH in a nonlinear fashion.
The maximum likelihood method is used to estimate the parameters of GARCH(p, q). The log-likelihood of the model for the observed series \(\{w_t\}\) of length m is

\[\log(L) = -\frac{m}{2}\log(2\pi) - \frac{1}{2}\sum_{t=1}^{m}\frac{w_t^2}{\sigma_t^2} - \frac{1}{2}\sum_{t=1}^{m}\log\sigma_t^2\]

Thus \(\log(L)\) is maximized subject to the constraints on the \(\alpha_i\), \(\beta_i\), and \(\sigma\).
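A minimal sketch of this log-likelihood, assuming the conditional variances \(\sigma_t^2\) have already been computed (for example with the recursion sketched above):

from math import log, pi

def garch_log_likelihood(w, h):
    # w: observed series w_1, ..., w_m; h: conditional variances sigma_t^2
    # log(L) = -(m/2) log(2 pi) - (1/2) sum w_t^2 / sigma_t^2
    #          - (1/2) sum log(sigma_t^2)
    m = len(w)
    return (-0.5 * m * log(2.0 * pi)
            - 0.5 * sum(wt * wt / ht for wt, ht in zip(w, h))
            - 0.5 * sum(log(ht) for ht in h))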
In this model, if \(q=0\), the GARCH model is singular since the estimated Hessian matrix is singular.
The initial values of the parameter vector x entered in the vector xguess must satisfy certain constraints. The first element of xguess refers to \(\sigma^2\) and must be greater than zero and less than maxSigma. The remaining p + q initial values must each be greater than or equal to zero and sum to a value less than one. To guarantee stationarity in model fitting,

\[\sum_{i=1}^{p+q} x_i < 1\]

is checked internally. The initial values should be selected from values between zero and one.
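A hedged sketch of these constraints as a simple check on xguess (illustrative only; garch performs its own validation internally):

def xguess_is_valid(xguess, p, q, max_sigma=10.0):
    # First element: 0 < sigma^2 < maxSigma (default 10).
    if not 0.0 < xguess[0] < max_sigma:
        return False
    # Remaining p + q values: each nonnegative and summing to less than one.
    rest = xguess[1:p + q + 1]
    return all(v >= 0.0 for v in rest) and sum(rest) < 1.0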
AIC is computed by

\[\mathit{AIC} = -2\log(L) + 2(p + q + 1)\]

where \(\log(L)\) is the value of the log-likelihood function evaluated at the estimated parameters.
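For instance, a one-line restatement of this formula, with p + q + 1 counting all estimated parameters:

def akaike_information_criterion(log_likelihood, p, q):
    # AIC = -2 log(L) + 2 (p + q + 1)
    return -2.0 * log_likelihood + 2.0 * (p + q + 1)

With the values from the Example below (log(L) ≈ -2707.0724, p = 2, q = 1), this gives approximately 5422.14, matching the reported AIC.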
Statistical inferences can be performed outside the function garch based on the output of the log-likelihood function (a), the Akaike Information Criterion (aic), and the variance-covariance matrix (var).
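For instance, approximate standard errors and z-statistics can be formed from the diagonal of the variance-covariance matrix; a sketch, assuming var holds the (p + q + 1) × (p + q + 1) matrix ordered like the returned parameter array:

from math import sqrt

def approximate_z_statistics(x, cov):
    # x:   estimated parameters (sigma squared, ARCH, then GARCH coefficients)
    # cov: (p + q + 1) x (p + q + 1) variance-covariance matrix
    standard_errors = [sqrt(cov[i][i]) for i in range(len(x))]
    return [xi / se for xi, se in zip(x, standard_errors)]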
Example¶
The data for this example are generated to follow a GARCH(p, q) process by using a random number generation function sgarch. The data set is analyzed and estimates of sigma, the ARCH parameters, and the GARCH parameters are returned. The values of the log-likelihood function and the Akaike Information Criterion are returned through the optional arguments a and aic.
from __future__ import print_function
from numpy import *
from pyimsl.stat.garch import garch
from pyimsl.stat.randomNormal import randomNormal
from pyimsl.stat.randomSeedSet import randomSeedSet
def sgarch(p, q, m, x, y, z, y0, sigma):
    # Simulate a GARCH(p, q) series: generate m + 1000 observations with
    # parameters x and keep the last m of them in y.
    z = randomNormal(m + 1000)
    l = max(p, q)
    l = max(l, 1)
    for i in range(0, l):
        y0[i] = z[i] * x[0]

    # Compute the initial value of sigma
    s3 = 0.0
    if max(p, q) >= 1:
        for i in range(1, p + q + 1):
            s3 += x[i]
    for i in range(0, l):
        sigma[i] = x[0] / (1.0 - s3)

    for i in range(l, m + 1000):
        s1 = 0.0
        s2 = 0.0
        if q >= 1:
            for j in range(0, q):
                s1 += x[j + 1] * y0[i - j - 1] * y0[i - j - 1]
        if p >= 1:
            for j in range(0, p):
                s2 += x[q + 1 + j] * sigma[i - j - 1]
        sigma[i] = x[0] + s1 + s2
        y0[i] = z[i] * sqrt(sigma[i])

    # Discard the first 1000 simulated observations
    for i in range(0, m):
        y[i] = y0[1000 + i]

randomSeedSet(182198625)
m = 1000
p = 2
q = 1
n = p + q + 1
wk1 = empty(m + 1000)
wk2 = empty(m + 1000)
wk3 = empty(m + 1000)
x = empty(n)
xguess = empty(n)
y = empty(m)
a = []
aic = []
x[0] = 1.3
x[1] = .2
x[2] = .3
x[3] = .4
xguess[0] = 1.0
xguess[1] = .1
xguess[2] = .2
xguess[3] = .3
sgarch(p, q, m, x, y, wk1, wk2, wk3)
result = garch(p, q, y, xguess,
               a=a,
               aic=aic)
print("Sigma estimate is\t%11.4f" % result[0])
print("ARCH(1) estimate is\t%11.4f" % result[1])
print("GARCH(1) estimate is\t%11.4f" % result[2])
print("GARCH(2) estimate is\t%11.4f" % result[3])
print("\nLog-likelihood function value is\t%11.4f" % a[0])
print("Akaike Information Criterion value is\t%11.4f" % aic[0])
Output¶
Sigma estimate is 1.6916
ARCH(1) estimate is 0.2450
GARCH(1) estimate is 0.3372
GARCH(2) estimate is 0.3096
Log-likelihood function value is -2707.0724
Akaike Information Criterion value is 5422.1447