hypothesisPartial¶
Constructs an equivalent completely testable multivariate general linear hypothesis HβU=G from a partially testable hypothesis HpβU=Gp.
Synopsis¶
hypothesisPartial (regressionInfo, hp)
Required Arguments¶
structure regressionInfo (Input)
    A structure containing information about the regression fit. See function regression.
float hp[[]] (Input)
    The Hp array of size nhp by nCoefficients, with each row corresponding to a row in the hypothesis and containing the constants that specify a linear combination of the regression coefficients. Here, nCoefficients is the number of coefficients in the fitted regression model.
Return Value¶
Number of rows in the completely testable hypothesis, nh. This value is also the degrees of freedom for the hypothesis. The value nh classifies the hypothesis HpβU = Gp as nontestable (nh = 0), partially testable (0 < nh < rankHp), or completely testable (0 < nh = rankHp), where rankHp is the rank of Hp (see keyword rankHp).
Optional Arguments¶
gp, float[] (Input)
    Array of size nhp by nu containing the Gp matrix, the null hypothesis values. By default, each value of Gp is equal to 0.
u, int nu, float u[] (Input)
    Argument nu is the number of linear combinations of the dependent variables to be considered. The value nu must be greater than 0 and less than or equal to nDependent. Argument u contains the nDependent by nu U matrix for the test HpβU = Gp. This argument is not referenced by hypothesisPartial and is included only for consistency with functions hypothesisScph and hypothesisTest. A dummy array of length 1 may be substituted for this argument.
    Default: nu = nDependent and u is the identity matrix.
rankHp, int (Output)
    Rank of Hp.
hMatrix, float h (Output)
    The array of size nhp by nParameters containing the H matrix. Each row of h corresponds to a row in the completely testable hypothesis and contains the constants that specify an estimable linear combination of the regression coefficients.
g (Output)
    The array of size nhp by nDependent containing the G matrix. The elements of g contain the null hypothesis values for the completely testable hypothesis.
Description¶
Once a general linear model y=Xβ+ε is fitted,
particular hypothesis tests are frequently of interest. If the matrix of
regressors X is not full rank (as evidenced by the fact that some diagonal
elements of the R matrix output from the fit are equal to zero), methods
that use the results of the fitted model to compute the hypothesis sum of
squares (see function hypothesisScph) require
specification in the hypothesis of only linear combinations of the regression
parameters that are estimable. A linear combination of regression parameters
cTβ is estimable if there exists some vector a such that
cT=aTX, i.e., cT is in the space spanned by the rows of
X. For a further discussion of estimable functions, see Maindonald (1984,
pp. 166−168) and Searle (1971, pp. 180−188). Function hypothesisPartial
is only useful in the case of non-full rank regression models, i.e., when the
problem of estimability arises.
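The estimability condition above can be checked directly: cᵀβ is estimable exactly when cᵀ lies in the row space of X, i.e., when appending cᵀ to X does not increase its rank. The following sketch (illustrative NumPy code, not part of the library; the design matrix is a hypothetical rank-deficient one-way ANOVA design) makes that concrete:

```python
import numpy as np

# Hypothetical rank-deficient design: intercept plus two group
# indicators (3 columns, rank 2).
X = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [1., 0., 1.]])

def is_estimable(c, X):
    """True if c^T beta is estimable, i.e. c^T is in the row space of X
    (stacking c onto X leaves the rank unchanged)."""
    return np.linalg.matrix_rank(np.vstack([X, c])) == np.linalg.matrix_rank(X)

print(is_estimable(np.array([1., 1., 0.]), X))   # mu + alpha_1 -> True
print(is_estimable(np.array([0., 1., 0.]), X))   # alpha_1 alone -> False
print(is_estimable(np.array([0., 1., -1.]), X))  # alpha_1 - alpha_2 -> True
```

Here the individual effect α₁ is not estimable, but the contrast α₁ − α₂ is, which is exactly the situation hypothesisPartial is designed to handle.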
Peixoto (1986) noted that the customary definition of a testable hypothesis in the context of a general linear hypothesis test Hβ = g is overly restrictive. He extended the notion of a testable hypothesis (a hypothesis composed of estimable functions of the regression parameters) to include partially testable and completely testable hypotheses. A hypothesis Hβ = g is partially testable if the intersection of the row space of H (denoted by ℜ(H)) and the row space of X (ℜ(X)) is not essentially empty and is a proper subset of ℜ(H), i.e., {0} ⊂ ℜ(H) ∩ ℜ(X) ⊂ ℜ(H). A hypothesis Hβ = g is completely testable if {0} ⊂ ℜ(H) and ℜ(H) ⊆ ℜ(X).
Peixoto also demonstrated a method for converting a partially testable hypothesis to one that is completely testable, so that the usual method for obtaining sums of squares for the hypothesis from the results of the fitted model can be used. The method replaces Hp in the partially testable hypothesis Hpβ = gp by a matrix H whose rows are a basis for the intersection of the row space of Hp and the row space of X. A corresponding conversion of the null hypothesis values from gp to g is also made. A sum of squares for the completely testable hypothesis can then be computed (see function hypothesisScph). The sum of squares computed for the hypothesis Hβ = g equals the difference in the error sums of squares from two fitted models: the restricted model with the partially testable hypothesis Hpβ = gp and the unrestricted model.
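Peixoto's three-way classification can be illustrated with the dimension identity dim(ℜ(Hp) ∩ ℜ(X)) = rank(Hp) + rank(X) − rank([Hp; X]). The sketch below (illustrative code, not the library's algorithm; the matrices are the one-way ANOVA design and hypothesis from the example later in this page) classifies a hypothesis this way:

```python
import numpy as np

# One-way ANOVA design (rank 2) and the hypothesis alpha_1 = g1, alpha_2 = g2.
X = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [1., 0., 1.]])
Hp = np.array([[0., 1., 0.],
               [0., 0., 1.]])

# dim of the intersection of the two row spaces:
# dim(R(Hp) ∩ R(X)) = rank(Hp) + rank(X) - rank([Hp; X])
rank_hp = np.linalg.matrix_rank(Hp)
nh = (rank_hp + np.linalg.matrix_rank(X)
      - np.linalg.matrix_rank(np.vstack([Hp, X])))

if nh == 0:
    label = "nontestable"
elif nh < rank_hp:
    label = "partially testable"
else:
    label = "completely testable"
print(nh, label)   # 1 partially testable
```

Only a one-dimensional piece of the two-dimensional hypothesis (the contrast α₁ − α₂) is estimable, so the hypothesis is partially testable.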
For the general case of the multivariate model Y=Xβ+ε
with possible linear equality restrictions on the regression parameters,
hypothesisPartial
converts the partially testable hypothesis
Hpβ=gp to a completely testable hypothesis HβU=G.
For the case of the linear model with linear equality restrictions, the
definitions of the estimable functions, nontestable hypothesis, partially
testable hypothesis, and completely testable hypothesis are similar to those
previously given for the unrestricted model with the exception that
ℜ(X) is replaced by ℜ(R), where R is the upper triangular matrix based on the linear equality restrictions. The nonzero rows of R form a basis for the row space of the matrix (X^T, A^T)^T. The
rows of H form an orthonormal basis for the intersection of two
subspaces—the subspace spanned by the rows of Hp and the subspace
spanned by the rows of R. The algorithm used for computing the intersection
of these two subspaces is based on an algorithm for computing angles between
linear subspaces due to Björck and Golub (1973). (See also Golub and Van Loan
1983, pp. 429−430). The method is closely related to a canonical correlation
analysis discussed by Kennedy and Gentle (1980, pp. 561−565). The algorithm
is as follows:
1. Compute a QR factorization of H_p^T with column permutations so that

   H_p^T = Q_1 R_1 P_1^T

   Here, P_1 is the associated permutation matrix, which is also an orthogonal matrix. Determine the rank of H_p as the number of nonzero diagonal elements of R_1, say n_1. Partition Q_1 = (Q_11, Q_12) so that Q_11 is the first n_1 columns of Q_1. Set rankHp = n_1.
2. Compute a QR factorization of the transpose of the R matrix (input through regressionInfo) with column permutations so that

   R^T = Q_2 R_2 P_2^T

   Determine the rank of R as the number of nonzero diagonal elements of R_2, say n_2. Partition Q_2 = (Q_21, Q_22) so that Q_21 is the first n_2 columns of Q_2.
3. Form A = Q_11^T Q_21.
4. Compute the singular values of A,

   σ_1 ≥ σ_2 ≥ … ≥ σ_min(n_1, n_2)

   and the left singular vectors W of the singular value decomposition of A, so that

   W^T A V = diag(σ_1, …, σ_min(n_1, n_2))

   If σ_1 < 1, the dimension of the intersection of the two subspaces is s = 0. Otherwise, the dimension of the intersection is taken to be s if σ_s = 1 > σ_{s+1}. Set nh = s.
5. Let W_1 be the first s columns of W. Set H = (Q_11 W_1)^T.
6. Take R_11 to be the nhp by nhp matrix related to R_1 as follows: if nhp < nParameters, R_11 equals the first nhp rows of R_1; otherwise, R_11 contains R_1 in its first nParameters rows and zeros in the remaining rows. Compute a solution Z to the linear system

   R_11^T Z = P_1^T G_p

   If this linear system is declared inconsistent, an error message with error code equal to 2 is issued.
7. Partition Z^T = (Z_1^T, Z_2^T) so that Z_1 is the first n_1 rows of Z. Set G = W_1^T Z_1.
The degrees of freedom (nh) classify the hypothesis H_pβU = G_p as nontestable (nh = 0), partially testable (0 < nh < rankHp), or completely testable (0 < nh = rankHp).
For further details concerning the algorithm, see Sallas and Lionti (1988).
Example¶
A one-way analysis-of-variance model discussed by Peixoto (1986) is fitted to data. The model is

y_ij = μ + α_i + ɛ_ij,  i = 1, 2

The model is fitted using function regression. The partially testable hypothesis

H_0: α_1 = 5, α_2 = 3

is converted to a completely testable hypothesis.
from __future__ import print_function
from numpy import *
from pyimsl.stat.regression import regression
from pyimsl.stat.regressorsForGlm import regressorsForGlm
from pyimsl.stat.hypothesisPartial import hypothesisPartial
from pyimsl.stat.writeMatrix import writeMatrix
n_rows = 3
n_independent = 1
n_dependent = 1
n_parameters = 3
nhp = 2
n_class = 1
n_continuous = 0
z = array([1., 2., 2.]) # 3x1 array
y = array([17.3, 24.1, 26.3])
gp = array([5, 3])
hp = array([[0., 1., 0.], [0., 0., 1.]])
h = []
g = []
rank_hp = []
x = []
info = []
nreg = regressorsForGlm(z, n_class, n_continuous, regressors=x)
coefficients = regression(x, y, nDependent=n_dependent, regressionInfo=info)
info = info[0]
nh = hypothesisPartial(info, hp,
                       gp=gp,
                       hMatrix=h,
                       g=g,
                       rankHp=rank_hp)
if (nh == 0):
    print("Nontestable hypothesis")
elif (nh < rank_hp[0]):
    print("Partially Testable Hypothesis")
else:
    print("Completely Testable Hypothesis")
writeMatrix("H Matrix", h)
writeMatrix("G", g)
Output¶
***
*** Warning error issued from IMSL function regression:
*** The model is not full rank. There is not a unique least squares solution. The rank of the matrix of regressors is 2.
***
Partially Testable Hypothesis
H Matrix
1 2 3
0.0000 0.7071 -0.7071
G
1.414
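The reported G value can be sanity-checked by hand (an illustrative check, not library code): the H row (0, 0.7071, −0.7071) states the estimable combination (α₁ − α₂)/√2, and under the partially testable hypothesis α₁ = 5, α₂ = 3 its null value is (5 − 3)/√2.

```python
import numpy as np

# The completely testable hypothesis row, (alpha_1 - alpha_2)/sqrt(2).
h_row = np.array([0., 1., -1.]) / np.sqrt(2.0)

# Null values from the partially testable hypothesis alpha_1 = 5, alpha_2 = 3.
gp = np.array([5., 3.])

# Converted null value G = (5 - 3)/sqrt(2).
g = h_row[1] * gp[0] + h_row[2] * gp[1]
print(round(g, 3))   # 1.414
```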
Warning Errors¶
IMSLS_HYP_NOT_CONSISTENT
    The hypothesis is inconsistent within the computed tolerance.