hypothesisPartial

Constructs an equivalent completely testable multivariate general linear hypothesis HβU = G from a partially testable hypothesis H_pβU = G_p.

Synopsis

hypothesisPartial (regressionInfo, hp)

Required Argument

structure regressionInfo (Input)
A structure containing information about the regression fit. See function regression.
float hp[[]] (Input)
The H_p array of size nhp by nCoefficients, with each row corresponding to a row in the hypothesis and containing the constants that specify a linear combination of the regression coefficients. Here, nCoefficients is the number of coefficients in the fitted regression model.

Return Value

Number of rows in the completely testable hypothesis, nh. This value is also the degrees of freedom for the hypothesis. The value nh classifies the hypothesis H_pβU = G_p as nontestable (nh = 0), partially testable (0 < nh < rankHp), or completely testable (0 < nh = rankHp), where rankHp is the rank of H_p (see keyword rankHp).

Optional Arguments

gp, float[] (Input)
Array of size nhp by nu containing the G_p matrix, the null hypothesis values. By default, each value of G_p is equal to 0.
u, int nu, float u[] (Input)

Argument nu is the number of linear combinations of the dependent variables to be considered. The value nu must be greater than 0 and less than or equal to nDependent.

Argument u contains the nDependent by nu U matrix for the test H_pβU = G_p. This argument is not referenced by hypothesisPartial and is included only for consistency with functions hypothesisScph and hypothesisTest. A dummy array of length 1 may be substituted for this argument.

Default: nu = nDependent and u is the identity matrix.

rankHp (Output)
Rank of H_p.
hMatrix, float h (Output)
The array of size nh by nParameters containing the H matrix. Each row of h corresponds to a row in the completely testable hypothesis and contains the constants that specify an estimable linear combination of the regression coefficients.
g (Output)
The array of size nh by nDependent containing the G matrix. The elements of g contain the null hypothesis values for the completely testable hypothesis.

Description

Once a general linear model y = Xβ + ε is fitted, particular hypothesis tests are frequently of interest. If the matrix of regressors X is not full rank (as evidenced by the fact that some diagonal elements of the R matrix output from the fit are equal to zero), methods that use the results of the fitted model to compute the hypothesis sum of squares (see function hypothesisScph) require specification in the hypothesis of only linear combinations of the regression parameters that are estimable. A linear combination of regression parameters c^Tβ is estimable if there exists some vector a such that c^T = a^TX, i.e., c^T is in the space spanned by the rows of X. For a further discussion of estimable functions, see Maindonald (1984, pp. 166−168) and Searle (1971, pp. 180−188). Function hypothesisPartial is only useful in the case of non-full rank regression models, i.e., when the problem of estimability arises.
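
The estimability condition can be illustrated with a small NumPy check (a sketch, not part of the pyimsl API; the helper `is_estimable` is hypothetical): c^Tβ is estimable exactly when appending c^T to the rows of X leaves the rank unchanged.

```python
import numpy as np

# Non-full-rank one-way ANOVA design with columns (mu, alpha_1, alpha_2)
X = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [1., 0., 1.]])

def is_estimable(X, c, tol=1e-10):
    """c^T beta is estimable iff c^T lies in the row space of X,
    i.e. appending c^T as a row of X does not increase the rank."""
    r = np.linalg.matrix_rank(X, tol=tol)
    return np.linalg.matrix_rank(np.vstack([X, c]), tol=tol) == r

print(is_estimable(X, [1., 1., 0.]))   # mu + alpha_1 -> True
print(is_estimable(X, [0., 1., 0.]))   # alpha_1 alone -> False
print(is_estimable(X, [0., 1., -1.]))  # alpha_1 - alpha_2 -> True
```

The middle case is the situation hypothesisPartial addresses: a hypothesis on α_1 by itself is not estimable in this design, but the contrast α_1 − α_2 is.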

Peixoto (1986) noted that the customary definition of a testable hypothesis in the context of a general linear hypothesis test Hβ = g is overly restrictive. He extended the notion of a testable hypothesis (a hypothesis composed of estimable functions of the regression parameters) to include partially testable and completely testable hypotheses. A hypothesis Hβ = g is partially testable if the intersection of the row space of H (denoted by (H)) and the row space of X (denoted by (X)) is not essentially empty and is a proper subset of (H), i.e., {0} ⊂ (H) ∩ (X) ⊂ (H). A hypothesis Hβ = g is completely testable if {0} ⊂ (H) ⊆ (X). Peixoto also demonstrated a method for converting a partially testable hypothesis to one that is completely testable, so that the usual method for obtaining sums of squares for the hypothesis from the results of the fitted model can be used. The method replaces H_p in the partially testable hypothesis H_pβ = g_p by a matrix H whose rows form a basis for the intersection of the row space of H_p and the row space of X. A corresponding conversion of the null hypothesis values from g_p to g is also made. A sum of squares for the completely testable hypothesis can then be computed (see function hypothesisScph). The sum of squares computed for the hypothesis Hβ = g equals the difference in the error sums of squares of two fitted models: the restricted model with the partially testable hypothesis H_pβ = g_p and the unrestricted model.

For the general case of the multivariate model Y = Xβ + ε with possible linear equality restrictions on the regression parameters, hypothesisPartial converts the partially testable hypothesis H_pβU = G_p to a completely testable hypothesis HβU = G. For the case of the linear model with linear equality restrictions, the definitions of estimable functions, nontestable hypothesis, partially testable hypothesis, and completely testable hypothesis are similar to those previously given for the unrestricted model, with the exception that (X) is replaced by (R), where R is the upper triangular matrix based on the linear equality restrictions. The nonzero rows of R form a basis for the row space of the matrix (X^T, A^T)^T. The rows of H form an orthonormal basis for the intersection of two subspaces: the subspace spanned by the rows of H_p and the subspace spanned by the rows of R. The algorithm used for computing the intersection of these two subspaces is based on an algorithm for computing angles between linear subspaces due to Björck and Golub (1973). (See also Golub and Van Loan 1983, pp. 429−430.) The method is closely related to the canonical correlation analysis discussed by Kennedy and Gentle (1980, pp. 561−565). The algorithm is as follows:

  1. Compute a QR factorization of

    H_p^T

    with column permutations so that

    H_p^T = Q_1R_1P_1^T

    Here, P_1 is the associated permutation matrix, which is also an orthogonal matrix. Determine the rank of H_p as the number of nonzero diagonal elements of R_1, say n_1. Partition Q_1 = (Q_{11}, Q_{12}) so that Q_{11} is the first n_1 columns of Q_1. Set rankHp = n_1.

  2. Compute a QR factorization of the transpose of the R matrix (input through regressionInfo) with column permutations so that

    R^T = Q_2R_2P_2^T

    Determine the rank of R as the number of nonzero diagonal elements of R, say n_2. Partition Q_2 = (Q_{21}, Q_{22}) so that Q_{21} is the first n_2 columns of Q_2.

  3. Form

    A = Q_{11}^TQ_{21}
  4. Compute the singular values of A

    σ_1 ≥ σ_2 ≥ … ≥ σ_{min(n_1, n_2)}

    and the left singular vectors W of the singular value decomposition of A so that

    W^TAV = diag(σ_1, …, σ_{min(n_1, n_2)})

    If σ_1 < 1, then the dimension of the intersection of the two subspaces is s = 0. Otherwise, take the dimension of the intersection to be s if σ_s = 1 > σ_{s+1}. Set nh = s.

  5. Let W_1 be the first s columns of W. Set H = (Q_{11}W_1)^T.

  6. Let R_{11} be an nhp by nhp matrix related to R_1 as follows: If nhp < nParameters, R_{11} equals the first nhp rows of R_1. Otherwise, R_{11} contains R_1 in its first nParameters rows and zeros in the remaining rows. Compute a solution Z to the linear system

    R_{11}^TZ = P_1^TG_p

    If this linear system is declared inconsistent, an error message with error code equal to 2 is issued.

  7. Partition

    Z^T = (Z_1^T, Z_2^T)

    so that Z_1 is the first n_1 rows of Z. Set

    G = W_1^TZ_1

The degrees of freedom (nh) classify the hypothesis H_pβU = G_p as nontestable (nh = 0), partially testable (0 < nh < rankHp), or completely testable (0 < nh = rankHp).

For further details concerning the algorithm, see Sallas and Lionti (1988).

Example

A one-way analysis-of-variance model discussed by Peixoto (1986) is fitted to data. The model is

y_{ij} = μ + α_i + ε_{ij},       (i, j) = (1, 1), (2, 1), (2, 2)

The model is fitted using function regression. The partially testable hypothesis

H_0: α_1 = 5, α_2 = 3

is converted to a completely testable hypothesis.

from __future__ import print_function
from numpy import *
from pyimsl.stat.regression import regression
from pyimsl.stat.regressorsForGlm import regressorsForGlm
from pyimsl.stat.hypothesisPartial import hypothesisPartial
from pyimsl.stat.writeMatrix import writeMatrix

n_rows = 3
n_independent = 1
n_dependent = 1
n_parameters = 3
nhp = 2
n_class = 1
n_continuous = 0
z = array([1., 2., 2.])  # values of the classification variable
y = array([17.3, 24.1, 26.3])
gp = array([5., 3.])
hp = array([[0., 1., 0.], [0., 0., 1.]])
h = []
g = []
rank_hp = []
x = []
info = []

nreg = regressorsForGlm(z, n_class, n_continuous, regressors=x)

coefficients = regression(x, y, nDependent=n_dependent, regressionInfo=info)

info = info[0]
nh = hypothesisPartial(info, hp,
                       gp=gp,
                       hMatrix=h,
                       g=g,
                       rankHp=rank_hp)

if (nh == 0):
    print("Nontestable hypothesis")
elif (nh < rank_hp[0]):
    print("Partially Testable Hypothesis")
else:
    print("Completely Testable Hypothesis")

writeMatrix("H Matrix", h)
writeMatrix("G", g)

Output

***
*** Warning error issued from IMSL function regression:
*** The model is not full rank.  There is not a unique least squares solution.  The rank of the matrix of regressors is 2.
***
Partially Testable Hypothesis
 
              H Matrix
          1            2            3
     0.0000       0.7071      -0.7071
 
     G
      1.414
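
The listed output can be checked by hand: the testable part of H_0 is the contrast α_1 − α_2 = 5 − 3 = 2, and normalizing its coefficient vector (0, 1, −1) to unit length divides both sides by √2. A plain-Python check of the numbers above, not additional library behavior:

```python
import math

# Testable contrast recovered from H0 (alpha_1 = 5, alpha_2 = 3):
# alpha_1 - alpha_2 = 5 - 3, with coefficient vector (0, 1, -1)
row = (0.0, 1.0, -1.0)
norm = math.sqrt(sum(c * c for c in row))  # sqrt(2)
h_row = tuple(c / norm for c in row)       # H matrix row
g = (5.0 - 3.0) / norm                     # G value

print([round(c, 4) for c in h_row])  # [0.0, 0.7071, -0.7071]
print(round(g, 3))                   # 1.414
```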

Warning Errors

IMSLS_HYP_NOT_CONSISTENT The hypothesis is inconsistent within the computed tolerance.