hypothesisPartial

Constructs an equivalent completely testable multivariate general linear hypothesis \(H\beta U=G\) from a partially testable hypothesis \(H_p \beta U =G_p\).

Synopsis

hypothesisPartial (regressionInfo, hp)

Required Arguments

structure regressionInfo (Input)
A structure containing information about the regression fit. See function regression.
float hp[] (Input)
The \(H_p\) array of size nhp by nCoefficients. Each row corresponds to a row of the hypothesis and contains the constants that specify a linear combination of the regression coefficients. Here, nCoefficients is the number of coefficients in the fitted regression model.

Return Value

Number of rows in the completely testable hypothesis, nh. This value is also the degrees of freedom for the hypothesis. The value nh classifies the hypothesis \(H_p\beta U=G_p\) as nontestable (nh = 0), partially testable (0 < nh < rankHp), or completely testable (0 < nh = rankHp), where rankHp is the rank of \(H_p\) (see keyword rankHp).

Optional Arguments

gp, float[] (Input)
Array of size nhp by nu containing the \(G_p\) matrix, the null hypothesis values. By default, each value of \(G_p\) is equal to 0.
u, int nu, float u[] (Input)

Argument nu is the number of linear combinations of the dependent variables to be considered. The value nu must be greater than 0 and less than or equal to nDependent.

Argument u contains the nDependent by nu U matrix for the test \(H_p\beta U=G_p\). This argument is not referenced by hypothesisPartial and is included only for consistency with functions hypothesisScph and hypothesisTest. A dummy array of length 1 may be substituted for this argument.

Default: nu = nDependent and u is the identity matrix.

rankHp (Output)
Rank of \(H_p\).
hMatrix, float h[] (Output)
The array of size nhp by nParameters containing the H matrix. Each row of h corresponds to a row in the completely testable hypothesis and contains the constants that specify an estimable linear combination of the regression coefficients.
g (Output)
The array of size nhp by nDependent containing the G matrix. The elements of g contain the null hypothesis values for the completely testable hypothesis.

Description

Once a general linear model \(y=X\beta+\varepsilon\) is fitted, particular hypothesis tests are frequently of interest. If the matrix of regressors X is not full rank (as evidenced by the fact that some diagonal elements of the R matrix output from the fit are equal to zero), methods that use the results of the fitted model to compute the hypothesis sum of squares (see function hypothesisScph) require specification in the hypothesis of only linear combinations of the regression parameters that are estimable. A linear combination of regression parameters \(c^T\beta\) is estimable if there exists some vector a such that \(c^T=a^TX\), i.e., \(c^T\) is in the space spanned by the rows of X. For a further discussion of estimable functions, see Maindonald (1984, pp. 166−168) and Searle (1971, pp. 180−188). Function hypothesisPartial is only useful in the case of non-full rank regression models, i.e., when the problem of estimability arises.
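The estimability condition can be checked numerically: \(c^T\beta\) is estimable exactly when the system \(a^TX=c^T\) has a solution a. The following NumPy-only sketch (independent of pyimsl; the helper name is_estimable is illustrative) tests two linear combinations against the rank-deficient one-way ANOVA design used in the Example below.

```python
import numpy as np

# Rank-deficient one-way ANOVA design: columns are the intercept and
# indicators for the two treatment levels, so rank(X) = 2 < 3.
X = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [1., 0., 1.]])

def is_estimable(c, X, tol=1e-10):
    # c^T beta is estimable iff c^T = a^T X for some a,
    # i.e. c lies in the row space of X.
    a, *_ = np.linalg.lstsq(X.T, c, rcond=None)
    return bool(np.linalg.norm(X.T @ a - c) < tol)

print(is_estimable(np.array([0., 1., -1.]), X))  # alpha_1 - alpha_2: True
print(is_estimable(np.array([0., 1., 0.]), X))   # alpha_1 alone: False
```

A hypothesis row that fails this test cannot be used directly with hypothesisScph; hypothesisPartial exists precisely to replace such rows by estimable ones.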

Peixoto (1986) noted that the customary definition of a testable hypothesis in the context of a general linear hypothesis test \(H\beta=g\) is overly restrictive. He extended the notion of a testable hypothesis (a hypothesis composed of estimable functions of the regression parameters) to include partially testable and completely testable hypotheses. A hypothesis \(H\beta=g\) is partially testable if the intersection of the row space of H (denoted by \(\Re(H)\)) and the row space of X (denoted by \(\Re(X)\)) is not essentially empty and is a proper subset of \(\Re(H)\), i.e., \(\{0\} \subset \Re(H) \cap \Re(X) \subset \Re(H)\). A hypothesis \(H\beta=g\) is completely testable if \(\{0\} \subset \Re(H) \subseteq \Re(X)\), i.e., if \(\Re(H) \cap \Re(X) = \Re(H)\). Peixoto also demonstrated a method for converting a partially testable hypothesis to one that is completely testable, so that the usual method for obtaining sums of squares for the hypothesis from the results of the fitted model can be used. The method replaces \(H_p\) in the partially testable hypothesis \(H_p\beta=g_p\) by a matrix H whose rows form a basis for the intersection of the row space of \(H_p\) and the row space of X. A corresponding conversion of the null hypothesis values from \(g_p\) to g is also made. A sum of squares for the completely testable hypothesis can then be computed (see function hypothesisScph). The sum of squares computed for the hypothesis \(H\beta=g\) equals the difference in the error sums of squares from two fitted models: the restricted model with the partially testable hypothesis \(H_p\beta=g_p\) imposed, and the unrestricted model.
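This equivalence can be checked numerically with the data from the Example below. The plain-NumPy sketch that follows (not pyimsl; variable names are illustrative) computes the unrestricted error sum of squares, the error sum of squares under the restriction \(H_p\beta=g_p\), and the hypothesis sum of squares for the completely testable \(H\beta=g\) taken from the Example's output; the last equals the difference of the first two.

```python
import numpy as np

X = np.array([[1., 1., 0.],   # one-way ANOVA design from the Example
              [1., 0., 1.],
              [1., 0., 1.]])
y = np.array([17.3, 24.1, 26.3])
Hp = np.array([[0., 1., 0.],  # partially testable H_p beta = g_p
               [0., 0., 1.]])
gp = np.array([5., 3.])

# Unrestricted fit (minimum-norm least squares for the rank-deficient X)
b, *_ = np.linalg.lstsq(X, y, rcond=None)
sse_full = np.sum((y - X @ b) ** 2)

# Restricted fit: parametrize beta = b0 + N t, where Hp b0 = gp and
# N is a basis for the null space of Hp, then solve for t.
b0, *_ = np.linalg.lstsq(Hp, gp, rcond=None)
_, sv, vt = np.linalg.svd(Hp)
N = vt[(sv > 1e-10).sum():].T
t, *_ = np.linalg.lstsq(X @ N, y - X @ b0, rcond=None)
sse_restricted = np.sum((y - X @ (b0 + N @ t)) ** 2)

# Hypothesis sum of squares for the completely testable H beta = g,
# using the H and g shown in the Example's output.
H = np.array([[0., 2 ** -0.5, -2 ** -0.5]])
g = np.array([2 ** 0.5])
diff = H @ b - g                  # invariant, since H beta is estimable
mid = H @ np.linalg.pinv(X.T @ X) @ H.T
ss_hyp = float(diff @ np.linalg.solve(mid, diff))

print(round(sse_restricted - sse_full, 4), round(ss_hyp, 4))  # both 65.34
```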

For the general case of the multivariate model \(Y=X\beta+\varepsilon\) with possible linear equality restrictions on the regression parameters, hypothesisPartial converts the partially testable hypothesis \(H_p\beta U=G_p\) to a completely testable hypothesis \(H\beta U=G\). For the case of the linear model with linear equality restrictions, the definitions of estimable functions, nontestable hypothesis, partially testable hypothesis, and completely testable hypothesis are similar to those previously given for the unrestricted model, with the exception that \(\Re(X)\) is replaced by \(\Re(R)\), where R is the upper triangular matrix based on the linear equality restrictions. The nonzero rows of R form a basis for the row space of the matrix \((X^T,A^T)^T\). The rows of H form an orthonormal basis for the intersection of two subspaces: the subspace spanned by the rows of \(H_p\) and the subspace spanned by the rows of R. The algorithm used for computing the intersection of these two subspaces is based on an algorithm for computing angles between linear subspaces due to Björck and Golub (1973). (See also Golub and Van Loan 1983, pp. 429−430.) The method is closely related to the canonical correlation analysis discussed by Kennedy and Gentle (1980, pp. 561−565). The algorithm is as follows:

  1. Compute a QR factorization of

    \[H_p^T\]

    with column permutations so that

    \[H_p^T = Q_1 R_1 P_1^T\]

    Here, \(P_1\) is the associated permutation matrix, which is also an orthogonal matrix. Determine the rank of \(H_p\) as the number of nonzero diagonal elements of \(R_1\), for example \(n_1\). Partition \(Q_1=(Q_{11},Q_{12})\) so that \(Q_{11}\) is the first \(n_1\) columns of \(Q_1\). Set rankHp = \(n_1\).

  2. Compute a QR factorization of the transpose of the R matrix (input through regressionInfo) with column permutations so that

    \[R^T = Q_2 R_2 P_2^T\]

    Determine the rank of R from the number of nonzero diagonal elements of \(R_2\), for example \(n_2\). Partition \(Q_2=(Q_{21},Q_{22})\) so that \(Q_{21}\) is the first \(n_2\) columns of \(Q_2\).

  3. Form

    \[A = Q_{11}^{T} Q_{21}\]
  4. Compute the singular values of A

    \[\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_{\min\left(n_1, n_2\right)}\]

    and the left singular vectors W of the singular value decomposition of A so that

    \[W^TAV = \mathit{diag}\left(\sigma_1, \ldots, \sigma_{\min\left(n_1, n_2\right)}\right)\]

    If \(\sigma_1<1\), then the dimension of the intersection of the two subspaces is \(s=0\). Otherwise, take the dimension of the intersection to be the integer s for which \(\sigma_s=1>\sigma_{s+1}\). Set \(nh=s\).

  5. Let \(W_1\) be the first s columns of W. Set \(H=(Q_1 W_1)^T\).

  6. Let \(R_{11}\) be an nhp by nhp matrix related to \(R_1\) as follows: If nhp < nParameters, \(R_{11}\) equals the first nhp rows of \(R_1\). Otherwise, \(R_{11}\) contains \(R_1\) in its first nParameters rows and zeros in the remaining rows. Compute a solution Z to the linear system

    \[R_{11}^TZ = P_1^TG_p\]

    If this linear system is declared inconsistent, an error message with error code equal to 2 is issued.

  7. Partition

    \[Z^T = \left(Z_1^T, Z_2^T\right)\]

    so that \(Z_1\) is the first \(n_1\) rows of Z. Set

\[G = W_1^T Z_1\]

The degrees of freedom (nh) classify the hypothesis \(H_p \beta U=G_p\) as nontestable (\(nh=0\)), partially testable (0 < nh < rankHp), or completely testable (0 < nh = rankHp).

For further details concerning the algorithm, see Sallas and Lionti (1988).
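The steps above can be sketched in plain NumPy by working with orthonormal bases obtained from the SVD rather than pivoted QR factorizations (the spanned subspaces are the same; the helper name orth_rows is illustrative). Run on the one-way ANOVA data of the Example below, the sketch reproduces the H and G of the Example's output up to sign.

```python
import numpy as np

X = np.array([[1., 1., 0.],   # design matrix from the Example; here the
              [1., 0., 1.],   # rows of X play the role of the rows of R
              [1., 0., 1.]])
Hp = np.array([[0., 1., 0.],
               [0., 0., 1.]])
gp = np.array([5., 3.])

def orth_rows(M, tol=1e-10):
    # orthonormal basis (as columns) for the row space of M
    _, sv, vt = np.linalg.svd(M)
    return vt[: (sv > tol).sum()].T

Q1 = orth_rows(Hp)                  # steps 1-2: bases for the row spaces
Q2 = orth_rows(X)
A = Q1.T @ Q2                       # step 3
W, sig, _ = np.linalg.svd(A)        # step 4: cosines of principal angles
s = int((sig > 1 - 1e-10).sum())    # nh = dimension of the intersection
W1 = W[:, :s]
H = (Q1 @ W1).T                     # step 5

# Steps 6-7 (null-value conversion): each row of H lies in the row space
# of H_p, so H = C H_p for some C; recover C by least squares, g = C g_p.
C = np.linalg.lstsq(Hp.T, H.T, rcond=None)[0].T
g = C @ gp

print(s)                            # 1: one completely testable row
```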

Example

A one-way analysis-of-variance model discussed by Peixoto (1986) is fitted to data. The model is

\[y_{ij} = \mu + \alpha_i + \varepsilon_{ij} \qquad (i, j) = (1, 1), (2, 1), (2, 2)\]

The model is fitted using function regression. The partially testable hypothesis

\[\begin{split}H_0 : \begin{array}{l} \alpha_1 = 5 \\ \alpha_2 = 3 \\ \end{array}\end{split}\]

is converted to a completely testable hypothesis.

from __future__ import print_function
from numpy import array
from pyimsl.stat.regression import regression
from pyimsl.stat.regressorsForGlm import regressorsForGlm
from pyimsl.stat.hypothesisPartial import hypothesisPartial
from pyimsl.stat.writeMatrix import writeMatrix

n_rows = 3
n_independent = 1
n_dependent = 1
n_parameters = 3
nhp = 2
n_class = 1
n_continuous = 0
z = array([1., 2., 2.])  # 3x1 array
y = array([17.3, 24.1, 26.3])
gp = array([5., 3.])
hp = array([[0., 1., 0.], [0., 0., 1.]])
h = []
g = []
rank_hp = []
x = []
info = []

nreg = regressorsForGlm(z, n_class, n_continuous, regressors=x)

coefficients = regression(x, y, nDependent=n_dependent, regressionInfo=info)

info = info[0]
nh = hypothesisPartial(info, hp,
                       gp=gp,
                       hMatrix=h,
                       g=g,
                       rankHp=rank_hp)

if (nh == 0):
    print("Nontestable hypothesis")
elif (nh < rank_hp[0]):
    print("Partially Testable Hypothesis")
else:
    print("Completely Testable Hypothesis")

writeMatrix("H Matrix", h)
writeMatrix("G", g)

Output

***
*** Warning error issued from IMSL function regression:
*** The model is not full rank.  There is not a unique least squares solution.  The rank of the matrix of regressors is 2.
***
Partially Testable Hypothesis
 
              H Matrix
          1            2            3
     0.0000       0.7071      -0.7071
 
     G
      1.414

Warning Errors

IMSLS_HYP_NOT_CONSISTENT The hypothesis is inconsistent within the computed tolerance.