hypothesisScph

Computes the matrix of sums of squares and crossproducts for the multivariate general linear hypothesis \(H\beta U=G\) given the regression fit.

Synopsis

hypothesisScph (regressionInfo, h, dfh)

Required Arguments

structure regressionInfo (Input)
A structure containing information about the regression fit. See function regression.
float h[] (Input)
The H array of size nh by nCoefficients with each row corresponding to a row in the hypothesis and containing the constants that specify a linear combination of the regression coefficients. Here, nCoefficients is the number of coefficients in the fitted regression model.
float dfh (Output)
Degrees of freedom for the sums of squares and crossproducts matrix. This is equal to the rank of input matrix h.

Return Value

Array of size nu by nu containing the sums of squares and crossproducts attributable to the hypothesis.

Optional Arguments

g, float[] (Input)
Array of size nh by nu containing the G matrix, the null hypothesis values. By default, each value of G is equal to 0.
u, float[] (Input)

Argument u contains the nDependent by nu U matrix for the test \(H_p\beta U=G_p\), where nu is the number of linear combinations of the dependent variables to be considered. The value nu must be greater than 0 and less than or equal to nDependent.

Default: nu = nDependent and u is the identity matrix

Description

Function hypothesisScph computes the matrix of sums of squares and crossproducts for the general linear hypothesis \(H\beta U=G\) for the multivariate general linear model \(Y=X\beta+\varepsilon\).

The rows of H must be linear combinations of the rows of R, i.e., \(H\beta=G\) must be completely testable. If the hypothesis is not completely testable, function hypothesisPartial can be used to construct an equivalent completely testable hypothesis.

Computations are based on an algorithm discussed by Kennedy and Gentle (1980, p. 317) that is extended by Sallas and Lionti (1988) for multivariate non-full rank models with possible linear equality restrictions. The algorithm is as follows:

  1. Form \(W=H \hat{\beta} U-G\).

  2. Find C as the solution of \(R^TC=H^T\). If the equations are declared inconsistent within a computed tolerance, a warning error message is issued that the hypothesis is not completely testable.

  3. For all rows of R corresponding to restrictions, i.e., containing negative diagonal elements from a restricted least-squares fit, zero out the corresponding rows of C, i.e., form DC.

  4. Decompose DC using Householder transformations and column pivoting to yield a square, upper triangular matrix T with diagonal elements of nonincreasing magnitude and permutation matrix P such that

    \[\begin{split}\mathit{DCP} = Q \begin{bmatrix} T \\ 0 \end{bmatrix}\end{split}\]

    where Q is an orthogonal matrix.

  5. Determine the rank of T, say r. If \(t_{11}=0\), then \(r=0\). Otherwise, the rank of T is r if

    \[\left|t_{rr}\right| > \left|t_{11}\right| \varepsilon \geq \left|t_{r+1,r+1}\right|\]

    where \(\varepsilon\) = 10.0 × machine(4).

    Then, zero out all rows of T below r. Set the degrees of freedom for the hypothesis, dfh, to r.

  6. Find V as a solution to \(T^TV=P^TW\). If the equations are inconsistent, a warning error message is issued that the hypothesis is inconsistent within a computed tolerance, i.e., the linear system

    \[H\beta U = G\]
    \[A\beta = Z\]

    does not have a solution for \(\beta\).

    Form \(V^TV\), which is the required matrix of sums of squares and crossproducts, scph.

    In general, the two warning errors described above are serious user errors that require the user to correct the hypothesis before any meaningful sums of squares from this function can be computed. However, in some cases, the user may know the hypothesis is consistent and completely testable, but the checks in hypothesisScph are too tight. For this reason, hypothesisScph continues with the calculations.
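For the full-rank, unrestricted case (no restriction rows in R, so D is the identity and step 3 is a no-op), the six steps above can be sketched in plain NumPy/SciPy. This is an illustrative reimplementation under those assumptions, not the library's actual code; it uses the example data from this page with the default U (identity) and G (zeros).

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

# Example data from this page; the intercept column is added explicitly.
x = np.array([[7., 5., 6.], [2., -1., 6.], [7., 3., 5.],
              [-3., 1., 4.], [2., -1., 0.], [2., 1., 7.],
              [-3., -1., 3.], [2., 1., 1.], [2., 1., 4.]])
Y = np.array([[7., 1.], [-5., 4.], [6., 10.], [5., 5.], [5., -2.],
              [-2., 4.], [0., -6.], [8., 2.], [3., 0.]])
X = np.column_stack([np.ones(len(x)), x])

beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]  # fitted coefficients
R = np.linalg.qr(X)[1]                           # upper triangular factor, R^T R = X^T X

H = np.array([[0., 0., 0., 1.]])                 # hypothesis: last coefficient is zero
U = np.eye(Y.shape[1])                           # default U (identity)
G = np.zeros((H.shape[0], U.shape[1]))           # default G (zeros)

W = H @ beta_hat @ U - G                         # step 1
C = solve_triangular(R, H.T, trans='T')          # step 2: solve R^T C = H^T
# step 3 is a no-op here: no linear equality restrictions, so DC = C
Qc, Tfull, piv = qr(C, pivoting=True)            # step 4: C P = Qc [T; 0]
T = Tfull[:H.shape[0], :]                        # square upper triangular T
d = np.abs(np.diag(T))                           # step 5: rank of T from its diagonal
eps = 10.0 * np.finfo(float).eps
dfh = 0 if d[0] == 0.0 else int(np.sum(d > d[0] * eps))
V = solve_triangular(T[:dfh, :dfh], W[piv][:dfh], trans='T')  # step 6: T^T V = P^T W
scph = V.T @ V                                   # sums of squares and crossproducts

print(dfh)
print(scph)
```

For this one-row hypothesis the column pivoting is trivial, but the rank test in step 5 is still applied: because pivoting orders the diagonal of T by nonincreasing magnitude, the rank is the number of diagonal entries exceeding \(|t_{11}|\varepsilon\).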

    Function hypothesisScph gives a matrix of sums of squares and crossproducts that could also be obtained from separate fittings of the two models:

    \[Y^1 = X\beta^1 + \varepsilon^1 \quad (1)\]
    \[A\beta^1 = Z^1\]
    \[H\beta^1 = G\]

    and

    \[Y^1 = X\beta^1 + \varepsilon^1 \quad (2)\]
    \[A\beta^1 = Z^1\]

    where \(Y^1=YU\), \(\beta^1=\beta U\), \(\varepsilon^1=\varepsilon U\), and \(Z^1=ZU\). The error sum of squares and crossproducts matrix for (1) minus that for (2) is the matrix sum of squares and crossproducts output in scph. Note that this approach avoids the question of testability.
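This two-model formulation can be checked directly. For the hypothesis used in the example below (last coefficient zero, with the default U and G, and no restrictions A), imposing \(H\beta=0\) amounts to refitting with the last column of X dropped, so scph is the difference of the two residual sums of squares and crossproducts matrices. A plain NumPy sketch of that check, not the library's implementation:

```python
import numpy as np

# Example data from this page, with an explicit intercept column.
x = np.array([[7., 5., 6.], [2., -1., 6.], [7., 3., 5.],
              [-3., 1., 4.], [2., -1., 0.], [2., 1., 7.],
              [-3., -1., 3.], [2., 1., 1.], [2., 1., 4.]])
Y = np.array([[7., 1.], [-5., 4.], [6., 10.], [5., 5.], [5., -2.],
              [-2., 4.], [0., -6.], [8., 2.], [3., 0.]])
X_full = np.column_stack([np.ones(len(x)), x])  # model (2): hypothesis not imposed
X_reduced = X_full[:, :3]                       # model (1): H beta = 0, last column dropped

def error_sscp(X, Y):
    # Residual sums of squares and crossproducts: (Y - Xb)^T (Y - Xb)
    resid = Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]
    return resid.T @ resid

scph = error_sscp(X_reduced, Y) - error_sscp(X_full, Y)
print(scph)
```

The difference agrees with the matrix produced by hypothesisScph in the example below.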

Example

The data for this example are from Maindonald (1984, pp. 203−204). A multivariate regression model containing two dependent variables and three independent variables is fit using function regression and the results stored in the structure info. The sum of squares and crossproducts matrix, scph, is then computed by calling hypothesisScph for the test that the third independent variable is in the model (determined by the specification of h). The degrees of freedom for scph are also computed.

from __future__ import print_function
from numpy import array
from pyimsl.stat.regression import regression
from pyimsl.stat.hypothesisScph import hypothesisScph
from pyimsl.stat.writeMatrix import writeMatrix


x = array([
    [7.0, 5.0, 6.0],
    [2.0, -1.0, 6.0],
    [7.0, 3.0, 5.0],
    [-3.0, 1.0, 4.0],
    [2.0, -1.0, 0.0],
    [2.0, 1.0, 7.0],
    [-3.0, -1.0, 3.0],
    [2.0, 1.0, 1.0],
    [2.0, 1.0, 4.0]])
y = array([
    [7.0, 1.0],
    [-5.0, 4.0],
    [6.0, 10.0],
    [5.0, 5.0],
    [5.0, -2.0],
    [-2.0, 4.0],
    [0.0, -6.0],
    [8.0, 2.0],
    [3.0, 0.0]])
n_independent = 3
n_dependent = 2
h = array([[0, 0, 0, 1]])  # 1X4 array
info = []
dfh = []

coefficients = regression(x, y,
                          nDependent=n_dependent,
                          regressionInfo=info)

scph = hypothesisScph(info[0], h, dfh)

print("Degrees of Freedom Hypothesis = %4.0f" % (dfh[0]))

writeMatrix("Sum of Squares and Crossproducts", scph,
            noColLabels=True, noRowLabels=True)

Output

Degrees of Freedom Hypothesis =    1
 
Sum of Squares and Crossproducts
            100          -40
            -40           16

Warning Errors

IMSLS_HYP_NOT_TESTABLE The hypothesis is not completely testable within the computed tolerance. Each row of “h” must be a linear combination of the rows of “r”.
IMSLS_HYP_NOT_CONSISTENT The hypothesis is inconsistent within the computed tolerance.