Package com.imsl.stat

Class SelectionRegression

java.lang.Object
    com.imsl.stat.SelectionRegression
All Implemented Interfaces:
Serializable, Cloneable

public class SelectionRegression extends Object implements Serializable, Cloneable
Selects the best multiple linear regression models.

Class SelectionRegression finds the best subset regressions for a regression problem with three or more independent variables. Typically, the intercept is forced into all models and is not a candidate variable. In this case, a sum of squares and crossproducts matrix for the independent and dependent variables corrected for the mean is computed internally. Optionally, SelectionRegression supports user-calculated sum-of-squares and crossproducts matrices; see the description of the compute method.
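For example, a minimal usage sketch is shown below. The data values are illustrative, and it assumes only the constructor, setCriterionOption, and compute(double[][], double[]) described on this page, plus the R_SQUARED_CRITERION constant from the field summary:

    import com.imsl.stat.SelectionRegression;

    public class SelectionRegressionExample {
        public static void main(String[] args) throws Exception {
            // Illustrative data: 8 observations on 3 candidate variables.
            double[][] x = {
                {7.0, 26.0, 6.0}, {1.0, 29.0, 15.0}, {11.0, 56.0, 8.0},
                {11.0, 31.0, 8.0}, {7.0, 52.0, 6.0}, {11.0, 55.0, 9.0},
                {3.0, 71.0, 17.0}, {1.0, 31.0, 22.0}
            };
            double[] y = {78.5, 74.3, 104.3, 87.6, 95.9, 109.2, 102.7, 72.5};

            // Three candidate variables; the intercept is forced into all models.
            SelectionRegression selection = new SelectionRegression(3);
            selection.setCriterionOption(SelectionRegression.R_SQUARED_CRITERION);
            selection.compute(x, y);
        }
    }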

"Best" is defined by using one of the following three criteria:

  • \(R^2\) (in percent): $$R^2 = 100\left(1-\frac{{\mbox{SSE}}_p}{\mbox{SST}}\right)$$
  • \(R^2_a\) (adjusted \(R^2\)): $$R^2_a = 100\left[1-\left(\frac{n-1}{n-p}\right)\frac{{\mbox{SSE}}_p}{\mbox{SST}}\right]$$ Note that maximizing \(R^2_a\) is equivalent to minimizing the residual mean square: $$\frac{{\mbox{SSE}}_p}{n-p}$$
  • Mallows' \(C_p\) statistic: $$C_p = \frac{{\mbox{SSE}}_p}{s^2_k} + 2p - n$$

Here, n is the sum of the frequencies (or the number of rows in x if frequencies are not specified in the compute method), and \(\mbox{SST}\) is the total sum of squares. k is the number of candidate (independent) variables, given by the nCandidate argument of the SelectionRegression constructor. \({\mbox{SSE}}_p\) is the error sum of squares of a model containing p regression parameters, including \(\beta_0\) (that is, p - 1 of the k candidate variables). \(s^2_k\) is the error mean square of the model containing all k candidate variables. Hocking (1972) and Draper and Smith (1981, pp. 296-302) discuss these criteria.
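As a concrete restatement of the formulas above, the three criteria can be computed directly from \({\mbox{SSE}}_p\), \(\mbox{SST}\), \(s^2_k\), n, and p. The helper methods below are purely illustrative and are not part of SelectionRegression:

    // Illustrative helpers (not part of SelectionRegression) that evaluate
    // the three criteria from their definitions above.
    public final class Criteria {
        // R^2 in percent: 100 * (1 - SSE_p / SST)
        static double rSquared(double sseP, double sst) {
            return 100.0 * (1.0 - sseP / sst);
        }

        // Adjusted R^2 in percent: 100 * (1 - ((n - 1) / (n - p)) * SSE_p / SST)
        static double adjustedRSquared(double sseP, double sst, int n, int p) {
            return 100.0 * (1.0 - ((n - 1.0) / (n - p)) * sseP / sst);
        }

        // Mallows' C_p: SSE_p / s2k + 2 * p - n, where s2k is the error mean
        // square of the model containing all k candidate variables.
        static double mallowsCp(double sseP, double s2k, int n, int p) {
            return sseP / s2k + 2.0 * p - n;
        }
    }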

Class SelectionRegression is based on the algorithm of Furnival and Wilson (1974). For each possible subset size, the algorithm saves up to a maximum number of good candidate regressions; for details, see the method setMaximumGoodSaved(int). These saved regressions are then used to identify a set of best regressions. In large problems, many regressions are never computed: they can be rejected without computation based on results for other subsets, which makes considering all possible regressions efficient.
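For instance, the search can be tuned before calling compute. This sketch uses the setters described in the method summary below; the specific limits are arbitrary:

    // Tune the Furnival and Wilson search before calling one of the
    // compute methods.
    SelectionRegression selection = new SelectionRegression(5); // 5 candidate variables
    selection.setCriterionOption(SelectionRegression.ADJUSTED_R_SQUARED_CRITERION);
    selection.setMaximumBestFound(3);   // identify up to the 3 best regressions
    selection.setMaximumGoodSaved(10);  // save up to 10 good regressions per subset size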

There are cases in which the user may want to supply the variance-covariance matrix rather than have it calculated internally. This can be accomplished with the appropriate compute method. Three situations in which the user may want to do this are as follows:

  1. The intercept is not in the model. A raw (uncorrected) sum of squares and crossproducts matrix for the independent and dependent variables is required. Form \(A^TA\), where \(A = [x, y]\), to compute the raw sum of squares and crossproducts matrix (see the sketch after this list). Argument nObservations must be set to 1 greater than the number of observations.
  2. An intercept is a candidate variable. A raw (uncorrected) sum of squares and crossproducts matrix for the constant regressor (= 1.0), independent, and dependent variables is required for cov. In this case, cov contains one additional row and column corresponding to the constant regressor. This row and column contain the sum of squares and crossproducts of the constant regressor with the independent and dependent variables. The remaining elements in cov are the same as in the previous case. Argument nObservations must be set to 1 greater than the number of observations.
  3. There are m variables that must be forced into the models. A sum of squares and crossproducts matrix adjusted for the m variables is required (calculated by regressing the candidate variables on the variables to be forced into the model). Argument nObservations must be set to m less than the number of observations.
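As referenced in case 1 above, the following sketch forms A = [x, y], computes the raw sum of squares and crossproducts matrix \(A^TA\), and passes it to compute with nObservations set to 1 greater than the number of observations. This is a hypothetical helper method, assuming x, y, and a SelectionRegression object set up as in the earlier sketch:

    // Sketch for case 1 (no intercept): form A = [x, y], compute cov = A^T A,
    // and call compute with nObservations = 1 + number of observations.
    static void computeWithRawCrossProducts(SelectionRegression selection,
            double[][] x, double[] y) throws Exception {
        int n = x.length;        // number of observations
        int k = x[0].length;     // number of candidate variables
        double[][] a = new double[n][k + 1];
        for (int i = 0; i < n; i++) {
            System.arraycopy(x[i], 0, a[i], 0, k);
            a[i][k] = y[i];      // append the dependent variable as the last column
        }
        double[][] cov = new double[k + 1][k + 1];
        for (int i = 0; i <= k; i++) {
            for (int j = 0; j <= k; j++) {
                double s = 0.0;
                for (int r = 0; r < n; r++) {
                    s += a[r][i] * a[r][j];   // accumulate (A^T A)[i][j]
                }
                cov[i][j] = s;
            }
        }
        selection.compute(cov, n + 1);  // 1 greater than the number of observations
    }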

Programming Notes

SelectionRegression can save considerable CPU time over explicitly computing all possible regressions. However, the class has some limitations that can cause unexpected results for users who are unaware of them.
  1. For \(k + 1 > -\log_2(\epsilon)\), where \(\epsilon\) is the largest relative spacing for double precision, some results can be incorrect. This limitation arises because the possible models indicated (the model numbers \(1, 2, \ldots, 2^k\)) are stored as floating-point values; for sufficiently large k, the model numbers cannot be stored exactly (see the demonstration after this list). On many computers, this means SelectionRegression can produce incorrect results for \(k > 49\).
  2. SelectionRegression eliminates some subsets of candidate variables by obtaining lower bounds on the error sum of squares from fitting larger models. First, the full model containing all independent variables is fit sequentially using a forward stepwise procedure in which one variable enters the model at a time, and criterion values and model numbers for all the candidate variables that can enter at each step are stored. If linearly dependent variables are removed from the full model, a "VariablesDeleted" warning is issued. In this case, some submodels that contain variables removed from the full model because of linear dependency can be overlooked if they have not already been identified during the initial forward stepwise procedure. If this warning is issued and you want the variables that were removed from the full model to be considered in smaller models, you can rerun the program with a set of linearly independent variables.
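The floating-point limit in note 1 can be demonstrated directly. For double precision, \(\epsilon = 2^{-52}\), so consecutive integers above \(2^{53}\) are no longer exactly representable:

    // Model numbers are stored as doubles; consecutive integers above 2^53
    // collapse to the same double value, so large model numbers are inexact.
    double modelNumber = Math.pow(2.0, 53.0);
    System.out.println(modelNumber == modelNumber + 1.0); // prints true
    System.out.println(Math.ulp(1.0)); // 2.220446049250313E-16, i.e. 2^-52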
  • Nested Class Summary

    Nested Classes
    static class SelectionRegression.NoVariablesException
        No variables can enter the model.
    class SelectionRegression.Statistics
        Statistics contains statistics related to the regression coefficients.
  • Field Summary

    Fields
    static final int ADJUSTED_R_SQUARED_CRITERION
        Indicates the \(R^2_a\) (adjusted \(R^2\)) criterion.
    static final int MALLOWS_CP_CRITERION
        Indicates the Mallows' \(C_p\) criterion.
    static final int R_SQUARED_CRITERION
        Indicates the \(R^2\) criterion.
  • Constructor Summary

    Constructors
    SelectionRegression(int nCandidate)
        Constructs a new SelectionRegression object.
  • Method Summary

    Methods
    void compute(double[][] x, double[] y)
        Computes the best multiple linear regression models.
    void compute(double[][] x, double[] y, double[] weights)
        Computes the best weighted multiple linear regression models.
    void compute(double[][] x, double[] y, double[] weights, double[] frequencies)
        Computes the best weighted multiple linear regression models using frequencies for each observation.
    void compute(double[][] cov, int nObservations)
        Computes the best multiple linear regression models using a user-supplied covariance matrix.
    int getCriterionOption()
        Returns the criterion option used to calculate the regression estimates.
    int
        Returns the number of best regression models computed.
    SelectionRegression.Statistics
        Returns a new Statistics object.
    void setCriterionOption(int criterionOption)
        Sets the criterion to be used.
    void setMaximumBestFound(int maxFound)
        Sets the maximum number of best regressions to be found.
    void setMaximumGoodSaved(int maxSaved)
        Sets the maximum number of good regressions saved for each subset size.
    void setMaximumSubsetSize(int maxSubset)
        Sets the maximum subset size when the \(R^2\) criterion is used.

  • Methods inherited from class java.lang.Object

    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait