FactorAnalysis Class
Performs Principal Component Analysis or Factor Analysis on a covariance or correlation matrix.
Inheritance Hierarchy
System.Object
  Imsl.Stat.FactorAnalysis

Namespace: Imsl.Stat
Assembly: ImslCS (in ImslCS.dll) Version: 6.5.2.0
Syntax
[SerializableAttribute]
public class FactorAnalysis

The FactorAnalysis type exposes the following members.

Constructors
Public method FactorAnalysis
Constructor for FactorAnalysis.
Methods
Public method Equals
Determines whether the specified object is equal to the current object. (Inherited from Object.)
Protected method Finalize
Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
Public method GetCorrelations
Returns the correlations of the principal components.
Public method GetFactorLoadings
Returns the unrotated factor loadings.
Public method GetHashCode
Serves as a hash function for a particular type. (Inherited from Object.)
Public method GetParameterUpdates
Returns the parameter updates.
Public method GetPercents
Returns the cumulative percent of the total variance explained by each principal component.
Public method GetStandardErrors
Returns the estimated asymptotic standard errors of the eigenvalues.
Public method GetStatistics
Returns statistics.
Public method GetType
Gets the Type of the current instance. (Inherited from Object.)
Public method GetValues
Returns the eigenvalues.
Public method GetVariances
Returns the unique variances.
Public method GetVectors
Returns the eigenvectors.
Protected method MemberwiseClone
Creates a shallow copy of the current Object. (Inherited from Object.)
Public method SetVariances
Sets the unique variances.
Public method ToString
Returns a string that represents the current object. (Inherited from Object.)
Properties
Public property ConvergenceCriterion1
The convergence criterion used to terminate the iterations.
Public property ConvergenceCriterion2
The convergence criterion used to switch to exact second derivatives.
Public property DegreesOfFreedom
The number of degrees of freedom.
Public property FactorLoadingEstimationMethod
The factor loading estimation method.
Public property MaxIterations
The maximum number of iterations in the iterative procedure.
Public property MaxStep
The maximum number of step halvings allowed during an iteration.
Public property NumberOfProcessors
The maximum number of processors that may be used for the parallel calculations.
Public property VarianceEstimationMethod
The variance estimation method.
Remarks

Class FactorAnalysis computes principal components or initial factor loading estimates for a variance-covariance or correlation matrix using exploratory factor analysis models.

Models available are the principal component model for factor analysis and the common factor model with additions to the common factor model in alpha factor analysis and image analysis. Methods of estimation include principal components, principal factor, image analysis, unweighted least squares, generalized least squares, and maximum likelihood.

For the principal component model there are methods to compute the characteristic roots, characteristic vectors, standard errors for the characteristic roots, and the correlations of the principal component scores with the original variables. Principal components obtained from correlation matrices are the same as principal components obtained from standardized (to unit variance) variables.

The principal component scores are the elements of the vector y = \Gamma^Tx where \Gamma is the matrix whose columns are the characteristic vectors (eigenvectors) of the sample covariance (or correlation) matrix and x is the vector of observed (or standardized) random variables. The variances of the principal component scores are the characteristic roots (eigenvalues) of the covariance (correlation) matrix.
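This relationship can be illustrated numerically (independently of the IMSL API; the data and variable names below are synthetic and purely illustrative): forming the scores from the eigenvectors of a sample covariance matrix recovers its eigenvalues as the score variances.

```python
import numpy as np

# Synthetic correlated data (illustrative only)
rng = np.random.default_rng(0)
x = rng.standard_normal((500, 3)) @ np.array([[2.0, 0.0, 0.0],
                                              [0.5, 1.0, 0.0],
                                              [0.2, 0.3, 0.5]])
s = np.cov(x, rowvar=False)            # sample covariance matrix

eigvals, gamma = np.linalg.eigh(s)     # characteristic roots and vectors
order = np.argsort(eigvals)[::-1]      # sort in descending order
eigvals, gamma = eigvals[order], gamma[:, order]

# Principal component scores: y = Gamma^T x, applied row by row
y = (x - x.mean(axis=0)) @ gamma

# The sample variances of the scores equal the eigenvalues of s
print(np.allclose(np.var(y, axis=0, ddof=1), eigvals))
```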

Asymptotic variances for the characteristic roots were first obtained by Girshick (1939) and are given more recently by Kendall, Stuart, and Ord (1983, page 331). These variances are computed either for variance-covariance matrices or for correlation matrices.
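For reference, the large-sample result for the covariance-matrix case (for n multivariate normal observations with distinct population roots \lambda_i) is commonly written as:

```latex
\operatorname{var}(\hat{\lambda}_i) \;\approx\; \frac{2\lambda_i^2}{n}
```

The corresponding expressions for roots of a correlation matrix are more involved; see Kendall, Stuart, and Ord (1983) for the forms used in that case.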

The correlations of the principal components with the observed (or standardized) variables are the same as the unrotated factor loadings obtained for the principal components model for factor analysis when a correlation matrix is input.
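A numerical sketch of this identity (again independent of the IMSL API, with synthetic data): when the eigensystem of a correlation matrix is used, the loadings \gamma_{ij}\sqrt{\lambda_j} match the correlations of the component scores with the standardized variables.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((1000, 3)) @ rng.standard_normal((3, 3))
z = (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)   # standardized variables
r = np.corrcoef(x, rowvar=False)                   # correlation matrix

lam, gamma = np.linalg.eigh(r)
order = np.argsort(lam)[::-1]
lam, gamma = lam[order], gamma[:, order]

# Unrotated principal-component loadings from a correlation matrix
loadings = gamma * np.sqrt(lam)

# Correlation of each component score with each standardized variable
scores = z @ gamma
corr = np.array([[np.corrcoef(scores[:, j], z[:, i])[0, 1]
                  for j in range(3)] for i in range(3)])
print(np.allclose(corr, loadings))
```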

In the factor analysis model used for factor extraction, the basic model is given as \Sigma = \Lambda\Lambda^T + \Psi, where \Sigma is the p \times p population covariance matrix, \Lambda is the p \times k matrix of factor loadings relating the factors f to the observed variables x, and \Psi is the p \times p matrix of covariances of the unique errors e. Here, p represents the number of variables and k is the number of factors.

The relationship between the factors, the unique errors, and the observed variables is given as x = \Lambda f + e, where, in addition, it is assumed that the expected values of e, f, and x are zero. (The sample means can be subtracted from x if the expected value of x is not zero.) It is also assumed that each factor has unit variance, that the factors are independent of each other, and that the factors and the unique errors are mutually independent. In the common factor model, the elements of the vector of unique errors e are also assumed to be independent of one another, so that the matrix \Psi is diagonal. This is not the case in the principal component model, in which the errors may be correlated.
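The common factor model above can be sketched numerically (the loadings and unique variances below are made-up values, not output of this class): under the stated independence and unit-variance assumptions, each observed variance splits into a communality plus a unique variance, and data simulated as x = \Lambda f + e reproduces \Sigma = \Lambda\Lambda^T + \Psi.

```python
import numpy as np

# Hypothetical loadings for p = 4 variables on k = 2 factors
lam = np.array([[0.9, 0.1],
                [0.8, 0.2],
                [0.2, 0.7],
                [0.1, 0.8]])
psi = np.diag([0.18, 0.32, 0.47, 0.35])   # diagonal unique variances

sigma = lam @ lam.T + psi                 # Sigma = Lambda Lambda^T + Psi

# Each variable's variance = communality + unique variance
communality = (lam ** 2).sum(axis=1)
print(np.allclose(np.diag(sigma), communality + np.diag(psi)))

# Simulate x = Lambda f + e with independent unit-variance factors
rng = np.random.default_rng(2)
n = 100_000
f = rng.standard_normal((n, 2))                        # factors
e = rng.standard_normal((n, 4)) * np.sqrt(np.diag(psi))  # unique errors
x = f @ lam.T + e
print(np.allclose(np.cov(x, rowvar=False), sigma, atol=0.05))
```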

Further differences between the various methods concern the criterion that is optimized and the amount of computer effort required to obtain estimates. Generally speaking, the least-squares and maximum likelihood methods, which use iterative algorithms, require the most computer time, while the principal factor, principal component, and image methods require much less because their algorithms are not iterative. The algorithm in alpha factor analysis is also iterative, but its estimates generally require somewhat less computer effort than the least-squares and maximum likelihood estimates. In all algorithms, one eigensystem analysis is required on each iteration.

See Also