Chapter 9: Multivariate Analysis > Usage Notes

Usage Notes

Cluster Analysis

Function imsls_f_cluster_k_means performs a K-means cluster analysis. Basic K-means clustering attempts to find a clustering that minimizes the within-cluster sums-of-squares. In this method, the data matrix X is grouped so that each observation (row in X) is assigned to one of a fixed number, K, of clusters. The sum of the squared differences of each observation about its assigned cluster’s mean is used as the criterion for assignment. In the basic algorithm, observations are transferred from one cluster to another when doing so decreases the within-cluster sums-of-squared differences. When no transfer occurs in a pass through the entire data set, the algorithm stops. Function imsls_f_cluster_k_means is one implementation of the basic algorithm.

The usual course of events in K-means cluster analysis is to use imsls_f_cluster_k_means to obtain the optimal clustering. The clustering is then evaluated by functions described in Chapter 1, “Basic Statistics,” and/or other chapters in this manual. Often, K-means clustering with more than one value of K is performed, and the value of K that best fits the data is used.

Clustering can be performed either on observations or variables. The discussion of the function imsls_f_cluster_k_means assumes the clustering is to be performed on the observations, which correspond to the rows of the input data matrix. If variables, rather than observations, are to be clustered, the data matrix should first be transposed. In the documentation for imsls_f_cluster_k_means, the words “observation” and “variable” are interchangeable.

Principal Components

The idea in principal components is to find a small number of linear combinations of the original variables that maximize the variance accounted for in the original data. This amounts to an eigensystem analysis of the covariance (or correlation) matrix. In addition to the eigensystem analysis, imsls_f_principal_components computes standard errors for the eigenvalues. Correlations of the original variables with the principal component scores also are computed.

Factor Analysis

Factor analysis and principal component analysis, while quite different in assumptions, often serve the same ends. Unlike principal components in which linear combinations yielding the highest possible variances are obtained, factor analysis generally obtains linear combinations of the observed variables according to a model relating the observed variable to hypothesized underlying factors, plus a random error term called the unique error or uniqueness. In factor analysis, the unique errors associated with each variable are usually assumed to be independent of the factors. Additionally, in the common factor model, the unique errors are assumed to be mutually independent. The factor analysis model is expressed in the following equation:

x − μ = Λf + e

where x is the p vector of observed values, μ is the p vector of variable means, Λ is the p × k matrix of factor loadings, f is the k vector of hypothesized underlying random factors, e is the p vector of hypothesized unique random errors, p is the number of observed variables, and k is the number of factors.

Because much of the computation in factor analysis was originally done by hand or was expensive on early computers, quick (but dirty) algorithms that made the calculations possible were developed. One result is the many factor extraction methods available today. Generally speaking, in the exploratory or model building phase of a factor analysis, a method of factor extraction that is not computationally intensive (such as principal components, principal factor, or image analysis) is used. If desired, a computationally intensive method is then used to obtain the final factors.

In exploratory factor analysis, the unrotated factor loadings obtained from the factor extraction are generally transformed (rotated) to simplify the interpretation of the factors. Rotation is possible because of the overparameterization in the factor analysis model. The method used for rotation may result in factors that are independent (orthogonal rotations) or correlated (oblique rotations). Prior information may be available (or hypothesized) in which case a Procrustes rotation could be used. When no prior information is available, an analytic rotation can be performed.

The steps generally used in a factor analysis are summarized as follows:

Steps in a Factor Analysis

Step 1

Calculate Covariance (Correlation) Matrix
IMSL routine imsls_f_covariances
(see Chapter 3, “Correlation and Covariance”)

Step 2

Initial Factor Extraction

imsls_f_factor_analysis

Step 3

Factor Rotation
using imsls_f_factor_analysis’ optional arguments

                       Orthogonal                              Oblique

No Prior Info.         IMSLS_ORTHOMAX_ROTATION                 IMSLS_OBLIQUE_PROMAX_ROTATION
                                                               IMSLS_DIRECT_OBLIMIN_ROTATION
                                                               IMSLS_OBLIQUE_PIVOTAL_PROMAX_ROTATION

Prior Info.            IMSLS_ORTHOGONAL_PROCRUSTES_ROTATION    IMSLS_OBLIQUE_PROCRUSTES_ROTATION

Step 4

Factor Structure and Variance

imsls_f_factor_analysis optional argument IMSLS_FACTOR_STRUCTURE
