contingencyTable

Performs a chi-squared analysis of a two-way contingency table.

Synopsis

contingencyTable (table)

Required Arguments

float table[[]] (Input)
Array of length nRows × nColumns containing the observed counts in the contingency table.

Return Value

Pearson chi-squared p-value for independence of rows and columns.

Optional Arguments

chiSquared, df, chiSquared, pValue (Output)
Argument df is the degrees of freedom for the chi-squared tests associated with the table, chiSquared is the Pearson chi-squared test statistic, and argument pValue is the probability of a larger Pearson chi-squared.
lrt, df, gSquared, pValue (Output)
Argument df is the degrees of freedom for the chi-squared tests associated with the table, argument gSquared is the likelihood ratio \(G^2\) (chi-squared), and argument pValue is the probability of a larger \(G^2\).
expected (Output)
An array of size (nRows + 1) × (nColumns + 1) containing the expected value of each cell in the table, under the null hypothesis, in the first nRows rows and nColumns columns. The marginal totals are in the last row and column.
contributions (Output)
An array of size (nRows + 1) × (nColumns + 1) containing the contributions to chi-squared for each cell in the table in the first nRows rows and nColumns columns. The last row and column contain the total contribution to chi-squared for that row or column.
chiSquaredStats (Output)

An array of length 5 containing chi-squared statistics associated with this contingency table. The last three elements are based on Pearson’s chi-square statistic (see chiSquared).

The chi-squared statistics are given as follows:

Element Chi-squared Statistics
0 exact mean
1 exact standard deviation
2 Phi
3 contingency coefficient
4 Cramer’s V
statistics (Output)

An array of size 23 × 5 containing statistics associated with this table. Each row corresponds to a statistic.

Row Statistic
0 Gamma
1 Kendall’s \(\tau_b\)
2 Stuart’s \(\tau_c\)
3 Somers’ D for rows (given columns)
4 Somers’ D for columns (given rows)
5 product moment correlation
6 Spearman rank correlation
7 Goodman and Kruskal τ for rows (given columns)
8 Goodman and Kruskal τ for columns (given rows)
9 uncertainty coefficient U (symmetric)
10 uncertainty \(U_{r|c}\) (rows)
11 uncertainty \(U_{c|r}\) (columns)
12 optimal prediction λ (symmetric)
13 optimal prediction \(\lambda_{r|c}\) (rows)
14 optimal prediction \(\lambda_{c|r}\) (columns)
15 optimal prediction \(\lambda_{r|c}^*\) (rows)
16 optimal prediction \(\lambda_{c|r}^*\) (columns)
17 test for linear trend in row probabilities if nRows = 2; if nRows is not 2, test for linear trend in column probabilities if nColumns = 2

18 Kruskal-Wallis test for no row effect
19 Kruskal-Wallis test for no column effect
20 kappa (square tables only)
21 McNemar test of symmetry (square tables only)
22 McNemar one degree of freedom test of symmetry (square tables only)

If a statistic cannot be computed, or if some value is not relevant for the computed statistic, the entry is NaN (Not a Number). The columns are as follows:

Column Value
0 estimated statistic
1 standard error for any parameter value
2 standard error under the null hypothesis
3 t value for testing the null hypothesis
4 p-value of the test in column 3

In the McNemar tests, column 0 contains the statistic, column 1 contains the chi-squared degrees of freedom, column 3 contains the exact p-value (1 degree of freedom only), and column 4 contains the chi-squared asymptotic p-value. The Kruskal-Wallis test is the same except no exact p-value is computed.

Description

Function contingencyTable computes statistics associated with an r × c (nRows × nColumns) contingency table. The function computes the chi-squared test of independence, expected values, contributions to chi-squared, row and column marginal totals, some measures of association, correlation, prediction, uncertainty, the McNemar test for symmetry, a test for linear trend, the odds and the log odds ratio, and the kappa statistic (if the appropriate optional arguments are selected).

Notation

Let \(x_{ij}\) denote the observed cell frequency in the ij cell of the table and n denote the total count in the table. Let \(p_{ij} = p_{i\bullet} p_{\bullet j}\) denote the predicted cell probabilities under the null hypothesis of independence, where \(p_{i\bullet}\) and \(p_{\bullet j}\) are the row and column marginal relative frequencies. Next, compute the expected cell counts as \(e_{ij}=np_{ij}\).

Also required in the following are \(a_{uv}\) and \(b_{uv}\) for \(u,v=1,\ldots,n\). Let \((r_s,c_s)\) denote the row and column response of observation s. Then, \(a_{uv}\) = 1, 0, or −1, depending on whether \(r_u<r_v\), \(r_u=r_v\), or \(r_u>r_v\), respectively. The \(b_{uv}\) are similarly defined in terms of the \(c_s\) variables.

Chi-squared Statistic

For each cell in the table, the contribution to \(X^2\) is given as \((x_{ij}-e_{ij})^2/e_{ij}\). The Pearson chi-squared statistic (denoted \(X^2\)) is computed as the sum of the cell contributions to chi-squared. It has \((r-1) (c-1)\) degrees of freedom and tests the null hypothesis of independence, i.e., \(H_0 : p_{ij}=p_{i\bullet} p_{\bullet j}\). The null hypothesis is rejected if the computed value of \(X^2\) is too large.

The maximum likelihood equivalent of \(X^2\), \(G^2\), is computed as follows:

\[G^2 = 2 \sum_{i,j} x_{ij} \ln \left(\frac{x_{ij}}{np_{ij}}\right)\]

\(G^2\) is asymptotically equivalent to \(X^2\) and tests the same hypothesis with the same degrees of freedom.
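
As a quick cross-check of these definitions, the following sketch computes \(X^2\), \(G^2\), and the shared degrees of freedom directly with NumPy; it is an illustration of the formulas, not the function's implementation. On the vision data of the Examples it should reproduce the values 3304.3684 and 2781.0190 reported in Example 2 (the chi2.sf call assumes SciPy is available).

from numpy import array, outer, log
from scipy.stats import chi2

# Observed counts (the distance vision data used in the Examples)
x = array([[821., 112., 85., 35.],
           [116., 494., 145., 27.],
           [72., 151., 583., 87.],
           [43., 34., 106., 331.]])

n = x.sum()
# Expected counts under independence: e_ij = n * p_i. * p_.j
e = outer(x.sum(axis=1), x.sum(axis=0)) / n

x2 = ((x - e)**2 / e).sum()                  # Pearson chi-squared, ~3304.37
g2 = 2.0 * (x * log(x / e)).sum()            # likelihood ratio G^2, ~2781.02
# (assumes no zero cells; a zero cell contributes 0 by convention)
df = (x.shape[0] - 1) * (x.shape[1] - 1)     # (r-1)(c-1) = 9

print(x2, g2, chi2.sf(x2, df))               # the p-value is essentially 0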

Standard Errors and p-values for Some Measures of Association

In Columns 1 through 4 of statistics, estimated standard errors and asymptotic p-values are reported. Estimates of the standard errors are computed in two ways. The first estimate, in Column 1 of the array statistics, is asymptotically valid for any value of the statistic. The second estimate, in Column 2 of the array, is only correct under the null hypothesis of no association. The z-scores in Column 3 of statistics are computed using this second estimate of the standard errors, and the p-values in Column 4 are computed from these z-scores. See Brown and Benedetti (1977) for a discussion and formulas for the standard errors in Column 2.

Measures of Association for Ranked Rows and Columns

The measures of association \(\phi\), P, and V do not require any ordering of the row and column categories. Function contingencyTable also computes several measures of association for tables in which the row and column categories correspond to ranked observations. Two of these measures, the product-moment correlation and the Spearman correlation, are correlation coefficients computed using assigned scores for the row and column categories. The cell indices are used for the product-moment correlation, while the average of the tied ranks of the row and column marginals is used for the Spearman rank correlation. Other scores are possible.

Gamma, Kendall’s \(\tau_b\), Stuart’s \(\tau_c\), and Somers’ D are measures of association that are computed like a correlation coefficient in the numerator. In all these measures, the numerator is computed as the “covariance” between the \(a_{uv}\) variables and \(b_{uv}\) variables defined above, i.e., as follows:

\[\sum_u \sum_v a_{uv} b_{uv}\]

Recall that \(a_{uv}\) and \(b_{uv}\) can take values −1, 0, or 1. Since the product \(a_{uv} b_{uv}=1\) only if \(a_{uv}\) and \(b_{uv}\) are both 1 or are both −1, it is easy to show that this “covariance” is twice the total number of agreements minus the number of disagreements, where a disagreement occurs when \(a_{uv} b_{uv}=-1\).

Kendall’s \(\tau_b\) is computed as the correlation between the \(a_{uv}\) variables and the \(b_{uv}\) variables (see Kendall and Stuart 1979, p. 593). In a rectangular table (\(r\neq c\)), Kendall’s \(\tau_b\) cannot be 1.0 (if all marginal totals are positive). For this reason, Stuart suggested a modification to the denominator of \(\tau\) in which the denominator becomes the largest possible value of the “covariance.” This maximizing value is approximately \(n^2(m-1)/m\), where \(m =\min(r,c)\). Stuart’s \(\tau_c\) uses this approximate value in its denominator. For large n, \(\tau_c\approx m \tau_b/(m-1)\).

Gamma can be motivated in a slightly different manner. Because the “covariance” of the \(a_{uv}\) variables and the \(b_{uv}\) variables can be thought of as twice the number of agreements minus the disagreements, \(2(A-D)\), where A is the number of agreements and D is the number of disagreements, Gamma is motivated as the probability of agreement minus the probability of disagreement, given that either agreement or disagreement occurred. This is shown as \(\gamma=(A-D)/(A+D)\).
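
To make this bookkeeping concrete, the sketch below counts concordant and discordant pairs directly from the cell counts and forms gamma, \(\tau_b\), and \(\tau_c\); it is a hand-rolled illustration of the formulas above, not the library's code. On the Example data the three values should come out near 0.7757, 0.6429, and 0.6293, matching Example 2.

from numpy import array, sqrt

x = array([[821., 112., 85., 35.], [116., 494., 145., 27.],
           [72., 151., 583., 87.], [43., 34., 106., 331.]])
n = x.sum()
r, c = x.shape

# A = number of agreements (concordant pairs), D = disagreements
A = D = 0.0
for i in range(r):
    for j in range(c):
        A += x[i, j] * x[i+1:, j+1:].sum()   # both row and column larger
        D += x[i, j] * x[i+1:, :j].sum()     # row larger, column smaller

gamma = (A - D) / (A + D)

# tau_b: correlation form, correcting for ties in the marginals
t0 = n * (n - 1) / 2.0
t1 = (x.sum(axis=1) * (x.sum(axis=1) - 1)).sum() / 2.0
t2 = (x.sum(axis=0) * (x.sum(axis=0) - 1)).sum() / 2.0
tau_b = (A - D) / sqrt((t0 - t1) * (t0 - t2))

# tau_c: denominator replaced by (an approximation to) its maximum
m = min(r, c)
tau_c = 2.0 * m * (A - D) / (n**2 * (m - 1))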

Two definitions of Somers’ D are possible, one for rows and a second for columns. Somers’ D for rows can be thought of as the regression coefficient for predicting \(a_{uv}\) from \(b_{uv}\). Moreover, Somers’ D for rows is the probability of agreement minus the probability of disagreement, given that the column variable, \(b_{uv}\), is not 0. Somers’ D for columns is defined in a similar manner.

A discussion of all of the measures of association in this section can be found in Kendall and Stuart (1979, p. 592).

Measures of Prediction and Uncertainty

Optimal Prediction Coefficients: The measures in this section do not require any ordering of the row or column variables. They are based entirely upon probabilities. Most are discussed in Bishop et al. (1975, p. 385).

Consider predicting (or classifying) the column for a given row in the table. Under the null hypothesis of independence, choose the column with the highest column marginal probability for all rows. In this case, the probability of misclassification for any row is 1 minus this marginal probability. If independence is not assumed, then within each row choose the column with the highest row conditional probability. The probability of misclassification for the row becomes 1 minus this conditional probability.

Define the optimal prediction coefficient \(\lambda_{c|r}\) for predicting columns from rows as the proportion of the probability of misclassification that is eliminated because the random variables are not independent. It is estimated by

\[\lambda_{c|r} = \frac{\left(1 - p_{\cdot m}\right) - \left(1-\sum\limits_i p_{im}\right)} {1 - p_{\cdot m}}\]

where m is the index of the maximum estimated probability in row i (for \(p_{im}\)) or in the column margin (for \(p_{\cdot m}\)). A similar coefficient is defined for predicting the rows from the columns. The symmetric version of the optimal prediction λ is obtained by summing the numerators and denominators of \(\lambda_{r|c}\) and \(\lambda_{c|r}\), then dividing. Standard errors for these coefficients are given in Bishop et al. (1975, p. 388).
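
The estimator is easy to evaluate from the table of relative frequencies. The following sketch computes both directional λ coefficients and the symmetric λ; it illustrates the formula above rather than the library's implementation, and on the Example data all three values land near 0.537, as in Example 2.

from numpy import array

x = array([[821., 112., 85., 35.], [116., 494., 145., 27.],
           [72., 151., 583., 87.], [43., 34., 106., 331.]])
p = x / x.sum()                          # cell relative frequencies

# Predicting columns from rows: row maxima vs. the best column margin
num_cr = p.max(axis=1).sum() - p.sum(axis=0).max()
den_cr = 1.0 - p.sum(axis=0).max()

# Predicting rows from columns: column maxima vs. the best row margin
num_rc = p.max(axis=0).sum() - p.sum(axis=1).max()
den_rc = 1.0 - p.sum(axis=1).max()

lam_c_given_r = num_cr / den_cr
lam_r_given_c = num_rc / den_rc
# Symmetric lambda: sum the numerators and denominators, then divide
lam_sym = (num_cr + num_rc) / (den_cr + den_rc)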

A problem with the optimal prediction coefficients λ is that they vary with the marginal probabilities. One way to correct this is to use row conditional probabilities. The optimal prediction λ* coefficients are defined as the corresponding λ coefficients in which first the row (or column) marginals are adjusted to the same number of observations. This yields

\[\lambda_{c|r}^* = \frac{\sum\limits_i \max_j p_{j|i} - \max_j\left(\sum\limits_i p_{j|i}\right)} {R - \max_j\left(\sum\limits_i p_{j|i}\right)}\]

where i indexes the rows, j indexes the columns, \(p_{j|i}\) is the (estimated) probability of column j given row i, and R is the number of rows in the table.

The coefficient \(\lambda_{r|c}^*\) is similarly defined.
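
Continuing the same sketch, the starred coefficient works on the row-conditional probability matrix, i.e. each row of the table rescaled to sum to 1. On the Example data the two directional λ* values should come out near 0.5506 and 0.5636, the l-star entries of Example 2.

from numpy import array

x = array([[821., 112., 85., 35.], [116., 494., 145., 27.],
           [72., 151., 583., 87.], [43., 34., 106., 331.]])

# p_{j|i}: probability of column j given row i (rows rescaled to sum to 1)
p_cond = x / x.sum(axis=1, keepdims=True)
R = x.shape[0]                                   # number of rows

lam_star = (p_cond.max(axis=1).sum() - p_cond.sum(axis=0).max()) \
           / (R - p_cond.sum(axis=0).max())      # ~0.5506
# The other direction conditions on the columns instead (~0.5636)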

Goodman and Kruskal \(\tau\): A second kind of prediction measure attempts to explain the proportion of the variation in the row (column) classification that is accounted for by the column (row) classification. Define the total variation in the rows as follows:

\[n/2 - \left(\sum_i x_{i\cdot}^{2}\right) / (2n)\]

Note that this is \(1/(2n)\) times the sum of squares of the \(a_{uv}\) variables.

With this definition of variation, the Goodman and Kruskal \(\tau\) coefficient for rows is computed as the reduction of the total variation for rows accounted for by the columns, divided by the total variation for the rows. To compute the reduction in the total variation of the rows accounted for by the columns, note that the total variation for the rows within column j is defined as follows:

\[q_j = x_{\cdot j} / 2 - \left(\sum_i x_{ij}^2\right) / \left(2x_{\cdot j}\right)\]

The total variation for rows within columns is the sum of the \(q_j\) variables. Consistent with the usual methods in the analysis of variance, the reduction in the total variation is given as the difference between the total variation for rows and the total variation for rows within the columns.

Goodman and Kruskal’s \(\tau\) for columns is similarly defined. See Bishop et al. (1975, p. 391) for the standard errors.
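
In code, the computation mirrors the analysis-of-variance decomposition just described. The sketch below (an illustration, not the library's code) computes the Goodman and Kruskal \(\tau\) for rows; on the Example data the two directional coefficients come out near 0.342 and 0.343, the GK tau entries of Example 2.

from numpy import array

x = array([[821., 112., 85., 35.], [116., 494., 145., 27.],
           [72., 151., 583., 87.], [43., 34., 106., 331.]])
n = x.sum()
row_tot, col_tot = x.sum(axis=1), x.sum(axis=0)

# Total variation in the rows
v_rows = n / 2.0 - (row_tot**2).sum() / (2.0 * n)
# q_j: variation in the rows within column j
q = col_tot / 2.0 - (x**2).sum(axis=0) / (2.0 * col_tot)
# Reduction in variation accounted for by the columns, as a proportion
gk_tau_rows = (v_rows - q.sum()) / v_rows

# tau for columns: swap the roles of the rows and columns throughout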

Uncertainty Coefficients: The uncertainty coefficient for rows is the increase in the log-likelihood that is achieved by the most general model over the independence model, divided by the marginal log-likelihood for the rows. This is given by the following equation:

\[U_{r|c} = \frac{\sum\limits_{i,j} x_{ij} \log \left(x_{i\cdot} x_{\cdot j} / \left(n x_{ij}\right)\right)} {\sum\limits_i x_{i\cdot} \log\left(x_{i\cdot}/n\right)}\]

The uncertainty coefficient for columns is similarly defined. The symmetric uncertainty coefficient contains the same numerator as \(U_{r|c}\) and \(U_{c|r}\) but averages the denominators of these two statistics. Standard errors for U are given in Brown (1983).
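
All three uncertainty coefficients share the numerator above, so they can be computed together, as in this sketch (again an illustration, assuming no zero cells, which would otherwise contribute 0 by convention). On the Example data the values should land near the U entries of Example 2 (about 0.317).

from numpy import array, log, outer

x = array([[821., 112., 85., 35.], [116., 494., 145., 27.],
           [72., 151., 583., 87.], [43., 34., 106., 331.]])
n = x.sum()
row_tot, col_tot = x.sum(axis=1), x.sum(axis=0)

# Shared numerator of U_{r|c}, U_{c|r}, and the symmetric U
num = (x * log(outer(row_tot, col_tot) / (n * x))).sum()

den_rows = (row_tot * log(row_tot / n)).sum()    # row marginal log-likelihood
den_cols = (col_tot * log(col_tot / n)).sum()

u_rows = num / den_rows
u_cols = num / den_cols
u_sym = 2.0 * num / (den_rows + den_cols)        # averages the denominators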

Kruskal-Wallis: The Kruskal-Wallis statistic for rows is a one-way analysis-of-variance-type test that assumes the column variable is monotonically ordered. It tests the null hypothesis that the row populations are identical, using average ranks for the column variable. The Kruskal-Wallis statistic for columns is similarly defined. Conover (1980) discusses the Kruskal-Wallis test.

Test for Linear Trend: When there are two rows, it is possible to test for a linear trend in the row probabilities if it is assumed that the column variable is monotonically ordered. In this test, the probabilities for row 1 are predicted by the column index using weighted simple linear regression. This slope is given by

\[\hat{\beta} = \frac{\sum\limits_j x_{\cdot j} \left(x_{1j}/x_{\cdot j} - x_{1\cdot}/n\right) \left(j - \overline{j}\right)} {\sum\limits_j x_{\cdot j} \left(j - \overline{j}\right)^2}\]

where

\[\overline{j} = \sum_j x_{\cdot j} j/n\]

is the average column index. An asymptotic test that the slope is 0 may then be obtained (in large samples) as the usual regression test of zero slope.

When there are two columns, a similar test for a linear trend in the column probabilities is computed. This test assumes that the rows are monotonically ordered.
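
The slope is an ordinary weighted least-squares computation, as the following sketch shows for a small hypothetical 2 × 4 table (the Examples' 4 × 4 table has no linear-trend test, which is why that row is blank in Example 2's output). Here the row-1 proportions rise by exactly 0.2 per column, so \(\hat{\beta} = 0.2\).

from numpy import array, arange

# Hypothetical 2 x 4 table with a clean linear trend in row 1
x = array([[10., 20., 30., 40.],
           [40., 30., 20., 10.]])
n = x.sum()
col_tot = x.sum(axis=0)
j = arange(1, x.shape[1] + 1, dtype=float)       # column indices as scores

j_bar = (col_tot * j).sum() / n                  # weighted mean column index
resid = x[0] / col_tot - x[0].sum() / n          # row-1 proportion minus mean
beta_hat = ((col_tot * resid * (j - j_bar)).sum()
            / (col_tot * (j - j_bar)**2).sum())  # 0.2 for this table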

Kappa: Kappa is a measure of agreement computed on square tables only. In the kappa statistic, the rows and columns correspond to the responses of two judges. The judges agree along the diagonal and disagree off the diagonal. Let

\[p_0 = \sum_i x_{ii} / n\]

denote the probability that the two judges agree, and let

\[p_c = \sum_i e_{ii} / n\]

denote the expected probability of agreement under the independence model. Kappa is then given by \((p_0-p_c)/(1-p_c)\).
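
Kappa takes only a few lines, since the expected counts are the same ones used for the chi-squared test. This sketch (an illustration, not the library's code) should reproduce the value 0.5744 from Example 2 on the vision data.

from numpy import array, outer, trace

x = array([[821., 112., 85., 35.], [116., 494., 145., 27.],
           [72., 151., 583., 87.], [43., 34., 106., 331.]])
n = x.sum()

p0 = trace(x) / n                                # observed agreement
e = outer(x.sum(axis=1), x.sum(axis=0)) / n      # expected counts
pc = trace(e) / n                                # agreement expected by chance
kappa = (p0 - pc) / (1.0 - pc)                   # ~0.5744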

McNemar Tests: The McNemar test is a test of symmetry in a square contingency table. In other words, it is a test of the null hypothesis \(H_0 : \theta_{ij}=\theta_{ji}\). The multiple degrees-of-freedom version of the McNemar test with \(r (r-1)/2\) degrees of freedom is computed as follows:

\[\sum_{i<j} \frac{\left(x_{ij} - x_{ji}\right)^2}{\left(x_{ij} + x_{ji}\right)}\]

The single degree-of-freedom test assumes that the differences, \(x_{ij}-x_{ji}\), are all in one direction. The single degree-of-freedom test will be more powerful than the multiple degrees-of-freedom test when this is the case. The test statistic is given as follows:

\[\frac{\left(\sum\limits_{i<j} \left(x_{ij} - x_{ji}\right)\right)^2} {\sum\limits_{i<j} \left(x_{ij} + x_{ji}\right)}\]

The exact probability can be computed by the binomial distribution.
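
Both McNemar statistics depend only on the off-diagonal pairs, as the sketch below shows (asymptotic chi-squared p-values only; the chi2.sf calls assume SciPy is available). On the vision data it should reproduce 4.7625 (df = 6, p ≈ 0.5746) and 0.9487 (df = 1, p ≈ 0.3301) from Example 2.

from numpy import array
from scipy.stats import chi2

x = array([[821., 112., 85., 35.], [116., 494., 145., 27.],
           [72., 151., 583., 87.], [43., 34., 106., 331.]])
r = x.shape[0]

mcnemar = d_sum = s_sum = 0.0
for i in range(r):
    for j in range(i + 1, r):
        d = x[i, j] - x[j, i]
        s = x[i, j] + x[j, i]
        mcnemar += d * d / s                 # multiple-df statistic
        d_sum += d                           # pooled signed differences
        s_sum += s

df = r * (r - 1) // 2                        # 6 for a 4 x 4 table
mcnemar_1df = d_sum**2 / s_sum               # one-df statistic

print(mcnemar, chi2.sf(mcnemar, df))         # ~4.7625, p ~0.5746
print(mcnemar_1df, chi2.sf(mcnemar_1df, 1))  # ~0.9487, p ~0.3301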

Examples

Example 1

The following example is taken from Kendall and Stuart (1979) and involves the distance vision in the right and left eyes. The output contains only the p-value.

from __future__ import print_function
from pyimsl.stat.contingencyTable import contingencyTable

# 4 x 4 table of observed counts (distance vision data)
table = [[821, 112, 85, 35],
         [116, 494, 145, 27],
         [72, 151, 583, 87],
         [43, 34, 106, 331]]

# The return value is the Pearson chi-squared p-value
p_value = contingencyTable(table)

print("P-Value: ", p_value)

Output

P-Value:  0.0

Example 2

The following example, which illustrates the use of Kappa and McNemar tests, uses the same distance vision data as the previous example. The available statistics are output using optional arguments.

from __future__ import print_function
from numpy import *
from pyimsl.stat.contingencyTable import contingencyTable
from pyimsl.stat.writeMatrix import writeMatrix

# Placeholders for the optional output arguments
cssq = {}
lrt = {}
expected = empty(0)
contributions = empty(0)
chstats = empty(0)
stats = empty(0)

table = array([[821.0, 112.0, 85.0, 35.0],
               [116.0, 494.0, 145.0, 27.0],
               [72.0, 151.0, 583.0, 87.0],
               [43.0, 34.0, 106.0, 331.0]])
labels = ["Exact mean",
          "Exact standard deviation",
          "Phi",
          "P",
          "Cramer's V"]
stat_row_labels = ["Gamma", "Tau B", "Tau C",
                   "D-Row", "D-Column", "Correlation", "Spearman",
                   "GK tau rows", "GK tau cols.", "U - sym.", "U - rows",
                   "U - cols.", "Lambda-sym.", "Lambda-row", "Lambda-col.",
                   "l-star-rows", "l-star-col.", "Lin. trend",
                   "Kruskal row", "Kruskal col.", "Kappa", "McNemar",
                   "McNemar df=1"]
stat_col_labels = ["", "statistic", "standard error",
                   "std. error under Ho", "t-value testing Ho",
                   "p-value"]

# The return value is the Pearson chi-squared p-value
p_value = contingencyTable(table, chiSquared=cssq, lrt=lrt, expected=expected,
                           contributions=contributions, chiSquaredStats=chstats,
                           statistics=stats)

print("Pearson chi-squared statistic     %11.4f" % cssq["chiSquared"])
print("p-value for Pearson chi-squared   %11.4f" % cssq["pValue"])
print("degrees of freedom                %11d" % cssq["df"])
print("G-squared statistic               %11.4f" % lrt["gSquared"])
print("p-value for G-squared             %11.4f" % lrt["pValue"])
print("degrees of freedom                %11d" % lrt["df"])
writeMatrix("* * * Table Values * * *", table)
writeMatrix("* * * Expected Values * * *", expected, writeFormat="%11.2f")
writeMatrix("* * * Contributions to Chi-squared* * *",
            contributions, writeFormat="%11.2f")
writeMatrix("* * * Chi-square Statistics * * *", chstats,
            rowLabels=labels, writeFormat="%11.4f", column=True)
writeMatrix("* * * Table Statistics * * *", stats, rowLabels=stat_row_labels,
            colLabels=stat_col_labels, writeFormat="%9.4f")

Output

Pearson chi-squared statistic       3304.3684
p-value for Pearson chi-squared        0.0000
degrees of freedom                          9
G-squared statistic                 2781.0190
p-value for G-squared                  0.0000
degrees of freedom                          9
 
              * * * Table Values * * *
             1            2            3            4
1          821          112           85           35
2          116          494          145           27
3           72          151          583           87
4           43           34          106          331
 
                    * * * Expected Values * * *
             1            2            3            4            5
1       341.69       256.92       298.49       155.90      1053.00
2       253.75       190.80       221.67       115.78       782.00
3       289.77       217.88       253.14       132.21       893.00
4       166.79       125.41       145.70        76.10       514.00
5      1052.00       791.00       919.00       480.00      3242.00
 
              * * * Contributions to Chi-squared * * *
             1            2            3            4            5
1       672.36        81.74       152.70        93.76      1000.56
2        74.78       481.84        26.52        68.08       651.21
3       163.66        20.53       429.85        15.46       629.50
4        91.87        66.63        10.82       853.78      1023.10
5      1002.68       650.73       619.88      1031.08      3304.37
 
  * * * Chi-square Statistics * * *
Exact mean                     9.0028
Exact standard deviation       4.2402
Phi                            1.0096
P                              0.7105
Cramer's V                     0.5829
 
                    * * * Table Statistics * * *
              statistic  standard error  std. error  t-value testing
                                           under Ho               Ho
Gamma            0.7757          0.0123      0.0149          52.1897
Tau B            0.6429          0.0122      0.0123          52.1897
Tau C            0.6293          0.0121   .........          52.1897
D-Row            0.6418          0.0122      0.0123          52.1897
D-Column         0.6439          0.0122      0.0123          52.1897
Correlation      0.6926          0.0128      0.0172          40.2669
Spearman         0.6939          0.0127      0.0127          54.6614
GK tau rows      0.3420          0.0123   .........        .........
GK tau cols.     0.3430          0.0122   .........        .........
U - sym.         0.3171          0.0110   .........        .........
U - rows         0.3178          0.0110   .........        .........
U - cols.        0.3164          0.0110   .........        .........
Lambda-sym.      0.5373          0.0124   .........        .........
Lambda-row       0.5374          0.0126   .........        .........
Lambda-col.      0.5372          0.0126   .........        .........
l-star-rows      0.5506          0.0136   .........        .........
l-star-col.      0.5636          0.0127   .........        .........
Lin. trend    .........       .........   .........        .........
Kruskal row   1561.4859          3.0000   .........        .........
Kruskal col.  1563.0303          3.0000   .........        .........
Kappa            0.5744          0.0111      0.0106          54.3583
McNemar          4.7625          6.0000   .........        .........
McNemar df=1     0.9487          1.0000   .........           0.3459
 
                p-value
Gamma            0.0000
Tau B            0.0000
Tau C            0.0000
D-Row            0.0000
D-Column         0.0000
Correlation      0.0000
Spearman         0.0000
GK tau rows   .........
GK tau cols.  .........
U - sym.      .........
U - rows      .........
U - cols.     .........
Lambda-sym.   .........
Lambda-row    .........
Lambda-col.   .........
l-star-rows   .........
l-star-col.   .........
Lin. trend    .........
Kruskal row      0.0000
Kruskal col.     0.0000
Kappa            0.0000
McNemar          0.5746
McNemar df=1     0.3301

Warning Errors

IMSLS_DF_GT_30 The degrees of freedom for “chiSquared” are greater than 30. The exact mean, standard deviation, and the normal distribution function should be used.
IMSLS_EXP_VALUES_TOO_SMALL Some expected values are less than #. Some asymptotic p-values may not be good.
IMSLS_PERCENT_EXP_VALUES_LT_5 Twenty percent of the expected values are less than 5.