ANOVA.GetConfidenceInterval Method
Computes the confidence interval associated with the difference of means between two groups using a specified method.

Namespace: Imsl.Stat
Assembly: ImslCS (in ImslCS.dll) Version: 6.5.2.0
Syntax
public virtual double[] GetConfidenceInterval(
	double conLevel,
	int i,
	int j,
	ANOVA.ComputeOption compMethod
)

Parameters

conLevel
Type: System.Double
A double specifying the confidence level for simultaneous interval estimation. If the Tukey method for computing the confidence intervals on the pairwise difference of means is to be used, conLevel must be in the range [90.0, 99.0]. Otherwise, conLevel must be in the range [0.0, 100.0). One normally sets this value to 95.0.
i
Type: System.Int32
An int indicating the i-th member of the pair difference, \mu_i-\mu_j. i must be a valid group index.
j
Type: System.Int32
An int indicating the j-th member of the pair difference, \mu_i-\mu_j. j must be a valid group index.
compMethod
Type: Imsl.Stat.ANOVA.ComputeOption
An ANOVA.ComputeOption. compMethod must be one of the following:

compMethod    Description
Tukey         Uses the Tukey method. This method is valid for balanced one-way designs.
TukeyKramer   Uses the Tukey-Kramer method. This method simplifies to the Tukey method for the balanced case.
DunnSidak     Uses the Dunn-Sidak method.
Bonferroni    Uses the Bonferroni method.
Scheffe       Uses the Scheffe method.
OneAtATime    Uses the One-at-a-Time (Fisher's LSD) method.

Return Value

Type: System.Double[]
A double array containing the group numbers, difference of means, and lower and upper confidence limits.

Array Element    Description
0                Group number for the i-th mean.
1                Group number for the j-th mean.
2                Difference of means (i-th mean) - (j-th mean).
3                Lower confidence limit for the difference.
4                Upper confidence limit for the difference.

Remarks

GetConfidenceInterval computes the simultaneous confidence interval on the pairwise comparison of means \mu_i and \mu_j in the one-way analysis of variance model. Any of several methods can be chosen. A good review of these methods is given by Stoline (1981). The methods are also discussed in many elementary statistics texts, e.g., Kirk (1982, pages 114-127). Let s^2 be the estimated variance of a single observation. Let \nu be the degrees of freedom associated with s^2. Let


\alpha = 1 - \frac{conLevel}{100.0}
The methods are summarized as follows:

Tukey method: The Tukey method gives the narrowest simultaneous confidence intervals for the pairwise differences of means \mu_i - \mu_j in balanced (n_1 = n_2 = \ldots = n_k = n) one-way designs. The method is exact and uses the Studentized range distribution. The formula for the difference \mu_i - \mu_j is given by

\bar{y}_i - \bar{y}_j \pm q_{1-\alpha;k,\nu}\sqrt{\frac{s^2}{n}}

where q_{1-\alpha;k,\nu} is the (1-\alpha)100 percentage point of the Studentized range distribution with parameters k and \nu. If the group sizes are unequal, the Tukey-Kramer method is used instead.
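This page documents a C# API, but the arithmetic of the Tukey interval is easy to sketch language-neutrally. The Python fragment below is only an illustration of the formula above, not the ImslCS implementation; the standard library has no Studentized range distribution, so the quantile q is supplied by hand (3.77 is an assumed placeholder value, roughly q for k = 4 groups and 20 degrees of freedom, taken from a table).

```python
import math

def tukey_interval(ybar_i, ybar_j, s2, n, q):
    """Tukey simultaneous CI for mu_i - mu_j in a balanced design.

    q is the (1 - alpha) quantile of the Studentized range with
    parameters k and nu; it must be supplied (e.g. from a table).
    """
    diff = ybar_i - ybar_j
    half_width = q * math.sqrt(s2 / n)  # q * sqrt(s^2 / n)
    return diff - half_width, diff + half_width

# Assumed inputs for illustration only.
lo, hi = tukey_interval(10.0, 8.0, s2=4.0, n=6, q=3.77)
```

The interval is centered on the observed difference of means, with a common half-width because every group has the same size n.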

Tukey-Kramer method: The Tukey-Kramer method is an approximate extension of the Tukey method for the unbalanced case. (The method simplifies to the Tukey method for the balanced case.) The method always produces confidence intervals narrower than the Dunn-Sidak and Bonferroni methods. Hayter (1984) proved that the method is conservative, i.e., the method guarantees a confidence coverage of at least (1-\alpha)100\%. Hayter's proof gave further support to earlier recommendations for its use (Stoline 1981). (Methods that are currently better are restricted to special cases and only offer improvement in severely unbalanced cases; see, e.g., Spurrier and Isham 1985.) The formula for the difference \mu_i - \mu_j is given by

\bar{y}_i - \bar{y}_j \pm q_{1-\alpha;k,\nu}\sqrt{\frac{s^2}{2n_i}+\frac{s^2}{2n_j}}
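The claimed reduction to the Tukey method in the balanced case can be checked numerically: with n_i = n_j = n, the Tukey-Kramer half-width s^2/(2n_i) + s^2/(2n_j) collapses to s^2/n. A small Python check (illustrative values only, with an assumed quantile q):

```python
import math

def tk_half_width(s2, n_i, n_j, q):
    """Tukey-Kramer half-width for the mu_i - mu_j interval."""
    return q * math.sqrt(s2 / (2 * n_i) + s2 / (2 * n_j))

def tukey_half_width(s2, n, q):
    """Tukey half-width for a balanced design."""
    return q * math.sqrt(s2 / n)

# With equal group sizes the two half-widths coincide.
balanced = tk_half_width(s2=4.0, n_i=6, n_j=6, q=3.77)
classic = tukey_half_width(s2=4.0, n=6, q=3.77)
```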

Dunn-Sidak method: The Dunn-Sidak method is a conservative method. The method gives wider intervals than the Tukey-Kramer method. (For large \nu and small \alpha and k, the difference is only slight.) The method is slightly better than the Bonferroni method and is based on an improved Bonferroni (multiplicative) inequality (Miller, pages 101, 254-255). The method uses the t distribution. The formula for the difference \mu_i - \mu_j is given by

\bar{y}_i - \bar{y}_j \pm t_{\frac{1}{2}+\frac{1}{2}\left(1-\alpha\right)^{1/k^*};\nu}\sqrt{\frac{s^2}{n_i}+\frac{s^2}{n_j}}

where t_{f;\nu} is the 100f percentage point of the t distribution with \nu degrees of freedom, and k^* = k(k-1)/2 is the number of pairwise comparisons.

Bonferroni method: The Bonferroni method is a conservative method based on the Bonferroni (additive) inequality (Miller, page 8). The method uses the t distribution. The formula for the difference {\mu}_i-{\mu}_j is given by

\bar{y}_i - \bar{y}_j \pm t_{1-\frac{\alpha}{2k^*};\nu}\sqrt{\frac{s^2}{n_i}+\frac{s^2}{n_j}}
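The Dunn-Sidak and Bonferroni intervals differ only in the quantile level fed to the t distribution, so the claim that Dunn-Sidak is slightly better can be verified by arithmetic alone. The Python sketch below (illustrative, taking k^* as the number of pairwise comparisons k(k-1)/2) computes both levels for a 95% simultaneous interval over k = 4 groups; the Dunn-Sidak level is never larger, so its t quantile, and hence its interval, is never wider:

```python
def sidak_level(alpha, k_star):
    """Quantile level used by the Dunn-Sidak t interval."""
    return 0.5 + 0.5 * (1.0 - alpha) ** (1.0 / k_star)

def bonferroni_level(alpha, k_star):
    """Quantile level used by the Bonferroni t interval."""
    return 1.0 - alpha / (2.0 * k_star)

k = 4
k_star = k * (k - 1) // 2        # 6 pairwise comparisons
ds = sidak_level(0.05, k_star)   # ~0.99574
bf = bonferroni_level(0.05, k_star)  # ~0.99583
# ds <= bf, so the Dunn-Sidak quantile is (slightly) smaller.
```

For large \nu the difference between t quantiles at these two levels is tiny, matching the remark above that the improvement over Bonferroni is only slight.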

Scheffé method: The Scheffé method is an overly conservative method for simultaneous confidence intervals on pairwise differences of means. The method is applicable for simultaneous confidence intervals on all contrasts, i.e., all linear combinations

\sum\limits_{i=1}^k{c_i\mu_i}

where the following is true:

\sum\limits_{i = 1}^k{c_i=0}

The method can be recommended here only if a large number of confidence intervals on contrasts, in addition to the pairwise differences of means, are to be constructed. The method uses the F distribution. The formula for the difference \mu_i - \mu_j is given by

\bar{y}_i - \bar{y}_j \pm \sqrt{\left(k-1\right)F_{1-\alpha;k-1,\nu}\left(\frac{s^2}{n_i}+\frac{s^2}{n_j}\right)}

where F_{1-\alpha;k-1,\nu} is the (1-\alpha)100 percentage point of the F distribution with k-1 and \nu degrees of freedom.
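As a sketch of the two points above, the Python fragment below checks that a pairwise difference is itself a contrast (its coefficients sum to zero) and evaluates the Scheffé half-width. It is not the ImslCS implementation: the F quantile is supplied by hand (3.10 is an assumed placeholder for roughly F with 3 and 20 degrees of freedom at the 95% level), and all other inputs are invented for illustration.

```python
import math

def scheffe_half_width(s2, n_i, n_j, k, f_quantile):
    """Scheffe half-width for mu_i - mu_j; f_quantile is the
    (1 - alpha) quantile of F with k-1 and nu d.f., supplied."""
    return math.sqrt((k - 1) * f_quantile * (s2 / n_i + s2 / n_j))

# A pairwise difference mu_1 - mu_2 is the contrast c = (1, -1, 0, 0):
c = [1, -1, 0, 0]
assert sum(c) == 0  # satisfies the contrast constraint

hw = scheffe_half_width(s2=4.0, n_i=6, n_j=5, k=4, f_quantile=3.10)
```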

One-at-a-time t method (Fisher's LSD): The one-at-a-time t method is the method appropriate for constructing a single confidence interval. The confidence percentage input is appropriate for one interval at a time. The method has been used widely in conjunction with the overall test of the null hypothesis \mu_1 = \mu_2 = \ldots = \mu_k by use of the F statistic. Fisher's LSD (least significant difference) test is a two-stage test that proceeds to make pairwise comparisons of means only if the overall F test is significant. Milliken and Johnson (1984, page 31) recommend LSD comparisons after a significant F only if the number of comparisons is small and the comparisons were planned prior to the analysis. If many unplanned comparisons are made, they recommend Scheffé's method. If the F test is not significant, a few planned comparisons for differences in means can still be performed by using the Tukey, Tukey-Kramer, Dunn-Sidak, or Bonferroni method. Because the F test is not significant, Scheffé's method will not yield any significant differences. The formula for the difference \mu_i - \mu_j is given by

\bar{y}_i - \bar{y}_j \pm t_{1-\frac{\alpha}{2};\nu}\sqrt{\frac{s^2}{n_i}+\frac{s^2}{n_j}}
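Tying the formula back to the documented return value, the Python sketch below assembles the five-element array this method returns (group numbers, difference of means, lower and upper limits) for a one-at-a-time interval. It is an illustration only, not the ImslCS code: the t quantile is supplied by the caller (2.086 is an assumed placeholder for roughly the 97.5% t quantile with 20 degrees of freedom), and the group numbers and means are invented.

```python
import math

def lsd_result(group_i, group_j, ybar_i, ybar_j, s2, n_i, n_j, t):
    """Mimics the documented 5-element return array for the
    one-at-a-time (Fisher's LSD) interval; t is the 1 - alpha/2
    quantile of the t distribution with nu d.f., supplied."""
    diff = ybar_i - ybar_j
    hw = t * math.sqrt(s2 / n_i + s2 / n_j)
    # [group i, group j, difference, lower limit, upper limit]
    return [float(group_i), float(group_j), diff, diff - hw, diff + hw]

res = lsd_result(1, 2, 10.0, 8.0, s2=4.0, n_i=6, n_j=5, t=2.086)
```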
See Also