KolmogorovTwoSample Class
Performs a Kolmogorov-Smirnov two-sample test.
Inheritance Hierarchy
System.Object
  Imsl.Stat.KolmogorovTwoSample

Namespace: Imsl.Stat
Assembly: ImslCS (in ImslCS.dll) Version: 6.5.2.0
Syntax
[SerializableAttribute]
public class KolmogorovTwoSample

The KolmogorovTwoSample type exposes the following members.

Constructors
Name  Description
Public method  KolmogorovTwoSample
Constructs a two-sample Kolmogorov-Smirnov goodness-of-fit test.
Methods
Name  Description
Public method  Equals
Determines whether the specified object is equal to the current object.
(Inherited from Object.)
Protected method  Finalize
Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection.
(Inherited from Object.)
Public method  GetHashCode
Serves as a hash function for a particular type.
(Inherited from Object.)
Public method  GetType
Gets the Type of the current instance.
(Inherited from Object.)
Protected method  MemberwiseClone
Creates a shallow copy of the current Object.
(Inherited from Object.)
Public method  ToString
Returns a string that represents the current object.
(Inherited from Object.)
Properties
Name  Description
Public property  MaximumDifference
D^{+}, the maximum difference between the two empirical CDFs.
Public property  MinimumDifference
D^{-}, the minimum difference between the two empirical CDFs.
Public property  NumberMissingX
The number of missing values in the x sample.
Public property  NumberMissingY
The number of missing values in the y sample.
Public property  OneSidedPValue
Probability of the statistic exceeding D under the null hypothesis of equality and against the one-sided alternative. An exact probability is computed if the number of observations is less than or equal to 80; otherwise, an approximate probability is computed.
Public property  TestStatistic
The test statistic, D = \max(D^{+}, D^{-}).
Public property  TwoSidedPValue
Probability of the statistic exceeding D under the null hypothesis of equality and against the two-sided alternative. This probability is twice the probability, p_1, reported by OneSidedPValue (or 1.0 if p_1 \ge 1/2). The approximation is nearly exact when p_1 \lt 0.1.
Public property  Z
The normalized D statistic without the continuity correction applied.
Remarks

Class KolmogorovTwoSample computes Kolmogorov-Smirnov two-sample test statistics for testing that two continuous cumulative distribution functions (CDFs) are identical, based upon two random samples. One- or two-sided alternatives are allowed. Exact p-values are computed for the two-sided test when nm \le 10^4, where n is the number of non-missing X observations and m is the number of non-missing Y observations.

Let F_n(x) denote the empirical CDF of the X sample, let G_m(y) denote the empirical CDF of the Y sample, and let the corresponding population distribution functions be denoted by F(x) and G(y), respectively. Then, the hypotheses tested by KolmogorovTwoSample are as follows:


            \begin{array}{ll}
            H_0:~ F(x) = G(x)   & H_1:~F(x) \ne G(x) \\
            H_0:~ F(x) \ge G(x) & H_1:~F(x) \lt G(x) \\
            H_0:~ F(x) \le G(x) & H_1:~F(x) \gt G(x)
            \end{array}
The test statistics are given as follows:

            \begin{array}{rl}
            D_{mn}     & = \max(D_{mn}^{+}, D_{mn}^{-}) \\
            D_{mn}^{+} & = \max_x(F_n(x)-G_m(x)) \\
            D_{mn}^{-} & = \max_x(G_m(x)-F_n(x))
            \end{array}
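These statistics can be computed directly from the two empirical CDFs. The following is an illustrative Python sketch of that computation, not the ImslCS API; the function name is my own, and NumPy is assumed:

```python
import numpy as np

def ks_two_sample_d(x, y):
    """Compute D+, D-, and D = max(D+, D-) for two samples.

    F_n and G_m are the empirical CDFs of x and y.  Since both are
    step functions, their maximum differences are attained at points
    of the pooled sample, so it suffices to evaluate there.
    """
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    pooled = np.concatenate([x, y])
    # Empirical CDF values F_n(t) and G_m(t) at each pooled point
    f = np.searchsorted(x, pooled, side="right") / len(x)
    g = np.searchsorted(y, pooled, side="right") / len(y)
    d_plus = float(np.max(f - g))   # D+ = max_x (F_n(x) - G_m(x))
    d_minus = float(np.max(g - f))  # D- = max_x (G_m(x) - F_n(x))
    return d_plus, d_minus, max(d_plus, d_minus)
```

For two fully separated samples, e.g. x = [1, 2, 3] and y = [4, 5, 6], F_n reaches 1 before G_m leaves 0, so D+ = D = 1 and D- = 0.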
Asymptotically, the distribution of the statistic

            Z = D_{mn} \sqrt{\frac{mn}{m+n}}
converges to a distribution given by Smirnov (1939).

Exact probabilities for the two-sided test are computed when nm \le 10^4, according to an algorithm given by Kim and Jennrich (1973). When nm \gt 10^4, the very good approximations given by Kim and Jennrich are used to obtain the two-sided p-values. The one-sided probability is taken as one half the two-sided probability. This is a very good approximation when the p-value is small (say, less than 0.10), and not very good for large p-values.
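The asymptotic branch of this procedure can be sketched as follows. This is an illustrative Python reimplementation under stated assumptions, not the ImslCS code: it normalizes D to Z as above, evaluates the Smirnov limiting tail probability Q(z) = 2 \sum_{k \ge 1} (-1)^{k-1} e^{-2k^2z^2} for the two-sided p-value, and halves it for the one-sided p-value; the function name and truncation at 100 series terms are my own choices:

```python
import math

def ks_asymptotic_pvalues(d, n, m, terms=100):
    """Asymptotic (Smirnov) p-values for the two-sample statistic D.

    z = D * sqrt(nm / (n + m)); the two-sided p-value is the
    Kolmogorov tail Q(z) = 2 * sum_{k>=1} (-1)^(k-1) * exp(-2 k^2 z^2),
    and the one-sided p-value is taken as half of it (a good
    approximation only when the p-value is small).
    """
    z = d * math.sqrt(n * m / (n + m))
    q = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * z * z)
                  for k in range(1, terms + 1))
    two_sided = min(max(q, 0.0), 1.0)  # clamp truncation error to [0, 1]
    return z, two_sided, two_sided / 2.0
```

For example, with D = 0.5 and n = m = 200, z = 0.5 * sqrt(200 * 200 / 400) = 5, and the series is dominated by its first term, giving a two-sided p-value near 2e^{-50}.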
