KolmogorovOneSample Class
The KolmogorovOneSample class performs a one-sample Kolmogorov-Smirnov goodness-of-fit test.
Inheritance Hierarchy
System.Object
  Imsl.Stat.KolmogorovOneSample

Namespace: Imsl.Stat
Assembly: ImslCS (in ImslCS.dll) Version: 6.5.2.0
Syntax
[SerializableAttribute]
public class KolmogorovOneSample

The KolmogorovOneSample type exposes the following members.

Constructors
Name                  Description
KolmogorovOneSample   Constructs a one-sample Kolmogorov-Smirnov goodness-of-fit test.
Methods
Name                          Description
Equals                        Determines whether the specified object is equal to the current object. (Inherited from Object.)
Finalize (protected)          Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
GetHashCode                   Serves as a hash function for a particular type. (Inherited from Object.)
GetType                       Gets the Type of the current instance. (Inherited from Object.)
MemberwiseClone (protected)   Creates a shallow copy of the current Object. (Inherited from Object.)
ToString                      Returns a string that represents the current object. (Inherited from Object.)
Properties
Name                Description
MaximumDifference   D^{+}, the maximum difference between the theoretical and empirical CDFs.
MinimumDifference   D^{-}, the minimum difference between the theoretical and empirical CDFs.
NumberMissing       The number of missing values in the data.
NumberOfTies        The number of ties in the data.
OneSidedPValue      Probability of the statistic exceeding D under the null hypothesis of equality and against the one-sided alternative. An exact probability is computed if the number of observations is less than or equal to 80; otherwise, an approximate probability is computed.
TestStatistic       The test statistic, D = \max(D^{+}, D^{-}); see the formulas following this table.
TwoSidedPValue      Probability of the statistic exceeding D under the null hypothesis of equality and against the two-sided alternative.
Z                   The normalized D statistic without the continuity correction applied.
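In the standard textbook formulation (e.g., Conover 1980), with the observations sorted as x_{(1)} \le \dots \le x_{(n)} and F^{*} denoting the theoretical CDF, the quantities reported by MaximumDifference, MinimumDifference, and TestStatistic correspond to

            \begin{array}{l}
            D_n^{+} = \max_{1 \le i \le n}\left(\frac{i}{n} - F^{*}(x_{(i)})\right), \qquad
            D_n^{-} = \max_{1 \le i \le n}\left(F^{*}(x_{(i)}) - \frac{i-1}{n}\right), \\[1ex]
            D_n = \max\left(D_n^{+},\, D_n^{-}\right)
            \end{array}

These are the conventional definitions, given here only as a reading aid; the handling of missing values and ties is described in the Remarks below.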
Remarks

The hypotheses tested follow:


            \begin{array}{ll}
            H_0:~ F(x) = F^{*}(x)   & H_1:~ F(x) \ne F^{*}(x) \\
            H_0:~ F(x) \ge F^{*}(x) & H_1:~ F(x) < F^{*}(x) \\
            H_0:~ F(x) \le F^{*}(x) & H_1:~ F(x) > F^{*}(x)
            \end{array}
where F is the cumulative distribution function (CDF) of the random variable and the theoretical CDF, F^{*}, is specified via the user-supplied function cdf. Let n be the number of observations minus the number of missing observations. The test statistics D_n^{+} and D_n^{-} for the one-sided alternatives and D_n for the two-sided alternative are computed, along with an asymptotic z-score and p-values for the one-sided and two-sided hypotheses.

For n > 80, asymptotic p-values are used (see Gibbons 1971). For n \le 80, exact one-sided p-values are computed by the method given in Conover (1980, page 350), and an approximate two-sided p-value is obtained as twice the one-sided p-value. This approximation is very close for one-sided p-values less than 0.10 but becomes poor as the one-sided p-value grows larger.
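For orientation, the large-sample approximation referenced above rests on the limiting Kolmogorov distribution of \sqrt{n}\,D_n. The familiar textbook forms of the asymptotic tail probabilities are shown below; these are standard results, not a transcription of the routine's internal computation.

            P\left(\sqrt{n}\,D_n^{+} > z\right) \approx e^{-2z^2}, \qquad
            P\left(\sqrt{n}\,D_n > z\right) \approx 2\sum_{k=1}^{\infty} (-1)^{k-1} e^{-2k^2 z^2}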

The theoretical CDF is assumed to be continuous. If the CDF is not continuous, the statistics D_n^{*} will not be computed correctly.

Estimation of parameters in the theoretical CDF from the sample data will tend to make the p-values associated with the test statistics too liberal. The empirical CDF will tend to be closer to the theoretical CDF than it should be.

No attempt is made to check that all points in the sample are in the support of the theoretical CDF. If any sample points lie outside the support of the CDF, the null hypothesis must be rejected.
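A minimal usage sketch follows, testing a small sample against the standard normal CDF. It assumes a constructor of the form KolmogorovOneSample(ICdfFunction cdf, double[] x), the Imsl.Stat.ICdfFunction interface with a double CdfFunction(double x) method, and the static Imsl.Stat.Cdf.Normal method; verify these signatures against the constructor and namespace reference pages before relying on them.

using System;
using Imsl.Stat;

// Hypothetical sketch: assumes Imsl.Stat.ICdfFunction (double CdfFunction(double x)),
// a KolmogorovOneSample(ICdfFunction cdf, double[] x) constructor, and the static
// Imsl.Stat.Cdf.Normal standard normal CDF. Check each against the reference pages.
public class StandardNormalCdf : ICdfFunction
{
    // Theoretical CDF F*(x): the standard normal distribution function.
    public double CdfFunction(double x)
    {
        return Cdf.Normal(x);
    }
}

public class KolmogorovOneSampleExample
{
    public static void Main(string[] args)
    {
        // Sample to be tested against the standard normal distribution.
        double[] x = { -0.6, 0.1, 0.4, -1.3, 0.9, 1.7, -0.2, 0.5, 2.1, -0.8 };

        KolmogorovOneSample kos = new KolmogorovOneSample(new StandardNormalCdf(), x);

        Console.WriteLine("D  = " + kos.TestStatistic);
        Console.WriteLine("D+ = " + kos.MaximumDifference);
        Console.WriteLine("D- = " + kos.MinimumDifference);
        Console.WriteLine("Z  = " + kos.Z);
        Console.WriteLine("Two-sided p-value = " + kos.TwoSidedPValue);
    }
}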
