Class C45
- All Implemented Interfaces:
Serializable, Cloneable
Generates a decision tree using the C4.5 algorithm for a categorical response variable and categorical or quantitative predictor variables. The C4.5 procedure (Quinlan, 1995) partitions the sample space using information gain or the gain ratio as the splitting criterion. Specifically, the entropy or uncertainty in the response variable with C categories over the full training sample S is defined as
$$E(S)=-\sum_{i=1}^Cp_i\mbox{log}\left(p_i\right) $$where \(p_i=\mbox{Pr}[Y=i|S]\) is the probability that the response takes on category i in the dataset S. This measure is widely known as the Shannon entropy. Splitting the dataset further may either increase or decrease the entropy in the response variable. For example, the entropy of Y over a partitioning of S by X, a variable with K categories, is given by
$$E(S,X)=\sum_{k=1}^{K}p\left(S_k \right)E\left(S_k\right)$$where \(S_k\) is the subset of S on which X takes its k-th category and \(p\left(S_k\right)\) is the proportion of S falling in \(S_k\). If any split defined by the values of a categorical predictor decreases the entropy in Y, then it is said to yield information gain:
$$g(S,X)=E(S)-E(S,X)$$The best splitting variable according to the information gain criterion is the variable yielding the largest information gain, calculated in this manner. A modified criterion is the gain ratio:
$$gR(S,X)=\frac{E(S)-E(S,X)}{E_X\left(S\right)} $$where
$$E_X\left(S\right)=-\sum_{k=1}^K\nu_k\mbox{log} \left(\nu_k\right)$$with
$$\nu_k=\mbox{Pr}[X=k|S]$$Note that \(E_X(S)\) is just the entropy of the variable X over S. The gain ratio is thought to be less biased toward predictors with many categories. C4.5 treats continuous variables similarly, except that only binary splits of the form \(X\le d\) and \(X\gt d\) are considered, where d is a value in the range of X on S. The best split is determined by the split variable and split point that give the largest criterion value. It is possible that no variable meets the threshold for further splitting at the current node, in which case growing stops and the node becomes a terminal node. Otherwise, the node is split, creating two or more child nodes. Then, using the dataset partition defined by the splitting variable and split value, the same procedure is repeated for each child node. In this way a collection of nodes and child nodes is generated; in other words, the tree is grown. Growth stops when one or more stopping conditions are met.
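To make these criteria concrete, the following self-contained sketch computes \(E(S)\), \(E(S,X)\), the information gain, and the gain ratio for a toy sample with C = 2 response classes and a categorical predictor with K = 2 categories. It implements the equations above directly rather than calling the IMSL API; the class name and the counts are illustrative only, and natural logarithms are used (the base only rescales the criteria).

```java
public class GainRatioDemo {

    /** Shannon entropy -sum p*log(p) of the distribution given by counts. */
    static double entropy(double[] counts) {
        double total = 0.0;
        for (double c : counts) {
            total += c;
        }
        double e = 0.0;
        for (double c : counts) {
            if (c > 0.0) {           // 0*log(0) is taken as 0
                double p = c / total;
                e -= p * Math.log(p);
            }
        }
        return e;
    }

    public static void main(String[] args) {
        // counts[k][i] = number of observations in S with X = k and Y = i.
        double[][] counts = { { 6.0, 2.0 }, { 1.0, 5.0 } };

        double[] yCounts = new double[2]; // marginal class counts of Y
        double[] xCounts = new double[2]; // marginal category counts of X
        double n = 0.0;
        for (int k = 0; k < 2; k++) {
            for (int i = 0; i < 2; i++) {
                yCounts[i] += counts[k][i];
                xCounts[k] += counts[k][i];
                n += counts[k][i];
            }
        }

        double eS = entropy(yCounts);  // E(S)
        double eSX = 0.0;              // E(S,X) = sum_k p(S_k) E(S_k)
        for (int k = 0; k < 2; k++) {
            eSX += (xCounts[k] / n) * entropy(counts[k]);
        }
        double gain = eS - eSX;        // g(S,X)
        double eX = entropy(xCounts);  // E_X(S), entropy of X over S
        double gainRatio = gain / eX;  // gR(S,X)

        System.out.printf("E(S)=%.4f E(S,X)=%.4f gain=%.4f gainRatio=%.4f%n",
                eS, eSX, gain, gainRatio);
    }
}
```

The inherited setGainCriteria and setUseRatio methods listed in the method summary below appear to control which of these two criteria the tree uses.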
Nested Class Summary
- Nested classes/interfaces inherited from class com.imsl.datamining.decisionTree.DecisionTreeInfoGain:
DecisionTreeInfoGain.GainCriteria
- Nested classes/interfaces inherited from class com.imsl.datamining.decisionTree.DecisionTree:
DecisionTree.MaxTreeSizeExceededException, DecisionTree.PruningFailedToConvergeException, DecisionTree.PureNodeException
- Nested classes/interfaces inherited from class com.imsl.datamining.PredictiveModel:
PredictiveModel.CloneNotSupportedException, PredictiveModel.PredictiveModelException, PredictiveModel.StateChangeException, PredictiveModel.SumOfProbabilitiesNotOneException, PredictiveModel.VariableType
Constructor Summary
- C45(double[][] xy, int responseColumnIndex, PredictiveModel.VariableType[] varType)
Constructs a C45 object for a single response variable and multiple predictor variables.
- C45(C45 c45Model)
Constructs a copy of the input C45 decision tree.
Method Summary
- clone()
Clones a C45 decision tree.
- protected int selectSplitVariable(double[][] xy, double[] classCounts, double[] parentFreq, double[] splitValue, double[] splitCriterionValue, int[] splitPartition)
Selects the split variable for the present node using the C4.5 method.

Methods inherited from class com.imsl.datamining.decisionTree.DecisionTreeInfoGain:
getCountXY, getCriteriaValueCategorical, setGainCriteria, setUseRatio, useGainRatio

Methods inherited from class com.imsl.datamining.decisionTree.DecisionTree:
fitModel, getCostComplexityValues, getDecisionTree, getFittedMeanSquaredError, getMaxDepth, getMaxNodes, getMeanSquaredPredictionError, getMinCostComplexityValue, getMinObsPerChildNode, getMinObsPerNode, getNodeAssigments, getNumberOfComplexityValues, getNumberOfRandomFeatures, isAutoPruningFlag, isRandomFeatureSelection, predict, predict, predict, printDecisionTree, printDecisionTree, pruneTree, setAutoPruningFlag, setConfiguration, setCostComplexityValues, setMaxDepth, setMaxNodes, setMinCostComplexityValue, setMinObsPerChildNode, setMinObsPerNode, setNumberOfRandomFeatures, setRandomFeatureSelection

Methods inherited from class com.imsl.datamining.PredictiveModel:
getClassCounts, getClassErrors, getClassErrors, getClassLabels, getClassProbabilities, getCostMatrix, getMaxNumberOfCategories, getMaxNumberOfIterations, getNumberOfClasses, getNumberOfColumns, getNumberOfMissing, getNumberOfPredictors, getNumberOfRows, getNumberOfUniquePredictorValues, getPredictorIndexes, getPredictorTypes, getPrintLevel, getPriorProbabilities, getRandomObject, getResponseColumnIndex, getResponseVariableAverage, getResponseVariableMostFrequentClass, getResponseVariableType, getTotalWeight, getVariableType, getWeights, getXY, isConstantSeries, isMustFitModel, isUserFixedNClasses, setClassCounts, setClassLabels, setClassProbabilities, setCostMatrix, setMaxNumberOfCategories, setMaxNumberOfIterations, setMustFitModel, setNumberOfClasses, setPredictorIndex, setPredictorTypes, setPrintLevel, setPriorProbabilities, setRandomObject, setResponseColumnIndex, setTrainingData, setVariableType, setWeights
Constructor Details

C45
C45(double[][] xy, int responseColumnIndex, PredictiveModel.VariableType[] varType)
Constructs a C45 object for a single response variable and multiple predictor variables.
Parameters:
xy - a double matrix containing the training data and associated response values
responseColumnIndex - an int specifying the column index of the response variable
varType - a PredictiveModel.VariableType array containing the type of each variable
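A minimal usage sketch for this constructor follows. The VariableType constants QUANTITATIVE_CONTINUOUS and CATEGORICAL, and the behavior of the inherited fitModel() and no-argument predict() (assumed here to return predictions for the training rows), should be checked against the PredictiveModel and DecisionTree documentation; the training data are illustrative only.

```java
import com.imsl.datamining.PredictiveModel;
import com.imsl.datamining.decisionTree.C45;

public class C45UsageSketch {
    public static void main(String[] args) throws Exception {
        // Toy training data: column 0 quantitative, column 1 categorical,
        // column 2 the categorical response.
        double[][] xy = {
            { 1.8, 0.0, 0.0 },
            { 2.0, 1.0, 0.0 },
            { 2.2, 0.0, 0.0 },
            { 4.5, 1.0, 1.0 },
            { 4.7, 0.0, 1.0 },
            { 5.1, 1.0, 1.0 }
        };
        // Assumed VariableType constants; verify against PredictiveModel.
        PredictiveModel.VariableType[] varType = {
            PredictiveModel.VariableType.QUANTITATIVE_CONTINUOUS,
            PredictiveModel.VariableType.CATEGORICAL,
            PredictiveModel.VariableType.CATEGORICAL
        };

        C45 model = new C45(xy, 2, varType); // response is column 2
        model.fitModel();                    // grow the tree (inherited from DecisionTree)
        double[] fitted = model.predict();   // assumed: predictions on the training data
        System.out.println("first fitted class: " + fitted[0]);
    }
}
```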
C45
C45(C45 c45Model)
Constructs a copy of the input C45 decision tree.
Parameters:
c45Model - a C45 decision tree
Method Details
clone
Clones a C45 decision tree.
Specified by:
clone in class PredictiveModel
Returns:
a clone of the C45 decision tree
selectSplitVariable
protected int selectSplitVariable(double[][] xy, double[] classCounts, double[] parentFreq, double[] splitValue, double[] splitCriterionValue, int[] splitPartition)
Selects the split variable for the present node using the C4.5 method.
Specified by:
selectSplitVariable in class DecisionTreeInfoGain
Parameters:
xy - a double matrix containing the data
classCounts - a double array containing the counts for each class of the response variable, when it is categorical
parentFreq - a double array used to indicate which subset of the observations belong in the current node
splitValue - a double array representing the resulting split point if the selected variable is quantitative
splitCriterionValue - a double array containing the value of the criterion used to determine the splitting variable
splitPartition - an int array indicating the resulting split partition if the selected variable is categorical
Returns:
an int specifying the column index of the split variable in this.getPredictorIndexes()
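Because selectSplitVariable is protected, it is mainly of interest when extending the class. The following hypothetical subclass (the name TracingC45 is ours, not part of the library) delegates to the C4.5 selection logic and prints each decision; it assumes the output arrays, such as splitCriterionValue, are populated by the superclass call, as the parameter descriptions above suggest.

```java
import com.imsl.datamining.PredictiveModel;
import com.imsl.datamining.decisionTree.C45;

public class TracingC45 extends C45 {

    public TracingC45(double[][] xy, int responseColumnIndex,
                      PredictiveModel.VariableType[] varType) {
        super(xy, responseColumnIndex, varType);
    }

    @Override
    protected int selectSplitVariable(double[][] xy, double[] classCounts,
            double[] parentFreq, double[] splitValue,
            double[] splitCriterionValue, int[] splitPartition) {
        // Delegate to the standard C4.5 split selection.
        int column = super.selectSplitVariable(xy, classCounts, parentFreq,
                splitValue, splitCriterionValue, splitPartition);
        // Assumption: splitCriterionValue[0] holds the winning criterion value.
        System.out.println("split on predictor column " + column
                + " (criterion = " + splitCriterionValue[0] + ")");
        return column;
    }
}
```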