>>> from mlpack import hoeffding_tree
This program implements Hoeffding trees, a form of streaming decision tree best suited for large (or streaming) datasets. It supports both categorical and numeric data. Given an input dataset, the program can train a tree with numerous training options and save the model to a file. It can also use a trained model, or a model loaded from file, to predict classes for a given test set.
The training file and associated labels are specified with the 'training' and 'labels' parameters, respectively. Optionally, if 'labels' is not specified, the labels are assumed to be the last dimension of the training dataset.
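As an illustrative sketch of the two label layouts (using a hypothetical synthetic dataset, not part of this program):

```python
import numpy as np

# Hypothetical synthetic dataset: 100 points with 5 features each.
X = np.random.rand(100, 5)
y = np.random.randint(0, 3, size=100)  # labels in {0, 1, 2}

# Option 1: pass X as 'training' and y as 'labels' separately.
# Option 2: append the labels as the last dimension of the training matrix:
combined = np.column_stack([X, y])
print(combined.shape)  # (100, 6): 5 features plus the label column
```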
The training may be performed in batch mode (like a typical decision tree algorithm) by specifying the 'batch_mode' option, but this may not be the best option for large datasets.
When a model is trained, it may be saved via the 'output_model' output parameter. A model may be loaded from file for further training or testing with the 'input_model' parameter.
Test data may be specified with the 'test' parameter, and if performance statistics are desired for that test set, labels may be specified with the 'test_labels' parameter. Predictions for each test point may be saved with the 'predictions' output parameter, and class probabilities for each prediction may be saved with the 'probabilities' output parameter.
For example, to train a Hoeffding tree with confidence 0.99 with data 'dataset', saving the trained tree to 'tree', the following command may be used:
>>> output = hoeffding_tree(training=dataset, confidence=0.99)
>>> tree = output['output_model']
Then, this tree may be used to make predictions on the test set 'test_set', saving the predictions into 'predictions' and the class probabilities into 'class_probs' with the following command:
>>> output = hoeffding_tree(input_model=tree, test=test_set)
>>> predictions = output['predictions']
>>> class_probs = output['probabilities']
- batch_mode (bool): If true, samples will be considered in batch instead of as a stream. This generally results in better trees but at the cost of memory usage and runtime.
- bins (int): If the 'domingos' split strategy is used, this specifies the number of bins for each numeric split. Default value 10.
- confidence (float): Confidence before splitting (between 0 and 1). Default value 0.95.
- copy_all_inputs (bool): If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code.
- info_gain (bool): If set, information gain is used instead of Gini impurity for calculating Hoeffding bounds.
- input_model (mlpack.HoeffdingTreeModelType): Input trained Hoeffding tree model.
- labels (numpy vector or array, int/long dtype): Labels for training dataset.
- max_samples (int): Maximum number of samples before splitting. Default value 5000.
- min_samples (int): Minimum number of samples before splitting. Default value 100.
- numeric_split_strategy (string): The splitting strategy to use for numeric features: 'domingos' or 'binary'. Default value 'binary'.
- observations_before_binning (int): If the 'domingos' split strategy is used, this specifies the number of samples observed before binning is performed. Default value 100.
- passes (int): Number of passes to take over the dataset. Default value 1.
- test (categorical matrix/array): Testing dataset (may be categorical).
- test_labels (numpy vector or array, int/long dtype): Labels of test data.
- training (categorical matrix/array): Training dataset (may be categorical).
- verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.
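The 'confidence', 'min_samples', and 'max_samples' parameters interact through the Hoeffding bound that gives this family of algorithms its name: a node splits once the observed gain gap between the two best candidate splits exceeds the bound (or once 'max_samples' is reached). The sketch below illustrates the standard bound; it is not mlpack code, and the function name is hypothetical.

```python
import math

def hoeffding_bound(value_range, confidence, n):
    """Hoeffding bound epsilon: with probability `confidence`, the true mean
    of a variable with range `value_range` lies within epsilon of the mean
    observed over n samples."""
    delta = 1.0 - confidence
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

# The bound shrinks as more samples are seen, so a higher 'confidence'
# (smaller delta) simply requires more samples before a split is justified.
print(hoeffding_bound(1.0, 0.95, 100))   # looser bound after few samples
print(hoeffding_bound(1.0, 0.95, 5000))  # much tighter bound
```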
The return value from the binding is a dict containing the following elements:
- output_model (mlpack.HoeffdingTreeModelType): Output for trained Hoeffding tree model.
- predictions (numpy vector, int dtype): Label predictions for the test data.
- probabilities (numpy matrix, float dtype): In addition to label predictions, prediction probabilities for each class are stored in this matrix.