mlpack.decision_tree

decision_tree(...)
Decision tree

>>> from mlpack import decision_tree

Train and evaluate using a decision tree. Given a dataset containing numeric or categorical features, and associated labels for each point in the dataset, this program can train a decision tree on that data.

The training file and associated labels are specified with the 'training' and 'labels' parameters, respectively. The labels should be in the range [0, num_classes - 1]. Optionally, if 'labels' is not specified, the labels are assumed to be the last dimension of the training dataset.
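As a sketch of the labels-in-last-dimension convention, assuming the dataset is a plain NumPy array (the variable names and values here are illustrative, not taken from a real run):

```python
import numpy as np

# Toy dataset: three numeric features plus a final label column.
data_with_labels = np.array([
    [0.1, 2.0, 3.5, 0],
    [0.4, 1.8, 2.9, 1],
    [0.9, 2.2, 3.1, 1],
])

# When 'labels' is not passed, the binding treats the last dimension as
# the labels; splitting it off manually looks like this:
features = data_with_labels[:, :-1]
labels = data_with_labels[:, -1].astype(int)
```

Passing `training=features, labels=labels` is then equivalent to passing `training=data_with_labels` alone.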

When a model is trained, the 'output_model' output parameter may be used to save the trained model. A model may be loaded for predictions with the 'input_model' parameter. The 'input_model' parameter may not be specified when the 'training' parameter is specified. The 'minimum_leaf_size' parameter specifies the minimum number of training points that must fall into each leaf for a node to be split. The 'minimum_gain_split' parameter specifies the minimum gain that is needed for a node to split. If 'print_training_error' is specified, the training error will be printed.

Test data may be specified with the 'test' parameter, and if performance numbers are desired for that test set, labels may be specified with the 'test_labels' parameter. Predictions for each test point may be saved via the 'predictions' output parameter. Class probabilities for each prediction may be saved with the 'probabilities' output parameter.
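Once predictions are available, test-set accuracy can be computed directly from the returned arrays. This sketch uses NumPy only and assumes `predictions` and `test_labels` are integer vectors of equal length (the values below are made up for illustration):

```python
import numpy as np

# Hypothetical 'predictions' output and 'test_labels' input for a
# five-point test set.
predictions = np.array([0, 1, 1, 0, 2])
test_labels = np.array([0, 1, 0, 0, 2])

# Accuracy is the fraction of test points whose predicted class matches
# the true label.
accuracy = np.mean(predictions == test_labels)
```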

For example, to train a decision tree with a minimum leaf size of 20 on the dataset contained in 'data' with labels 'labels', saving the output model to 'tree' and printing the training error, one could call

>>> output = decision_tree(training=data, labels=labels, minimum_leaf_size=20,
...     minimum_gain_split=0.001, print_training_error=True)

>>> tree = output['output_model']

Then, to use that model to classify points in 'test_set' and print the test error given the labels 'test_labels' using that model, while saving the predictions for each point to 'predictions', one could call

>>> output = decision_tree(input_model=tree, test=test_set, test_labels=test_labels)

>>> predictions = output['predictions']

## input options

- copy_all_inputs (bool): If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code.
- input_model (mlpack.DecisionTreeModelType): Pre-trained decision tree, to be used with test points.
- labels (numpy vector or array, int/long dtype): Training labels.
- minimum_gain_split (float): Minimum gain for node splitting. Default value 1e-07.
- minimum_leaf_size (int): Minimum number of points in a leaf. Default value 20.
- print_training_error (bool): Print the training error.
- test (tuple): Testing dataset (may be categorical).
- test_labels (numpy matrix or arraylike, int/long dtype): Test point labels, if accuracy calculation is desired.
- training (tuple): Training dataset (may be categorical).
- verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.
- weights (numpy matrix or arraylike, float dtype): The weight of each training point.

## output options

The return value from the binding is a dict containing the following elements:

- output_model (mlpack.DecisionTreeModelType): Output for trained decision tree.
- predictions (numpy vector, int dtype): Class predictions for each test point.
- probabilities (numpy matrix, float dtype): Class probabilities for each test point.
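The 'probabilities' matrix has one row per test point and one column per class; taking a row-wise argmax recovers the class predictions. A sketch with made-up numbers (not produced by an actual model):

```python
import numpy as np

# Hypothetical 'probabilities' output for three test points, two classes.
probabilities = np.array([
    [0.90, 0.10],
    [0.20, 0.80],
    [0.55, 0.45],
])

# Each row sums to 1; the argmax along axis 1 gives the predicted class,
# which matches the 'predictions' output.
predicted_classes = np.argmax(probabilities, axis=1)
```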