turicreate.sound_classifier.SoundClassifier.evaluate
SoundClassifier.evaluate(dataset, metric='auto', verbose=True, batch_size=64)
Evaluate the model by making predictions of target values and comparing these to actual values.
Parameters: - dataset : SFrame
Dataset to use for evaluation; it must include a column with the same name as the features used for model training, along with the target column so that predictions can be compared against actual values. Additional columns are ignored.
- metric : str, optional
Name of the evaluation metric. Possible values are:
- ‘auto’ : Returns all available metrics.
- ‘accuracy’ : Classification accuracy (micro average).
- ‘auc’ : Area under the ROC curve (macro average).
- ‘precision’ : Precision score (macro average).
- ‘recall’ : Recall score (macro average).
- ‘f1_score’ : F1 score (macro average).
- ‘log_loss’ : Log loss.
- ‘confusion_matrix’ : An SFrame with counts of possible prediction/true label combinations.
- ‘roc_curve’ : An SFrame containing information needed for an ROC curve.
- verbose : bool, optional
If True, prints progress updates and model details.
- batch_size : int, optional
If you are getting memory errors, try decreasing this value. If you have a powerful computer, increasing this value may improve performance.
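For instance (a sketch, not from the original docs; `data` stands in for an evaluation SFrame), a run that exhausts memory at the default batch size might be retried with a smaller one:
>>> results = model.evaluate(data, batch_size=32)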
Returns: - out : dict
Dictionary of evaluation results where the key is the name of the evaluation metric (e.g. ‘accuracy’) and the value is the evaluation score.
Examples
>>> results = model.evaluate(data)
>>> print(results['accuracy'])
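A single metric can also be requested rather than the full set; the returned dictionary then contains just that key. A minimal sketch, assuming `data` is an SFrame holding the model’s feature and target columns:
>>> results = model.evaluate(data, metric='confusion_matrix')
>>> results['confusion_matrix'].print_rows()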