turicreate.image_classifier.ImageClassifier.evaluate

ImageClassifier.evaluate(dataset, metric='auto', verbose=True, batch_size=64)

Evaluate the model by making predictions of target values and comparing these to actual values.

Parameters:
dataset : SFrame

Dataset of new observations. Must include columns with the same names as the target and features used for model training. Additional columns are ignored.

metric : str, optional

Name of the evaluation metric. Possible values are:

  • ‘auto’ : Returns all available metrics.
  • ‘accuracy’ : Classification accuracy (micro average).
  • ‘auc’ : Area under the ROC curve (macro average).
  • ‘precision’ : Precision score (macro average).
  • ‘recall’ : Recall score (macro average).
  • ‘f1_score’ : F1 score (macro average).
  • ‘log_loss’ : Log loss.
  • ‘confusion_matrix’ : An SFrame with counts of possible prediction/true label combinations.
  • ‘roc_curve’ : An SFrame containing information needed for an ROC curve.

For more flexibility in calculating evaluation metrics, use the evaluation module.
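
For example, the following sketch computes a metric from explicit predictions via the evaluation module. It assumes a trained classifier named model, a labeled SFrame test_data, and a target column named ‘label’; none of these names are defined on this page.

>>> import turicreate as tc
>>> # Hypothetical names: `model`, `test_data`, and 'label' are assumptions.
>>> predictions = model.predict(test_data)
>>> print(tc.evaluation.accuracy(test_data['label'], predictions))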

verbose : bool, optional

If True, prints progress updates and model details.

batch_size : int, optional

Number of images processed together in a single batch. If you are getting memory errors, try decreasing this value. If you have a powerful computer, increasing this value may improve performance.
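
For instance, a smaller batch size can be passed to reduce memory pressure (a sketch; model and test_data are the same hypothetical names as above, and 32 is an arbitrary example value):

>>> # Default batch_size is 64; lower it if evaluation runs out of memory.
>>> results = model.evaluate(test_data, batch_size=32)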

Returns:
out : dict

Dictionary of evaluation results where the key is the name of the evaluation metric (e.g. accuracy) and the value is the evaluation score.

See also

create, predict, classify

Examples

>>> results = model.evaluate(data)
>>> print(results['accuracy'])
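
A single metric can also be requested by name. The sketch below assumes the same data SFrame and retrieves the confusion matrix, which is returned under the matching key:

>>> results = model.evaluate(data, metric='confusion_matrix')
>>> print(results['confusion_matrix'])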