turicreate.svm_classifier.SVMClassifier.evaluate¶
SVMClassifier.evaluate(dataset, metric='auto', missing_value_action='auto', with_predictions=False)¶

Evaluate the model by making predictions of target values and comparing these to actual values.
Two metrics are used to evaluate an SVM classifier. The confusion matrix contains the cross-tabulation of actual and predicted classes for the target variable. Classifier accuracy is the fraction of examples whose predicted and actual classes match.
Parameters:
- dataset : SFrame
Dataset of new observations. Must include columns with the same names as the target and features used for model training. Additional columns are ignored.
- metric : str, optional
Name of the evaluation metric. Possible values are:
- ‘auto’ : Returns all available metrics.
- ‘accuracy’ : Classification accuracy (micro average)
- ‘precision’ : Precision score (micro average)
- ‘recall’ : Recall score (micro average)
- ‘f1_score’ : F1 score (micro average)
- ‘confusion_matrix’ : An SFrame with counts of possible prediction/true label combinations.
- missing_value_action : str, optional
Action to perform when missing values are encountered. This can be one of:
- ‘auto’: Defaults to ‘impute’.
- ‘impute’: Proceed with evaluation by filling in the missing values with the mean of the training data. Missing values are also imputed if an entire column of data is missing during evaluation.
- ‘error’: Do not proceed with evaluation and terminate with an error message.
Returns:
- out : dict
Dictionary of evaluation results where the key is the name of the evaluation metric (e.g. accuracy) and the value is the evaluation score.
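The ‘impute’ behavior above can be sketched without Turi Create. This is not the library's implementation, just a minimal illustration with made-up feature values: missing entries in an evaluation column are replaced by the mean of the corresponding training column.

```python
# Hypothetical training data for a single feature column ('size').
train_size = [1200.0, 1500.0, 1800.0]
train_mean = sum(train_size) / len(train_size)  # mean of the training column

# Evaluation data with a missing value (None), as 'impute' would see it.
eval_size = [1000.0, None, 2000.0]

# 'impute' fills each missing value with the training-data mean.
imputed = [v if v is not None else train_mean for v in eval_size]
```

With ‘error’ instead, evaluation would stop on the first missing value rather than filling it in.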
Examples
>>> data = turicreate.SFrame('https://static.turi.com/datasets/regression/houses.csv')
>>> data['is_expensive'] = data['price'] > 30000
>>> model = turicreate.svm_classifier.create(data,
...                                          target='is_expensive',
...                                          features=['bath', 'bedroom', 'size'])
>>> results = model.evaluate(data)
>>> print(results['accuracy'])
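For reference, the metrics listed under the metric parameter are related in a simple way for a binary classifier. The following is a library-free sketch using invented labels, not Turi Create's implementation: the confusion matrix tabulates (actual, predicted) pairs, and accuracy, precision, recall, and F1 follow from its cells.

```python
from collections import Counter

# Invented binary labels for illustration only.
actual    = [1, 1, 0, 0, 1, 0, 1, 0]
predicted = [1, 0, 0, 0, 1, 1, 1, 0]

# Confusion matrix: counts of each (actual, predicted) label combination.
confusion = Counter(zip(actual, predicted))
tp = confusion[(1, 1)]  # true positives
fp = confusion[(0, 1)]  # false positives
fn = confusion[(1, 0)]  # false negatives
tn = confusion[(0, 0)]  # true negatives

# The scalar metrics derive from the confusion-matrix cells.
accuracy  = (tp + tn) / len(actual)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1_score  = 2 * precision * recall / (precision + recall)
```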