Utilities¶
Utilities for the entire package.
-
coremltools.models.utils.convert_double_to_float_multiarray_type(spec)
Convert all double multiarray feature descriptions (input, output, and training input) to float multiarrays.
Parameters:
- spec: Model_pb
The specification containing the multiarray types to convert.
Examples
# In-place convert multiarray types of spec
>>> spec = mlmodel.get_spec()
>>> coremltools.utils.convert_double_to_float_multiarray_type(spec)
>>> model = coremltools.models.MLModel(spec)
-
coremltools.models.utils.evaluate_classifier(model, data, target='target', verbose=False)
Evaluate a Core ML classifier model and compare against predictions from the original framework (for testing correctness of conversion). Use this evaluation for models that don't deal with probabilities.
Parameters:
- model: str or MLModel
File path from which to load the model, or an already loaded MLModel instance.
- data: str or DataFrame
Test data on which to evaluate the model (a dataframe, or a path to a CSV file).
- target: str
Name of the column to interpret as the target column.
- verbose: bool
Set to True for more verbose output.
Examples
>>> metrics = coremltools.utils.evaluate_classifier(spec, 'data_and_predictions.csv', 'target')
>>> print(metrics)
{"samples": 10, "num_errors": 0}
-
coremltools.models.utils.evaluate_classifier_with_probabilities(model, data, probabilities='probabilities', verbose=False)
Evaluate a classifier specification for testing.
Parameters:
- model: str or MLModel
File path from which to load the model, or an already loaded MLModel instance.
- data: str or DataFrame
Test data on which to evaluate the model (a dataframe, or a path to a CSV file).
- probabilities: str
Name of the column to interpret as the probabilities column.
- verbose: bool
Set to True for more verbose output.
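The original docs carry no example for this helper. Conceptually, the evaluation measures how far the converted model's class probabilities drift from the reference probabilities. A minimal pure-Python sketch of that comparison (illustrative only, not the coremltools implementation; the sample rows and label names are made up):

```python
# Illustrative sketch, NOT the coremltools implementation: find the largest
# absolute difference between predicted and reference class probabilities
# across all samples.
def max_probability_error(predicted, reference):
    worst = 0.0
    for p_row, r_row in zip(predicted, reference):
        for label, p in p_row.items():
            worst = max(worst, abs(p - r_row[label]))
    return worst

# Hypothetical probabilities for two samples over labels "cat" and "dog".
predicted = [{"cat": 0.9, "dog": 0.1}, {"cat": 0.2, "dog": 0.8}]
reference = [{"cat": 0.88, "dog": 0.12}, {"cat": 0.25, "dog": 0.75}]
print(max_probability_error(predicted, reference))
```

A small maximum error suggests the conversion preserved the classifier's probability outputs.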
-
coremltools.models.utils.evaluate_regressor(model, data, target='target', verbose=False)
Evaluate a Core ML regression model and compare against predictions from the original framework (for testing correctness of conversion).
Parameters:
- model: str or MLModel
File path from which to load the MLModel, or an already loaded MLModel instance.
- data: str or DataFrame
Test data on which to evaluate the model (a dataframe, or a path to a .csv file).
- target: str
Name of the column in the dataframe to interpret as the target column.
- verbose: bool
Set to True for more verbose output.
Examples
>>> metrics = coremltools.utils.evaluate_regressor(spec, 'data_and_predictions.csv', 'target')
>>> print(metrics)
{"samples": 10, "rmse": 0.0, "max_error": 0.0}
-
coremltools.models.utils.evaluate_transformer(model, input_data, reference_output, verbose=False)
Evaluate a transformer specification for testing.
Parameters:
- model: str or MLModel
File path from which to load the model, or an already loaded MLModel instance.
- input_data: list of dict
Test data on which to evaluate the model.
- reference_output: list of dict
Expected results for the model.
- verbose: bool
Set to True for more verbose output.
Examples
>>> input_data = [{'input_1': 1, 'input_2': 2}, {'input_1': 3, 'input_2': 3}]
>>> expected_output = [{'input_1': 2.5, 'input_2': 2.0}, {'input_1': 1.3, 'input_2': 2.3}]
>>> metrics = coremltools.utils.evaluate_transformer(scaler_spec, input_data, expected_output)
-
coremltools.models.utils.has_custom_layer(spec)
Returns True if the given protobuf specification has a custom layer, and False otherwise.
Parameters:
- spec: mlmodel spec
Returns:
- True if the protobuf specification contains a neural network with a custom layer, False otherwise.
-
coremltools.models.utils.is_macos()
Returns True if the current platform is macOS, False otherwise.
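No example accompanies this helper. In practice the check behaves like a test of Python's platform identifier; a rough stand-in (an assumption about its behavior, not the actual coremltools implementation):

```python
import sys

# Rough stand-in for coremltools.models.utils.is_macos(); assumes the
# check reduces to CPython's platform identifier ("darwin" on macOS).
def is_macos():
    return sys.platform == "darwin"

print(is_macos())  # True on a Mac, False elsewhere
```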
-
coremltools.models.utils.load_spec(filename)
Load a protobuf model specification from file.
Parameters: - filename: str
Location on disk (a valid file path) from which the file is loaded as a protobuf spec.
Returns: - model_spec: Model_pb
Protobuf representation of the model
Examples
>>> spec = coremltools.utils.load_spec('HousePricer.mlmodel')
-
coremltools.models.utils.macos_version()
Returns the macOS version as a tuple of integers, making it easy to do proper version comparisons. On non-Macs, it returns an empty tuple.
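Because the version is a tuple of integers, comparisons work element-wise with Python's ordinary tuple ordering. A sketch of how such a tuple could be derived from the standard library, plus a comparison (assumes `platform.mac_ver()` as the data source; the real helper may differ in detail):

```python
import platform

# Hedged sketch: derive a comparable (major, minor, patch) tuple, returning
# an empty tuple when mac_ver() reports nothing (i.e., on non-Macs).
def macos_version():
    release = platform.mac_ver()[0]  # e.g. "13.4.1"; "" off macOS
    return tuple(int(part) for part in release.split(".")) if release else ()

# Integer tuples compare element-wise, so version gates read naturally:
print((10, 15, 7) >= (10, 13))  # True
print((10, 9) >= (10, 13))      # False
```

Comparing tuples avoids the classic string-comparison bug where "10.9" sorts after "10.13".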
-
coremltools.models.utils.rename_feature(spec, current_name, new_name, rename_inputs=True, rename_outputs=True)
Rename a feature in the specification.
Parameters: - spec: Model_pb
The specification containing the feature to rename.
- current_name: str
Current name of the feature. If this feature doesn’t exist, the rename is a no-op.
- new_name: str
New name of the feature.
- rename_inputs: bool
Search for current_name only in the input features (i.e., ignore output features).
- rename_outputs: bool
Search for current_name only in the output features (i.e., ignore input features).
Examples
# In-place rename of spec
>>> coremltools.utils.rename_feature(spec, 'old_feature', 'new_feature_name')
-
coremltools.models.utils.save_spec(spec, filename, auto_set_specification_version=False)
Save a protobuf model specification to file.
Parameters: - spec: Model_pb
Protobuf representation of the model
- filename: str
File path where the spec gets saved.
- auto_set_specification_version: bool
If True, always tries to set the specification version automatically.
Examples
>>> coremltools.utils.save_spec(spec, 'HousePricer.mlmodel')