Models

MLModel

class coremltools.models.model.MLModel(model, useCPUOnly=False, is_temp_package=False, mil_program=None, skip_model_load=False, compute_units=ComputeUnit.ALL)[source]

This class defines the minimal interface to a CoreML object in Python.

At a high level, the protobuf specification consists of:

  • Model description: Encodes names and type information of the inputs and outputs to the model.

  • Model parameters: The set of parameters required to represent a specific instance of the model.

  • Metadata: Information about the origin, license, and author of the model.

With this class, you can inspect a CoreML model, modify metadata, and make predictions for the purposes of testing (on select platforms).

See also

predict

Examples

# Load the model
>>> model = MLModel('HousePricer.mlmodel')

# Set the model metadata
>>> model.author = 'Author'
>>> model.license = 'BSD'
>>> model.short_description = 'Predicts the price of a house in the Seattle area.'

# Get the interface to the model
>>> model.input_description
>>> model.output_description

# Set feature descriptions manually
>>> model.input_description['bedroom'] = 'Number of bedrooms'
>>> model.input_description['bathrooms'] = 'Number of bathrooms'
>>> model.input_description['size'] = 'Size (in square feet)'

# Set the output descriptions
>>> model.output_description['price'] = 'Price of the house'

# Make predictions
>>> predictions = model.predict({'bedroom': 1.0, 'bathrooms': 1.0, 'size': 1240})

# Get the spec of the model
>>> model.spec

# Save the model
>>> model.save('HousePricer.mlmodel')
__init__(model, useCPUOnly=False, is_temp_package=False, mil_program=None, skip_model_load=False, compute_units=ComputeUnit.ALL)[source]

Construct an MLModel from a .mlmodel file.

Parameters
model: str or Model_pb2

For MIL, the model must be a path string to the directory containing bundle artifacts (such as weights.bin).

For NeuralNetwork, the model can be a path string (.mlmodel) or Model_pb2.

useCPUOnly: bool

This parameter is deprecated and will be removed in 6.0. Use the compute_units parameter instead.

The compute_units parameter overrides any usages of this parameter.

Set to True to restrict loading of the model to only the CPU. Defaults to False.

is_temp_package: bool

Set to True if the input model package directory is temporary and can be deleted upon destruction of this object.

mil_program: coremltools.converters.mil.Program

Set to the MIL program object, if available. It is available whenever an MLModel object is constructed using the unified converter API coremltools.convert().

skip_model_load: bool

Set to True to prevent coremltools from calling into the Core ML framework to compile and load the model. In that case, the returned model object cannot be used to make a prediction. This flag may be used to load a newer model type on an older Mac, to inspect or load/save the spec.

Example: loading an ML program model type on macOS 11, since an ML program can be compiled and loaded only on macOS 12+.

Defaults to False.

compute_units: coremltools.ComputeUnit
An enum with three possible values:
  • coremltools.ComputeUnit.ALL: Use all compute units available, including the neural engine.

  • coremltools.ComputeUnit.CPU_ONLY: Limit the model to only use the CPU.

  • coremltools.ComputeUnit.CPU_AND_GPU: Use both the CPU and GPU, but not the neural engine.
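
For example, a minimal sketch (the file name is illustrative) of loading a model restricted to the CPU:

>>> import coremltools
>>> model = coremltools.models.MLModel('my_model.mlmodel',
...                                    compute_units=coremltools.ComputeUnit.CPU_ONLY)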

Notes

Internally this maintains the following:

  • _MLModelProxy: A pybind wrapper around CoreML::Python::Model (see coremltools/coremlpython/CoreMLPython.mm)

  • bundle_path (MIL only): Directory containing all artifacts (.mlmodel, weights, and so on).

Examples

>>> loaded_model = MLModel('my_model_file.mlmodel')
get_spec()[source]

Get a deep copy of the protobuf specification of the model.

Returns
model: Model_pb2

Protobuf specification of the model.

Examples

>>> spec = model.get_spec()
predict(data, useCPUOnly=False, **kwargs)[source]

Return predictions for the model. The kwargs are passed into the model as a dictionary.

Parameters
data: dict[str, value]

Dictionary of data on which to make predictions, where the keys are the names of the input features.

useCPUOnly: bool

This parameter is deprecated and will be removed in 6.0. Instead, use the compute_units parameter at load time or conversion time (that is, in coremltools.models.MLModel() or coremltools.convert()).

Set to True to restrict computation to use only the CPU. Defaults to False.

Returns
out: dict[str, value]

Predictions as a dictionary where each key is the output feature name.

Examples

>>> data = {'bedroom': 1.0, 'bath': 1.0, 'size': 1240}
>>> predictions = model.predict(data)
save(filename)[source]

Save the model in .mlmodel format. For an MIL program, the filename is a package directory containing the mlmodel and weights.

Parameters
filename: str

Target filename or bundle directory for the model. Must have the .mlmodel extension.

Examples

>>> model.save('my_model_file.mlmodel')
>>> loaded_model = MLModel('my_model_file.mlmodel')

array_feature_extractor

coremltools.models.array_feature_extractor.create_array_feature_extractor(input_features, output_name, extract_indices, output_type=None)[source]

Create a feature extractor from an input array feature and return the resulting model specification.

input_features is a list of one (name, array) tuple.

extract_indices is either an integer or a list. If it’s an integer, the output type is by default a double (but may also be an integer). If a list, the output type is an array.
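
Examples

A minimal sketch of usage, with illustrative names and indices:

>>> from coremltools.models import datatypes
>>> from coremltools.models.array_feature_extractor import create_array_feature_extractor

>>> # Extract elements 0 and 2 of a 3-element array feature into an output array.
>>> input_features = [('features', datatypes.Array(3))]
>>> spec = create_array_feature_extractor(input_features, 'extracted', [0, 2])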

feature_vectorizer

coremltools.models.feature_vectorizer.create_feature_vectorizer(input_features, output_feature_name, known_size_map={})[source]

Create a feature vectorizer from input features. This returns a 2-tuple (spec, num_dimension) for a feature vectorizer that puts everything into a single array with a length equal to the total size of all the input features.

Parameters
input_features: [list of 2-tuples]

Name(s) of the input features, given as a list of ('name', datatype) tuples. The datatypes entry is one of the data types defined in the datatypes module. Allowed datatypes are datatypes.Int64, datatypes.Double, datatypes.Dictionary, and datatypes.Array.

If the feature is a dictionary type, then the dictionary must have integer keys, and the number of dimensions to expand it into must be provided by known_size_map.

Feature indices in the final array are counted sequentially from 0 through the total number of features.

output_feature_name: str

The name of the output feature. Its type is an Array whose length equals the total size of the input features.

known_size_map:

A dictionary mapping the feature name to the expanded size in the final array. This is most useful for specifying the size of sparse vectors given as dictionaries of index to value.
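
Examples

A minimal sketch of usage, with illustrative feature names:

>>> from coremltools.models import datatypes
>>> from coremltools.models.feature_vectorizer import create_feature_vectorizer

>>> # Combine a scalar and a 3-element array into a single 4-element vector.
>>> input_features = [('age', datatypes.Double()), ('scores', datatypes.Array(3))]
>>> spec, num_dimension = create_feature_vectorizer(input_features, 'combined')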

nearest_neighbors

class coremltools.models.nearest_neighbors.builder.KNearestNeighborsClassifierBuilder(input_name, output_name, number_of_dimensions, default_class_label, **kwargs)[source]

Construct a CoreML KNearestNeighborsClassifier specification.

Please see the Core ML Nearest Neighbors protobuf message for more information on KNearestNeighborsClassifier parameters.

Examples

>>> from coremltools.models.nearest_neighbors import KNearestNeighborsClassifierBuilder
>>> from coremltools.models.utils import save_spec

# Create a KNearestNeighborsClassifier model that takes 4-dimensional input data and outputs a string label.
>>> builder = KNearestNeighborsClassifierBuilder(input_name='input',
...                                              output_name='output',
...                                              number_of_dimensions=4,
...                                              default_class_label='default_label')

# Save the spec created by the builder
>>> save_spec(builder.spec, 'knnclassifier.mlmodel')
__init__(input_name, output_name, number_of_dimensions, default_class_label, **kwargs)[source]

Create a KNearestNeighborsClassifierBuilder object.

Parameters
input_name

Name of the model input.

output_name

Name of the output.

number_of_dimensions

Number of dimensions of the input data.

default_class_label

The default class label to use for predictions. Must be either an int64 or a string.

number_of_neighbors

Number of neighbors to use for predictions. Default is 5, with allowed values between 1 and 1000.

weighting_scheme

Weight function used in prediction. One of 'uniform' (default) or 'inverse_distance'.

index_type

Algorithm to compute nearest neighbors. One of 'linear' (default), or 'kd_tree'.

leaf_size

Leaf size for the kd-tree. Ignored if index type is 'linear'. Default = 30.

add_samples(data_points, labels)[source]

Add some samples to the KNearestNeighborsClassifier model.

Parameters
data_points

List of input data points.

labels

List of corresponding labels.

Returns
None
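
Examples

A minimal sketch, assuming builder was created with number_of_dimensions=4 as in the example above (data and labels are illustrative):

>>> data_points = [[1.0, 0.5, 2.5, 3.0], [0.0, 1.4, 3.5, 0.2]]
>>> labels = ['label_a', 'label_b']
>>> builder.add_samples(data_points, labels)
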
property author

Get the author for the KNearestNeighborsClassifier model.

Returns
The author.
property description

Get the description for the KNearestNeighborsClassifier model.

Returns
The description.
property index_type

Get the index type for the KNearestNeighborsClassifier model.

Returns
The index type.
property is_updatable

Check if the KNearestNeighborsClassifier is updatable.

Returns
Is updatable.
property leaf_size

Get the leaf size for the KNearestNeighborsClassifier.

Returns
The leaf size.
property license

Get the license for the KNearestNeighborsClassifier model.

Returns
The license.
property number_of_dimensions

Get the number of dimensions of the input data for the KNearestNeighborsClassifier model.

Returns
Number of dimensions.
property number_of_neighbors

Get the number of neighbors value for the KNearestNeighborsClassifier model.

Returns
The default number of neighbors.
number_of_neighbors_allowed_range()[source]

Get the range of allowed values for the numberOfNeighbors parameter.

Returns
Tuple of (min_value, max_value) or None if the range hasn’t been set.
number_of_neighbors_allowed_set()[source]

Get the set of allowed values for the numberOfNeighbors parameter.

Returns
Set of allowed values, or None if the set of allowed values hasn’t been populated.
set_index_type(index_type, leaf_size=30)[source]

Set the index type for the KNearestNeighborsClassifier model.

Parameters
index_type

One of [ 'linear', 'kd_tree' ].

leaf_size

For kd_tree indexes, the leaf size to use (default = 30).

Returns
None
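
Examples

A one-line sketch that switches the builder to a kd-tree index with a custom leaf size:

>>> builder.set_index_type('kd_tree', leaf_size=40)
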
set_number_of_neighbors_with_bounds(number_of_neighbors, allowed_range=None, allowed_set=None)[source]

Set the numberOfNeighbors parameter for the KNearestNeighborsClassifier model.

Parameters
number_of_neighbors

The number of neighbors to use for predictions.

allowed_range

Tuple of (min_value, max_value) defining the range of allowed values.

allowed_set

Set of allowed values for the number of neighbors.

Returns
None
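
Examples

A minimal sketch with illustrative values, setting a default of 3 constrained to the range 1 through 10:

>>> builder.set_number_of_neighbors_with_bounds(3, allowed_range=(1, 10))
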
property weighting_scheme

Get the weighting scheme for the KNearestNeighborsClassifier model.

Returns
The weighting scheme.

pipeline

Pipeline utils for this package.

class coremltools.models.pipeline.Pipeline(input_features, output_features, training_features=None)[source]

A pipeline model that exposes a sequence of models as a single model. It requires a set of inputs, a sequence of other models, and a set of outputs.

This class is the base class for PipelineClassifier and PipelineRegressor, which contain a sequence ending in a classifier or regressor and themselves behave like a classifier or regressor. This class may be used directly for a sequence of feature transformer objects.

__init__(input_features, output_features, training_features=None)[source]

Create a pipeline of models to be executed sequentially.

Parameters
input_features: [list of 2-tuples]

Name(s) of the input features, given as a list of (‘name’, datatype) tuples. The datatypes entry can be any of the data types defined in the models.datatypes module.

output_features: [list of features]

Name(s) of the output features, given as a list of (‘name’,datatype) tuples. The datatypes entry can be any of the data types defined in the models.datatypes module. All features must be either defined in the inputs or be produced by one of the contained models.

add_model(spec)[source]

Add a protobuf spec or models.MLModel instance to the pipeline.

All input features of this model must either match the input_features of the pipeline, or match the outputs of a previous model.

Parameters
spec: [MLModel, Model_pb2]

A protobuf spec or MLModel instance containing a model.
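
Examples

A minimal sketch of assembling a pipeline (feature names are illustrative, and model_spec stands for a previously built model specification):

>>> from coremltools.models import datatypes
>>> from coremltools.models.pipeline import Pipeline

>>> pipeline = Pipeline(input_features=[('x', datatypes.Array(3))],
...                     output_features=[('y', datatypes.Double())])
>>> pipeline.add_model(model_spec)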

set_training_input(training_input)[source]

Set the training inputs of the network spec.

Parameters
training_input: [tuple]

List of training input names and types for the network.

class coremltools.models.pipeline.PipelineClassifier(input_features, class_labels, output_features=None, training_features=None)[source]

A pipeline model that exposes a sequence of models as a single model. It requires a set of inputs, a sequence of other models, and a set of outputs. In this case the pipeline itself behaves as a classification model by designating a discrete categorical output feature as its ‘predicted feature’.

__init__(input_features, class_labels, output_features=None, training_features=None)[source]

Create a set of pipeline models given a set of model specs. The last model in this list must be a classifier model.

Parameters
input_features: [list of 2-tuples]

Name(s) of the input features, given as a list of (‘name’, datatype) tuples. The datatypes entry can be any of the data types defined in the models.datatypes module.

class_labels: [list]

A list of string or integer class labels to use in making predictions. This list must match the class labels in the model outputting the categorical predictedFeatureName.

output_features: [list]

A string or a list of two strings specifying the names of the two output features, the first being a class label corresponding to the class with the highest predicted score, and the second being a dictionary mapping each class to its score. If output_features is a string, it specifies the predicted class label, and the class scores output is set to the default value of "classProbability".
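
Examples

A minimal sketch of constructing a classifier pipeline (names and labels are illustrative); models are then added with add_model:

>>> from coremltools.models import datatypes
>>> from coremltools.models.pipeline import PipelineClassifier

>>> pipeline = PipelineClassifier(input_features=[('x', datatypes.Array(3))],
...                               class_labels=['a', 'b', 'c'])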

add_model(spec)[source]

Add a protobuf spec or models.MLModel instance to the pipeline.

All input features of this model must either match the input_features of the pipeline, or match the outputs of a previous model.

Parameters
spec: [MLModel, Model_pb2]

A protobuf spec or MLModel instance containing a model.

set_training_input(training_input)[source]

Set the training inputs of the network spec.

Parameters
training_input: [tuple]

List of training input names and types for the network.

class coremltools.models.pipeline.PipelineRegressor(input_features, output_features, training_features=None)[source]

A pipeline model that exposes a sequence of models as a single model. It requires a set of inputs, a sequence of other models, and a set of outputs. In this case the pipeline itself behaves as a regression model by designating a real valued output feature as its ‘predicted feature’.

__init__(input_features, output_features, training_features=None)[source]

Create a set of pipeline models given a set of model specs. The final output model must be a regression model.

Parameters
input_features: [list of 2-tuples]

Name(s) of the input features, given as a list of (‘name’, datatype) tuples. The datatypes entry can be any of the data types defined in the models.datatypes module.

output_features: [list of features]

Name(s) of the output features, given as a list of (‘name’,datatype) tuples. The datatypes entry can be any of the data types defined in the models.datatypes module. All features must be either defined in the inputs or be produced by one of the contained models.

add_model(spec)[source]

Add a protobuf spec or models.MLModel instance to the pipeline.

All input features of this model must either match the input_features of the pipeline, or match the outputs of a previous model.

Parameters
spec: [MLModel, Model_pb2]

A protobuf spec or MLModel instance containing a model.

set_training_input(training_input)[source]

Set the training inputs of the network spec.

Parameters
training_input: [tuple]

List of training input names and types for the network.

tree_ensemble

Tree ensemble builder class to construct CoreML models.

class coremltools.models.tree_ensemble.TreeEnsembleBase[source]

Base class for the tree ensemble builder class. This should be instantiated either through the TreeEnsembleRegressor or TreeEnsembleClassifier classes.

__init__()[source]

High level Python API to build a tree ensemble model for Core ML.

add_branch_node(tree_id, node_id, feature_index, feature_value, branch_mode, true_child_id, false_child_id, relative_hit_rate=None, missing_value_tracks_true_child=False)[source]

Add a branch node to the tree ensemble.

Parameters
tree_id: int

ID of the tree to add the node to.

node_id: int

ID of the node within the tree.

feature_index: int

Index of the feature in the input being split on.

feature_value: double or int

The value used in the feature comparison determining the traversal direction from this node.

branch_mode: str

Branch mode of the node, specifying the condition under which the node referenced by true_child_id is called next.

Must be one of the following:

  • "BranchOnValueLessThanEqual". Traverse to node true_child_id if input[feature_index] <= feature_value, and false_child_id otherwise.

  • "BranchOnValueLessThan". Traverse to node true_child_id if input[feature_index] < feature_value, and false_child_id otherwise.

  • "BranchOnValueGreaterThanEqual". Traverse to node true_child_id if input[feature_index] >= feature_value, and false_child_id otherwise.

  • "BranchOnValueGreaterThan". Traverse to node true_child_id if input[feature_index] > feature_value, and false_child_id otherwise.

  • "BranchOnValueEqual". Traverse to node true_child_id if input[feature_index] == feature_value, and false_child_id otherwise.

  • "BranchOnValueNotEqual". Traverse to node true_child_id if input[feature_index] != feature_value, and false_child_id otherwise.

true_child_id: int

ID of the child under the true condition of the split. An error will be raised at model validation if this does not match the node_id of a node instantiated by add_branch_node or add_leaf_node within this tree_id.

false_child_id: int

ID of the child under the false condition of the split. An error will be raised at model validation if this does not match the node_id of a node instantiated by add_branch_node or add_leaf_node within this tree_id.

relative_hit_rate: float [optional]

When the model is compiled by Core ML, this gives hints about which node is more likely to be hit on evaluation, allowing for additional optimizations. The values can be on any scale, with the values between child nodes being compared relative to each other.

missing_value_tracks_true_child: bool [optional]

If the input can contain NaN or missing values, this flag determines the direction such a value traverses at this node: True routes it to the true child, and False (the default) routes it to the false child.
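
For example, a minimal sketch (assuming tm is a TreeEnsembleClassifier or TreeEnsembleRegressor instance, as in the examples below) that adds a root split on feature 2 at value 3 and routes missing values to the true child:

>>> tm.add_branch_node(0, 0, 2, 3, "BranchOnValueLessThanEqual", 1, 2,
...                    missing_value_tracks_true_child=True)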

add_leaf_node(tree_id, node_id, values, relative_hit_rate=None)[source]

Add a leaf node to the tree ensemble.

Parameters
tree_id: int

ID of the tree to add the node to.

node_id: int

ID of the node within the tree.

values: [float | int | list | dict]

Value(s) at the leaf node to add to the prediction when this node is activated. If the prediction dimension of the tree is 1, then the value is specified as a float or integer value.

For multidimensional predictions, the values can be a list of numbers with length matching the dimension of the predictions or a dictionary mapping index to value added to that dimension.

Note that the dimension of any tree must match the dimension given when set_default_prediction_value() is called.

set_default_prediction_value(values)[source]

Set the default prediction value(s).

The values given here form the base prediction value that the values at activated leaves are added to. If values is a scalar, then the output of the tree must also be 1 dimensional; otherwise, values must be a list with length matching the prediction dimension of the tree.

Parameters
values: [int | double | list[double]]

Default values for predictions.
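
For example, to start each prediction of a two-dimensional tree ensemble from a zero base value (tm as in the examples below):

>>> tm.set_default_prediction_value([0.0, 0.0])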

set_post_evaluation_transform(value)[source]

Set the post-processing transform applied to the prediction value of the tree ensemble.

Parameters
value: str

A value denoting the transform applied. Possible values are:

  • "NoTransform" (default). Do not apply a transform.

  • "Classification_SoftMax".

    Apply a softmax function to the outcome to produce normalized, non-negative scores that sum to 1. The transformation applied to dimension i is equivalent to:

    \[\frac{e^{x_i}}{\sum_j e^{x_j}}\]

    Note: This is the output transformation applied by the XGBoost package with multiclass classification.

  • "Regression_Logistic".

Apply a logistic transform to the predicted value, specifically:

    \[(1 + e^{-v})^{-1}\]

    This is the transformation used in binary classification.
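
For example, for a tree ensemble used as a binary classifier (tm as in the examples below):

>>> tm.set_post_evaluation_transform("Regression_Logistic")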

class coremltools.models.tree_ensemble.TreeEnsembleClassifier(features, class_labels, output_features)[source]

Tree Ensemble builder class to construct a Tree Ensemble classification model.

The TreeEnsembleClassifier class constructs a Tree Ensemble model incrementally using methods to add branch and leaf nodes specifying the behavior of the model.

Examples

In the following example, the code saves the model to disk, which is a recommended practice but not required.

>>> import coremltools
>>> from coremltools.models import datatypes
>>> from coremltools.models.tree_ensemble import TreeEnsembleClassifier
>>> import numpy as np

>>> input_features = [("a", datatypes.Array(3)), ("b", datatypes.Double())]

>>> tm = TreeEnsembleClassifier(features = input_features, class_labels = [0, 1],
...                             output_features = "predicted_class")

>>> # Split on a[2] <= 3
>>> tm.add_branch_node(0, 0, 2, 3, "BranchOnValueLessThanEqual", 1, 2)

>>> # Add leaf to the true branch of node 0 that subtracts 1.
>>> tm.add_leaf_node(0, 1, -1)

>>> # Add split on b == 0 to the false branch of node 0.
>>> tm.add_branch_node(0, 2, 3, 0, "BranchOnValueEqual", 3, 4)

>>> # Add leaf to the true branch of node 2 that adds 1 to the result.
>>> tm.add_leaf_node(0, 3, 1)

>>> # Add leaf to the false branch of node 2 that subtracts 1 from the result.
>>> tm.add_leaf_node(0, 4, -1)

>>> # Put in a softmax transform to translate these into probabilities.
>>> tm.set_post_evaluation_transform("Classification_SoftMax")

>>> tm.set_default_prediction_value([0, 0])

>>> # save the model to a .mlmodel file
>>> model_path = './tree.mlmodel'
>>> coremltools.models.utils.save_spec(tm.spec, model_path)

>>> # load the .mlmodel
>>> mlmodel = coremltools.models.MLModel(model_path)

>>> # make predictions
>>> test_input = {
...     'a': np.array([0, 1, 2]).astype(np.float32),
...     'b': 3.0,
... }
>>> predictions = mlmodel.predict(test_input)
__init__(features, class_labels, output_features)[source]

Create a tree ensemble classifier model.

Parameters
features: [list of features]

Name(s) of the input features, given as a list of ('name', datatype) tuples. The features are one of datatypes.Int64, datatypes.Double, or datatypes.Array. Feature indices in the nodes are counted sequentially from 0 through the total number of features.

class_labels: [list]

A list of string or integer class labels to use in making predictions. The length of this must match the dimension of the tree model.

output_features: [list]

A string or a list of two strings specifying the names of the two output features, the first being a class label corresponding to the class with the highest predicted score, and the second being a dictionary mapping each class to its score. If output_features is a string, it specifies the predicted class label, and the class scores output is set to the default value of "classProbability".

class coremltools.models.tree_ensemble.TreeEnsembleRegressor(features, target)[source]

Tree Ensemble builder class to construct a Tree Ensemble regression model.

The TreeEnsembleRegressor class constructs a Tree Ensemble model incrementally using methods to add branch and leaf nodes specifying the behavior of the model.

Examples

In the following example, the code saves the model to disk, which is a recommended practice but not required.

>>> # Required inputs
>>> import coremltools
>>> from coremltools.models import datatypes
>>> from coremltools.models.tree_ensemble import TreeEnsembleRegressor
>>> import numpy as np

>>> # Define input features
>>> input_features = [("a", datatypes.Array(3)), ("b", datatypes.Double())]

>>> # Define output_features
>>> output_features = [("predicted_values", datatypes.Double())]

>>> tm = TreeEnsembleRegressor(features = input_features, target = output_features)

>>> # Split on a[2] <= 3
>>> tm.add_branch_node(0, 0, 2, 3, "BranchOnValueLessThanEqual", 1, 2)

>>> # Add leaf to the true branch of node 0 that subtracts 1.
>>> tm.add_leaf_node(0, 1, -1)

>>> # Add split on b == 0 (feature index 3) to the false branch of node 0.
>>> tm.add_branch_node(0, 2, 3, 0, "BranchOnValueEqual", 3, 4)

>>> # Add leaf to the true branch of node 2 that adds 1 to the result.
>>> tm.add_leaf_node(0, 3, 1)

>>> # Add leaf to the false branch of node 2 that subtracts 1 from the result.
>>> tm.add_leaf_node(0, 4, -1)

>>> tm.set_default_prediction_value([0, 0])

>>> # save the model to a .mlmodel file
>>> model_path = './tree.mlmodel'
>>> coremltools.models.utils.save_spec(tm.spec, model_path)

>>> # load the .mlmodel
>>> mlmodel = coremltools.models.MLModel(model_path)

>>> # make predictions
>>> test_input = {
...     'a': np.array([0, 1, 2]).astype(np.float32),
...     'b': 3.0,
... }
>>> predictions = mlmodel.predict(test_input)
__init__(features, target)[source]

Create a Tree Ensemble regression model that takes one or more input features and maps them to an output feature.

Parameters
features: [list of features]

Name(s) of the input features, given as a list of ('name', datatype) tuples. The features are one of datatypes.Int64, datatypes.Double, or datatypes.Array. Feature indices in the nodes are counted sequentially from 0 through the total number of features.

target: str

Name of the target feature to be predicted.

utils

Utilities for the entire package.

coremltools.models.utils.convert_double_to_float_multiarray_type(spec)[source]

Convert all double multiarray feature descriptions (input, output, and training input) to float multiarrays.

Parameters
spec: Model_pb

The specification containing the multiarray types to convert.

Examples

# In-place convert multiarray type of spec
>>> spec = mlmodel.get_spec()
>>> coremltools.utils.convert_double_to_float_multiarray_type(spec)
>>> model = coremltools.models.MLModel(spec)
coremltools.models.utils.evaluate_classifier(model, data, target='target', verbose=False)[source]

Evaluate a Core ML classifier model and compare against predictions from the original framework (for testing correctness of conversion). Use this evaluation for models that don’t deal with probabilities.

Parameters
model: str or MLModel

Path from which to load the model, or a loaded MLModel instance.

data: str or Dataframe

Test data on which to evaluate the model (a dataframe, or a path to a CSV file).

target: str

Column to interpret as the target column.

verbose: bool

Set to true for a more verbose output.

Examples

>>> metrics = coremltools.utils.evaluate_classifier(spec, 'data_and_predictions.csv', 'target')
>>> print(metrics)
{"samples": 10, num_errors: 0}
coremltools.models.utils.evaluate_classifier_with_probabilities(model, data, probabilities='probabilities', verbose=False)[source]

Evaluate a classifier specification for testing.

Parameters
model: str or MLModel

Path from which to load the model, or a loaded MLModel instance.

data: str or Dataframe

Test data on which to evaluate the model (a dataframe, or a path to a CSV file).

probabilities: str

Column to interpret as the probabilities column.

verbose: bool

Set to true for a more verbose output.
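
Examples

A minimal sketch mirroring the evaluate_classifier example above (the CSV file name is illustrative):

>>> metrics = coremltools.utils.evaluate_classifier_with_probabilities(
...     spec, 'data_and_probabilities.csv', 'probabilities')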

coremltools.models.utils.evaluate_regressor(model, data, target='target', verbose=False)[source]

Evaluate a CoreML regression model and compare against predictions from the original framework (for testing correctness of conversion).

Parameters
model: MLModel or str

A loaded MLModel or a path to a saved MLModel.

data: Dataframe

Test data on which to evaluate the model.

target: str

Name of the column in the dataframe that must be interpreted as the target column.

verbose: bool

Set to true for a more verbose output.

Examples

>>> metrics = coremltools.utils.evaluate_regressor(spec, 'data_and_predictions.csv', 'target')
>>> print(metrics)
{"samples": 10, "rmse": 0.0, max_error: 0.0}
coremltools.models.utils.evaluate_transformer(model, input_data, reference_output, verbose=False)[source]

Evaluate a transformer specification for testing.

Parameters
model: str or MLModel

Path from which to load the model, or a loaded MLModel instance.

input_data: list of dict

Test data on which to evaluate the models.

reference_output: list of dict

Expected results for the model.

verbose: bool

Set to true for a more verbose output.

Examples

>>> input_data = [{'input_1': 1, 'input_2': 2}, {'input_1': 3, 'input_2': 3}]
>>> expected_output = [{'input_1': 2.5, 'input_2': 2.0}, {'input_1': 1.3, 'input_2': 2.3}]
>>> metrics = coremltools.utils.evaluate_transformer(scaler_spec, input_data, expected_output)
coremltools.models.utils.load_spec(filename)[source]

Load a protobuf model specification from file.

Parameters
filename: str

Location on disk (a valid file path) from which the file is loaded as a protobuf spec.

Returns
model_spec: Model_pb

Protobuf representation of the model.

See also

save_spec

Examples

>>> spec = coremltools.utils.load_spec('HousePricer.mlmodel')
coremltools.models.utils.rename_feature(spec, current_name, new_name, rename_inputs=True, rename_outputs=True)[source]

Rename a feature in the specification.

Parameters
spec: Model_pb

The specification containing the feature to rename.

current_name: str

Current name of the feature. If this feature doesn’t exist, the rename is a no-op.

new_name: str

New name of the feature.

rename_inputs: bool

Search for current_name only in the input features (i.e., ignore output features).

rename_outputs: bool

Search for current_name only in the output features (i.e., ignore input features).

Examples

# In-place rename of spec
>>> coremltools.utils.rename_feature(spec, 'old_feature', 'new_feature_name')
coremltools.models.utils.save_spec(spec, filename, auto_set_specification_version=False)[source]

Save a protobuf model specification to file.

Parameters
spec: Model_pb

Protobuf representation of the model.

filename: str

File path where the spec gets saved.

auto_set_specification_version: bool

If True, always try to set the specification version automatically.

See also

load_spec

Examples

>>> coremltools.utils.save_spec(spec, 'HousePricer.mlmodel')