Models¶
MLModel¶
class coremltools.models.model.MLModel(model, useCPUOnly=False)¶
This class defines the minimal interface to a CoreML object in Python.
At a high level, the protobuf specification consists of:
Model description: Encodes names and type information of the inputs and outputs to the model.
Model parameters: The set of parameters required to represent a specific instance of the model.
Metadata: Information about the origin, license, and author of the model.
With this class, you can inspect a CoreML model, modify metadata, and make predictions for the purposes of testing (on select platforms).
Examples
# Load the model
>>> model = MLModel('HousePricer.mlmodel')
# Set the model metadata
>>> model.author = 'Author'
>>> model.license = 'BSD'
>>> model.short_description = 'Predicts the price of a house in the Seattle area.'
# Get the interface to the model
>>> model.input_description
>>> model.output_description
# Set feature descriptions manually
>>> model.input_description['bedroom'] = 'Number of bedrooms'
>>> model.input_description['bathrooms'] = 'Number of bathrooms'
>>> model.input_description['size'] = 'Size (in square feet)'
# Set the output description
>>> model.output_description['price'] = 'Price of the house'
# Make predictions
>>> predictions = model.predict({'bedroom': 1.0, 'bath': 1.0, 'size': 1240})
# Get the spec of the model
>>> model.spec
# Save the model
>>> model.save('HousePricer.mlmodel')
__init__(model, useCPUOnly=False)¶
Construct an MLModel from a .mlmodel file.
- Parameters
- model: str or Model_pb2
If a string is given it should be the location of the .mlmodel to load.
- useCPUOnly: bool
Set to True to restrict loading of the model to the CPU only. Defaults to False.
Examples
>>> loaded_model = MLModel('my_model_file.mlmodel')
get_spec()¶
Get a deep copy of the protobuf specification of the model.
- Returns
- model: Model_pb2
Protobuf specification of the model.
Examples
>>> spec = model.get_spec()
predict(data, useCPUOnly=False, **kwargs)¶
Return predictions for the model. The kwargs are passed to the model as a dictionary.
- Parameters
- data: dict[str, value]
Dictionary of input data used to make predictions, where the keys are the names of the input features.
- useCPUOnly: bool
Set to True to restrict computation to the CPU only. Defaults to False.
- Returns
- out: dict[str, value]
Predictions as a dictionary where each key is the output feature name.
Examples
>>> data = {'bedroom': 1.0, 'bath': 1.0, 'size': 1240}
>>> predictions = model.predict(data)
save(filename)¶
Save the model to a file in the .mlmodel format.
- Parameters
- filename: str
Target filename for the model.
See also
coremltools.utils.load_model
Examples
>>> model.save('my_model_file.mlmodel')
>>> loaded_model = MLModel('my_model_file.mlmodel')
array_feature_extractor¶
coremltools.models.array_feature_extractor.create_array_feature_extractor(input_features, output_name, extract_indices, output_type=None)¶
Creates a feature extractor from an input array feature and returns the resulting model specification.
input_features is a list of one (name, array) tuple.
extract_indices is either an integer or a list. If it’s an integer, the output type is by default a double (but may also be an integer). If a list, the output type is an array.
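Examples
A minimal usage sketch; the feature names 'features' and 'extracted' are illustrative, not part of the API:
>>> from coremltools.models import datatypes
>>> from coremltools.models.array_feature_extractor import create_array_feature_extractor
>>> # Extract the element at index 2 from a 4-element array feature.
>>> spec = create_array_feature_extractor([('features', datatypes.Array(4))],
...                                       'extracted', 2)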
feature_vectorizer¶
coremltools.models.feature_vectorizer.create_feature_vectorizer(input_features, output_feature_name, known_size_map={})¶
Create a feature vectorizer from input features. This returns a 2-tuple (spec, num_dimension) for a feature vectorizer that puts everything into a single array with a length equal to the total size of all the input features.
- Parameters
- input_features: [list of 2-tuples]
Name(s) of the input features, given as a list of ('name', datatype) tuples. The datatypes entry is one of the data types defined in the datatypes module. Allowed datatypes are datatypes.Int64, datatypes.Double, datatypes.Dictionary, and datatypes.Array.
If the feature is a dictionary type, then the dictionary must have integer keys, and the number of dimensions to expand it into must be provided by known_size_map.
Feature indices in the final array are counted sequentially from 0 through the total number of features.
- output_feature_name: str
The name of the output feature. The output type is an Array whose length equals the combined size of the input features.
- known_size_map:
A dictionary mapping the feature name to the expanded size in the final array. This is most useful for specifying the size of sparse vectors given as dictionaries of index to value.
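Examples
A minimal sketch, assuming two hypothetical input features 'a' (a scalar) and 'b' (a 3-element array):
>>> from coremltools.models import datatypes
>>> from coremltools.models.feature_vectorizer import create_feature_vectorizer
>>> # Combine 'a' and 'b' into a single 4-element array feature named 'combined'.
>>> spec, num_dimension = create_feature_vectorizer(
...     [('a', datatypes.Double()), ('b', datatypes.Array(3))], 'combined')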
nearest_neighbors¶
class coremltools.models.nearest_neighbors.builder.KNearestNeighborsClassifierBuilder(input_name, output_name, number_of_dimensions, default_class_label, **kwargs)¶
Construct a CoreML KNearestNeighborsClassifier specification.
Please see the Core ML Nearest Neighbors protobuf message for more information on KNearestNeighborsClassifier parameters.
Examples
from coremltools.models.nearest_neighbors import KNearestNeighborsClassifierBuilder
from coremltools.models.utils import save_spec
# Create a KNearestNeighborsClassifier model that takes 4-dimensional input data
# and outputs a string label.
>>> builder = KNearestNeighborsClassifierBuilder(input_name='input',
...                                              output_name='output',
...                                              number_of_dimensions=4,
...                                              default_class_label='default_label')
# save the spec by the builder
>>> save_spec(builder.spec, 'knnclassifier.mlmodel')
__init__(input_name, output_name, number_of_dimensions, default_class_label, **kwargs)¶
Create a KNearestNeighborsClassifierBuilder object.
- Parameters
- input_name
Name of the model input.
- output_name
Name of the output.
- number_of_dimensions
Number of dimensions of the input data.
- default_class_label
The default class label to use for predictions. Must be either an int64 or a string.
- number_of_neighbors
Number of neighbors to use for predictions. Defaults to 5, with allowed values between 1 and 1000.
- weighting_scheme
Weight function used in prediction. One of 'uniform' (default) or 'inverse_distance'.
- index_type
Algorithm used to compute the nearest neighbors. One of 'linear' (default) or 'kd_tree'.
- leaf_size
Leaf size for the kd-tree. Ignored if the index type is 'linear'. Defaults to 30.
static _is_valid_number_type(obj)¶
Checks if the object is a valid number type.
- Parameters
- obj
The object to check.
- Returns
- True if a valid number type, False otherwise.
static _is_valid_text_type(obj)¶
Checks if the object is a valid text type.
- Parameters
- obj
The object to check.
- Returns
- True if a valid text type, False otherwise.
_validate_label_types(labels)¶
Ensure the label types match the expected types.
- Parameters
- labels
The list of labels.
- Returns
- None; raises a TypeError if the labels are not of the expected type.
add_samples(data_points, labels)¶
Add some samples to the KNearestNeighborsClassifier model.
- Parameters
- data_points
List of input data points.
- labels
List of corresponding labels.
- Returns
- None
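Examples
A sketch continuing the builder example above; the data points and labels are illustrative:
>>> # Two 4-dimensional data points with string labels, matching the
>>> # builder constructed in the class example.
>>> data_points = [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]]
>>> labels = ['label_a', 'label_b']
>>> builder.add_samples(data_points, labels)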
property author¶
Get the author for the KNearestNeighborsClassifier model.
- Returns
- The author
property description¶
Get the description for the KNearestNeighborsClassifier model.
- Returns
- The description.
property index_type¶
Get the index type for the KNearestNeighborsClassifier model.
- Returns
- The index type.
property is_updatable¶
Check if the KNearestNeighborsClassifier is updatable.
- Returns
- Is updatable.
property leaf_size¶
Get the leaf size for the KNearestNeighborsClassifier.
- Returns
- The leaf size.
property license¶
Get the license for the KNearestNeighborsClassifier model.
- Returns
- The license.
property number_of_dimensions¶
Get the number of dimensions of the input data for the KNearestNeighborsClassifier model.
- Returns
- Number of dimensions.
property number_of_neighbors¶
Get the number of neighbors value for the KNearestNeighborsClassifier model.
- Returns
- The default value for the number of neighbors.
number_of_neighbors_allowed_range()¶
Get the range of allowed values for the numberOfNeighbors parameter.
- Returns
- Tuple of (min_value, max_value), or None if the range hasn’t been set.
number_of_neighbors_allowed_set()¶
Get the set of allowed values for the numberOfNeighbors parameter.
- Returns
- Set of allowed values, or None if the set of allowed values hasn’t been populated.
set_index_type(index_type, leaf_size=30)¶
Set the index type for the KNearestNeighborsClassifier model.
- Parameters
- index_type
One of ['linear', 'kd_tree'].
- leaf_size
For kd_tree indexes, the leaf size to use (default = 30).
- Returns
- None
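Examples
A sketch continuing the builder example above:
>>> # Switch to a kd-tree index with a custom leaf size.
>>> builder.set_index_type('kd_tree', leaf_size=50)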
set_number_of_neighbors_with_bounds(number_of_neighbors, allowed_range=None, allowed_set=None)¶
Set the numberOfNeighbors parameter for the KNearestNeighborsClassifier model.
- Parameters
- number_of_neighbors
The default number of neighbors to use for predictions.
- allowed_range
Tuple of (min_value, max_value) defining the range of allowed values.
- allowed_set
Set of allowed values for the number of neighbors.
- Returns
- None
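Examples
A sketch continuing the builder example above:
>>> # Default to 3 neighbors, allowing values from 1 through 10 at runtime.
>>> builder.set_number_of_neighbors_with_bounds(3, allowed_range=(1, 10))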
property weighting_scheme¶
Get the weighting scheme for the KNearestNeighborsClassifier model.
- Returns
- The weighting scheme.
neural_network¶
pipeline¶
Pipeline utils for this package.
class coremltools.models.pipeline.Pipeline(input_features, output_features, training_features=None)¶
A pipeline model that exposes a sequence of models as a single model. It requires a set of inputs, a sequence of other models, and a set of outputs.
This class is the base class for PipelineClassifier and PipelineRegressor, which contain a sequence ending in a classifier or regressor and themselves behave like a classifier or regressor. This class may be used directly for a sequence of feature transformer objects.
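Examples
A minimal sketch, assuming scaler_spec is a previously built feature transformer spec whose input and output names chain with the pipeline’s:
>>> from coremltools.models import datatypes
>>> from coremltools.models.pipeline import Pipeline
>>> pipeline = Pipeline(input_features=[('x', datatypes.Array(3))],
...                     output_features=[('scaled', datatypes.Array(3))])
>>> pipeline.add_model(scaler_spec)
>>> model = coremltools.models.MLModel(pipeline.spec)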
__init__(input_features, output_features, training_features=None)¶
Create a pipeline of models to be executed sequentially.
- Parameters
- input_features: [list of 2-tuples]
Name(s) of the input features, given as a list of ('name', datatype) tuples. The datatypes entry can be any of the data types defined in the models.datatypes module.
- output_features: [list of features]
Name(s) of the output features, given as a list of ('name', datatype) tuples. The datatypes entry can be any of the data types defined in the models.datatypes module. All features must be either defined in the inputs or be produced by one of the contained models.
add_model(spec)¶
Add a protobuf spec or models.MLModel instance to the pipeline.
All input features of this model must either match the input_features of the pipeline, or match the outputs of a previous model.
- Parameters
- spec: [MLModel, Model_pb2]
A protobuf spec or MLModel instance containing a model.
set_training_input(training_input)¶
Set the training inputs of the network spec.
- Parameters
- training_input: [tuple]
List of training input names and type of the network.
class coremltools.models.pipeline.PipelineClassifier(input_features, class_labels, output_features=None, training_features=None)¶
A pipeline model that exposes a sequence of models as a single model. It requires a set of inputs, a sequence of other models, and a set of outputs. In this case the pipeline itself behaves as a classification model by designating a discrete categorical output feature as its ‘predicted feature’.
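Examples
A minimal sketch, assuming feature_spec and classifier_spec are previously built specs whose inputs and outputs chain together, with the final stage a classifier:
>>> from coremltools.models import datatypes
>>> from coremltools.models.pipeline import PipelineClassifier
>>> pipeline = PipelineClassifier(input_features=[('x', datatypes.Array(3))],
...                               class_labels=['cat', 'dog'],
...                               output_features='predicted_class')
>>> pipeline.add_model(feature_spec)
>>> pipeline.add_model(classifier_spec)
>>> model = coremltools.models.MLModel(pipeline.spec)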
__init__(input_features, class_labels, output_features=None, training_features=None)¶
Create a set of pipeline models given a set of model specs. The last model in this list must be a classifier model.
- Parameters
- input_features: [list of 2-tuples]
Name(s) of the input features, given as a list of ('name', datatype) tuples. The datatypes entry can be any of the data types defined in the models.datatypes module.
- class_labels: [list]
A list of string or integer class labels to use in making predictions. This list must match the class labels in the model outputting the categorical predictedFeatureName.
- output_features: [list]
A string or a list of two strings specifying the names of the two output features, the first being a class label corresponding to the class with the highest predicted score, and the second being a dictionary mapping each class to its score. If output_features is a string, it specifies the predicted class label, and the class scores output is given the default name “classProbability”.
add_model(spec)¶
Add a protobuf spec or models.MLModel instance to the pipeline.
All input features of this model must either match the input_features of the pipeline, or match the outputs of a previous model.
- Parameters
- spec: [MLModel, Model_pb2]
A protobuf spec or MLModel instance containing a model.
set_training_input(training_input)¶
Set the training inputs of the network spec.
- Parameters
- training_input: [tuple]
List of training input names and type of the network.
class coremltools.models.pipeline.PipelineRegressor(input_features, output_features, training_features=None)¶
A pipeline model that exposes a sequence of models as a single model. It requires a set of inputs, a sequence of other models, and a set of outputs. In this case the pipeline itself behaves as a regression model by designating a real valued output feature as its ‘predicted feature’.
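Examples
A minimal sketch, assuming regressor_spec is a previously built regression model spec that consumes the pipeline input and produces the 'price' output:
>>> from coremltools.models import datatypes
>>> from coremltools.models.pipeline import PipelineRegressor
>>> pipeline = PipelineRegressor(input_features=[('x', datatypes.Array(3))],
...                              output_features=[('price', datatypes.Double())])
>>> pipeline.add_model(regressor_spec)
>>> model = coremltools.models.MLModel(pipeline.spec)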
__init__(input_features, output_features, training_features=None)¶
Create a set of pipeline models given a set of model specs. The final output model must be a regression model.
- Parameters
- input_features: [list of 2-tuples]
Name(s) of the input features, given as a list of ('name', datatype) tuples. The datatypes entry can be any of the data types defined in the models.datatypes module.
- output_features: [list of features]
Name(s) of the output features, given as a list of ('name', datatype) tuples. The datatypes entry can be any of the data types defined in the models.datatypes module. All features must be either defined in the inputs or be produced by one of the contained models.
add_model(spec)¶
Add a protobuf spec or models.MLModel instance to the pipeline.
All input features of this model must either match the input_features of the pipeline, or match the outputs of a previous model.
- Parameters
- spec: [MLModel, Model_pb2]
A protobuf spec or MLModel instance containing a model.
set_training_input(training_input)¶
Set the training inputs of the network spec.
- Parameters
- training_input: [tuple]
List of training input names and type of the network.
tree_ensemble¶
Tree ensemble builder class to construct CoreML models.
class coremltools.models.tree_ensemble.TreeEnsembleBase¶
Base class for the tree ensemble builder class. This should be instantiated either through the TreeEnsembleRegressor or TreeEnsembleClassifier classes.
__init__()¶
High level Python API to build a tree ensemble model for Core ML.
add_branch_node(tree_id, node_id, feature_index, feature_value, branch_mode, true_child_id, false_child_id, relative_hit_rate=None, missing_value_tracks_true_child=False)¶
Add a branch node to the tree ensemble.
- Parameters
- tree_id: int
ID of the tree to add the node to.
- node_id: int
ID of the node within the tree.
- feature_index: int
Index of the feature in the input being split on.
- feature_value: double or int
The value used in the feature comparison determining the traversal direction from this node.
- branch_mode: str
Branch mode of the node, specifying the condition under which the node referenced by true_child_id is called next.
Must be one of the following:
- "BranchOnValueLessThanEqual". Traverse to node true_child_id if input[feature_index] <= feature_value, and false_child_id otherwise.
- "BranchOnValueLessThan". Traverse to node true_child_id if input[feature_index] < feature_value, and false_child_id otherwise.
- "BranchOnValueGreaterThanEqual". Traverse to node true_child_id if input[feature_index] >= feature_value, and false_child_id otherwise.
- "BranchOnValueGreaterThan". Traverse to node true_child_id if input[feature_index] > feature_value, and false_child_id otherwise.
- "BranchOnValueEqual". Traverse to node true_child_id if input[feature_index] == feature_value, and false_child_id otherwise.
- "BranchOnValueNotEqual". Traverse to node true_child_id if input[feature_index] != feature_value, and false_child_id otherwise.
- true_child_id: int
ID of the child under the true condition of the split. An error will be raised at model validation if this does not match the node_id of a node instantiated by add_branch_node or add_leaf_node within this tree_id.
- false_child_id: int
ID of the child under the false condition of the split. An error will be raised at model validation if this does not match the node_id of a node instantiated by add_branch_node or add_leaf_node within this tree_id.
- relative_hit_rate: float [optional]
When the model is compiled by Core ML, this gives hints to Core ML about which node is more likely to be hit on evaluation, allowing for additional optimizations. The values can be on any scale, with the values between child nodes being compared relative to each other.
- missing_value_tracks_true_child: bool [optional]
If the training data contains NaN values or missing values, then this flag determines which direction a NaN value traverses.
add_leaf_node(tree_id, node_id, values, relative_hit_rate=None)¶
Add a leaf node to the tree ensemble.
- Parameters
- tree_id: int
ID of the tree to add the node to.
- node_id: int
ID of the node within the tree.
- values: [float | int | list | dict]
Value(s) at the leaf node to add to the prediction when this node is activated. If the prediction dimension of the tree is 1, then the value is specified as a float or integer value.
For multidimensional predictions, the values can be a list of numbers with length matching the dimension of the predictions or a dictionary mapping index to value added to that dimension.
Note that the dimension of any tree must match the dimension given when set_default_prediction_value() is called.
set_default_prediction_value(values)¶
Set the default prediction value(s).
The values given here form the base prediction value that the values at activated leaves are added to. If values is a scalar, then the output of the tree must also be 1 dimensional; otherwise, values must be a list with length matching the dimension of values in the tree.
- Parameters
- values: [int | double | list[double]]
Default values for predictions.
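Examples
A sketch, using the builder tm from the TreeEnsembleClassifier example below; the base value must match the tree’s output dimension:
>>> # One-dimensional output: a scalar base value.
>>> tm.set_default_prediction_value(0.0)
>>> # Two-dimensional output (e.g. a two-class classifier): one value per dimension.
>>> tm.set_default_prediction_value([0, 0])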
set_post_evaluation_transform(value)¶
Set the post-processing transform applied to the prediction value from the tree ensemble.
- Parameters
- value: str
A value denoting the transform applied. Possible values are:
- "NoTransform" (default). Do not apply a transform.
- "Classification_SoftMax". Apply a softmax function to the outcome to produce normalized, non-negative scores that sum to 1. The transformation applied to dimension i is equivalent to:
\[\frac{e^{x_i}}{\sum_j e^{x_j}}\]
Note: This is the output transformation applied by the XGBoost package with multiclass classification.
- "Regression_Logistic". Applies a logistic transform to the predicted value, specifically:
\[(1 + e^{-v})^{-1}\]
This is the transformation used in binary classification.
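Examples
For instance, in the TreeEnsembleClassifier example below, a softmax transform turns the raw per-class scores into probabilities:
>>> tm.set_post_evaluation_transform("Classification_SoftMax")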
class coremltools.models.tree_ensemble.TreeEnsembleClassifier(features, class_labels, output_features)¶
Tree Ensemble builder class to construct a Tree Ensemble classification model.
The TreeEnsembleClassifier class constructs a Tree Ensemble model incrementally using methods to add branch and leaf nodes specifying the behavior of the model.
Examples
>>> input_features = [("a", datatypes.Array(3)), ("b", datatypes.Double())]
>>> tm = TreeEnsembleClassifier(features=input_features, class_labels=[0, 1],
...                             output_features="predicted_class")
>>> # Split on a[2] <= 3
>>> tm.add_branch_node(0, 0, 2, 3, "BranchOnValueLessThanEqual", 1, 2)
>>> # Add leaf to the true branch of node 0 that subtracts 1.
>>> tm.add_leaf_node(0, 1, -1)
>>> # Add split on b == 0 to the false branch of node 0.
>>> tm.add_branch_node(0, 2, 3, 0, "BranchOnValueEqual", 3, 4)
>>> # Add leaf to the true branch of node 2 that adds 1 to the result.
>>> tm.add_leaf_node(0, 3, 1)
>>> # Add leaf to the false branch of node 2 that subtracts 1 from the result.
>>> tm.add_leaf_node(0, 4, -1)
>>> # Put in a softmax transform to translate these into probabilities.
>>> tm.set_post_evaluation_transform("Classification_SoftMax")
>>> tm.set_default_prediction_value([0, 0])
>>> # save the model to a .mlmodel file
>>> model_path = './tree.mlmodel'
>>> coremltools.models.utils.save_spec(tm.spec, model_path)
>>> # load the .mlmodel
>>> mlmodel = coremltools.models.MLModel(model_path)
>>> # make predictions
>>> test_input = {
...     'a': np.array([0, 1, 2]).astype(np.float32),
...     'b': 3.0,
... }
>>> predictions = mlmodel.predict(test_input)
__init__(features, class_labels, output_features)¶
Create a tree ensemble classifier model.
- Parameters
- features: [list of features]
Name(s) of the input features, given as a list of ('name', datatype) tuples. The features are one of models.datatypes.Int64, datatypes.Double, or models.datatypes.Array. Feature indices in the nodes are counted sequentially from 0 through the features.
- class_labels: [list]
A list of string or integer class labels to use in making predictions. The length of this must match the dimension of the tree model.
- output_features: [list]
A string or a list of two strings specifying the names of the two output features, the first being a class label corresponding to the class with the highest predicted score, and the second being a dictionary mapping each class to its score. If output_features is a string, it specifies the predicted class label, and the class scores output is given the default name "classProbability".
class coremltools.models.tree_ensemble.TreeEnsembleRegressor(features, target)¶
Tree Ensemble builder class to construct a Tree Ensemble regression model.
The TreeEnsembleRegressor class constructs a Tree Ensemble model incrementally using methods to add branch and leaf nodes specifying the behavior of the model.
Examples
>>> # Required inputs
>>> import coremltools
>>> from coremltools.models import datatypes
>>> from coremltools.models.tree_ensemble import TreeEnsembleRegressor
>>> import numpy as np
>>> # Define input features
>>> input_features = [("a", datatypes.Array(3)), ("b", datatypes.Double())]
>>> # Define output_features
>>> output_features = [("predicted_values", datatypes.Double())]
>>> tm = TreeEnsembleRegressor(features=input_features, target=output_features)
>>> # Split on a[2] <= 3
>>> tm.add_branch_node(0, 0, 2, 3, "BranchOnValueLessThanEqual", 1, 2)
>>> # Add leaf to the true branch of node 0 that subtracts 1.
>>> tm.add_leaf_node(0, 1, -1)
>>> # Add split on b == 0 to the false branch of node 0, which is index 3
>>> tm.add_branch_node(0, 2, 3, 0, "BranchOnValueEqual", 3, 4)
>>> # Add leaf to the true branch of node 2 that adds 1 to the result.
>>> tm.add_leaf_node(0, 3, 1)
>>> # Add leaf to the false branch of node 2 that subtracts 1 from the result.
>>> tm.add_leaf_node(0, 4, -1)
>>> tm.set_default_prediction_value([0, 0])
>>> # save the model to a .mlmodel file
>>> model_path = './tree.mlmodel'
>>> coremltools.models.utils.save_spec(tm.spec, model_path)
>>> # load the .mlmodel
>>> mlmodel = coremltools.models.MLModel(model_path)
>>> # make predictions
>>> test_input = {
...     'a': np.array([0, 1, 2]).astype(np.float32),
...     'b': 3.0,
... }
>>> predictions = mlmodel.predict(test_input)
__init__(features, target)¶
Create a Tree Ensemble regression model that takes one or more input features and maps them to an output feature.
- Parameters
- features: [list of features]
Name(s) of the input features, given as a list of ('name', datatype) tuples. The features are one of models.datatypes.Int64, datatypes.Double, or models.datatypes.Array. Feature indices in the nodes are counted sequentially from 0 through the features.
- target
Name of the target feature predicted.
utils¶
Utilities for the entire package.
coremltools.models.utils._convert_neural_network_weights_to_fp16(full_precision_model)¶
Utility function to convert a full-precision (float) MLModel to a half-precision MLModel (float16).
- Parameters
- full_precision_model: MLModel
Model which will be converted to half precision. Currently, only neural network models are supported. If a pipeline model is passed in, all neural network models embedded within it will be converted.
- Returns
- model: MLModel
The converted half precision MLModel
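Examples
A minimal sketch; 'my_nn_model.mlmodel' is a hypothetical neural network model file:
>>> from coremltools.models.utils import _convert_neural_network_weights_to_fp16
>>> model = coremltools.models.MLModel('my_nn_model.mlmodel')
>>> fp16_model = _convert_neural_network_weights_to_fp16(model)
>>> fp16_model.save('my_nn_model_fp16.mlmodel')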
coremltools.models.utils._element_equal(x, y)¶
Performs a robust equality test between elements.
coremltools.models.utils._get_custom_layer_names(spec)¶
Returns the set of className fields of custom layers that appear in the given protobuf spec.
- Parameters
- spec: mlmodel spec
- Returns
- set(str) A set of unique className fields of custom layers that appear in the model.
coremltools.models.utils._get_custom_layers(spec)¶
Returns a list of all neural network custom layers in the spec.
- Parameters
- spec: mlmodel spec
- Returns
- [NN layer] A list of custom layer implementations
coremltools.models.utils._get_input_names(spec)¶
Returns a list of the names of the inputs to this model.
- Parameters
- spec: Model_pb
The model protobuf specification.
- Returns
- list of str
A list of input feature names.
coremltools.models.utils._get_model(spec)¶
Utility to get the model and the data.
coremltools.models.utils._get_nn_layers(spec)¶
Returns a list of neural network layers if the model contains any.
- Parameters
- spec: Model_pb
A model protobuf specification.
- Returns
- [NN layer]
A list of all layers (including layers from elements of a pipeline).
coremltools.models.utils._has_custom_layer(spec)¶
Returns true if the given protobuf specification has a custom layer, and false otherwise.
- Parameters
- spec: mlmodel spec
- Returns
- True if the protobuf specification contains a neural network with a custom layer, False otherwise.
coremltools.models.utils._is_macos()¶
Returns True if the current platform is macOS, False otherwise.
coremltools.models.utils._macos_version()¶
Returns the macOS version as a tuple of integers, making it easy to do proper version comparisons. On non-Macs, it returns an empty tuple.
coremltools.models.utils._python_version()¶
Return the Python version as a tuple of integers.
coremltools.models.utils._replace_custom_layer_name(spec, oldname, newname)¶
Substitutes newname for oldname in the className field of custom layers. If there are no custom layers, or no layers with className=oldname, then the spec is unchanged.
- Parameters
- spec: mlmodel spec
- oldname: str
The custom layer className to be replaced.
- newname: str
The new className value to replace oldname.
- Returns
- An mlmodel spec.
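Examples
A sketch with hypothetical layer names:
>>> spec = coremltools.utils.load_spec('model_with_custom_layer.mlmodel')
>>> coremltools.models.utils._replace_custom_layer_name(spec, 'OldLayerImpl', 'NewLayerImpl')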
coremltools.models.utils._sanitize_value(x)¶
Performs cleaning steps on the data so various type comparisons can be performed correctly.
coremltools.models.utils.convert_double_to_float_multiarray_type(spec)¶
Convert all double multiarray feature descriptions (input, output, training input) to float multiarrays.
- Parameters
- spec: Model_pb
The specification containing the multiarrays types to convert
Examples
# In-place convert multiarray type of spec
>>> spec = mlmodel.get_spec()
>>> coremltools.utils.convert_double_to_float_multiarray_type(spec)
>>> model = coremltools.models.MLModel(spec)
coremltools.models.utils.evaluate_classifier(model, data, target='target', verbose=False)¶
Evaluate a Core ML classifier model and compare against predictions from the original framework (for testing correctness of conversion). Use this evaluation for models that don’t deal with probabilities.
- Parameters
- model: str or MLModel
File from which to load the model, or a loaded MLModel instance.
- data: list of str or list of Dataframe
Test data on which to evaluate the models (dataframe, or path to a csv file).
- target: str
Column to interpret as the target column
- verbose: bool
Set to true for a more verbose output.
Examples
>>> metrics = coremltools.utils.evaluate_classifier(spec, 'data_and_predictions.csv', 'target')
>>> print(metrics)
{"samples": 10, num_errors: 0}
coremltools.models.utils.evaluate_classifier_with_probabilities(model, data, probabilities='probabilities', verbose=False)¶
Evaluate a classifier specification for testing.
- Parameters
- model: [str | Model]
File from which to load the model, or a loaded MLModel instance.
- data: [str | Dataframe]
Test data on which to evaluate the models (dataframe, or path to a csv file).
- probabilities: str
Column to interpret as the probabilities column
- verbose: bool
Verbosity level of the predictions.
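Examples
A sketch mirroring evaluate_classifier; 'data_and_probabilities.csv' is a hypothetical file containing a probabilities column:
>>> metrics = coremltools.utils.evaluate_classifier_with_probabilities(
...     model, 'data_and_probabilities.csv', 'probabilities')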
coremltools.models.utils.evaluate_regressor(model, data, target='target', verbose=False)¶
Evaluate a CoreML regression model and compare against predictions from the original framework (for testing correctness of conversion).
- Parameters
- model: MLModel or str
A loaded MLModel or a path to a saved MLModel
- data: Dataframe
Test data on which to evaluate the models
- target: str
Name of the column in the dataframe that must be interpreted as the target column.
- verbose: bool
Set to true for a more verbose output.
Examples
>>> metrics = coremltools.utils.evaluate_regressor(spec, 'data_and_predictions.csv', 'target')
>>> print(metrics)
{"samples": 10, "rmse": 0.0, max_error: 0.0}
coremltools.models.utils.evaluate_transformer(model, input_data, reference_output, verbose=False)¶
Evaluate a transformer specification for testing.
- Parameters
- model: str or MLModel
File from which to load the model, or a loaded MLModel instance.
- input_data: list of dict
Test data on which to evaluate the models.
- reference_output: list of dict
Expected results for the model.
- verbose: bool
Verbosity level of the predictions.
Examples
>>> input_data = [{'input_1': 1, 'input_2': 2}, {'input_1': 3, 'input_2': 3}]
>>> expected_output = [{'input_1': 2.5, 'input_2': 2.0}, {'input_1': 1.3, 'input_2': 2.3}]
>>> metrics = coremltools.utils.evaluate_transformer(scaler_spec, input_data, expected_output)
coremltools.models.utils.load_spec(filename)¶
Load a protobuf model specification from file.
- Parameters
- filename: str
Location on disk (a valid file path) from which the file is loaded as a protobuf spec.
- Returns
- model_spec: Model_pb
Protobuf representation of the model
Examples
>>> spec = coremltools.utils.load_spec('HousePricer.mlmodel')
coremltools.models.utils.rename_feature(spec, current_name, new_name, rename_inputs=True, rename_outputs=True)¶
Rename a feature in the specification.
- Parameters
- spec: Model_pb
The specification containing the feature to rename.
- current_name: str
Current name of the feature. If this feature doesn’t exist, the rename is a no-op.
- new_name: str
New name of the feature.
- rename_inputs: bool
Search for current_name only in the input features (i.e., ignore output features).
- rename_outputs: bool
Search for current_name only in the output features (i.e., ignore input features).
Examples
# In-place rename of spec
>>> coremltools.utils.rename_feature(spec, 'old_feature', 'new_feature_name')
coremltools.models.utils.save_spec(spec, filename, auto_set_specification_version=False)¶
Save a protobuf model specification to file.
- Parameters
- spec: Model_pb
Protobuf representation of the model
- filename: str
File path where the spec gets saved.
- auto_set_specification_version: bool
If true, will always try to set specification version automatically.
Examples
>>> coremltools.utils.save_spec(spec, 'HousePricer.mlmodel')