coremltools.models.MLModel

class coremltools.models.MLModel(model, useCPUOnly=False)

This class defines the minimal interface to a CoreML object in Python.

At a high level, the protobuf specification consists of:

  • Model description: Encodes names and type information of the inputs and outputs to the model.
  • Model parameters: The set of parameters required to represent a specific instance of the model.
  • Metadata: Information about the origin, license, and author of the model.

With this class, you can inspect a CoreML model, modify metadata, and make predictions for the purposes of testing (on select platforms).
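
As a minimal sketch of how these three parts surface in Python (assuming a 'HousePricer.mlmodel' file like the one in the examples below), both the description and the metadata can be read directly off the protobuf specification:

# Inspect the model description and metadata on the protobuf spec
>>> model = MLModel('HousePricer.mlmodel')
>>> spec = model.get_spec()
>>> [inp.name for inp in spec.description.input]
>>> spec.description.metadata.author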

See also

predict

Examples

# Load the model
>>> model = MLModel('HousePricer.mlmodel')

# Set the model metadata
>>> model.author = 'Author'
>>> model.license = 'BSD'
>>> model.short_description = 'Predicts the price of a house in the Seattle area.'

# Get the interface to the model
>>> model.input_description
>>> model.output_description

# Set feature descriptions manually
>>> model.input_description['bedroom'] = 'Number of bedrooms'
>>> model.input_description['bathrooms'] = 'Number of bathrooms'
>>> model.input_description['size'] = 'Size (in square feet)'

# Set the output description
>>> model.output_description['price'] = 'Price of the house'

# Make predictions
>>> predictions = model.predict({'bedroom': 1.0, 'bathrooms': 1.0, 'size': 1240})

# Get the spec of the model
>>> model.spec

# Save the model
>>> model.save('HousePricer.mlmodel')

Methods

__init__(self, model[, useCPUOnly]) Construct an MLModel from a .mlmodel file.
get_spec(self) Get a deep copy of the protobuf specification of the model.
predict(self, data[, useCPUOnly]) Return predictions for the model.
save(self, filename) Save the model to a .mlmodel file.
visualize_spec(*args, **kwargs) Visualize the model.

Attributes

author
input_description
license
output_description
short_description
user_defined_metadata
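
The metadata attributes above can be read and assigned directly, and user_defined_metadata acts as a dictionary of custom string key/value pairs. A minimal sketch (the key and value below are illustrative, not part of the model):

# Attach a custom key/value pair to the model's metadata
>>> model.user_defined_metadata['source'] = 'example pipeline'
>>> model.user_defined_metadata['source']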
__init__(self, model, useCPUOnly=False)

Construct an MLModel from a .mlmodel file.

Parameters:
model: str or Model_pb2

If a string is given, it should be the path to the .mlmodel file to load.

useCPUOnly: bool

Set to True to restrict loading and execution of the model to the CPU only. Defaults to False.

Examples

>>> loaded_model = MLModel('my_model_file.mlmodel')
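
Because model may also be a Model_pb2 object, an MLModel can equally be constructed from an in-memory specification; a short sketch:

# Rebuild a model from a protobuf specification
>>> spec = loaded_model.get_spec()
>>> model_from_spec = MLModel(spec)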
get_spec(self)

Get a deep copy of the protobuf specification of the model.

Returns:
model: Model_pb2

Protobuf specification of the model.

Examples

>>> spec = model.get_spec()
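
Since the returned object is a deep copy, it can be modified and used to construct a new MLModel without affecting the original; a minimal sketch:

# Edit the copied spec and build a new model from it
>>> spec.description.metadata.shortDescription = 'Updated description'
>>> updated_model = MLModel(spec)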
predict(self, data, useCPUOnly=False, **kwargs)

Return predictions for the model. Any keyword arguments are passed through to the model as a dictionary.

Parameters:
data: dict[str, value]

Dictionary of data from which to make predictions, where the keys are the names of the input features.

useCPUOnly: bool

Set to True to restrict computation to the CPU only. Defaults to False.

Returns:
out: dict[str, value]

Predictions as a dictionary where each key is the output feature name.

Examples

>>> data = {'bedroom': 1.0, 'bathrooms': 1.0, 'size': 1240}
>>> predictions = model.predict(data)
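
The same call can be restricted to the CPU for reproducible reference results, and individual outputs are read by feature name; a sketch using the example model above:

# Force CPU-only computation and read a single output feature
>>> predictions = model.predict(data, useCPUOnly=True)
>>> price = predictions['price']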
save(self, filename)

Save the model to a .mlmodel file.

Parameters:
filename: str

Target filename for the model.

See also

coremltools.utils.load_spec

Examples

>>> model.save('my_model_file.mlmodel')
>>> loaded_model = MLModel('my_model_file.mlmodel')
visualize_spec(*args, **kwargs)

Visualize the model.

Parameters:
port: int

The localhost port on which the visualization server is hosted.

input_shape_dict: dict

The shapes are calculated assuming the batch and sequence dimensions are 1, i.e. (1, 1, C, H, W). If either is not 1, provide the full shape for each input here.

title: str

Title for the visualized model.

Returns:
None

Examples

>>> model = coremltools.models.MLModel('HousePricer.mlmodel')
>>> model.visualize_spec()
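
The documented parameters can be combined, for example to pin the server to a specific localhost port and label the view; a minimal sketch (the port value is illustrative):

# Host the visualization on a specific port with a custom title
>>> model.visualize_spec(port=8000, title='HousePricer')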