New Features#
The following sections describe new features and improvements in the most recent versions of Core ML Tools.
New in Core ML Tools 7#
The coremltools 7 package now includes more APIs for optimizing models to use less storage space, reduce power consumption, and reduce latency during inference. Key optimization techniques include pruning, quantization, and palettization.
You can either directly compress a Core ML model, or compress a model in the source framework during training and then convert. While the former is quicker and does not require data, the latter can preserve accuracy better by fine-tuning with data. For details, see Optimizing Models.
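The following is a minimal sketch of the data-free path: palettizing the weights of an already converted model with the coremltools.optimize.coreml APIs. The model path and the 6-bit k-means setting are illustrative assumptions, not values from this page.

```python
import coremltools as ct
import coremltools.optimize.coreml as cto

# Load an already converted Core ML model ("MyModel.mlpackage" is a placeholder path).
mlmodel = ct.models.MLModel("MyModel.mlpackage")

# Palettize all weights to a 6-bit lookup table (nbits=6 is an illustrative choice).
op_config = cto.OpPalettizerConfig(mode="kmeans", nbits=6)
config = cto.OptimizationConfig(global_config=op_config)
compressed_model = cto.palettize_weights(mlmodel, config)

compressed_model.save("MyModel_palettized.mlpackage")
```

Data-free weight quantization follows the same pattern, using an OpLinearQuantizerConfig with linear_quantize_weights in place of the palettization calls above.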
For a full list of changes, see Release Notes. For installation instructions, see Installing Core ML Tools.
Previous Versions#
The coremltools 6 package offers the following features to optimize the model conversion process:
- Model compression utilities; see Compressing Neural Network Weights.
- Float 16 input/output types, including images; see Image Input and Output and the sketch after this list.
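As a sketch of the float 16 input/output feature, the snippet below converts a tiny traced PyTorch model as a stand-in; the model, tensor name, and shape are illustrative assumptions. For image inputs, an ImageType with a float 16 color layout plays the same role.

```python
import numpy as np
import torch
import coremltools as ct

# Tiny stand-in network; any traced PyTorch (or TensorFlow) model works here.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
traced = torch.jit.trace(model, torch.rand(1, 3, 64, 64))

# Request float 16 inputs and outputs; this requires a deployment target of
# iOS 16 / macOS 13 or newer.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=(1, 3, 64, 64), dtype=np.float16)],
    outputs=[ct.TensorType(dtype=np.float16)],
    minimum_deployment_target=ct.target.iOS16,
)
```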
For a full list of changes from coremltools 5.2, see Release Notes.
Release Notes#
Learn about changes to the coremltools package from the following release notes:
For information about previous releases, see the following:
Migration Workflow#
If you used coremltools 3 for neural network model conversion from TensorFlow or ONNX/PyTorch to Core ML, update your workflow as follows when you upgrade to coremltools 4 or newer:
| Conversion from | coremltools 3 | coremltools 4 and newer |
|---|---|---|
| TensorFlow | Install coremltools 3.4 and tfcoreml 1.1 and use the `tfcoreml.convert()` method. | Use the new `coremltools.convert()` method in the Unified Conversion API. |
| PyTorch | First export the PyTorch model to the ONNX format, then install coremltools 3.4 and onnx-coreml 1.3 and use the `onnx_coreml.convert()` method. | Use the new `coremltools.convert()` method in the Unified Conversion API. |
Convert from TensorFlow#
With coremltools 4 and newer versions, you do not need to install the tfcoreml package to convert TensorFlow models. The TensorFlow converter is fully integrated into coremltools and available through the Unified Conversion API.
For older deployment targets
To deploy the Core ML model to a target that is iOS 12, macOS 10.13, watchOS 5, tvOS 12, or an older version, use coremltools 3 and tfcoreml 1.
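For example, a tf.keras model converts in a single step. MobileNetV2 with random weights is used below only as a convenient stand-in; any TensorFlow 2 model works the same way.

```python
import tensorflow as tf
import coremltools as ct

# Any tf.keras model works; MobileNetV2 with random weights is a stand-in example.
keras_model = tf.keras.applications.MobileNetV2(weights=None)

# One-step conversion through the Unified Conversion API -- no tfcoreml required.
mlmodel = ct.convert(keras_model, convert_to="mlprogram")
mlmodel.save("MobileNetV2.mlpackage")
```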
Convert from PyTorch#
You can directly convert from PyTorch using the newest version of coremltools, which includes a PyTorch converter available through the Unified Conversion API. You no longer need to use the two-step process for converting PyTorch models using the ONNX format.
For older deployment targets
To deploy the Core ML model to a target that is iOS 12, macOS 10.13, watchOS 5, tvOS 12, or an older version, use coremltools 3 and onnx-coreml 1.
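For example, a TorchScript-traced model converts directly, with no intermediate ONNX export. The toy model, input name, and output path below are placeholders.

```python
import torch
import coremltools as ct

# Toy model as a placeholder; in practice, trace or script your own PyTorch model.
model = torch.nn.Sequential(torch.nn.Linear(4, 2), torch.nn.ReLU()).eval()
example_input = torch.rand(1, 4)
traced = torch.jit.trace(model, example_input)

# Direct PyTorch -> Core ML conversion through the Unified Conversion API.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=example_input.shape)],
    convert_to="mlprogram",
)
mlmodel.save("TinyModel.mlpackage")
```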
Deprecated Methods and Support#
In coremltools 4 and newer, the following classes, methods, and features available in previous versions are deprecated:

- `convert_neural_network_weights_to_fp16()`, `convert_neural_network_spec_weights_to_fp16()`, and `quantize_spec_weights()`. Use the `quantize_weights()` method instead (see the sketch after this list). For instructions, see Quantization.
- The NeuralNetworkShaper class.
- `get_allowed_shape_ranges()`.
- `can_allow_multiple_input_shapes()`.
- The `visualize_spec()` method of the MLModel class. You can use the netron open source viewer to visualize Core ML models.
- `get_custom_layer_names()`, `replace_custom_layer_name()`, and `has_custom_layer()`: these were moved to internal methods.
- The Caffe converter.
- The Keras.io and ONNX converters will be deprecated in coremltools 6. We recommend transitioning to TensorFlow/PyTorch conversion using the Unified Conversion API.
- Support for Python 2 has been deprecated since coremltools 4.1. The current version of coremltools includes wheels for Python 3.5, 3.6, 3.7, and 3.8.
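As a sketch of the replacement for the deprecated fp16 helpers, `quantize_weights()` can be applied to a neural network model as shown below; the model path and the 16-bit setting are illustrative assumptions.

```python
import coremltools as ct
from coremltools.models.neural_network import quantization_utils

# "MyModel.mlmodel" is a placeholder for an existing neural network model.
mlmodel = ct.models.MLModel("MyModel.mlmodel")

# quantize_weights() covers the deprecated fp16/quantization helpers;
# nbits=16 reproduces the old float 16 weight conversion.
quantized_model = quantization_utils.quantize_weights(mlmodel, nbits=16)
```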