coremltools.models.neural_network

Neural network builder class to construct Core ML models.

Functions

set_transform_interface_params(spec, ...[, ...]) Common utilities to set transform interface params.

Classes

NeuralNetworkBuilder(input_features, ...[, mode]) Neural network builder class to construct Core ML models.
class coremltools.models.neural_network.NeuralNetworkBuilder(input_features, output_features, mode=None)

Neural network builder class to construct Core ML models.

The NeuralNetworkBuilder constructs a Core ML neural network specification layer by layer. Layers should be added in an order such that the inputs to each layer (referred to as blobs) have been previously defined. The builder can also set pre-processing steps to handle specialized input formats (such as images), and set class labels for neural network classifiers.

Please see the Core ML neural network protobuf message for more information on neural network layers, blobs, and parameters.

See also

MLModel, datatypes, save_spec

Examples

# Create a neural network binary classifier that classifies 
# 3-dimensional data points
# Specify input and output dimensions
>>> input_dim = (3,)
>>> output_dim = (2,)

# Specify input and output features
>>> input_features = [('data', datatypes.Array(*input_dim))]
>>> output_features = [('probs', datatypes.Array(*output_dim))]

# Build a simple neural network with 1 inner product layer
>>> import numpy as np
>>> weights = np.random.rand(2, 3)  # (output_channels, input_channels)
>>> bias = np.random.rand(2)        # (output_channels, )
>>> builder = NeuralNetworkBuilder(input_features, output_features)
>>> builder.add_inner_product(name='ip_layer', W=weights, b=bias,
...                           input_channels=3, output_channels=2,
...                           has_bias=True, input_name='data', output_name='probs')

# Save the spec generated by the builder
>>> save_spec(builder.spec, 'network.mlmodel')
__init__(input_features, output_features, mode=None)

Construct a NeuralNetworkBuilder object and set protobuf specification interface.

Parameters:

input_features: [(str, datatypes.Array)]

List of input features of the network. Each feature is a (name, array) tuple, where name is the name of the feature and array is a datatypes.Array object describing the feature type.

output_features: [(str, datatypes.Array or None)]

List of output features of the network. Each feature is a (name, array) tuple, where name is the name of the feature and array is a datatypes.Array object describing the feature type. array can be None if not known.

mode: str (‘classifier’, ‘regressor’ or None)

When mode = ‘classifier’, a NeuralNetworkClassifier spec will be constructed. When mode = ‘regressor’, a NeuralNetworkRegressor spec will be constructed.

Examples

# Construct a builder that builds a neural network classifier with a 299x299x3
# dimensional input and 1000 dimensional output
>>> input_features = [('data', datatypes.Array(299, 299, 3))]
>>> output_features = [('probs', datatypes.Array(1000))]
>>> builder = NeuralNetworkBuilder(input_features, output_features, mode='classifier')
add_activation(name, non_linearity, input_name, output_name, params=None)

Add an activation layer to the model.

Parameters:

name: str

The name of this layer

non_linearity: str

The non_linearity (activation) function of this layer. It can be one of the following:

  • ‘RELU’: Rectified Linear Unit (ReLU) function.

  • ‘SIGMOID’: sigmoid function.

  • ‘TANH’: tanh function.

  • ‘SCALED_TANH’: scaled tanh function, defined as:

    f(x) = alpha * tanh(beta * x)

    where alpha and beta are constant scalars.

  • ‘SOFTPLUS’: softplus function.

  • ‘SOFTSIGN’: softsign function.

  • ‘SIGMOID_HARD’: hard sigmoid function, defined as:

    f(x) = min(max(alpha * x + beta, -1), 1)

    where alpha and beta are constant scalars.

  • ‘LEAKYRELU’: leaky relu function, defined as:

    f(x) = (x >= 0) * x + (x < 0) * alpha * x

    where alpha is a constant scalar.

  • ‘PRELU’: Parametric ReLU function, defined as:

    f(x) = (x >= 0) * x + (x < 0) * alpha * x

    where alpha is a multi-dimensional array of same size as x.

  • ‘ELU’: Exponential linear unit function, defined as:

    f(x) = (x >= 0) * x + (x < 0) * alpha * (exp(x) - 1)

    where alpha is a constant scalar.

  • ‘PARAMETRICSOFTPLUS’: Parametric softplus function, defined as:

    f(x) = alpha * log(1 + exp(beta * x))

    where alpha and beta are two multi-dimensional arrays of same size as x.

  • ‘THRESHOLDEDRELU’: Thresholded ReLU function, defined as:

    f(x) = (x >= alpha) * x

    where alpha is a constant scalar.

  • ‘LINEAR’: linear function.

    f(x) = alpha * x + beta

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

params: [float] | [numpy.array]

Parameters for the activation, depending on non_linearity. Kindly refer to NeuralNetwork.proto for details.

  • When non_linearity is one of [‘RELU’, ‘SIGMOID’, ‘TANH’, ‘SOFTPLUS’, ‘SOFTSIGN’], params is ignored.
  • When non_linearity is one of [‘SCALED_TANH’, ‘SIGMOID_HARD’, ‘LINEAR’], params is a list of 2 floats [alpha, beta].
  • When non_linearity is one of [‘LEAKYRELU’, ‘ELU’, ‘THRESHOLDEDRELU’], params is a list of 1 float [alpha].
  • When non_linearity is ‘PRELU’, params is a list of 1 numpy array [alpha]. The shape of alpha is (C,), where C is either the number of input channels or 1. When C = 1, the same alpha is applied to all channels.
  • When non_linearity is ‘PARAMETRICSOFTPLUS’, params is a list of 2 numpy arrays [alpha, beta]. The shape of alpha and beta is (C, ), where C is either the number of input channels or 1. When C = 1, the same alpha and beta are applied to all channels.
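
Examples

# Illustrative sketch (not part of the original reference); 'dense_out' is
# an assumed blob name and builder is assumed constructed as in the class
# example.
>>> builder.add_activation(name='relu_1', non_linearity='LEAKYRELU',
...                        input_name='dense_out', output_name='relu_1_out',
...                        params=[0.3])
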
add_batchnorm(name, channels, gamma, beta, mean=None, variance=None, input_name='data', output_name='out', compute_mean_var=False, instance_normalization=False, epsilon=1e-05)

Add a Batch Normalization layer. The Batch Normalization operation is defined as:

y = gamma * (x - mean) / sqrt(variance + epsilon) + beta

Parameters:

name: str

The name of this layer.

channels: int

Number of channels of the input blob.

gamma: numpy.array

Values of gamma. Must be a numpy array of shape (channels, ).

beta: numpy.array

Values of beta. Must be a numpy array of shape (channels, ).

mean: numpy.array

Means of the input blob on each channel. Must be a numpy array of shape (channels, ).

variance: numpy.array

Variances of the input blob on each channel. Must be a numpy array of shape (channels, ).

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

compute_mean_var: bool

Set to True if mean and variance are to be computed from the input data.

instance_normalization: bool

Set this and compute_mean_var to True to perform instance normalization, i.e., mean and variance are computed from the single input instance.

epsilon: float

Value of epsilon. Defaults to 1e-5 if not specified.
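
Examples

# Illustrative sketch; 'conv1_out' is an assumed blob name. gamma, beta,
# mean and variance are per-channel numpy arrays of shape (channels, ).
>>> import numpy as np
>>> C = 16
>>> builder.add_batchnorm(name='bn_1', channels=C,
...                       gamma=np.ones(C), beta=np.zeros(C),
...                       mean=np.zeros(C), variance=np.ones(C),
...                       input_name='conv1_out', output_name='bn_1_out')
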
add_bias(name, b, input_name, output_name, shape_bias=[1])

Add bias layer to the model.

Parameters:

name: str

The name of this layer.

b: int | numpy.array

Bias to add to the input.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

shape_bias: [int]

List of ints that specifies the shape of the bias parameter (if present). Can be [1] or [C] or [1,H,W] or [C,H,W].

See also

add_scale
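
Examples

# Illustrative sketch; adds a per-channel bias (shape [C], C = 16) to an
# assumed blob 'conv1_out'.
>>> import numpy as np
>>> builder.add_bias(name='bias_1', b=np.zeros(16),
...                  input_name='conv1_out', output_name='bias_1_out',
...                  shape_bias=[16])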

add_bidirlstm(name, W_h, W_x, b, W_h_back, W_x_back, b_back, hidden_size, input_size, input_names, output_names, inner_activation='SIGMOID', cell_state_update_activation='TANH', output_activation='TANH', peep=None, peep_back=None, output_all=False, forget_bias=False, coupled_input_forget_gate=False, cell_clip_threshold=50000.0)

Add a Bi-directional LSTM layer to the model.

Parameters:

name: str

The name of this layer.

W_h: [numpy.array]

List of recursion weight matrices for the forward layer. The ordering is [R_i, R_f, R_z, R_o], where R_i, R_f, R_z, R_o are weight matrices at input gate, forget gate, cell gate and output gate. The shapes of these matrices are (hidden_size, hidden_size).

W_x: [numpy.array]

List of input weight matrices for the forward layer. The ordering is [W_i, W_f, W_z, W_o], where W_i, W_f, W_z, W_o are weight matrices at input gate, forget gate, cell gate and output gate. The shapes of these matrices are (hidden_size, input_size).

b: [numpy.array]

List of biases for the forward layer. The ordering is [b_i, b_f, b_z, b_o], where b_i, b_f, b_z, b_o are biases at input gate, forget gate, cell gate and output gate. If None, biases are ignored. Otherwise the shapes of the biases are (hidden_size, ).

W_h_back: [numpy.array]

List of recursion weight matrices for the backward layer. The ordering is [R_i, R_f, R_z, R_o], where R_i, R_f, R_z, R_o are weight matrices at input gate, forget gate, cell gate and output gate. The shapes of these matrices are (hidden_size, hidden_size).

W_x_back: [numpy.array]

List of input weight matrices for the backward layer. The ordering is [W_i, W_f, W_z, W_o], where W_i, W_f, W_z, W_o are weight matrices at input gate, forget gate, cell gate and output gate. The shapes of these matrices are (hidden_size, input_size).

b_back: [numpy.array]

List of biases for the backward layer. The ordering is [b_i, b_f, b_z, b_o], where b_i, b_f, b_z, b_o are biases at input gate, forget gate, cell gate and output gate. If None, biases are ignored. Otherwise the shapes of the biases are (hidden_size, ).

hidden_size: int

Number of hidden units. This is equal to the number of channels of output shape.

input_size: int

Number of channels of the input shape.

input_names: [str]

The input blob name list of this layer, in the order of [x, h_input, c_input, h_reverse_input, c_reverse_input].

output_names: [str]

The output blob name list of this layer, in the order of [y, h_output, c_output, h_reverse_output, c_reverse_output].

inner_activation: str

Inner activation function used at the input and forget gates. Can be one of the following options: [‘RELU’, ‘TANH’, ‘SIGMOID’, ‘SCALED_TANH’, ‘SIGMOID_HARD’, ‘LINEAR’]. Defaults to ‘SIGMOID’.

cell_state_update_activation: str

Cell state update activation function used at the cell state update gate. Can be one of the following options: [‘RELU’, ‘TANH’, ‘SIGMOID’, ‘SCALED_TANH’, ‘SIGMOID_HARD’, ‘LINEAR’]. Defaults to ‘TANH’.

output_activation: str

Activation function used at the output gate. Can be one of the following options: [‘RELU’, ‘TANH’, ‘SIGMOID’, ‘SCALED_TANH’, ‘SIGMOID_HARD’, ‘LINEAR’]. Defaults to ‘TANH’.

peep: [numpy.array] | None

List of peephole vectors for the forward layer. The ordering is [p_i, p_f, p_o], where p_i, p_f, and p_o are peephole vectors at input gate, forget gate, output gate. The shapes of the peephole vectors are (hidden_size,). Defaults to None.

peep_back: [numpy.array] | None

List of peephole vectors for the backward layer. The ordering is [p_i, p_f, p_o], where p_i, p_f, and p_o are peephole vectors at input gate, forget gate, output gate. The shapes of the peephole vectors are (hidden_size,). Defaults to None.

output_all: boolean

Whether the LSTM layer should output at every time step. Defaults to False.

  • If False, the output is the result after the final state update.
  • If True, the output is a sequence, containing outputs at all time steps.

forget_bias: boolean

If True, a vector of 1s is added to forget gate bias. Defaults to False.

coupled_input_forget_gate : boolean

If True, the input gate and forget gate are coupled, i.e., the forget gate is not used. Defaults to False.

cell_clip_threshold : float

The limit on the maximum and minimum values on the cell state. Defaults to 50000.0.

add_convolution(name, kernel_channels, output_channels, height, width, stride_height, stride_width, border_mode, groups, W, b, has_bias, is_deconv=False, output_shape=None, input_name='data', output_name='out', dilation_factors=[1, 1], padding_top=0, padding_bottom=0, padding_left=0, padding_right=0, same_padding_asymmetry_mode='BOTTOM_RIGHT_HEAVY')

Add a convolution layer to the network.

Please see the ConvolutionLayerParams in Core ML neural network protobuf message for more information about input and output blob dimensions.

Parameters:

name: str

The name of this layer.

kernel_channels: int

Number of channels for the convolution kernels.

output_channels: int

Number of filter kernels. This is equal to the number of channels in the output blob.

height: int

Height of each kernel.

width: int

Width of each kernel.

stride_height: int

Stride along the height direction.

stride_width: int

Stride along the width direction.

border_mode: str

Option for the padding type and output blob shape. Can be either ‘valid’ or ‘same’. Kindly refer to NeuralNetwork.proto for details.

groups: int

Number of kernel groups. Input is divided into groups along the channel axis. Each kernel group shares the same weights.

W: numpy.array

Weights of the convolution kernels.

  • If is_deconv is False, W should have shape (height, width, kernel_channels, output_channels), where kernel_channels = input_channels / groups.
  • If is_deconv is True, W should have shape (height, width, kernel_channels, output_channels / groups), where kernel_channels = input_channels.

b: numpy.array

Biases of the convolution kernels. b should have shape (outputChannels, ).

has_bias: boolean

Whether the layer uses a bias.

  • If True, the bias b is added.
  • If False, b is ignored.

is_deconv: boolean

Whether the convolution layer is performing a convolution or a transposed convolution (deconvolution).

  • If True, the convolution layer is performing transposed convolution.
  • If False, the convolution layer is performing regular convolution.

output_shape: tuple | None

Either None or a 2-tuple, specifying the output shape (output_height, output_width). Used only when is_deconv == True. When is_deconv == False, this parameter is ignored. If it is None, the output shape is calculated automatically using the border_mode. Kindly refer to NeuralNetwork.proto for details.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

dilation_factors: [int]

Dilation factors across height and width directions. Must be a list of two positive integers. Defaults to [1,1]

padding_top, padding_bottom, padding_left, padding_right: int

Values of height (top, bottom) and width (left, right) padding to be used if border_mode is ‘valid’.

same_padding_asymmetry_mode : str.

Type of asymmetric padding to be used when border_mode is ‘same’. Can be either ‘BOTTOM_RIGHT_HEAVY’ or ‘TOP_LEFT_HEAVY’. Kindly refer to NeuralNetwork.proto for details.

Depthwise convolution is a special case of convolution, where:

  • kernel_channels = 1 (that is, input_channels / groups)
  • output_channels = channel_multiplier * input_channels
  • groups = input_channels
  • W has shape (kernel_height, kernel_width, 1, channel_multiplier * input_channels)
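
Examples

# Illustrative sketch; a 3x3 'same' convolution with 16 input and 32 output
# channels. W uses the documented (height, width, kernel_channels,
# output_channels) layout; 'pool1_out' is an assumed blob name.
>>> import numpy as np
>>> W = np.random.rand(3, 3, 16, 32)
>>> b = np.zeros(32)
>>> builder.add_convolution(name='conv_2', kernel_channels=16,
...                         output_channels=32, height=3, width=3,
...                         stride_height=1, stride_width=1,
...                         border_mode='same', groups=1,
...                         W=W, b=b, has_bias=True,
...                         input_name='pool1_out', output_name='conv_2_out')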

add_crop(name, left, right, top, bottom, offset, input_names, output_name)

Add a cropping layer to the model. The cropping layer has two functional modes:

  • When it has 1 input blob, it crops the input blob based on the 4 parameters [left, right, top, bottom].
  • When it has 2 input blobs, it crops the first input blob based on the dimension of the second blob with an offset.
Parameters:

name: str

The name of this layer.

left: int

Number of elements to be cropped on the left side of the input blob. When the crop layer takes 2 inputs, this parameter is ignored.

right: int

Number of elements to be cropped on the right side of the input blob. When the crop layer takes 2 inputs, this parameter is ignored.

top: int

Number of elements to be cropped on the top of the input blob. When the crop layer takes 2 inputs, this parameter is ignored.

bottom: int

Number of elements to be cropped on the bottom of the input blob. When the crop layer takes 2 inputs, this parameter is ignored.

offset: [int]

Offset along the height and width directions when the crop layer takes 2 inputs. Must be a list of length 2. When the crop layer takes 1 input, this parameter is ignored.

input_names: [str]

The input blob name(s) of this layer. Must be either a list of 1 string (1 input crop layer), or a list of 2 strings (2-input crop layer).

output_name: str

The output blob name of this layer.
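
Examples

# Illustrative sketch; a 1-input crop that removes a 2-element border from
# an assumed blob 'conv_2_out' (offset is ignored in the 1-input mode).
>>> builder.add_crop(name='crop_1', left=2, right=2, top=2, bottom=2,
...                  offset=[0, 0], input_names=['conv_2_out'],
...                  output_name='crop_1_out')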

add_custom(name, input_names, output_names, custom_proto_spec=None)

Add a custom layer.

Parameters:

name: str

The name of this layer.

input_names: [str]

The input blob names to this layer.

output_names: [str]

The output blob names from this layer.

custom_proto_spec: CustomLayerParams

A protobuf CustomLayerParams message. This can also be left blank and filled in later.
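
Examples

# Illustrative sketch; registers a custom layer whose CustomLayerParams
# message is left blank, to be filled in later through the spec.
>>> builder.add_custom(name='custom_1', input_names=['features'],
...                    output_names=['custom_1_out'])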

add_elementwise(name, input_names, output_name, mode, alpha=None)

Add an element-wise operation layer to the model.

Parameters:

name: str

The name of this layer.

input_names: [str]

A list of input blob names of this layer. The input blobs should have the same shape.

output_name: str

The output blob name of this layer.

mode: str

A string specifying the mode of the elementwise layer. It can be one of the following:

  • ‘CONCAT’: concatenate input blobs along the channel axis.
  • ‘SEQUENCE_CONCAT’: concatenate input blobs along the sequence axis.
  • ‘ADD’: perform an element-wise summation over the input blobs.
  • ‘MULTIPLY’: perform an element-wise multiplication over the input blobs.
  • ‘DOT’: compute the dot product of the two input blobs. In this mode, the length of input_names should be 2.
  • ‘COS’: compute the cosine similarity of the two input blobs. In this mode, the length of input_names should be 2.
  • ‘MAX’: compute the element-wise maximum over the input blobs.
  • ‘MIN’: compute the element-wise minimum over the input blobs.
  • ‘AVE’: compute the element-wise average over the input blobs.

alpha: float

  • If mode == ‘ADD’ and there is only one input_name, alpha is added to the input.
  • If mode == ‘MULTIPLY’ and there is only one input_name, the input is multiplied by alpha.
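
Examples

# Illustrative sketch; element-wise sum of two assumed blobs, then a
# single-input ADD that shifts the result by the constant alpha.
>>> builder.add_elementwise(name='sum_1', input_names=['branch_a', 'branch_b'],
...                         output_name='sum_1_out', mode='ADD')
>>> builder.add_elementwise(name='shift_1', input_names=['sum_1_out'],
...                         output_name='shift_1_out', mode='ADD', alpha=1.0)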

add_embedding(name, W, b, input_dim, output_channels, has_bias, input_name, output_name)

Add an embedding layer to the model.

Parameters:

name: str

The name of this layer

W: numpy.array

Weight matrix of shape (output_channels, input_dim).

b: numpy.array

Bias vector of shape (output_channels, ).

input_dim: int

Size of the vocabulary (1 + maximum integer index of the words).

output_channels: int

Number of output channels.

has_bias: boolean

Whether this layer uses a bias vector.

  • If True, the bias vector b is used.
  • If False, the bias vector is ignored.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.
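
Examples

# Illustrative sketch; embeds a vocabulary of 5000 indices into 128
# channels. W uses the documented (output_channels, input_dim) shape; b is
# assumed to be ignorable (None) when has_bias is False.
>>> import numpy as np
>>> builder.add_embedding(name='embed_1', W=np.random.rand(128, 5000),
...                       b=None, input_dim=5000, output_channels=128,
...                       has_bias=False, input_name='tokens',
...                       output_name='embed_1_out')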

add_flatten(name, mode, input_name, output_name)

Add a flatten layer. Only flattens the channel, height and width axes; the sequence axis is left as is.

Parameters:

name: str

The name of this layer.

mode: int

  • If mode == 0, the flatten layer is in CHANNEL_FIRST mode.
  • If mode == 1, the flatten layer is in CHANNEL_LAST mode.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.
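
Examples

# Illustrative sketch; flattens an assumed [C, H, W] blob in CHANNEL_FIRST
# (mode = 0) order before a fully-connected layer.
>>> builder.add_flatten(name='flatten_1', mode=0,
...                     input_name='pool2_out', output_name='flatten_1_out')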

add_gru(name, W_h, W_x, b, hidden_size, input_size, input_names, output_names, activation='TANH', inner_activation='SIGMOID_HARD', output_all=False, reverse_input=False)

Add a Gated-Recurrent Unit (GRU) layer to the model.

Parameters:

name: str

The name of this layer.

W_h: [numpy.array]

List of recursion weight matrices. The ordering is [R_z, R_r, R_o], where R_z, R_r and R_o are weight matrices at update gate, reset gate and output gate. The shapes of these matrices are (hidden_size, hidden_size).

W_x: [numpy.array]

List of input weight matrices. The ordering is [W_z, W_r, W_o], where W_z, W_r, and W_o are weight matrices at update gate, reset gate and output gate. The shapes of these matrices are (hidden_size, input_size).

b: [numpy.array] | None

List of biases of the GRU layer. The ordering is [b_z, b_r, b_o], where b_z, b_r, b_o are biases at update gate, reset gate and output gate. If None, biases are ignored. Otherwise the shapes of the biases are (hidden_size, ).

hidden_size: int

Number of hidden units. This is equal to the number of channels of output shape.

input_size: int

Number of channels of the input shape.

activation: str

Activation function used at the output gate. Can be one of the following options: [‘RELU’, ‘TANH’, ‘SIGMOID’, ‘SCALED_TANH’, ‘SIGMOID_HARD’, ‘LINEAR’]. Defaults to ‘TANH’. See add_activation for a more detailed description.

inner_activation: str

Inner activation function used at the update and reset gates. Can be one of the following options: [‘RELU’, ‘TANH’, ‘SIGMOID’, ‘SCALED_TANH’, ‘SIGMOID_HARD’, ‘LINEAR’]. Defaults to ‘SIGMOID_HARD’. See add_activation for a more detailed description.

input_names: [str]

The input blob name list of this layer, in the order of [x, h_input].

output_names: [str]

The output blob name list of this layer, in the order of [y, h_output].

output_all: boolean

Whether the recurrent layer should output at every time step.

  • If False, the output is the result after the final state update.
  • If True, the output is a sequence, containing outputs at all time steps.

reverse_input: boolean

Whether the recurrent layer should process the input sequence in the reverse order.

  • If False, the input sequence order is not reversed.
  • If True, the input sequence order is reversed.
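
Examples

# Illustrative sketch; a GRU with 32 hidden units over a 64-channel input.
# W_h and W_x follow the documented [z, r, o] gate ordering; b=None means
# biases are ignored.
>>> import numpy as np
>>> h, x = 32, 64
>>> W_h = [np.random.rand(h, h) for _ in range(3)]
>>> W_x = [np.random.rand(h, x) for _ in range(3)]
>>> builder.add_gru(name='gru_1', W_h=W_h, W_x=W_x, b=None,
...                 hidden_size=h, input_size=x,
...                 input_names=['data', 'gru_h_in'],
...                 output_names=['gru_1_out', 'gru_h_out'])
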
add_inner_product(name, W, b, input_channels, output_channels, has_bias, input_name, output_name)

Add an inner product layer to the model.

Parameters:

name: str

The name of this layer

W: numpy.array

Weight matrix of shape (output_channels, input_channels).

b: numpy.array

Bias vector of shape (output_channels, ).

input_channels: int

Number of input channels.

output_channels: int

Number of output channels.

has_bias: boolean

Whether this layer uses a bias vector.

  • If True, the bias vector b is used.
  • If False, the bias vector is ignored.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

add_l2_normalize(name, input_name, output_name, epsilon=1e-05)

Add an L2 normalize layer. Normalizes the input by its L2 norm, i.e., divides by the square root of the sum of squares of all elements of the input along the C, H and W dimensions.

Parameters:

name: str

The name of this layer.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

epsilon: float

small bias to avoid division by zero.

See also

add_mvn, add_lrn

add_load_constant(name, output_name, constant_value, shape)

Add a load constant layer.

Parameters:

name: str

The name of this layer.

output_name: str

The output blob name of this layer.

constant_value: numpy.array

value of the constant as a numpy array.

shape: [int]

List of ints representing the shape of the constant. Must be of length 3: [C,H,W]

See also

add_elementwise
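
Examples

# Illustrative sketch; loads a constant blob of shape [C, H, W] = [2, 1, 1]
# that can then feed an elementwise layer.
>>> import numpy as np
>>> builder.add_load_constant(name='const_1', output_name='const_1_out',
...                           constant_value=np.array([0.5, 2.0]).reshape(2, 1, 1),
...                           shape=[2, 1, 1])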

add_lrn(name, input_name, output_name, alpha, beta, local_size, k=1.0)

Add a LRN (local response normalization) layer. Please see the LRNLayerParams message in Core ML neural network protobuf for more information about the operation of this layer. Supports “across” channels normalization.

Parameters:

name: str

The name of this layer.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

alpha: float

multiplicative constant in the denominator.

beta: float

exponent of the normalizing term in the denominator.

k: float

bias term in the denominator. Must be positive.

local_size: int

size of the neighborhood along the channel axis.

add_mvn(name, input_name, output_name, across_channels=True, normalize_variance=True, epsilon=1e-05)

Add an MVN (mean variance normalization) layer. Computes mean, variance and normalizes the input.

Parameters:

name: str

The name of this layer.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

across_channels: boolean

If False, each channel plane is normalized separately. If True, mean and variance are computed across all C, H and W dimensions.

normalize_variance: boolean

If False, only mean subtraction is performed.

epsilon: float

small bias to avoid division by zero.

add_optionals(optionals_in, optionals_out)

Add optional inputs and outputs to the model spec.

Parameters:

optionals_in: [str]

List of inputs that are optional.

optionals_out: [str]

List of outputs that are optional.

See also

set_input, set_output

add_padding(name, left=0, right=0, top=0, bottom=0, value=0, input_name='data', output_name='out', padding_type='constant')

Add a padding layer to the model. Kindly refer to NeuralNetwork.proto for details.

Parameters:

name: str

The name of this layer.

left: int

Number of elements to be padded on the left side of the input blob.

right: int

Number of elements to be padded on the right side of the input blob.

top: int

Number of elements to be padded on the top of the input blob.

bottom: int

Number of elements to be padded on the bottom of the input blob.

value: float

Value of the elements padded. Used only when padding_type = ‘constant’

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

padding_type: str

Type of the padding. Can be one of ‘constant’, ‘reflection’ or ‘replication’
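
Examples

# Illustrative sketch; zero-pads an assumed blob by one element on each
# side.
>>> builder.add_padding(name='pad_1', left=1, right=1, top=1, bottom=1,
...                     value=0.0, input_name='conv1_out',
...                     output_name='pad_1_out', padding_type='constant')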

add_permute(name, dim, input_name, output_name)

Add a permute layer. Assumes that the input has dimensions in the order [Seq, C, H, W]

Parameters:

name: str

The name of this layer.

dim: tuple

The order in which to permute the input dimensions, given as [Seq, C, H, W]. Must have length 4 and be a permutation of [0, 1, 2, 3].

examples:

Let's say the input has shape [Seq, C, H, W].

If dim is set to [0, 3, 1, 2], the output has shape [W, C, H] and the same sequence length as the input.

If dim is set to [3, 1, 2, 0], and the input is a sequence of data with length Seq and shape [C, 1, 1], then the output is a unit sequence of data with shape [C, 1, Seq].

If dim is set to [0, 3, 2, 1], the output is a reverse of the input: [C, H, W] -> [W, H, C].

If dim is not set, or is set to [0, 1, 2, 3], the output is the same as the input.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

add_pooling(name, height, width, stride_height, stride_width, layer_type, padding_type, input_name, output_name, exclude_pad_area=True, is_global=False, padding_top=0, padding_bottom=0, padding_left=0, padding_right=0, same_padding_asymmetry_mode='BOTTOM_RIGHT_HEAVY')

Add a pooling layer to the model.

Parameters:

name: str

The name of this layer.

height: int

Height of pooling region.

width: int

Width of pooling region.

stride_height: int

Stride along the height direction.

stride_width: int

Stride along the width direction.

layer_type: str

Type of pooling performed. Can either be ‘MAX’, ‘AVERAGE’ or ‘L2’.

padding_type: str

Option for the type of padding and output blob shape. Can be either ‘VALID’, ‘SAME’ or ‘INCLUDE_LAST_PIXEL’. Kindly refer to NeuralNetwork.proto for details.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

exclude_pad_area: boolean

Whether to exclude padded area in the ‘AVERAGE’ pooling operation. Defaults to True.

  • If True, the value of the padded area will be excluded.
  • If False, the padded area will be included.

This flag is only used with average pooling.

is_global: boolean

Whether the pooling operation is global. Defaults to False.

  • If True, the pooling operation is global. The pooling region is of the same size as the input blob, and the parameters height, width, stride_height and stride_width are ignored.
  • If False, the pooling operation is not global.

padding_top, padding_bottom, padding_left, padding_right: int

Values of height (top, bottom) and width (left, right) padding to be used if padding_type is ‘VALID’ or ‘INCLUDE_LAST_PIXEL’.

same_padding_asymmetry_mode : str.

Type of asymmetric padding to be used when padding_type = ‘SAME’. Can be either ‘BOTTOM_RIGHT_HEAVY’ or ‘TOP_LEFT_HEAVY’. Kindly refer to NeuralNetwork.proto for details.
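
Examples

# Illustrative sketch; a 2x2 max pooling with stride 2, then a global
# average pooling (kernel and stride values are ignored when is_global).
>>> builder.add_pooling(name='pool_1', height=2, width=2,
...                     stride_height=2, stride_width=2, layer_type='MAX',
...                     padding_type='VALID', input_name='conv1_out',
...                     output_name='pool_1_out')
>>> builder.add_pooling(name='gap', height=1, width=1, stride_height=1,
...                     stride_width=1, layer_type='AVERAGE',
...                     padding_type='VALID', input_name='pool_1_out',
...                     output_name='gap_out', is_global=True)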

add_reduce(name, input_name, output_name, axis, mode, epsilon=1e-06)

Add a reduce layer. Applies the function specified by the parameter mode, along dimension(s) specified by the parameter axis.

Parameters:

name: str

The name of this layer.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

axis: str

dimensions along which the reduction operation is applied. Allowed values: ‘CHW’, ‘HW’, ‘C’, ‘H’, ‘W’

mode: str

Reduction operation to be applied. Allowed values: ‘sum’, ‘avg’, ‘prod’, ‘logsum’, ‘sumsquare’, ‘L1’, ‘L2’, ‘max’, ‘min’, ‘argmax’. ‘argmax’ is only supported with axis values ‘C’, ‘H’ and ‘W’.

epsilon: float

number that is added to the input when ‘logsum’ function is applied.

See also

add_activation
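
Examples

# Illustrative sketch; sums an assumed blob over its H and W dimensions.
>>> builder.add_reduce(name='reduce_1', input_name='conv1_out',
...                    output_name='reduce_1_out', axis='HW', mode='sum')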

add_reorganize_data(name, input_name, output_name, mode='SPACE_TO_DEPTH', block_size=2)

Add a data reorganization layer of type “SPACE_TO_DEPTH” or “DEPTH_TO_SPACE”.

Parameters:

name: str

The name of this layer.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

mode: str

  • If mode == ‘SPACE_TO_DEPTH’: data is moved from the spatial to the channel dimension. The input is spatially divided into non-overlapping blocks of size block_size x block_size and data from each block is moved to the channel dimension. Output CHW dimensions are: [C * block_size * block_size, H / block_size, W / block_size].
  • If mode == ‘DEPTH_TO_SPACE’: data is moved from the channel to the spatial dimension. Reverse of the operation ‘SPACE_TO_DEPTH’. Output CHW dimensions are: [C / (block_size * block_size), H * block_size, W * block_size].

block_size: int

Must be greater than 1. Must divide H and W, when mode is ‘SPACE_TO_DEPTH’. (block_size * block_size) must divide C when mode is ‘DEPTH_TO_SPACE’.
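
Examples

# Illustrative sketch; moves 2x2 spatial blocks of an assumed blob into the
# channel dimension.
>>> builder.add_reorganize_data(name='s2d_1', input_name='conv1_out',
...                             output_name='s2d_1_out',
...                             mode='SPACE_TO_DEPTH', block_size=2)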

add_reshape(name, input_name, output_name, target_shape, mode)

Add a reshape layer. Kindly refer to NeuralNetwork.proto for details.

Parameters:

name: str

The name of this layer.

target_shape: tuple

Shape of the output blob. The product of target_shape must equal the total number of elements in the input blob. Can be either of length 3 (C, H, W) or length 4 (Seq, C, H, W).

mode: int

  • If mode == 0, the reshape layer is in CHANNEL_FIRST mode.
  • If mode == 1, the reshape layer is in CHANNEL_LAST mode.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.
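
Examples

# Illustrative sketch; reshapes an assumed 3072-element blob to
# [C, H, W] = [3, 32, 32] in CHANNEL_FIRST (mode = 0) order.
>>> builder.add_reshape(name='reshape_1', input_name='flatten_1_out',
...                     output_name='reshape_1_out',
...                     target_shape=(3, 32, 32), mode=0)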

add_scale(name, W, b, has_bias, input_name, output_name, shape_scale=[1], shape_bias=[1])

Add scale layer to the model.

Parameters:

name: str

The name of this layer.

W: int | numpy.array

Scale of the input.

b: int | numpy.array

Bias to add to the input.

has_bias: boolean

Whether this layer uses the bias vector b. If True, the bias is added; if False, it is ignored.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

shape_scale: [int]

List of ints that specifies the shape of the scale parameter. Can be [1] or [C] or [1,H,W] or [C,H,W].

shape_bias: [int]

List of ints that specifies the shape of the bias parameter (if present). Can be [1] or [C] or [1,H,W] or [C,H,W].

See also

add_bias
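
Examples

# Illustrative sketch; scales an assumed 16-channel blob by a per-channel
# factor and adds a scalar bias.
>>> import numpy as np
>>> builder.add_scale(name='scale_1', W=np.full(16, 0.5), b=np.array([1.0]),
...                   has_bias=True, input_name='conv1_out',
...                   output_name='scale_1_out',
...                   shape_scale=[16], shape_bias=[1])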

add_sequence_repeat(name, nrep, input_name, output_name)

Add sequence repeat layer to the model.

Parameters:

name: str

The name of this layer.

nrep: int

Number of repetitions of the input blob along the sequence axis.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

add_simple_rnn(name, W_h, W_x, b, hidden_size, input_size, activation, input_names, output_names, output_all=False, reverse_input=False)

Add a simple recurrent layer to the model.

Parameters:

name: str

The name of this layer.

W_h: numpy.array

Weights of the recurrent layer’s hidden state. Must be of shape (hidden_size, hidden_size).

W_x: numpy.array

Weights of the recurrent layer’s input. Must be of shape (hidden_size, input_size).

b: numpy.array | None

Bias of the recurrent layer’s output. If None, bias is ignored. Otherwise it must be of shape (hidden_size, ).

hidden_size: int

Number of hidden units. This is equal to the number of channels of output shape.

input_size: int

Number of channels of the input shape.

activation: str

Activation function name. Can be one of the following options: [‘RELU’, ‘TANH’, ‘SIGMOID’, ‘SCALED_TANH’, ‘SIGMOID_HARD’, ‘LINEAR’]. See add_activation for a more detailed description.

input_names: [str]

The input blob name list of this layer, in the order of [x, h_input].

output_names: [str]

The output blob name list of this layer, in the order of [y, h_output].

output_all: boolean

Whether the recurrent layer should output at every time step.

  • If False, the output is the result after the final state update.
  • If True, the output is a sequence, containing outputs at all time steps.

reverse_input: boolean

Whether the recurrent layer should process the input sequence in the reverse order.

  • If False, the input sequence order is not reversed.
  • If True, the input sequence order is reversed.
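
Examples

# Illustrative sketch; a simple RNN with 32 hidden units over a 64-channel
# input, outputting the whole sequence.
>>> import numpy as np
>>> h, x = 32, 64
>>> builder.add_simple_rnn(name='rnn_1', W_h=np.random.rand(h, h),
...                        W_x=np.random.rand(h, x), b=None,
...                        hidden_size=h, input_size=x, activation='TANH',
...                        input_names=['data', 'rnn_h_in'],
...                        output_names=['rnn_1_out', 'rnn_h_out'],
...                        output_all=True)
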
add_slice(name, input_name, output_name, axis, start_index=0, end_index=-1, stride=1)

Add a slice layer. Equivalent to the numpy slice [start_index:end_index:stride]: start_index is included, while end_index is exclusive.

Parameters:

name: str

The name of this layer.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

axis: str

axis along which input is sliced.

allowed values: ‘channel’, ‘height’, ‘width’

start_index: int

must be non-negative.

end_index: int

negative indexing is supported.

stride: int

must be positive.
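
Examples

# Illustrative sketch; keeps channels 0, 2, 4, ... of an assumed blob.
>>> builder.add_slice(name='slice_1', input_name='conv1_out',
...                   output_name='slice_1_out', axis='channel',
...                   start_index=0, end_index=-1, stride=2)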

add_softmax(name, input_name, output_name)

Add a softmax layer to the model.

Parameters:

name: str

The name of this layer.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

add_split(name, input_name, output_names)

Add a Split layer that uniformly splits the input along the channel dimension to produce multiple outputs.

Parameters:

name: str

The name of this layer.

input_name: str

The input blob name of this layer.

output_names: [str]

List of output blob names of this layer.

See also

add_elementwise
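
Examples

# Illustrative sketch; splits an assumed 32-channel blob into two
# 16-channel blobs.
>>> builder.add_split(name='split_1', input_name='conv1_out',
...                   output_names=['split_1_a', 'split_1_b'])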

add_unary(name, input_name, output_name, mode, alpha=1.0, shift=0, scale=1.0, epsilon=1e-06)

Add a Unary layer. Applies the specified function (mode) to all the elements of the input. Please see the UnaryFunctionLayerParams message in Core ML neural network protobuf for more information about the operation of this layer. Prior to the application of the function the input can be scaled and shifted by using the ‘scale’, ‘shift’ parameters.

Parameters:

name: str

The name of this layer.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

mode: str

Unary function. Allowed values: ‘sqrt’, ‘rsqrt’, ‘inverse’, ‘power’, ‘exp’, ‘log’, ‘abs’, ‘threshold’.

alpha: float

Constant used with modes ‘power’ and ‘threshold’.

shift, scale: float

input is modified by scale and shift prior to the application of the unary function.

epsilon: float

small bias to prevent division by zero.

See also

add_activation
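
Examples

# Illustrative sketch; computes log(2.0 * x + 1.0) element-wise on an
# assumed blob (the input is scaled and shifted before the function).
>>> builder.add_unary(name='log_1', input_name='conv1_out',
...                   output_name='log_1_out', mode='log',
...                   shift=1.0, scale=2.0)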

add_unilstm(name, W_h, W_x, b, hidden_size, input_size, input_names, output_names, inner_activation='SIGMOID', cell_state_update_activation='TANH', output_activation='TANH', peep=None, output_all=False, forget_bias=False, coupled_input_forget_gate=False, cell_clip_threshold=50000.0, reverse_input=False)

Add a Uni-directional LSTM layer to the model.

Parameters:

name: str

The name of this layer.

W_h: [numpy.array]

List of recursion weight matrices. The ordering is [R_i, R_f, R_z, R_o], where R_i, R_f, R_z, R_o are weight matrices at input gate, forget gate, cell gate and output gate. The shapes of these matrices are (hidden_size, hidden_size).

W_x: [numpy.array]

List of input weight matrices. The ordering is [W_i, W_f, W_z, W_o], where W_i, W_f, W_z, W_o are weight matrices at input gate, forget gate, cell gate and output gate. The shapes of these matrices are (hidden_size, input_size).

b: [numpy.array] | None

List of biases. The ordering is [b_i, b_f, b_z, b_o], where b_i, b_f, b_z, b_o are biases at input gate, forget gate, cell gate and output gate. If None, biases are ignored. Otherwise the shapes of the biases are (hidden_size, ).

hidden_size: int

Number of hidden units. This is equal to the number of channels of output shape.

input_size: int

Number of channels of the input shape.

input_names: [str]

The input blob name list of this layer, in the order of [x, h_input, c_input].

output_names: [str]

The output blob name list of this layer, in the order of [y, h_output, c_output].

inner_activation: str

Inner activation function used at the input and forget gates. Can be one of the following options: [‘RELU’, ‘TANH’, ‘SIGMOID’, ‘SCALED_TANH’, ‘SIGMOID_HARD’, ‘LINEAR’]. Defaults to ‘SIGMOID’.

cell_state_update_activation: str

Cell state update activation function used at the cell state update gate. Can be one of the following options: [‘RELU’, ‘TANH’, ‘SIGMOID’, ‘SCALED_TANH’, ‘SIGMOID_HARD’, ‘LINEAR’]. Defaults to ‘TANH’.

output_activation: str

Activation function used at the output gate. Can be one of the following options: [‘RELU’, ‘TANH’, ‘SIGMOID’, ‘SCALED_TANH’, ‘SIGMOID_HARD’, ‘LINEAR’]. Defaults to ‘TANH’.

peep: [numpy.array] | None

List of peephole vectors. The ordering is [p_i, p_f, p_o], where p_i, p_f, and p_o are peephole vectors at input gate, forget gate, output gate. The shapes of the peephole vectors are (hidden_size,).

output_all: boolean

Whether the LSTM layer should output at every time step.

  • If False, the output is the result after the final state update.
  • If True, the output is a sequence, containing outputs at all time steps.

forget_bias: boolean

If True, a vector of 1s is added to forget gate bias.

coupled_input_forget_gate: boolean

If True, the input gate and forget gate are coupled, i.e., the forget gate is not used.

cell_clip_threshold: float

The limit on the maximum and minimum values on the cell state. If not provided, it defaults to 50000.0.

reverse_input: boolean

Whether the LSTM layer should process the input sequence in the reverse order.

  • If False, the input sequence order is not reversed.
  • If True, the input sequence order is reversed.
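
Examples

# Illustrative sketch; an LSTM with 32 hidden units over a 64-channel
# input. The weight lists follow the documented [i, f, z, o] gate ordering;
# b=None means biases are ignored.
>>> import numpy as np
>>> h, x = 32, 64
>>> W_h = [np.random.rand(h, h) for _ in range(4)]
>>> W_x = [np.random.rand(h, x) for _ in range(4)]
>>> builder.add_unilstm(name='lstm_1', W_h=W_h, W_x=W_x, b=None,
...                     hidden_size=h, input_size=x,
...                     input_names=['data', 'lstm_h_in', 'lstm_c_in'],
...                     output_names=['lstm_1_out', 'lstm_h_out', 'lstm_c_out'])
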
add_upsample(name, scaling_factor_h, scaling_factor_w, input_name, output_name, mode='NN')

Add upsample layer to the model.

Parameters:

name: str

The name of this layer.

scaling_factor_h: int

Scaling factor on the vertical direction.

scaling_factor_w: int

Scaling factor on the horizontal direction.

input_name: str

The input blob name of this layer.

output_name: str

The output blob name of this layer.

mode: str

The following values are supported:

  • ‘NN’: nearest neighbour
  • ‘BILINEAR’: bilinear interpolation
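
Examples

# Illustrative sketch; doubles the spatial resolution of an assumed blob
# with nearest-neighbour interpolation.
>>> builder.add_upsample(name='up_1', scaling_factor_h=2, scaling_factor_w=2,
...                      input_name='conv1_out', output_name='up_1_out',
...                      mode='NN')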

set_class_labels(class_labels, predicted_feature_name='classLabel', prediction_blob='')

Set class labels to the model spec to make it a neural network classifier.

Parameters:

class_labels: list[int or str]

A list of integers or strings that map the index of the output of a neural network to labels in a classifier.

predicted_feature_name: str

Name of the output feature for the class labels exposed in the Core ML neural network classifier. Defaults to ‘classLabel’.

prediction_blob: str

If provided, then this is the name of the neural network blob which generates the probabilities for each class label (typically the output of a softmax layer). If not provided, then the last output layer is assumed.
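
Examples

# Illustrative sketch; marks the assumed 'probs' blob as the class
# probabilities of a two-class classifier.
>>> builder.set_class_labels(['cat', 'dog'], prediction_blob='probs')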

set_input(input_names, input_dims)

Set the inputs of the network spec.

Parameters:

input_names: [str]

List of input names of the network.

input_dims: [tuple]

List of input dimensions of the network. The ordering of input_dims is the same as input_names.

Examples

# Set the neural network spec inputs to be 3 dimensional vector data1 and
# 4 dimensional vector data2.
>>> builder.set_input(['data1', 'data2'], [(3,), (4,)])
set_output(output_names, output_dims)

Set the outputs of the network spec.

Parameters:

output_names: [str]

List of output names of the network.

output_dims: [tuple]

List of output dimensions of the network. The ordering of output_dims is the same as output_names.

Examples

# Set the neural network spec outputs to be 3 dimensional vector feature1 and
# 4 dimensional vector feature2.
>>> builder.set_output(['feature1', 'feature2'], [(3,), (4,)])
set_pre_processing_parameters(image_input_names=[], is_bgr=False, red_bias=0.0, green_bias=0.0, blue_bias=0.0, gray_bias=0.0, image_scale=1.0)

Add pre-processing parameters to the neural network object.

Parameters:

image_input_names: [str]

Names of input blobs that are images.

is_bgr: boolean | dict()

Image pixel order (RGB or BGR)

red_bias: float | dict()

Image re-centering parameter (red channel)

blue_bias: float | dict()

Image re-centering parameter (blue channel)

green_bias: float | dict()

Image re-centering parameter (green channel)

gray_bias: float | dict()

Image re-centering parameter (for grayscale images)

image_scale: float | dict()

Value by which to scale the images.
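
Examples

# Illustrative sketch; treats the assumed input 'data' as a BGR image and
# rescales pixels from [0, 255] to roughly [-1, 1].
>>> builder.set_pre_processing_parameters(image_input_names=['data'],
...                                       is_bgr=True, red_bias=-1.0,
...                                       green_bias=-1.0, blue_bias=-1.0,
...                                       image_scale=2.0/255.0)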