Ops#
- class pfl.internal.ops.framework_types.MLFramework(value)#
An enumeration of the machine-learning frameworks supported by pfl.
Selector#
Common ops#
- pfl.internal.ops.common_ops.all_reduce_metrics(metrics)#
Performs all reduce between workers on a Metrics object. If the current instance is not connected to a cluster, this will return the input unchanged (the identity).
When one worker calls this method, it will block until all_reduce_metrics has been called on all other worker instances as well.
- Parameters:
metrics – A Metrics object that contains metrics from training with the data that was assigned for the local worker.
- Returns:
A Metrics object where every element has been summed across all workers. The returned Metrics contains the same values on every worker.
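Example (a minimal sketch; local_metrics stands in for a hypothetical Metrics object produced by local training on this worker):

```python
from pfl.internal.ops.common_ops import all_reduce_metrics

# `local_metrics` is a hypothetical pfl Metrics object from this worker.
# The call blocks until every worker has called all_reduce_metrics.
summed_metrics = all_reduce_metrics(local_metrics)
# Every worker now holds identical, summed metric values.
```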
- pfl.internal.ops.common_ops.get_tf_major_version()#
- Return type:
int
- Returns:
The major version of the TensorFlow package installed, or 0 if TensorFlow is not installed.
- pfl.internal.ops.common_ops.get_pytorch_major_version()#
- Return type:
int
- Returns:
The major version of the PyTorch package installed, or 0 if PyTorch is not installed.
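Because both helpers return 0 when the framework is absent, they can drive simple backend dispatch; a minimal sketch:

```python
from pfl.internal.ops.common_ops import (get_pytorch_major_version,
                                         get_tf_major_version)

# A return value of 0 (falsy) means the framework is not installed.
if get_pytorch_major_version():
    backend = 'pytorch'
elif get_tf_major_version():
    backend = 'tensorflow'
else:
    backend = 'numpy'
```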
- pfl.internal.ops.common_ops.check_pfl_tree_installed()#
- Return type:
bool
- Returns:
True if pfl is set up to train trees. False otherwise.
- pfl.internal.ops.common_ops.is_pytest_running()#
- Returns:
True if pytest is currently running.
Distributed ops#
- pfl.internal.ops.distributed.horovod_is_active()#
- Return type:
bool
- Returns:
True if the program was launched with horovodrun.
- class pfl.internal.ops.distributed.DistributedContext#
Collection of properties and methods related to distributed training.
- abstract property local_rank: int#
The rank of the current process over all processes on the current machine.
- abstract property global_rank: int#
The rank of the current process over all processes on all machines.
- abstract property world_size: int#
The total number of processes over all machines.
- abstract property local_size: int#
The total number of processes on one machine.
- abstract all_reduce(tensors, average=False)#
Performs all reduce between processes on a list of tensors. When one process calls this method, it will block until all_reduce has been called on all other processes as well. Processes may be scattered across multiple workers.
- Parameters:
tensors (List[Tensor]) – A list of tensors to reduce between processes.
average (bool) – If False return sum, if True return the average.
- Return type:
List[Tensor]
- Returns:
A list of tensors, representing the reduced versions of the input parameter tensors.
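A minimal sketch using the single-process NotDistributedContext documented below, with NumPy arrays standing in for tensors (with one process, the reduction is the identity):

```python
import numpy as np
from pfl.internal.ops.distributed import NotDistributedContext

ctx = NotDistributedContext()
# Each process contributes its local tensors; here there is only one.
summed = ctx.all_reduce([np.ones(3), np.zeros(2)])
averaged = ctx.all_reduce([np.ones(3)], average=True)
```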
- distribute_range(value)#
Split range(value) among workers so that each worker gets a slice of approximately the same length.
- Example:
An input of 5 when using 2 workers will return range(0, 3) for worker 1 and range(3, 5) for worker 2.
- Parameters:
value (int) – The integer value to split.
- Return type:
Iterable
- Returns:
The split value for the current worker.
- distribute_value(value)#
Split an integer value among workers. Parameter value is interpreted as the number of units of work. Each worker gets its own integer. The integers assigned to all workers sum to value and are approximately equal.
- Example:
An input of 5 when using 2 workers will return 3 for worker 1 and 2 for worker 2.
- Parameters:
value (int) – The integer value to split.
- Return type:
int
- Returns:
The split value for the current worker.
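A minimal sketch of both methods using the single-process NotDistributedContext documented below; with a single worker, each call returns the full range or value:

```python
from pfl.internal.ops.distributed import NotDistributedContext

ctx = NotDistributedContext()
print(list(ctx.distribute_range(5)))  # [0, 1, 2, 3, 4] with one worker
print(ctx.distribute_value(5))        # 5: the single worker gets all units
```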
- class pfl.internal.ops.distributed.HorovodDistributedContext(hvd)#
Base class for distributed training operations with Horovod.
- property local_rank: int#
The rank of the current process over all processes on the current machine.
- property global_rank: int#
The rank of the current process over all processes on all machines.
- property world_size: int#
The total number of processes over all machines.
- property local_size: int#
The total number of processes on one machine.
- all_reduce(tensors, average=False)#
Performs all reduce between processes on a list of tensors. When one process calls this method, it will block until all_reduce has been called on all other processes as well. Processes may be scattered across multiple workers.
- Parameters:
tensors (List[Tensor]) – A list of tensors to reduce between processes.
average (bool) – If False return sum, if True return the average.
- Return type:
List[Tensor]
- Returns:
A list of tensors, representing the reduced versions of the input parameter tensors.
- class pfl.internal.ops.distributed.NotDistributedContext#
Single-process “distributed” context. Can be used to not do distributed training.
- property local_rank: int#
The rank of the current process over all processes on the current machine.
- property global_rank: int#
The rank of the current process over all processes on all machines.
- property world_size: int#
The total number of processes over all machines.
- property local_size: int#
The total number of processes on one machine.
- all_reduce(tensors, average=False)#
Performs all reduce between processes on a list of tensors. When one process calls this method, it will block until all_reduce has been called on all other processes as well. Processes may be scattered across multiple workers.
- Parameters:
tensors (List[Tensor]) – A list of tensors to reduce between processes.
average (bool) – If False return sum, if True return the average.
- Return type:
List[Tensor]
- Returns:
A list of tensors, representing the reduced versions of the input parameter tensors.
Numpy ops#
- class pfl.internal.ops.numpy_ops.NumpyHorovodDistributedContext(module_name)#
Distributed training operations for NumPy tensors using a Horovod backend. Initializing an instance of this class performs the Horovod setup.
- Parameters:
module_name (str) – The Horovod API to use. Most commonly ‘tensorflow’ or ‘pytorch’.
- class pfl.internal.ops.numpy_ops.NumpySeedScope(seed=None)#
Context manager for temporarily using another NumPy random state from the given seed.
- Parameters:
seed – The seed for the temporary random state.
- pfl.internal.ops.numpy_ops.get_shape(variable)#
Get the shape of a np.ndarray.
- Variable:
A np.ndarray.
- Returns:
A tuple representing the shape.
- pfl.internal.ops.numpy_ops.is_tensor(variable)#
Check whether the input is a Numpy array.
- pfl.internal.ops.numpy_ops.add_laplacian_noise(tensors, scale, seed)#
Add zero mean Laplacian noise to numpy arrays.
- Parameters:
tensors (List[ndarray]) – A list of numpy arrays to add noise to.
scale (float) – The noise scale b of the Laplacian noise.
seed (Optional[int]) – An integer for seed.
- Return type:
List[ndarray]
- Returns:
Same as tensors but with noise added.
- pfl.internal.ops.numpy_ops.add_gaussian_noise(tensors, stddev, seed)#
Add zero mean Gaussian noise to numpy arrays.
- Parameters:
tensors (List[ndarray]) – A list of numpy arrays to add noise to.
stddev (float) – Standard deviation of noise to add.
seed (Optional[int]) – An integer for seed.
- Return type:
List[ndarray]
- Returns:
Same as tensors but with noise added.
- pfl.internal.ops.numpy_ops.norm(tensor, order)#
Calculate the norm of a numpy array.
- Parameters:
tensor (ndarray) – A numpy array to calculate the norm for.
order – The order of the distance metric.
- Return type:
float
- Returns:
The norm.
- pfl.internal.ops.numpy_ops.global_norm(tensors, order)#
Calculate the norm of the concatenation of the arrays.
- Parameters:
tensors (List[ndarray]) – A list of numpy arrays to calculate global norm for.
order (float) – The order of the distance metric.
- Return type:
float
- Returns:
The global norm.
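Together with the noise helpers above, global_norm covers the usual clip-then-noise pattern; a minimal sketch (the clipping bound and noise scale are illustrative):

```python
import numpy as np
from pfl.internal.ops.numpy_ops import add_gaussian_noise, global_norm

arrays = [np.ones((2, 2)), np.full(3, 2.0)]
clip_bound = 1.0
# Scale all arrays down so their joint L2 norm is at most clip_bound.
scale = min(1.0, clip_bound / max(global_norm(arrays, order=2), 1e-12))
clipped = [a * scale for a in arrays]
noised = add_gaussian_noise(clipped, stddev=0.1, seed=7)
```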
- pfl.internal.ops.numpy_ops.flatten(tensors)#
Flatten a list of numpy arrays into a single vector.
- Parameters:
tensors (List[ndarray]) – A list of numpy arrays to flatten.
- Return type:
Tuple[ndarray, List[Tuple], List[Type]]
- Returns:
(vector, shapes, dtypes), where vector is the flattened vector, shapes is a list of shapes of the input arrays and dtypes is a list of types of the input arrays. shapes and dtypes can be used with the reshape function to recover the original list of weights.
- pfl.internal.ops.numpy_ops.reshape(vector, shapes, dtypes=None)#
Split and reshape a vector into a list of numpy arrays.
- Parameters:
vector (ndarray) – A 1-dimensional numpy array to split and reshape.
shapes (List[Tuple]) – A list of tuples of integers, representing the shapes of multiple target weights to construct.
dtypes (Optional[List[Type]]) – A list of types for the new weights.
- Return type:
List[ndarray]
- Returns:
A list of numpy arrays constructed from the inputs.
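flatten and reshape are designed to round-trip a list of weights; a minimal sketch:

```python
import numpy as np
from pfl.internal.ops.numpy_ops import flatten, reshape

weights = [np.zeros((2, 3)), np.arange(4, dtype=np.float32)]
vector, shapes, dtypes = flatten(weights)
# reshape recovers the original list of arrays from the flat vector.
restored = reshape(vector, shapes, dtypes)
assert all(np.array_equal(w, r) for w, r in zip(weights, restored))
```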
- pfl.internal.ops.numpy_ops.to_tensor(tensor, dtype='float32')#
Convert a numpy array to a numpy array, i.e. the identity in this case.
- Return type:
ndarray
- pfl.internal.ops.numpy_ops.to_numpy(tensor, dtype='float32')#
Convert a numpy array to a numpy array, i.e. the identity in this case.
- Return type:
ndarray
- pfl.internal.ops.numpy_ops.clone(tensor)#
Clone a numpy array.
- Return type:
ndarray
- pfl.internal.ops.numpy_ops.clone_variable(variable, name)#
Return a cloned copy of Numpy Array.
- Parameters:
variable (ndarray) – A np.ndarray.
name – An unused argument to match the signature of TensorFlow internal ops.
- Return type:
ndarray
- Returns:
A np.ndarray that is a cloned copy of variable.
- pfl.internal.ops.numpy_ops.assign_variable(reference, value)#
Assign value to reference variable.
- Parameters:
reference (ndarray) – A np.ndarray that will be assigned to value.
value (ndarray) – A np.ndarray whose value is assigned to reference.
- Return type:
None
- pfl.internal.ops.numpy_ops.exponential_moving_average_update(variables, ema_variables, decay)#
Perform one step of EMA update for a list of variables and a list of paired EMA variables. For each (variable, EMA variable) pair, the update is as follows:
ema_variable -= (1 - decay) * (ema_variable - variable).
- Parameters:
variables (List[ndarray]) – A list of np.ndarray representing the current values.
ema_variables (List[ndarray]) – A list of np.ndarray representing the EMA values to be updated.
decay (float) – A float defining the EMA decay rate.
- Return type:
None
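A minimal sketch of one EMA step (assuming, as the None return type suggests, that ema_variables is updated in place):

```python
import numpy as np
from pfl.internal.ops.numpy_ops import exponential_moving_average_update

variables = [np.array([1.0, 2.0])]
ema_variables = [np.array([0.0, 0.0])]
# One step with decay 0.9: ema <- ema - 0.1 * (ema - variable).
exponential_moving_average_update(variables, ema_variables, decay=0.9)
print(ema_variables[0])  # expected [0.1, 0.2]
```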
- pfl.internal.ops.numpy_ops.one_hot(indices, depth)#
One-hot encode indices to vector with depth dimension.
- Parameters:
indices (ndarray) – A vector of indices to be one-hot encoded.
depth (int) – The dimension of one-hot encoding.
- Return type:
ndarray
- Returns:
One-hot encoded vectors.
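A one-line example of the documented behaviour:

```python
import numpy as np
from pfl.internal.ops.numpy_ops import one_hot

print(one_hot(np.array([0, 2]), depth=3))
# expected: [[1, 0, 0],
#            [0, 0, 1]]
```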
- pfl.internal.ops.numpy_ops.concatenate(tensors, axis)#
Join a list of tensors along an existing axis.
- Parameters:
tensors (List[ndarray]) – List of tensors to be concatenated.
axis (int) – Axis to concatenate the tensors.
- Return type:
ndarray
- Returns:
A concatenated tensor.
PyTorch ops#
- pfl.internal.ops.pytorch_ops.setup_amp(amp_dtype, grad_scaling)#
Set up a torch.amp.autocast context and a torch.cuda.amp.GradScaler for PyTorch native mixed precision training. Gradient scaling is only used when training on CUDA.
- Parameters:
amp_dtype (Optional[dtype]) – An optional torch.dtype indicating the precision level. If set to None, mixed precision training is not enabled.
grad_scaling (bool) – Whether to turn on gradient scaling when training on CUDA.
- Return type:
Tuple[Optional[autocast], Optional[GradScaler]]
- Returns:
A tuple of torch.amp.autocast context and torch.cuda.amp.GradScaler.
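A minimal sketch of a mixed-precision train step using the returned pair; model, optimizer, loss_fn, and batch are hypothetical placeholders:

```python
import torch
from pfl.internal.ops.pytorch_ops import setup_amp

amp_context, grad_scaler = setup_amp(torch.float16, grad_scaling=True)

def train_step(model, optimizer, loss_fn, batch):
    optimizer.zero_grad()
    with amp_context:                # forward pass in reduced precision
        loss = loss_fn(model(batch))
    if grad_scaler is not None:      # scaler is only returned on CUDA
        grad_scaler.scale(loss).backward()
        grad_scaler.step(optimizer)
        grad_scaler.update()
    else:
        loss.backward()
        optimizer.step()
```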
- class pfl.internal.ops.pytorch_ops.PyTorchDistributedContext#
Distributed training operations for PyTorch tensors using torch.distributed backend. Initializing an instance of this class starts the PyTorch server on each worker and synchronizes. Only supports single process, single GPU, multi-worker training.
- property local_rank: int#
The rank of the current process over all processes on the current machine.
- property global_rank: int#
The rank of the current process over all processes on all machines.
- property world_size: int#
The total number of processes over all machines.
- property local_size: int#
The total number of processes on one machine.
- all_reduce(tensors, average=False)#
Performs all reduce between processes on a list of tensors. When one process calls this method, it will block until all_reduce has been called on all other processes as well. Processes may be scattered across multiple workers.
- Parameters:
tensors (List[Tensor]) – A list of tensors to reduce between processes.
average (bool) – If False return sum, if True return the average.
- Return type:
List[Tensor]
- Returns:
A list of tensors, representing the reduced versions of the input parameter tensors.
- class pfl.internal.ops.pytorch_ops.PyTorchHorovodDistributedContext#
Distributed training operations for PyTorch tensors using a Horovod backend. Initializing an instance of this class performs the Horovod setup.
- pfl.internal.ops.pytorch_ops.get_shape(variable)#
Get the shape of a PyTorch variable.
- Parameters:
variable – A PyTorch tensor.
- Returns:
A tuple representing the shape.
- pfl.internal.ops.pytorch_ops.is_tensor(variable)#
Check whether the input is a PyTorch tensor.
- pfl.internal.ops.pytorch_ops.flatten(tensors)#
Flatten a list of PyTorch tensors into a single vector.
- Parameters:
tensors (List[Tensor]) – A list of tensors to flatten.
- Return type:
Tuple[Tensor, List[Tuple], List[dtype]]
- Returns:
(vector, shapes, dtypes), where vector is the flattened tensor, shapes is a list of shapes of the input arrays and dtypes is a list of types of the input arrays. shapes and dtypes can be used with the reshape function to recover the original list of weights.
- pfl.internal.ops.pytorch_ops.reshape(vector, shapes, dtypes=None)#
Split and reshape a vector into a list of PyTorch tensors.
- Parameters:
vector (Tensor) – A 1-dimensional tensor to split and reshape.
shapes (List[Tuple]) – A list of tuples of integers, representing the shapes of multiple target weights to construct.
dtypes (Optional[List[dtype]]) – A list of types for the new weights.
- Return type:
List[Tensor]
- Returns:
A list of PyTorch tensors constructed from the inputs.
- pfl.internal.ops.pytorch_ops.simulate_bfloat16_transport(ndarray)#
Convert a numpy array to bfloat16 and then back to float32
- class pfl.internal.ops.pytorch_ops.PyTorchSeedScope(seed=None)#
Context manager for temporarily using another PyTorch random state from the given seed.
- Parameters:
seed – The seed for the temporary random state.
- pfl.internal.ops.pytorch_ops.add_gaussian_noise(tensors, stddev, seed)#
Add zero mean Gaussian noise to tensors. Transferring data to the GPU, adding noise there, and converting back to NumPy is faster than np.random.normal.
- Parameters:
tensors (List[ndarray]) – A list of tensors to add noise to.
stddev (float) – Standard deviation of noise to add.
seed (Optional[int]) – An integer for seed.
- Return type:
List[Tensor]
- Returns:
Same as tensors but with noise added.
- pfl.internal.ops.pytorch_ops.add_laplacian_noise(tensors, scale, seed)#
Add zero mean Laplacian noise to tensors.
- Parameters:
tensors (List[Tensor]) – A list of tensors to add noise to.
scale (float) – Scaling factor of noise to add.
seed (Optional[int]) – An integer for seed.
- Return type:
List[Tensor]
- Returns:
Same as tensors but with noise added.
- pfl.internal.ops.pytorch_ops.clone(tensor)#
Make a copy of the input tensor.
- Return type:
Tensor
- pfl.internal.ops.pytorch_ops.norm(tensor, order)#
Calculate the norm of a PyTorch tensor.
- Parameters:
tensor (Tensor) – A tensor to calculate the norm for.
order – The order of the distance metric (norm).
- Return type:
Tensor
- Returns:
The norm.
- pfl.internal.ops.pytorch_ops.global_norm(tensors, order)#
Calculate the norm of the concatenation of the arrays.
- Parameters:
tensors (List[Tensor]) – A list of tensors to calculate the global norm for.
order (float) – The order of the distance metric.
- Return type:
Tensor
- Returns:
The global norm.
- pfl.internal.ops.pytorch_ops.to_numpy(tensor)#
Convert a PyTorch tensor to a numpy array.
- Return type:
ndarray
- pfl.internal.ops.pytorch_ops.to_tensor(values, dtype='float32')#
Convert a list of values or a numpy array to a float32 Torch tensor.
- Return type:
Tensor
- pfl.internal.ops.pytorch_ops.clone_variable(variable, name)#
Return a cloned copy of PyTorch tensor.
- Parameters:
variable (Tensor) – A torch.Tensor.
name – An unused argument to match the signature of TensorFlow internal ops.
- Return type:
Tensor
- Returns:
A torch.Tensor that is a cloned copy of variable.
- pfl.internal.ops.pytorch_ops.assign_variable(reference, value)#
Assign value to reference variable.
- Parameters:
reference (Tensor) – A torch.Tensor that will be assigned to value.
value (Tensor) – A torch.Tensor whose value is assigned to reference.
- Return type:
None
- pfl.internal.ops.pytorch_ops.exponential_moving_average_update(variables, ema_variables, decay)#
Perform one step of EMA update for a list of variables and a list of paired EMA variables. For each (variable, EMA variable) pair, the update is as follows:
ema_variable -= (1 - decay) * (ema_variable - variable).
- Parameters:
variables (List[Tensor]) – A list of torch.Tensor representing the current values.
ema_variables (List[Tensor]) – A list of torch.Tensor representing the EMA values to be updated.
decay (float) – A float defining the EMA decay rate.
- Return type:
None
- pfl.internal.ops.pytorch_ops.one_hot(indices, depth)#
One-hot encode indices to vector with depth dimension.
- Parameters:
indices (Tensor) – A vector of indices to be one-hot encoded.
depth (int) – The dimension of one-hot encoding.
- Return type:
Tensor
- Returns:
One-hot encoded vectors.
- pfl.internal.ops.pytorch_ops.concatenate(tensors, axis)#
Join a list of tensors along an existing axis.
- Parameters:
tensors (List[Tensor]) – List of tensors to be concatenated.
axis (int) – Axis to concatenate the tensors.
- Return type:
Tensor
- Returns:
A concatenated tensor.
- class pfl.internal.ops.pytorch_ops.GradAccumulationState(num_steps, accumulation_steps)#
Track gradient accumulation during local training.
- property optimizer_should_update: bool#
True when the optimizer should apply the accumulated gradients, i.e. every accumulation_steps steps or on the last local step.
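A minimal sketch of the accumulation rule this property encodes, written without the class's internal bookkeeping; model, batch, loss_fn, and optimizer are hypothetical placeholders:

```python
# `num_steps` local steps, applying gradients every `accumulation_steps`
# steps or at the very last step.
num_steps, accumulation_steps = 10, 4
for step in range(1, num_steps + 1):
    loss = loss_fn(model(batch))            # hypothetical forward pass
    (loss / accumulation_steps).backward()  # accumulate scaled gradients
    if step % accumulation_steps == 0 or step == num_steps:
        optimizer.step()
        optimizer.zero_grad()
```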
- class pfl.internal.ops.pytorch_ops.PyTorchTrainStepArgs(amp_context, grad_scaler, max_grad_norm, grad_accumulation_state)#
Common args used by different local training algorithms in PyTorch.
TensorFlow ops#
- pfl.internal.ops.tensorflow_ops.try_cached_call(fn, key, *args, **kwargs)#
Call the graph of fn which is a tf.Function. If the graph of fn exists in pfl’s cache, use the cached graph. This will result in significant speedups in TF>2.3 because this is bypassing TensorFlow’s graph cache in tf.function (which has become incredibly slow).
This feature can be disabled with the environment variable PFL_GRAPH_CACHE=false and should be done if one recognizes that pfl’s graph cache is regenerated too much (because it is not very sophisticated yet).
- Parameters:
fn – A function decorated with tf.Function.
key – A key for caching the graph of fn.
args – Arguments for calling fn.
kwargs – Keyword arguments for calling fn.
- Returns:
The returned value from fn.
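A minimal sketch of a cached call; the function and cache key 'scale_fn' are illustrative:

```python
import tensorflow as tf
from pfl.internal.ops.tensorflow_ops import try_cached_call

@tf.function
def scale(tensor, factor):
    return tensor * factor

# The key identifies the cached graph; reuse the same key for the same fn.
result = try_cached_call(scale, 'scale_fn',
                         tf.constant([1.0, 2.0]), tf.constant(3.0))
```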
- class pfl.internal.ops.tensorflow_ops.TFDistributedContext#
Distributed training operations for TF tensors using tensorflow.distribute backend.
Initializing an instance of this class starts the TF servers and a distribution strategy that waits for synchronisation. If using distributed simulations, initialize a MultiWorkerMirroredStrategy, otherwise initialize a OneDeviceStrategy.
Only supports single process, single GPU, multi-worker training.
- property local_rank: int#
The rank of the current process over all processes on the current machine.
- property global_rank: int#
The rank of the current process over all processes on all machines.
- property world_size: int#
The total number of processes over all machines.
- property local_size: int#
The total number of processes on one machine.
- all_reduce(tensors, average=False)#
Performs all reduce between processes on a list of tensors. When one process calls this method, it will block until all_reduce has been called on all other processes as well. Processes may be scattered across multiple workers.
- Parameters:
tensors (List[Tensor]) – A list of tensors to reduce between processes.
average (bool) – If False return sum, if True return the average.
- Return type:
List[Tensor]
- Returns:
A list of tensors, representing the reduced versions of the input parameter tensors.
- class pfl.internal.ops.tensorflow_ops.TFHorovodDistributedContext#
Distributed training operations for TF tensors using a Horovod backend. Initializing an instance of this class performs the Horovod setup.
- pfl.internal.ops.tensorflow_ops.get_shape(variable)#
Get the shape of a TensorFlow variable.
- Variable:
A tf.Variable.
- Returns:
A tuple representing the shape.
- pfl.internal.ops.tensorflow_ops.is_tensor(variable)#
Check whether the input is a TensorFlow tensor or variable.
- pfl.internal.ops.tensorflow_ops.simulate_bfloat16_transport(tensor)#
Convert a tensor to bfloat16 and then back to float32
- pfl.internal.ops.tensorflow_ops.add_gaussian_noise(tensors, stddev, seed)#
Add zero mean Gaussian noise to tensors.
- Parameters:
tensors (List[Tensor]) – A list of tensors to add noise to.
stddev (float) – Standard deviation of noise to add.
seed (Optional[int]) – An integer for seed.
- Return type:
List[Tensor]
- Returns:
Same as tensors but with noise added.
- pfl.internal.ops.tensorflow_ops.add_laplacian_noise(tensors, scale, seed)#
Add zero mean Laplacian noise to tensors.
- Parameters:
tensors (List[Tensor]) – A list of tensors to add noise to.
scale (float) – Scaling factor of noise to add.
seed (Optional[int]) – An integer for seed.
- Return type:
List[Tensor]
- Returns:
Same as tensors but with noise added.
- pfl.internal.ops.tensorflow_ops.clone(tensor)#
Make a copy of the input tensor.
- Return type:
Tensor
- pfl.internal.ops.tensorflow_ops.flatten(tensors)#
Flatten a list of tensors into a single vector.
- Parameters:
tensors (List[Tensor]) – A list of tensors to flatten.
- Return type:
Tuple[Tensor, List[Tuple], List[Type]]
- Returns:
(vector, shapes, dtypes), where vector is the flattened tensor, shapes is a list of shapes of the input arrays and dtypes is a list of types of the input arrays. shapes and dtypes can be used with the reshape function to recover the original list of weights.
- pfl.internal.ops.tensorflow_ops.reshape(vector, shapes, dtypes=None)#
Split and reshape a vector into a list of TF tensors.
- Parameters:
vector (Tensor) – A 1-dimensional tensor to split and reshape.
shapes (List[Tuple]) – A list of tuples of integers, representing the shapes of multiple target weights to construct.
dtypes (Optional[List[Type]]) – A list of types for the new weights.
- Return type:
List[Tensor]
- Returns:
A list of TF tensors constructed from the inputs.
- pfl.internal.ops.tensorflow_ops.norm(tensor, order)#
Calculate the norm of a tensor.
- Parameters:
tensor (Tensor) – A tensor to calculate the norm for.
order – The order of the distance metric (norm).
- Return type:
Tensor
- Returns:
The norm.
- pfl.internal.ops.tensorflow_ops.global_norm(tensors, order)#
Calculate the norm of the concatenation of the arrays.
- Parameters:
tensors (List[ndarray]) – A list of numpy arrays to calculate global norm for.
order (float) – The order of the distance metric.
- Return type:
Tensor
- Returns:
The global norm.
- pfl.internal.ops.tensorflow_ops.to_numpy(tensor)#
Convert a tensor to a numpy array.
- Return type:
ndarray
- pfl.internal.ops.tensorflow_ops.to_tensor(values, dtype='float32')#
Convert a list of values or a numpy array to a TF tensor.
- Return type:
Tensor
- pfl.internal.ops.tensorflow_ops.clone_variable(variable, name)#
Return a cloned copy of TensorFlow variable.
- Parameters:
variable (Variable) – A tf.Variable.
name (str) – A str name for the cloned variable.
- Return type:
Variable
- Returns:
A tf.Variable that is a cloned copy of variable.
- pfl.internal.ops.tensorflow_ops.assign_variable(reference, value)#
Assign value to reference variable.
- Parameters:
reference (Variable) – A tf.Variable that will be assigned to value.
value (Variable) – A tf.Variable whose value is assigned to reference.
- Return type:
None
- pfl.internal.ops.tensorflow_ops.exponential_moving_average_update(variables, ema_variables, decay)#
Perform one step of EMA update for a list of variables and a list of paired EMA variables. For each (variable, EMA variable) pair, the update is as follows:
ema_variable -= (1 - decay) * (ema_variable - variable).
- Parameters:
variables (List[Variable]) – A list of tf.Variable representing the current values.
ema_variables (List[Variable]) – A list of tf.Variable representing the EMA values to be updated.
decay (float) – A float defining the EMA decay rate.
- Return type:
None
- pfl.internal.ops.tensorflow_ops.one_hot(indices, depth)#
One-hot encode indices to vector with depth dimension.
- Parameters:
indices (Tensor) – A vector of indices to be one-hot encoded.
depth (int) – The dimension of one-hot encoding.
- Return type:
ndarray
- Returns:
One-hot encoded vectors.
- pfl.internal.ops.tensorflow_ops.concatenate(tensors, axis)#
Join a list of tensors along an existing axis.
- Parameters:
tensors (List[Tensor]) – List of tensors to be concatenated.
axis (int) – Axis to concatenate the tensors.
- Return type:
Tensor
- Returns:
A concatenated tensor.
- class pfl.internal.ops.tensorflow_ops.KerasMetricValue(keras_metric, labels=None, predictions=None, state=None)#
Wrapper for representing a tf.keras.metrics.Metric as a MetricValue to be compatible with the pfl framework.
- Parameters:
keras_metric – The Keras metric to use for accumulating measurements of a metric. Keras metrics are mutable, but KerasMetricValue is not.
labels – Ground-truth labels that are used with predictions to set the state of the metric value.
predictions – labels and predictions are used to set the state of the metric value. Unlike tf.keras.metrics.Metric, the state doesn’t change. You should instead accumulate a metric value with addition of two KerasMetricValue objects.
state – Specify the state of keras_metric directly instead of generating it from labels and predictions. Don’t set labels and predictions if state is set.
- property overall_value#
Return the overall value, e.g. an average or a total.
- to_vector()#
Get a vector representation of this metric value, with dtype=float32. Summing two vectors in this space must be equivalent to summing the two original objects.
- Return type:
ndarray
- from_vector(vector)#
Create a new metric value of this class from a vector representation.
Note that this is a method on an object of this class, since it is possible that runtime attributes that do not change with addition are not serialized.
- Return type: