Bridges#

Bridge base classes#

This module contains abstract interfaces that act as bridges between framework-agnostic code and specific deep learning frameworks.

At a minimum, extending CommonFrameworkBridge is required for simulations to support a deep learning framework. Additional bridges can be implemented to support further algorithms and deep learning frameworks.

This is similar to pfl.internal.ops, but distinct for these reasons:

  • The ops module is the lowest layer of primitive framework-specific code and can be injected anywhere in the codebase. It does not depend on any other pfl modules.

  • This module plays a similar role to the ops module, but may depend on pfl data structures such as Statistics, Metrics, Dataset and Model. Because the ops module has no pfl components as dependencies, it can be used to implement those data structures, whereas the bridges in this module are reserved for higher-level components that do not reference it themselves, e.g. Algorithm and Privacy mechanism; see the sketch below.
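
For illustration, here is a minimal sketch of this layering; the names flatten and MyAlgorithmBridge are hypothetical and only show which dependencies each layer may have.

    from abc import ABC, abstractmethod

    from pfl.metrics import Metrics
    from pfl.model.base import StatefulModel


    # ops level: pure framework primitives, no pfl dependencies.
    def flatten(tensors):
        # Would only use e.g. torch/tf/numpy here.
        ...


    # bridge level: may depend on pfl data structures such as
    # StatefulModel and Metrics, but never on Algorithm or a
    # privacy mechanism.
    class MyAlgorithmBridge(ABC):

        @staticmethod
        @abstractmethod
        def do_local_training(model: StatefulModel) -> Metrics:
            ...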

class pfl.internal.bridge.base.CommonFrameworkBridge(*args, **kwargs)#

Functions that need framework-specific implementations and are required for rudimentary support of a deep learning framework in pfl. All bridges other than this interface are optional and only needed to support particular algorithms.

static save_state(state, path)#

Save state to disk at the given path.

static load_state(path)#

Load state from disk at the given path.
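
As a sketch, a minimal bridge for a hypothetical framework whose state is pickle-serializable could look as follows; the concrete bridges below use each framework's own checkpointing utilities instead.

    import pickle

    from pfl.internal.bridge.base import CommonFrameworkBridge


    class PickleCommonBridge(CommonFrameworkBridge):
        # Minimal sketch: persist state with pickle.

        @staticmethod
        def save_state(state, path):
            with open(path, 'wb') as f:
                pickle.dump(state, f)

        @staticmethod
        def load_state(path):
            with open(path, 'rb') as f:
                return pickle.load(f)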

class pfl.internal.bridge.base.SGDFrameworkBridge(*args, **kwargs)#

Interface for functions that manipulate the model using stochastic gradient descent and need framework-specific implementations.

static do_sgd(model, user_dataset, train_params)#

Do multiple epochs of SGD with the given input data.

Parameters:
  • model (StatefulModel) – The model to train.

  • user_dataset (AbstractDataset) – The dataset to train on.

  • train_params (ModelHyperParams) – An instance of ModelHyperParams containing configuration for training.

Return type:

None
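
A framework-neutral sketch of what a concrete do_sgd implements is shown below; train_step stands in for the framework-specific forward/backward/optimizer-step call, and the hyperparameter field names are assumptions for illustration.

    def do_sgd(model, user_dataset, train_params):
        # One pass below is one local epoch over the user's data.
        for _ in range(train_params.local_num_epochs):
            for batch in user_dataset.iter(train_params.local_batch_size):
                # Framework-specific: forward pass, backward pass,
                # then one optimizer step on this batch.
                train_step(model, batch)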

class pfl.internal.bridge.base.FedProxFrameworkBridge(*args, **kwargs)#

Interface for implementing the FedProx algorithm, by T. Li et al. - Federated Optimization in Heterogeneous Networks (https://arxiv.org/pdf/1812.06127.pdf), for a particular deep learning framework.

static do_proximal_sgd(model, user_dataset, train_params, mu)#

Do multiple local epochs of SGD with the FedProx proximal term added to the loss (Equation 2 of the paper).

Return type:

None
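
For reference, Equation 2 of the FedProx paper defines the local objective this method minimizes: user $k$ approximately solves $\min_w h_k(w; w^t) = F_k(w) + \frac{\mu}{2} \lVert w - w^t \rVert^2$, where $w^t$ is the global model at the start of the round and $\mu$ scales the proximal term.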

class pfl.internal.bridge.base.SCAFFOLDFrameworkBridge(*args, **kwargs)#

Interface for implementing the SCAFFOLD algorithm, by S. P. Karimireddy et al. - SCAFFOLD: Stochastic Controlled Averaging for Federated Learning (https://proceedings.mlr.press/v119/karimireddy20a/karimireddy20a.pdf), for a particular deep learning framework.

static do_control_variate_sgd(model, user_dataset, train_params, local_c, server_c)#

Do multiple local epochs of SGD with the local control variate ($c_i$) and the server control variate ($c$); see Algorithm 1 of the paper.

Return type:

None
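
For reference, each local step in Algorithm 1 of the SCAFFOLD paper replaces the plain SGD update with the variance-corrected update $w \leftarrow w - \eta\,(\nabla F_i(w) - c_i + c)$, where $\eta$ is the local learning rate.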

class pfl.internal.bridge.base.FTRLFrameworkBridge(*args, **kwargs)#

Interface for implementing the banded matrix factorization of the FTRL mechanism, by Choquette-Choo et al. - (Amplified) Banded Matrix Factorization: A unified approach to private training (https://arxiv.org/pdf/2306.08153.pdf), for a particular deep learning framework.

static loss_and_gradient(A, X, mask)#

Computes the loss $\mathrm{tr}[A^T A X^{-1}]$ and the associated gradient $dX = -X^{-1} A^T A X^{-1}$ from the optimization problem in Equation 6 of https://arxiv.org/pdf/2306.08153.pdf.

Return type:

Tuple[Tensor, Tensor]
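
A minimal numpy sketch of this computation, assuming X is symmetric positive definite and that mask zeroes out gradient entries outside the allowed support:

    import numpy as np

    def loss_and_gradient(A, X, mask):
        X_inv = np.linalg.inv(X)
        # Loss: tr[A^T A X^{-1}].
        loss = np.trace(A.T @ A @ X_inv)
        # Gradient: d tr[A^T A X^{-1}] / dX = -X^{-1} A^T A X^{-1}.
        dX = -X_inv @ A.T @ A @ X_inv
        return loss, dX * mask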

static lbfgs_direction(X, dX, prev_X, prev_dX)#

Given the current and previous iterates (X and prev_X) and the current and previous gradients (dX and prev_dX), compute a search direction (Z) according to the L-BFGS update rule.

Return type:

Tensor
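
A hedged numpy sketch of a memory-one L-BFGS direction computed from these four quantities; the bridges' actual update may differ in details such as damping or scaling.

    import numpy as np

    def lbfgs_direction(X, dX, prev_X, prev_dX):
        s = (X - prev_X).ravel()    # iterate difference
        y = (dX - prev_dX).ravel()  # gradient difference
        q = dX.ravel().copy()
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        q *= (s @ y) / (y @ y)      # initial Hessian scaling
        beta = rho * (y @ q)
        q += (alpha - beta) * s
        return -q.reshape(X.shape)  # search direction Z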

static terminate_fn(dX)#

Criterion to terminate optimization based on dX.

Return type:

bool
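
A plausible sketch, assuming a fixed tolerance on the gradient norm; the actual criterion and threshold are bridge-specific.

    import numpy as np

    def terminate_fn(dX):
        # Stop once the gradient is numerically negligible.
        return bool(np.linalg.norm(dX) < 1e-6)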

Bridge Factory#

class pfl.internal.bridge.factory.FrameworkBridgeFactory#

A collection of bridges to framework-specific implementations of several algorithms. The bridge returned depends on the deep learning framework in use. This way, framework-specific code can be injected into an algorithm while keeping a single implementation of each algorithm in the public interface, e.g. one public FedAvg class instead of one each for TF, PyTorch, etc.

Each method returns a class with utility functions for a particular algorithm.
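
The usage pattern looks roughly like the sketch below; the accessor name sgd_bridge is illustrative of the pattern and may not match the factory's actual method names.

    from pfl.internal.bridge.factory import FrameworkBridgeFactory

    def train_one_user(model, user_dataset, train_params):
        # Resolves to PyTorchSGDBridge, TFSGDBridge, etc.,
        # depending on the deep learning framework in use.
        bridge = FrameworkBridgeFactory.sgd_bridge()
        bridge.do_sgd(model, user_dataset, train_params)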

Numpy Bridges#

class pfl.internal.bridge.numpy.common.NumpyCommonBridge(*args, **kwargs)#
static save_state(state, path)#

Save state to disk at the given path.

static load_state(path)#

Load state from disk at the given path.

PyTorch Bridges#

class pfl.internal.bridge.pytorch.common.PyTorchCommonBridge(*args, **kwargs)#
static save_state(state, path)#

Save state to disk at the given path.

static load_state(path)#

Load state from disk at the given path.

Primal optimization algorithms for multi-epoch matrix factorization. Reference: https://github.com/google-research/federated/blob/master/multi_epoch_dp_matrix_factorization/multiple_participations/primal_optimization.py

class pfl.internal.bridge.pytorch.ftrl.PyTorchFTRLBridge(*args, **kwargs)#
static loss_and_gradient(A, X, mask)#

Computes the loss $\mathrm{tr}[A^T A X^{-1}]$ and the associated gradient $dX = -X^{-1} A^T A X^{-1}$ from the optimization problem in Equation 6 of https://arxiv.org/pdf/2306.08153.pdf.

Return type:

Tuple[Tensor, Tensor]

static lbfgs_direction(X, dX, prev_X, prev_dX)#

Given the current and previous iterates (X and prev_X) and the current and previous gradients (dX and prev_dX), compute a search direction (Z) according to the L-BFGS update rule.

Return type:

Tensor

static terminate_fn(dX)#

Criterion to terminate optimization based on dX.

Return type:

bool

class pfl.internal.bridge.pytorch.proximal.PyTorchFedProxBridge(*args, **kwargs)#

Concrete implementation of FedProx utilities in PyTorch, used by the FedProx algorithm.

static do_proximal_sgd(model, user_dataset, train_params, mu)#

Do multiple local epochs of SGD with the FedProx proximal term added to the loss (Equation 2 of the paper).

Return type:

None
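
A hedged PyTorch sketch of the proximal term this bridge adds to each local loss; global_params are the weights at the start of the round, kept frozen during local training.

    import torch

    def proximal_term(model_params, global_params, mu):
        # (mu / 2) * ||w - w_t||^2, summed over all parameter tensors.
        penalty = 0.0
        for w, w_t in zip(model_params, global_params):
            penalty = penalty + torch.sum((w - w_t.detach()) ** 2)
        return 0.5 * mu * penalty

    # loss = task_loss + proximal_term(model.parameters(), frozen_params, mu)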

class pfl.internal.bridge.pytorch.sgd.PyTorchSGDBridge(*args, **kwargs)#

Concrete PyTorch implementations of utilities for stochastic gradient descent.

static do_sgd(model, user_dataset, train_params)#

Do multiple epochs of SGD with the given input data.

Parameters:
  • model (PyTorchModel) – The model to train.

  • user_dataset (AbstractDataset) – The dataset to train on.

  • train_params (NNTrainHyperParams) – An instance of NNTrainHyperParams containing configuration for training.

Return type:

None
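
A simplified sketch of the local loop a concrete do_sgd runs in PyTorch; the batch unpacking and loss choice here are assumptions for illustration, not the bridge's actual code.

    import torch

    def local_sgd(module, batches, num_epochs, learning_rate):
        optimizer = torch.optim.SGD(module.parameters(), lr=learning_rate)
        for _ in range(num_epochs):
            for inputs, targets in batches:
                optimizer.zero_grad()
                loss = torch.nn.functional.cross_entropy(
                    module(inputs), targets)
                loss.backward()
                optimizer.step()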

TensorFlow Bridges#

pfl.internal.bridge.tensorflow.common.get_or_make_tf_function(model, fn)#

Look up the tf.function for fn in the cache, or create it. One graph per model is created, keyed by the model's unique uuid.
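
A minimal sketch of this caching pattern, keyed on the model's uuid (per the description above) so that each model gets its own traced graph; the real cache layout is an implementation detail.

    import tensorflow as tf

    _function_cache = {}

    def get_or_make_tf_function(model, fn):
        # One traced tf.function per (model, function) pair.
        key = (model.uuid, fn.__name__)
        if key not in _function_cache:
            _function_cache[key] = tf.function(fn)
        return _function_cache[key]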

class pfl.internal.bridge.tensorflow.common.TFCommonBridge(*args, **kwargs)#
static save_state(state, path)#

Save state to disk at the given path.

static load_state(path)#

Load state from disk at the given path.

Primal optimization algorithms for multi-epoch matrix factorization. Reference: https://github.com/google-research/federated/blob/master/multi_epoch_dp_matrix_factorization/multiple_participations/primal_optimization.py

class pfl.internal.bridge.tensorflow.ftrl.TFFTRLBridge(*args, **kwargs)#
static loss_and_gradient(A, X, mask)#

Computes the loss $\mathrm{tr}[A^T A X^{-1}]$ and the associated gradient $dX = -X^{-1} A^T A X^{-1}$ from the optimization problem in Equation 6 of https://arxiv.org/pdf/2306.08153.pdf.

Return type:

Tuple[Tensor, Tensor]

static lbfgs_direction(X, dX, prev_X, prev_dX)#

Given the current and previous iterates (X and prev_X) and the current and previous gradients (dX and prev_dX), compute a search direction (Z) according to the L-BFGS update rule.

Return type:

Tensor

static terminate_fn(dX)#

Criterion to terminate optimization based on dX.

Return type:

bool

class pfl.internal.bridge.tensorflow.proximal.TFFedProxBridge(*args, **kwargs)#

Concrete implementation of FedProx utilities in TF2, used by the FedProx algorithm.

static do_proximal_sgd(model, user_dataset, train_params, mu)#

Do multiple local epochs of SGD with the FedProx proximal term added to the loss (Equation 2 of the paper).

Return type:

None

class pfl.internal.bridge.tensorflow.sgd.TFSGDBridge(*args, **kwargs)#

Concrete TF implementations of utilities for stochastic gradient descent.

static do_sgd(model, user_dataset, train_params)#

Do multiple epochs of SGD with the given input data.

Parameters:
  • model (TFModel) – The model to train.

  • user_dataset (AbstractDataset) – The dataset to train on.

  • train_params (NNTrainHyperParams) – An instance of NNTrainHyperParams containing configuration for training.

Return type:

None
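
A simplified TF2 sketch of the kind of training step do_sgd performs; the loss and batch structure are assumptions for illustration.

    import tensorflow as tf

    @tf.function
    def sgd_step(keras_model, optimizer, inputs, targets):
        with tf.GradientTape() as tape:
            logits = keras_model(inputs, training=True)
            loss = tf.reduce_mean(
                tf.keras.losses.sparse_categorical_crossentropy(
                    targets, logits, from_logits=True))
        grads = tape.gradient(loss, keras_model.trainable_variables)
        optimizer.apply_gradients(
            zip(grads, keras_model.trainable_variables))
        return loss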