Differential privacy#

Abstract base classes#

Apply differential privacy to statistics.

class pfl.privacy.privacy_mechanism.PrivacyMetricName(description, is_local_privacy, on_summed_stats=False)#

A structured name for privacy metrics which includes whether it was generated using a local privacy mechanism or central privacy mechanism.

Parameters:
  • description (str) – The metric name represented as a string.

  • is_local_privacy (bool) – True if metric is related to local DP, False means central DP.

  • on_summed_stats (bool) – True if metric is calculated on summed stats. Usually only true for the noise operation of central DP.
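
A minimal construction sketch (the description strings are arbitrary illustrations):

from pfl.privacy.privacy_mechanism import PrivacyMetricName

# Metric produced by a local DP mechanism on one user's statistics.
local_name = PrivacyMetricName('local DP | l1 norm bound', is_local_privacy=True)
# Metric produced by the noise step of central DP on the summed statistics.
central_name = PrivacyMetricName('central DP | noise stddev',
                                 is_local_privacy=False,
                                 on_summed_stats=True)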

class pfl.privacy.privacy_mechanism.PrivacyMechanism#

Base class for privacy mechanisms.

class pfl.privacy.privacy_mechanism.LocalPrivacyMechanism#

Base class for mechanisms that convert statistics into their local differentially private version. This will often perform clipping and then add noise.

Bounds on the privacy loss (for example, epsilon and delta) should be passed in as parameters when constructing the object. These are the parameters that would be baked in on device.

abstract privatize(statistics, name_formatting_fn=<function LocalPrivacyMechanism.<lambda>>, seed=None)#

Take unbounded statistics from one individual and turn them directly into statistics that protect the individual’s privacy using differential privacy.

Parameters:
  • statistics (TrainingStatistics) – The statistics to be made differentially private.

  • name_formatting_fn – Function that formats a metric name appropriately.

  • seed (Optional[int]) – An int representing a seed to use for random noise operations. This is useful to avoid generating the same noise if there are replicated workers.

Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (noised_statistics, metrics). noised_statistics is a new TrainingStatistics that is a version of statistics which has been constrained and had noise added to it (i.e., it has been privatized). metrics is a Metrics object with name: value entries, where value can be useful to display or analyse. For example, this could have statistics on the properties of the noise added.

postprocess_one_user(*, stats, user_context)#

Do any postprocessing of the client’s statistics before they are communicated back to the server.

Parameters:
  • stats (TrainingStatistics) – Statistics returned from the local training procedure of this user.

  • user_context (UserContext) – Additional information about the current user.

Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (transformed_stats, metrics), where transformed_stats is stats after it is processed by the postprocessor, and metrics is any new metrics to track. Default implementation does nothing.

postprocess_server(*, stats, central_context, aggregate_metrics)#

Do any postprocessing of the aggregated statistics object after central aggregation.

Parameters:
  • stats (TrainingStatistics) – The aggregated statistics.

  • central_context (CentralContext) – Information about aggregation and other useful server-side properties.

Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (transformed_stats, metrics), where transformed_stats is stats after it is processed by the postprocessor, and metrics is any new metrics to track. Default implementation does nothing.

postprocess_server_live(*, stats, central_context, aggregate_metrics)#

Just like postprocess_server, but for live training. Default implementation is to call postprocess_server. Only override this in certain circumstances when you want different behaviour for live training, e.g. central DP.

Return type:

Tuple[TrainingStatistics, Metrics]

class pfl.privacy.privacy_mechanism.SplitPrivacyMechanism#

Base class for privacy mechanisms that work in two stages: (1) constrain the sensitivity; (2) add noise.

This is the case for many mechanisms. Some, but not all, of these can also be used for central privacy; even where they cannot, they can sometimes be approximated by a central privacy mechanism.
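
A minimal sketch of the two stages, using LaplaceMechanism (documented below) as the concrete split mechanism; MappedVectorStatistics from pfl.stats is assumed here as a concrete TrainingStatistics container:

import numpy as np

from pfl.privacy.laplace_mechanism import LaplaceMechanism
from pfl.stats import MappedVectorStatistics  # assumed statistics container

statistics = MappedVectorStatistics({'layer0/weight': np.ones(10)})
mechanism = LaplaceMechanism(clipping_bound=1.0, epsilon=2.0)
# Stage 1: bound this individual's contribution (here, l1-norm clipping).
constrained, clip_metrics = mechanism.constrain_sensitivity(statistics)
# Stage 2: add noise calibrated to that sensitivity; cohort_size=1 for local DP.
noised, noise_metrics = mechanism.add_noise(constrained, cohort_size=1)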

abstract constrain_sensitivity(statistics, name_formatting_fn=<function SplitPrivacyMechanism.<lambda>>, seed=None)#

Constrain the sensitivity of the statistics, e.g. by norm-clipping. This makes it possible to determine the amount of noise necessary to guarantee differential privacy.

Parameters:
  • statistics (TrainingStatistics) – The statistics that need to be constrained.

  • name_formatting_fn – Function that formats a metric name appropriately.

  • seed (Optional[int]) – An int representing a seed to use for random noise operations. This is useful to avoid generating the same noise if there are replicated workers.

Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (constrained_statistics, metrics). constrained_statistics is a new TrainingStatistics which is a version of statistics that adheres to the sensitivity bound. metrics is a dict of description: value pairs, where value can be useful to display or analyse. For example, this could have statistics on the clipping performed.

abstract add_noise(statistics, cohort_size, name_formatting_fn=<function SplitPrivacyMechanism.<lambda>>, seed=None)#

Transform statistics to protect the privacy of the data with differential privacy, for example by adding noise. It is assumed that the contribution of any individual user to statistics has been limited using constrain_sensitivity.

Parameters:
  • statistics (TrainingStatistics) – The statistics to be made differentially private.

  • cohort_size (int) – The number of individuals whose data has gone into statistics. This is required in particular for approximations of local DP.

  • name_formatting_fn – Function that formats a metric name appropriately.

  • seed (Optional[int]) – An int representing a seed to use for random noise operations. This is useful to avoid generating the same noise if there are replicated workers.

Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (noised_statistics, metrics). noised_statistics is a new TrainingStatistics that is a noised version of the (already constrained) input statistics. metrics is a Metrics object with name: value entries, where value can be useful to display or analyse. For example, this could have statistics on the properties of the noise added.

privatize(statistics, name_formatting_fn=<function SplitPrivacyMechanism.<lambda>>, seed=None)#

Take unbounded statistics from one individual and turn them directly into statistics that protect the individual’s privacy using differential privacy.

Parameters:
  • statistics (TrainingStatistics) – The statistics to be made differentially private.

  • name_formatting_fn – Function that formats a metric name appropriately.

  • seed (Optional[int]) – An int representing a seed to use for random noise operations. This is useful to avoid generating the same noise if there are replicated workers.

Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (noised_statistics, metrics). noised_statistics is a new TrainingStatistics that is a version of statistics which has been constrained and had noise added to it (i.e., it has been privatized). metrics is a Metrics object with name: value entries, where value can be useful to display or analyse. For example, this could have statistics on the properties of the noise added.

class pfl.privacy.privacy_mechanism.CentrallyApplicablePrivacyMechanism#

Base class for local privacy mechanisms that can be applied centrally to approximate the local privacy mechanism more efficiently. Classes representing such mechanisms should derive from this.

To apply the mechanism centrally, constrain_sensitivity will be called on each contribution, and add_noise on the aggregate.

class pfl.privacy.privacy_mechanism.CentralPrivacyMechanism#

Base class for differential privacy mechanisms which provide central differential privacy.

This means that postprocess_one_user may apply processing to ensure sensitivity, and postprocess_server will transform the aggregate statistics randomly.

class pfl.privacy.privacy_mechanism.CentrallyAppliedPrivacyMechanism(underlying_mechanism)#

Wrap a local privacy mechanism and transform it into a central privacy mechanism: the wrapped mechanism applies constrain_sensitivity to individual contributions and add_noise to the aggregated statistics server-side.

This also means that in the standard case, scaling can happen in constrain_sensitivity and be undone in add_noise. For example, constrain_sensitivity can choose to clip to a clipping bound, or to 1. In the latter case, add_noise should probably scale by the clipping bound.

Parameters:

underlying_mechanism (CentrallyApplicablePrivacyMechanism) – The local privacy mechanism to transform into a central privacy mechanism.
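
For example, a sketch of wrapping the Gaussian mechanism (documented below, and assumed here to be centrally applicable) so that clipping happens per contribution and noise is added to the server-side aggregate:

from pfl.privacy.gaussian_mechanism import GaussianMechanism
from pfl.privacy.privacy_mechanism import CentrallyAppliedPrivacyMechanism

local_gaussian = GaussianMechanism(clipping_bound=1.0, relative_noise_stddev=1.0)
central_dp = CentrallyAppliedPrivacyMechanism(local_gaussian)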

postprocess_one_user(*, stats, user_context)#

Do any postprocessing of the client’s statistics before they are communicated back to the server.

Parameters:
  • stats (TrainingStatistics) – Statistics returned from the local training procedure of this user.

  • user_context (UserContext) – Additional information about the current user.

Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (transformed_stats, metrics), where transformed_stats is stats after it is processed by the postprocessor, and metrics is any new metrics to track. Default implementation does nothing.

postprocess_server(*, stats, central_context, aggregate_metrics)#

Do any postprocessing of the aggregated statistics object after central aggregation.

Parameters:
  • stats (TrainingStatistics) – The aggregated statistics.

  • central_context (CentralContext) – Information about aggregation and other useful server-side properties.

Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (transformed_stats, metrics), where transformed_stats is stats after it is processed by the postprocessor, and metrics is any new metrics to track. Default implementation does nothing.

postprocess_server_live(*, stats, central_context, aggregate_metrics)#

Just like postprocess_server, but for live training. Default implementation is to call postprocess_server. Only override this in certain circumstances when you want different behaviour for live training, e.g. central DP.

Return type:

Tuple[TrainingStatistics, Metrics]

class pfl.privacy.privacy_mechanism.NormClipping(order, clipping_bound)#

Constrain the sensitivity of an individual’s data by clipping the ℓp norm. This clipping is the first step in many privacy mechanisms. This class implements one half of LocalPrivacyMechanism.

Parameters:
  • order (float) – The order of the norm. This must be a positive integer (e.g., 1 or 2) or np.inf.

  • clipping_bound (Union[HyperParam[float], float]) – The norm bound for clipping.

constrain_sensitivity(statistics, name_formatting_fn=<function NormClipping.<lambda>>, seed=None)#
Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

Statistics with their overall norm bounded by clipping_bound; the resulting norm may be smaller than the bound.
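
The clipping behaviour can be illustrated with plain NumPy (a sketch of the semantics described above, not the library implementation):

import numpy as np

def clip_by_norm(arrays, order, clipping_bound):
    # The norm is computed over all arrays as if concatenated into one vector.
    flat = np.concatenate([a.ravel() for a in arrays])
    norm = np.linalg.norm(flat, ord=order)
    if norm <= clipping_bound or norm == 0:
        return arrays  # already within the bound; pass through unaltered
    # Scale down linearly so the overall norm equals the bound.
    return [a * (clipping_bound / norm) for a in arrays]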

Privacy mechanisms#

class pfl.privacy.NoPrivacy#

Dummy privacy mechanism that does not do anything, but presents the same interface as real privacy mechanisms. This is useful for testing functionality of the code without the impact of a privacy mechanism.

constrain_sensitivity(statistics, name_formatting_fn=<function NoPrivacy.<lambda>>, seed=None)#
Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (statistics, metrics). statistics is the input unchanged. metrics is empty.

add_noise(statistics, cohort_size, name_formatting_fn=<function NoPrivacy.<lambda>>, seed=None)#
Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (statistics, metrics). statistics is the input unchanged. metrics is empty.

class pfl.privacy.NormClippingOnly(order, clipping_bound)#

Dummy privacy mechanism that performs only lp-norm clipping (no noise is added) but presents the same interface as real privacy mechanisms. This is useful for testing the impact of clipping alone.

Parameters:
  • order (float) – The order of the norm. This must be a positive integer (e.g., 1 or 2) or np.inf.

  • clipping_bound (Union[HyperParam[float], float]) – The norm bound for clipping.

The Laplace mechanism for differential privacy.

class pfl.privacy.laplace_mechanism.LaplaceMechanism(clipping_bound, epsilon)#

Apply the Laplace mechanism for differential privacy according to Section 3.3 in: https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf

The l1 norm is computed over all the arrays and clipped to a bound so that Definition 3.1 applies. Thereafter, Laplace noise, parameterized according to Definition 3.3, is added to all arrays.

Parameters:
  • clipping_bound (Union[HyperParam[float], float]) – The norm bound for clipping.

  • epsilon (float) – The ε parameter of differential privacy. This gives an upper bound on the amount of privacy loss.
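
A minimal local-DP usage sketch (MappedVectorStatistics from pfl.stats is assumed as the statistics container; parameter values are arbitrary):

import numpy as np

from pfl.privacy.laplace_mechanism import LaplaceMechanism
from pfl.stats import MappedVectorStatistics  # assumed statistics container

laplace = LaplaceMechanism(clipping_bound=1.0, epsilon=2.0)
statistics = MappedVectorStatistics({'layer0/weight': np.ones(10)})
# privatize() clips to the l1 bound and adds Laplace noise in a single call.
noised_statistics, metrics = laplace.privatize(statistics, seed=2023)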

sensitivity_scaling(num_dimensions)#

Return scaling that needs to be applied to the output of constrain_sensitivity.

Parameters:

num_dimensions – The number of dimensions of the vector that this mechanism is applied on.

sensitivity_squared_error(num_dimensions, l2_norm)#

Return the expected squared error that is caused by random behaviour of the constrain_sensitivity method. Note that this does not include error introduced by clipping. If add_noise scales the output of constrain_sensitivity, that scaling does not have to be included. Instead just include it in sensitivity_scaling.

Parameters:
  • num_dimensions (int) – The number of dimensions of the vector that this mechanism is applied on.

  • l2_norm (float) – The L2 norm of the vector that this mechanism is applied on.

add_noise_squared_error(num_dimensions, cohort_size)#

Return the expected squared error that is caused by the add_noise method.

Parameters:
  • num_dimensions (int) – The number of dimensions of the vector that this mechanism is applied on.

  • cohort_size – The number of individuals whose data has gone into the statistics that this mechanism is applied on.

add_noise(statistics, cohort_size, name_formatting_fn=<function LaplaceMechanism.<lambda>>, seed=None)#

Transform statistics to protect the privacy of the data with differential privacy, for example by adding noise. It is assumed that the contribution of any individual user to statistics has been limited using constrain_sensitivity.

Parameters:
  • statistics (TrainingStatistics) – The statistics to be made differentially private.

  • cohort_size (int) – The number of individuals whose data has gone into statistics. This is required in particular for approximations of local DP.

  • name_formatting_fn – Function that formats a metric name appropriately.

  • seed (Optional[int]) – An int representing a seed to use for random noise operations. This is useful to avoid generating the same noise if there are replicated workers.

Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (noised_statistics, metrics). noised_statistics is a new TrainingStatistics that is a noised version of the (already constrained) input statistics. metrics is a Metrics object with name: value entries, where value can be useful to display or analyse. For example, this could have statistics on the properties of the noise added.

The Gaussian mechanism for differential privacy.

class pfl.privacy.gaussian_mechanism.GaussianMechanism(clipping_bound, relative_noise_stddev)#

Apply the Gaussian mechanism for differential privacy, which consists of scaling the statistics down to make their ℓ² norm smaller than or equal to the clipping bound parameter, and adding Gaussian noise.

The ℓ² norm is computed over all the arrays in statistics (as if these were concatenated into a single vector). If the norm is greater than the clipping bound, the statistics are scaled down linearly so that the resulting ℓ² norm is equal to the bound. If the ℓ² norm is below the bound, the original values are passed through unaltered.

The Gaussian noise is then added to each statistic with the scale defined by the relative_noise_stddev as described below.

To guarantee (epsilon, delta) differential privacy, relative_noise_stddev must be sufficiently large. To initialize this class using epsilon and delta rather than the standard deviation of the Gaussian noise, use the method construct_single_iteration, which sets the noise standard deviation automatically to ensure (epsilon, delta) DP.

The differential privacy guarantee assumes that each user participates in the training at most once. For multiple iterations (which is the typical case in Federated Learning), it is recommended to use from_privacy_accountant.

Parameters:
  • clipping_bound (Union[HyperParam[float], float]) – The ℓ² norm bound for clipping the statistics (e.g. model updates) using constrain_sensitivity before sending them back to the server.

  • relative_noise_stddev (float) – The standard deviation of the Gaussian noise added to each statistic is defined as relative_noise_stddev * clipping_bound. The standard deviation thus increases linearly with the clipping bound and the multiplier is given by this parameter relative_noise_stddev.

sensitivity_scaling(num_dimensions)#

Return scaling that needs to be applied to the output of constrain_sensitivity.

Parameters:

num_dimensions – The number of dimensions of the vector that this mechanism is applied on.

sensitivity_squared_error(num_dimensions, l2_norm)#

Return the expected squared error that is caused by random behaviour of the constrain_sensitivity method. Note that this does not include error introduced by clipping. If add_noise scales the output of constrain_sensitivity, that scaling does not have to be included. Instead just include it in sensitivity_scaling.

Parameters:
  • num_dimensions (int) – The number of dimensions of the vector that this mechanism is applied on.

  • l2_norm (float) – The L2 norm of the vector that this mechanism is applied on.

add_noise_squared_error(num_dimensions, cohort_size)#

Return the expected squared error that is caused by the add_noise method.

Parameters:
  • num_dimensions (int) – The number of dimensions of the vector that this mechanism is applied on.

  • cohort_size – The number of individuals whose data has gone into the statistics that this mechanism is applied on.

add_noise(statistics, cohort_size, name_formatting_fn=<function GaussianMechanism.<lambda>>, seed=None)#

Transform statistics to protect the privacy of the data with differential privacy, for example by adding noise. It is assumed that the contribution of any individual user to statistics has been limited using constrain_sensitivity.

Parameters:
  • statistics (TrainingStatistics) – The statistics to be made differentially private.

  • cohort_size (int) – The number of individuals whose data has gone into statistics. This is required in particular for approximations of local DP.

  • name_formatting_fn – Function that formats a metric name appropriately.

  • seed (Optional[int]) – An int representing a seed to use for random noise operations. This is useful to avoid generating the same noise if there are replicated workers.

Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (noised_statistics, metrics). noised_statistics is a new TrainingStatistics that is a noised version of the (already constrained) input statistics. metrics is a Metrics object with name: value entries, where value can be useful to display or analyse. For example, this could have statistics on the properties of the noise added.

classmethod construct_single_iteration(clipping_bound, epsilon, delta)#

Construct an instance of GaussianMechanism from an ε and a δ. This is suitable for giving out data once. If you apply the noise to the same individual’s data multiple times, the privacy costs should be added up.

Parameters:
  • clipping_bound (Union[HyperParam[float], float]) – The norm bound for clipping.

  • epsilon (float) – The ε parameter of differential privacy. This gives an upper bound on the amount of privacy loss.

  • delta (float) – The δ (delta) parameter of (ε,δ)-differential privacy. This gives an upper bound on the probability that the privacy loss is more than ε.

Return type:

GaussianMechanism
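
For example (a sketch; values are arbitrary), to obtain (2.0, 1e-6)-DP for a single round of participation:

from pfl.privacy.gaussian_mechanism import GaussianMechanism

mechanism = GaussianMechanism.construct_single_iteration(
    clipping_bound=1.0, epsilon=2.0, delta=1e-6)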

classmethod from_privacy_accountant(accountant, clipping_bound)#

Construct an instance of GaussianMechanism from an instance of PrivacyAccountant.
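
A sketch of the multi-iteration setup via a privacy accountant (the accountant parameters, including the mechanism string 'gaussian', are illustrative assumptions; see the accountant classes below):

from pfl.privacy import PLDPrivacyAccountant
from pfl.privacy.gaussian_mechanism import GaussianMechanism

accountant = PLDPrivacyAccountant(num_compositions=1000,
                                  sampling_probability=0.01,
                                  mechanism='gaussian',
                                  epsilon=2.0,
                                  delta=1e-6)
mechanism = GaussianMechanism.from_privacy_accountant(accountant=accountant,
                                                      clipping_bound=1.0)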

Banded matrix factorization mechanism based on primal optimization algorithms. Reference: https://github.com/google-research/federated/blob/master/multi_epoch_dp_matrix_factorization/multiple_participations/primal_optimization.py

class pfl.privacy.ftrl_mechanism.FTRLMatrixFactorizer(workload_matrix, mask=None)#

Class for factorizing matrices for matrix mechanism based on solving the optimization problem in Equation 6 in https://arxiv.org/pdf/2306.08153.pdf.

Parameters:
  • workload_matrix (ndarray) – The input workload, n x n lower triangular matrix.

  • mask (Optional[ndarray]) – A boolean matrix describing the constraints on the gram matrix X = C^T C.

optimize(iters=1000)#

Optimize the strategy matrix with an iterative gradient-based method.

Return type:

TypeVar(Tensor)
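
A usage sketch (the workload here is the prefix-sum matrix described below for BandedMatrixFactorizationMechanism; optimize is assumed to return the optimized strategy matrix):

import numpy as np

from pfl.privacy.ftrl_mechanism import FTRLMatrixFactorizer

n = 16
workload = np.tril(np.ones((n, n)))  # lower triangular matrix of all ones
factorizer = FTRLMatrixFactorizer(workload)
strategy = factorizer.optimize(iters=100)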

class pfl.privacy.ftrl_mechanism.ForwardSubstitution(matrix, bandwidth=None)#

Solve for X in LX = Y in an online manner using forward substitution where L is a lower-triangular matrix, as in Algorithm 9 in https://arxiv.org/pdf/2306.08153.pdf.

Parameters:
  • matrix (ndarray) – The lower triangular matrix L.

  • bandwidth (Optional[int]) – Optional bandwidth of L.

step(y_i)#

At step i, $X_i = (Y_i - \sum_{j=1}^{i-1} L_{i,j} X_j) / L_{i,i}$.

Return type:

TypeVar(Tensor)
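
A usage sketch (assuming each call to step receives the next row of Y and returns the corresponding row of X):

import numpy as np

from pfl.privacy.ftrl_mechanism import ForwardSubstitution

L = np.tril(np.ones((4, 4))) + np.eye(4)  # lower triangular, non-zero diagonal
Y = np.arange(12, dtype=float).reshape(4, 3)
solver = ForwardSubstitution(L)
X_rows = [solver.step(y_i) for y_i in Y]  # feed rows of Y in order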

class pfl.privacy.ftrl_mechanism.BandedMatrixFactorizationMechanism(clipping_bound, num_iterations, min_separation, make_privacy_accountant)#

Banded matrix factorization mechanism, https://arxiv.org/pdf/2306.08153.pdf. The matrix mechanism in PFL privately estimates $AX = B(CX + Z)$, where $X \in \mathbb{R}^{T \times d} = [x_1, x_2, \cdots, x_T]$ is the series of aggregated gradients at each PFL iteration. $A$ is the workload matrix, which is set to the lower triangular matrix with all ones. $Z$ is the Gaussian noise added. $BC = A$ is a factorization of $A$ such that the noise added is minimized.

In the banded matrix setting, $C$ is a lower triangular banded matrix and the mechanism can be written as $A(X + C^{-1}Z)$, where at each step a noise term correlated with previous steps is added.

Parameters:
  • clipping_bound (float) – The norm bound for clipping.

  • num_iterations (int) – The number of times the mechanism will be applied.

  • min_separation (int) – Minimum number of iterations between two participations of a single device.

  • make_privacy_accountant (Callable[[int], PrivacyAccountant]) – Lambda function that takes the number of compositions as input and returns a privacy accountant, of type PrivacyAccountantKind. See the example below.

Example:

make_privacy_accountant = lambda num_compositions: PLDPrivacyAccountant(
    num_compositions, **other_params)
mechanism = BandedMatrixFactorizationMechanism(clipping_bound, num_iterations,
                                               min_separation, make_privacy_accountant)

add_noise(statistics, cohort_size, name_formatting_fn=<function BandedMatrixFactorizationMechanism.<lambda>>, seed=None)#

Transform statistics to protect the privacy of the data with differential privacy, for example by adding noise. It is assumed that the contribution of any individual user to statistics has been limited using constrain_sensitivity.

Parameters:
  • statistics (TrainingStatistics) – The statistics to be made differentially private.

  • cohort_size (int) – The number of individuals whose data has gone into statistics. This is required in particular for approximations of local DP.

  • name_formatting_fn – Function that formats a metric name appropriately.

  • seed (Optional[int]) – An int representing a seed to use for random noise operations. This is useful to avoid generating the same noise if there are replicated workers.

Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (noised_statistics, metrics). noised_statistics is a new TrainingStatistics that is a noised version of the (already constrained) input statistics. metrics is a Metrics object with name: value entries, where value can be useful to display or analyse. For example, this could have statistics on the properties of the noise added.

Privacy accountants#

class pfl.privacy.PrivacyAccountant(num_compositions, sampling_probability, mechanism, epsilon=None, delta=None, noise_parameter=None, noise_scale=1.0)#

Tracks the privacy loss over multiple composition steps. Either two or three of the variables epsilon, delta and noise_parameter must be defined. If two are defined, the remaining variable can be computed. If all three are defined a check will be performed to make sure a valid set of variable values has been provided.

Parameters:
  • num_compositions (int) – Maximum number of compositions to be performed with mechanism.

  • sampling_probability (float) – Maximum probability of sampling each entity being privatized. E.g. if the unit of privacy is one device, this is the probability of each device participating.

  • mechanism (str) – The noise mechanism used. E.g. Gaussian, Laplace.

  • epsilon (Optional[float]) – The ε parameter of differential privacy. It bounds how much the output of the mechanism can vary between two neighboring databases.

  • delta (Optional[float]) – The δ parameter of (ε, δ)-differential privacy. This gives an upper bound on the probability that the privacy loss exceeds ε.

  • noise_parameter (Optional[float]) – A parameter for DP noise. For the Gaussian mechanism, the noise parameter is the standard deviation of the noise. For the Laplace mechanism, the noise parameter is the scale of the noise.

  • noise_scale (float) – A value in [0, 1] multiplied with the standard deviation of the noise to be added for privatization. Typically used to experiment with lower sampling probabilities when it is not possible or desirable to increase the population size of the units being privatized, e.g. user devices.

property cohort_noise_parameter#

Noise parameter to be used on a cohort of users. Noise scale is considered.

class pfl.privacy.PLDPrivacyAccountant(num_compositions, sampling_probability, mechanism, epsilon=None, delta=None, noise_parameter=None, noise_scale=1.0, value_discretization_interval=0.0001, use_connect_dots=True, pessimistic_estimate=True, log_mass_truncation_bound=-50)#

Privacy Loss Distribution (PLD) privacy accountant, from dp-accounting package.

The PLD algorithm is based on: “Tight on budget?: Tight bounds for r-fold approximate differential privacy”, Meiser and Mohammadi, CCS 2018, pages 247-264, https://eprint.iacr.org/2017/1034.pdf. The Connect-the-Dots algorithm is based on: “Connect the Dots: Tighter Discrete Approximations of Privacy Loss Distributions”, Doroshenko et al., PoPETs 2022, https://arxiv.org/pdf/2207.04380.pdf. This class supports Gaussian and Laplace mechanisms.

Parameters:
  • value_discretization_interval (float) – The length of the discretization interval for the privacy loss distribution. Rounding will occur to integer multiples of value_discretization_interval. Smaller values yield more accurate estimates of the privacy loss at the cost of higher compute and memory; larger values decrease computation time. Note that the accountant algorithm maintains similar error bounds as the value of value_discretization_interval is changed.

  • use_connect_dots (bool) – boolean indicating whether or not to use Connect-the-Dots algorithm by Doroshenko et al., which gives tighter discrete approximations of PLDs.

  • pessimistic_estimate (bool) – Boolean indicating whether the rounding used in the PLD algorithm results in the epsilon-hockey stick divergence computation yielding an upper estimate of the real value.

  • log_mass_truncation_bound (float) – The natural log of the probability mass that may be discarded from the noise distribution. Larger values will increase the error.
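
A construction sketch (the mechanism string 'gaussian' and all numeric values are illustrative assumptions); since only epsilon and delta are given, the noise parameter is derived:

from pfl.privacy import PLDPrivacyAccountant

accountant = PLDPrivacyAccountant(num_compositions=5000,
                                  sampling_probability=0.001,
                                  mechanism='gaussian',
                                  epsilon=2.0,
                                  delta=1e-7)
# Standard deviation of noise to add to each aggregate, noise_scale included.
noise_stddev = accountant.cohort_noise_parameter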

class pfl.privacy.PRVPrivacyAccountant(num_compositions, sampling_probability, mechanism, epsilon=None, delta=None, noise_parameter=None, noise_scale=1.0, eps_error=0.07, delta_error=1e-10)#

Privacy Random Variable (PRV) accountant, for heterogeneous composition, using the prv-accountant package: https://pypi.org/project/prv-accountant/. Based on: “Numerical Composition of Differential Privacy”, Gopi et al., 2021, https://arxiv.org/pdf/2106.02848.pdf. The PRV accountant methods compute_delta() and compute_epsilon() return a lower bound, an estimated value, and an upper bound for delta and epsilon respectively. The estimated value is used for all further computations.

Parameters:
  • eps_error (Optional[float]) – Maximum permitted error in epsilon. Typically around 0.1.

  • delta_error (Optional[float]) – Maximum error allowed in delta. Typically around delta * 1e-3.

class pfl.privacy.RDPPrivacyAccountant(num_compositions, sampling_probability, mechanism, epsilon=None, delta=None, noise_parameter=None, noise_scale=1.0)#

Privacy accountant using Renyi differential privacy (RDP) from the dp-accounting package. Implementation in dp-accounting: https://github.com/google/differential-privacy/blob/main/python/dp_accounting/rdp/rdp_privacy_accountant.py. The default neighbouring relation for the RDP accountant is “add or remove one”. The default RDP orders used are: [1 + x / 10. for x in range(1, 100)] + list(range(11, 64)) + [128, 256, 512, 1024].

DP with adaptive clipping#

class pfl.privacy.adaptive_clipping.MutableClippingBound(initial_clipping_bound)#

A mutable hyperparameter for clipping bound used in adaptive clipping.

value()#

The current state (inner value) of the hyperparameter.

Return type:

float

class pfl.privacy.adaptive_clipping.AdaptiveClippingGaussianMechanism(make_gaussian_mechanism, initial_clipping_bound, clipping_indicator_noise_stddev, adaptive_clipping_norm_quantile, log_space_step_size=0.2)#

A CentrallyAppliedPrivacyMechanism that implements the adaptive clipping algorithm from the paper: https://arxiv.org/pdf/1905.03871.pdf.

The algorithm automatically adjusts the clipping bound by optimizing P(‖x‖₂ ≤ C) = 𝛾, where ‖x‖₂ is the model update ℓ² norm, C is the clipping bound and 𝛾 is the adaptive_clipping_norm_quantile. For example, with 𝛾=0.1 the algorithm iteratively updates the clipping bound C so that 10% of the device model update ℓ² norms are less than C. Since the norms of model updates typically vary through the run (often decreasing over time), reducing the clipping bound, and consequently the noise, can be beneficial.

The algorithm requires collecting a clipping indicator (i.e. whether the model update was clipped or not) to estimate the quantile that the clipping bound C currently tracks, so that C can be optimized towards the desired quantile 𝛾. The clipping indicator is encoded as -1 or 1 on device, so the estimated quantile equals the averaged clipping indicators / 2 + 0.5. Central DP noise (standard deviation set by clipping_indicator_noise_stddev) is added to the aggregated clipping indicator to protect privacy. The noisy aggregated clipping indicator is then used to update the clipping bound using a geometric update rule with step size η, i.e. the log_space_step_size argument.

Parameters:
  • make_gaussian_mechanism – A function that makes a Gaussian mechanism given a clipping bound as input. For example:

    from pfl.privacy.gaussian_mechanism import GaussianMechanism
    make_gaussian_mechanism = lambda c: GaussianMechanism(c, 1.0)

  • initial_clipping_bound (float) – The initial ℓ² clipping bound for Gaussian Mechanism.

  • clipping_indicator_noise_stddev (float) – Standard deviation of Gaussian noise added to the aggregated clipping indicator. Recommended value is 0.1 * cohort_size as suggested in https://arxiv.org/pdf/1905.03871.pdf.

  • adaptive_clipping_norm_quantile (float) – A quantile in [0, 1] representing the desired fraction of device model updates with ℓ² norm less than the clipping bound.

  • log_space_step_size (float) – Step size η for optimizing clipping bound in the log space. Clipping bound C is updated with logC ⟵ log(C exp(-ηg)) = logC - ηg where g is the derivative of quantile estimation loss. Recommended value for the step size η is 0.2 (default value) as suggested in https://arxiv.org/pdf/1905.03871.pdf.
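
Putting the parameters together (a sketch; cohort_size = 1000 and the other values are arbitrary illustrations, with clipping_indicator_noise_stddev following the 0.1 * cohort_size recommendation above):

from pfl.privacy.adaptive_clipping import AdaptiveClippingGaussianMechanism
from pfl.privacy.gaussian_mechanism import GaussianMechanism

cohort_size = 1000
make_gaussian_mechanism = lambda c: GaussianMechanism(c, 1.0)
adaptive_mechanism = AdaptiveClippingGaussianMechanism(
    make_gaussian_mechanism=make_gaussian_mechanism,
    initial_clipping_bound=0.5,
    clipping_indicator_noise_stddev=0.1 * cohort_size,
    adaptive_clipping_norm_quantile=0.1,
    log_space_step_size=0.2)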

postprocess_one_user(*, stats, user_context)#

Postprocess the user local statistics by appending clipping indicator.

Return type:

Tuple[TrainingStatistics, Metrics]

postprocess_server(*, stats, central_context, aggregate_metrics)#

Postprocess the aggregated statistics by adding Gaussian noise, popping clipping indicator and updating central clipping bound.

Parameters:
  • stats (TrainingStatistics) – Aggregated model updates with clipping indicator appended.

  • central_context (CentralContext) – CentralContext with clipping bound to be updated.

Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (popped_statistics, metrics): popped_statistics is the input model update with clipping indicator popped. metrics is a dict description: value where value contains the noisy norm quantile aggregated from devices and the updated norm bound.

postprocess_server_live(*, stats, central_context, aggregate_metrics)#

Postprocess the aggregated statistics by popping clipping indicator and updating central clipping bound.

Parameters:
  • stats (TrainingStatistics) – Aggregated model updates with clipping indicator appended.

  • central_context (CentralContext) – CentralContext with clipping bound to be updated.

Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (popped_statistics, metrics): popped_statistics is the input model update with clipping indicator popped. metrics is a dict description: value where value contains the noisy norm quantile aggregated from devices and the updated norm bound.

classmethod construct_from_privacy_accountant(accountant, initial_clipping_bound, clipping_indicator_noise_stddev, adaptive_clipping_norm_quantile, log_space_step_size=0.2)#

Construct an instance of AdaptiveClippingGaussianMechanism from a privacy accountant.

Return type:

AdaptiveClippingGaussianMechanism

Approximate local DP with central DP#

Approximate local privacy mechanisms with a central implementation for speed.

class pfl.privacy.approximate_mechanism.SquaredErrorLocalPrivacyMechanism#

Abstract base class for a local privacy mechanism that knows its squared error. This can be used in two ways.

First, one can use this information to analyse mechanisms.

Second, the local mechanism can be approximated with a central mechanism with the same squared error. This central mechanism adds Gaussian noise to the sum of individual statistics, which is much faster in simulation. However, in live training, the local privacy mechanism should be applied on each device prior to sending the data back to the server for aggregation. Even if the distribution of the error on one individual contribution is not Gaussian, because of the central limit theorem on a reasonably-sized cohort the difference will usually not be noticeable.

abstract sensitivity_scaling(num_dimensions)#

Return scaling that needs to be applied to the output of constrain_sensitivity.

Parameters:

num_dimensions (int) – The number of dimensions of the vector that this mechanism is applied on.

Return type:

int

abstract sensitivity_squared_error(num_dimensions, l2_norm)#

Return the expected squared error that is caused by random behaviour of the constrain_sensitivity method. Note that this does not include error introduced by clipping. If add_noise scales the output of constrain_sensitivity, that scaling does not have to be included. Instead just include it in sensitivity_scaling.

Parameters:
  • num_dimensions (int) – The number of dimensions of the vector that this mechanism is applied on.

  • l2_norm (float) – The L2 norm of the vector that this mechanism is applied on.

Return type:

float

abstract add_noise_squared_error(num_dimensions, cohort_size)#

Return the expected squared error that is caused by the add_noise method.

Parameters:
  • num_dimensions (int) – The number of dimensions of the vector that this mechanism is applied on.

  • cohort_size – The number of individuals whose data has gone into the statistics that this mechanism is applied on.

Return type:

float

get_squared_error(num_dimensions, l2_norm, cohort_size)#

Compute the expected squared error from applying this mechanism.

Parameters:
  • num_dimensions (int) – The number of dimensions of the vector that this mechanism is applied on.

  • l2_norm (float) – The L2 norm of the vector that this mechanism is applied on.

  • cohort_size (int) – The number of elements in the sum that this mechanism will be applied on. Set this to 1 for local privacy.

Return type:

float

approximate_as_central_mechanism()#

Return an approximation of this mechanism that can be used as a central mechanism. To use this, imagine that local_privacy is the privacy mechanism to be approximated:

central_privacy = local_privacy.approximate_as_central_mechanism()
local_privacy = NoPrivacy()

central_privacy can then be passed into the backend as a central privacy mechanism, which can significantly speed up simulations when using local DP without affecting the outcomes of the simulations.

Return type:

CentralPrivacyMechanism

Returns:

A central privacy mechanism that approximates the local privacy mechanism.

class pfl.privacy.approximate_mechanism.GaussianApproximatedPrivacyMechanism(local_mechanism)#

Approximated version of a local privacy mechanism that can be applied as a central mechanism. This can make simulations much faster (but cannot be used in live training).

To use this, imagine that local_mechanism is the privacy mechanism to be approximated and local_mechanism_config is its configuration:

central_mechanism = local_mechanism.approximate_as_central_mechanism()
central_mechanism_config = local_mechanism_config
local_mechanism = NoPrivacy()

central_mechanism can then be passed into the backend.

Parameters:

local_mechanism (SquaredErrorLocalPrivacyMechanism) – The local mechanism to be approximated.

postprocess_one_user(*, stats, user_context)#

Do any postprocessing of the client’s statistics before they are communicated back to the server.

Parameters:
  • stats (TrainingStatistics) – Statistics returned from the local training procedure of this user.

  • user_context (UserContext) – Additional information about the current user.

Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (transformed_stats, metrics), where transformed_stats is stats after it is processed by the postprocessor, and metrics is any new metrics to track. Default implementation does nothing.

postprocess_server(*, stats, central_context, aggregate_metrics)#

Do any postprocessing of the aggregated statistics object after central aggregation.

Parameters:
  • stats (TrainingStatistics) – The aggregated statistics.

  • central_context (CentralContext) – Information about aggregation and other useful server-side properties.

Return type:

Tuple[TrainingStatistics, Metrics]

Returns:

A tuple (transformed_stats, metrics), where transformed_stats is stats after it is processed by the postprocessor, and metrics is any new metrics to track. Default implementation does nothing.

DP metrics#

Maintain and compute aggregate SNR metrics.

class pfl.privacy.privacy_snr.SNRMetric(signal_l2_norm, squared_error)#

A signal-to-noise metric for the Gaussian mechanism.

The “signal” is defined as the L2 norm of all statistics after clipping but before adding the DP noise. Thus, the maximum value of the signal is equal to the norm clipping bound. The “noise” is defined as the L2 norm of the vector of standard deviations of noise added to each statistic. Since the noise added to each parameter has the same standard deviation noise_stddev, the overall noise is defined as sqrt(num_dimensions) * noise_stddev.

All objects of type SNRMetric form a commutative monoid (with + as the operator). An intermediate value maintains two quantities. The first is the sum of the L2 norms of the individual data vectors; the implicit assumption is that the data vectors all point in the same direction, which usually makes this an overestimate. The second is the sum of expected squared errors. Note that the expected standard deviation of the noise can be computed from this, but standard deviations themselves do not sum. (This is the reason that local DP with the Gaussian mechanism can be useful at all.)

Parameters:
  • signal_l2_norm (float) – The L2 norm of the data vector.

  • squared_error (float) – The expected squared L2 error that the mechanism has added to the signal. This is summed over all elements of the vector, i.e. it is equal to num_dimensions * noise_stddev**2 where noise_stddev is the standard deviation of Gaussian noise added to each statistic.
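
A sketch of how two values combine under the + operator described above (numbers are arbitrary):

from pfl.privacy.privacy_snr import SNRMetric

user1 = SNRMetric(signal_l2_norm=1.0, squared_error=0.25)
user2 = SNRMetric(signal_l2_norm=0.8, squared_error=0.25)
combined = user1 + user2
# combined.signal_l2_norm == 1.8 (summed norms),
# combined.squared_error == 0.5 (summed variances);
# combined.overall_value reports the resulting signal-to-noise ratio.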

property signal_l2_norm: float#

The (summed) L2 norm of the data vectors before adding noise.

property squared_error: float#

The summed variance of the Gaussian noise that is added.

property overall_value: float#

Return the overall value, e.g. an average or a total.

to_vector()#

Get a vector representation of this metric value, with dtype=float32. Summing two vectors in this space must be equivalent to summing the two original objects.

This serializes only the signal norm and the noise variance. The dimensionality is assumed to match.

Return type:

ndarray

from_vector(vector)#

Create a new metric value of this class from a vector representation.

Return type:

SNRMetric

DP utilities#

Compute parameters for the Gaussian mechanism.

pfl.privacy.compute_parameters.AnalyticGM_robust_impl(eps, delta)#

Compute (optimally) the noise parameter (sigma) for the Gaussian mechanism, for a given epsilon and delta.

This assumes the L2 sensitivity is 1.

Implements Algorithm 1 from Balle and Wang (2018), “Improving the Gaussian Mechanism for Differential Privacy: Analytical Calibration and Optimal Denoising”. arXiv:1805.06530

Parameters:
  • eps – The ε parameter of approximate differential privacy.

  • delta – The δ parameter of approximate differential privacy.

pfl.privacy.compute_parameters.AnalyticGM_robust(eps, delta, k=1.0, l2=1)#

Compute (optimally) the noise parameter (sigma) for the Gaussian mechanism, for a given epsilon and delta.

Parameters:
  • eps – The ε parameter of approximate differential privacy.

  • delta – The δ parameter of approximate differential privacy.

  • k – The number of repetitions. Note that it might be advantageous to use a moments accountant instead of this when k > 1.

  • l2 – The L2 sensitivity.
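
A usage sketch (values are arbitrary):

from pfl.privacy.compute_parameters import AnalyticGM_robust, AnalyticGM_robust_impl

# Noise parameter for (1.0, 1e-6)-DP with unit l2 sensitivity, single release.
sigma = AnalyticGM_robust_impl(1.0, 1e-6)
# The same, but for 10 repetitions and l2 sensitivity 0.5.
sigma_10 = AnalyticGM_robust(1.0, 1e-6, k=10, l2=0.5)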