Reputation: 924
I'm using TensorFlow Federated (TFF) to train with differential privacy. Currently I create a TensorFlow Privacy NormalizedQuery and then pass it into a TFF DifferentiallyPrivateFactory to create an AggregationProcess:
import tensorflow_federated as tff
import tensorflow_privacy

# Gaussian sum query, normalized by the expected number of clients per round.
_weights_type = tff.learning.framework.weights_type_from_model(placeholder_model)
query = tensorflow_privacy.GaussianSumQuery(l2_norm_clip=10.0, stddev=0.1)
query = tensorflow_privacy.NormalizedQuery(query, 20)
agg_proc = tff.aggregators.DifferentiallyPrivateFactory(query)
agg_proc = agg_proc.create(_weights_type.trainable)
After broadcasting the server state to the clients, I run a client update function and then use the AggregationProcess like this:
agg_output = agg_proc.next(
    server_state.delta_aggregate_state,
    client_outputs.weights_delta)
This works great; however, I want to experiment with changing the l2_norm_clip and stddev several times during training (making the clipping bound larger or smaller at various training rounds), but it seems I can only set these parameters when I create the AggregationProcess.
Is it possible to change these parameters during training somehow?
Upvotes: 1
Views: 102
Reputation: 151
I can think of two ways to do what you want: the easy way and the right way.
The right way is to make a new type of DPQuery that keeps track of the training round in its global state and adjusts the clip and stddev the way you want in its get_noised_result function. Then you can pass this new DPQuery to tff.aggregators.DifferentiallyPrivateFactory and use it as normal.
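For concreteness, here is a rough, untested sketch of that idea. The class and the clip_fn/stddev_fn schedule functions are names I made up, and it assumes a tensorflow_privacy version where get_noised_result returns a (result, new_global_state) pair and GaussianSumQuery exposes make_global_state:

import collections
import tensorflow as tf
import tensorflow_privacy
from tensorflow_privacy.privacy.dp_query import dp_query

class ScheduledGaussianSumQuery(dp_query.SumAggregationDPQuery):
  """Sum query whose clip and stddev follow a per-round schedule (sketch)."""

  # Global state: the current round number plus the inner GaussianSumQuery state.
  _GlobalState = collections.namedtuple('_GlobalState', ['round_num', 'sum_state'])

  def __init__(self, initial_l2_norm_clip, initial_stddev, clip_fn, stddev_fn):
    # clip_fn / stddev_fn map a round-number tensor to that round's value.
    self._sum_query = tensorflow_privacy.GaussianSumQuery(
        initial_l2_norm_clip, initial_stddev)
    self._clip_fn = clip_fn
    self._stddev_fn = stddev_fn

  def initial_global_state(self):
    return self._GlobalState(
        tf.constant(0, tf.int32), self._sum_query.initial_global_state())

  def derive_sample_params(self, global_state):
    return self._sum_query.derive_sample_params(global_state.sum_state)

  def preprocess_record(self, params, record):
    return self._sum_query.preprocess_record(params, record)

  def get_noised_result(self, sample_state, global_state):
    result, _ = self._sum_query.get_noised_result(
        sample_state, global_state.sum_state)
    next_round = global_state.round_num + 1
    # Rebuild the inner state with the clip/stddev scheduled for the next round.
    new_sum_state = self._sum_query.make_global_state(
        self._clip_fn(next_round), self._stddev_fn(next_round))
    return result, self._GlobalState(next_round, new_sum_state)

You would then wrap it exactly as before, e.g. tensorflow_privacy.NormalizedQuery(ScheduledGaussianSumQuery(10.0, 0.1, clip_fn, stddev_fn), 20), and pass that to tff.aggregators.DifferentiallyPrivateFactory. If your version of tensorflow_privacy returns a third DpEvent value from get_noised_result, forward that as well.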
The easy way is to directly hack into the server_state.delta_aggregate_state. Somewhere in there you should find the global state of the DPQuery, which should contain the l2_norm_clip and stddev that you can manipulate directly between rounds. This approach may be brittle because the aggregator state and the DPQuery state representations may be subject to change.
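As a rough illustration (untested, and the exact nesting depends on your TFF / tensorflow_privacy versions, so print server_state.delta_aggregate_state first to find the right path): with a NormalizedQuery wrapping a GaussianSumQuery, the query state is a nest of namedtuples, so between rounds you could try something along these lines:

import tensorflow as tf

# Assumed path; inspect the actual structure of delta_aggregate_state first.
query_state = server_state.delta_aggregate_state

# The tensorflow_privacy global states are namedtuples, so _replace works.
new_sum_state = query_state.numerator_state._replace(
    l2_norm_clip=tf.constant(5.0), stddev=tf.constant(0.05))
new_query_state = query_state._replace(numerator_state=new_sum_state)

# Writing it back depends on how your ServerState container is defined
# (e.g. _replace for a namedtuple, attr.evolve for an attrs class).
server_state = server_state._replace(delta_aggregate_state=new_query_state)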
Upvotes: 2