Schneider

Reputation: 67

Operations performed on the communications between the server and clients

Part of federated learning research is based on operations performed on the communications between the server and clients, such as dropping part of the updates exchanged between clients and server (e.g., dropping some of the gradients describing a model) or discarding the update from a specific client in a certain communication round. I want to know whether such capabilities are supported by the TensorFlow Federated (TFF) framework and, if so, how, because at first glance it seems to me that the level of abstraction of the TFF API does not allow such operations. Thank you.

Upvotes: 2

Views: 228

Answers (1)

Zachary Garrett

Reputation: 2941

TFF's language design intentionally avoids any notion of client identity; there is a desire to avoid making a "Client X" addressable, which would allow discarding its update or sending it different data.

However, there may be a way to run simulations of the kinds of computations mentioned. TFF does support expressing the following:

  • Computations that condition on properties of tensors, for example ignoring an update that contains NaN values. One way this could be accomplished would be by writing a tff.tf_computation that conditionally zeroes out the weights of updates before tff.federated_mean; this technique is used in tff.learning.build_federated_averaging_process(). A sketch of the idea follows this list.

  • Simulations that run different computations on different sets of clients (where a set may be a single client). Since the reference executor parameterizes clients by the data they possess, a TFF user could write two tff.federated_computations, apply them to different simulation data, and combine the results (see the second sketch below).
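A minimal sketch of the first technique, assuming a TFF version that exposes tff.tf_computation, tff.federated_map, and tff.federated_mean at the top level; the scalar float32 update and the names masked_value, mask_weight, and filtered_mean are hypothetical stand-ins for a real model delta:

    import tensorflow as tf
    import tensorflow_federated as tff

    # Hypothetical cleanup: a NaN update is replaced with 0.0 AND given zero
    # weight, so it contributes nothing to the weighted mean (zeroing only
    # the weight would still propagate NaN via 0.0 * NaN).
    @tff.tf_computation(tf.float32)
    def masked_value(update):
        return tf.where(tf.math.is_nan(update), 0.0, update)

    @tff.tf_computation(tf.float32)
    def mask_weight(update):
        return tf.where(tf.math.is_nan(update), 0.0, 1.0)

    @tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS))
    def filtered_mean(updates):
        values = tff.federated_map(masked_value, updates)
        weights = tff.federated_map(mask_weight, updates)
        return tff.federated_mean(values, weight=weights)

    # In simulation, a value placed at tff.CLIENTS is fed as a Python list,
    # one element per client; the NaN client drops out of the mean.
    print(filtered_mean([1.0, float('nan'), 3.0]))  # -> 2.0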
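And a sketch of the second technique, again with hypothetical names and a scalar float standing in for a model update: the simulated client population is partitioned by hand, and clients to be discarded in a given round are simply excluded from (or routed to a different computation than) the data passed to the main computation:

    import tensorflow as tf
    import tensorflow_federated as tff

    CLIENT_FLOATS = tff.FederatedType(tf.float32, tff.CLIENTS)

    @tff.federated_computation(CLIENT_FLOATS)
    def average(updates):
        return tff.federated_mean(updates)

    @tff.federated_computation(CLIENT_FLOATS)
    def total(updates):
        return tff.federated_sum(updates)

    # Hypothetical partition of the simulated population for one round:
    # updates from "kept" clients enter the average; the remaining client's
    # update goes to a different computation (or is never used at all,
    # which amounts to discarding it for this round).
    kept = [1.0, 2.0, 3.0]
    dropped = [40.0]

    print(average(kept))    # mean over the kept clients only -> 2.0
    print(total(dropped))   # separate computation over the other set -> 40.0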

Upvotes: 1
