Attack Module References

federatedscope.attack.privacy_attacks

class federatedscope.attack.privacy_attacks.DLG(max_ite, lr, federate_loss_fn, device, federate_method, federate_lr=None, optim='Adam', info_diff_type='l2', is_one_hot_label=False)[source]

Implementation of the paper “Deep Leakage from Gradients”: https://papers.nips.cc/paper/2019/file/60a6c4002cc7b29142def8871531281a-Paper.pdf

References

Zhu, Ligeng, Zhijian Liu, and Song Han. “Deep leakage from gradients.” Advances in Neural Information Processing Systems 32 (2019).

Args:
  • max_ite (int): the max iteration number;

  • lr (float): the learning rate in optimization-based reconstruction;

  • federate_loss_fn (object): the loss function used in FL training;

  • device (str): the device running the reconstruction;

  • federate_method (str): the federated learning method;

  • federate_lr (float): the learning rate used in FL training; default: None

  • optim (str): the optimization method used in reconstruction; default: 'Adam'; supported: 'sgd', 'adam', 'lbfgs'

  • info_diff_type (str): the type of loss between the ground-truth gradient/parameter updates info and the reconstructed info; default: 'l2'

  • is_one_hot_label (bool): whether the label is one-hot; default: False

get_original_gradient_from_para(model, original_info, model_para_name)[source]

Transfer the model parameter updates to gradients based on:

\[P_{t} = P - \eta g,\]

where \(P_{t}\) denotes the parameters updated by the client at the current round; \(P\) denotes the parameters of the global model at the end of the last round; \(\eta\) is the learning rate of clients’ local training; and \(g\) is the gradient.

Parameters
  • model – the model owned by the server

  • original_info – the model parameter updates received by the server

  • model_para_name – the list of model parameter names; be sure that model_para_name is consistent with the key names in original_info

Returns

  • original_gradient (list): the list of gradients corresponding to the model updates
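
Rearranging the update rule gives \(g = (P - P_{t})/\eta\). Below is a minimal sketch of this conversion, assuming placeholder tensors for the global parameters and the client update:

    import torch

    eta = 0.01  # clients' local learning rate (illustrative value)
    P = [torch.randn(4, 2), torch.randn(2)]            # global model parameters (placeholder)
    P_t = [p - eta * torch.randn_like(p) for p in P]   # client-updated parameters (placeholder)

    # Recover the gradient from P_t = P - eta * g, i.e. g = (P - P_t) / eta
    original_gradient = [(p - p_t) / eta for p, p_t in zip(P, P_t)]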

reconstruct(model, original_info, data_feature_dim, num_class, batch_size)[source]

Reconstruct the original training data and label.

Parameters
  • model – the model used in FL; Type: object

  • original_info – the message received to perform reconstruction, i.e., the gradient/parameter updates; Type: list

  • data_feature_dim – the feature dimension of the dataset; Type: list or Tensor.Size

  • num_class – the number of total classes in the dataset; Type: int

  • batch_size – the number of samples in the batch that generates the original_info; Type: int

Returns

  • The reconstructed data (Tensor); Size: [batch_size, data_feature_dim]

  • The reconstructed label (Tensor); Size: [batch_size]
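
A minimal usage sketch, assuming a 10-class task with 28x28 single-channel inputs; model and original_info are placeholders for the global model and the received updates, and all hyper-parameter values below are illustrative only:

    import torch.nn as nn
    from federatedscope.attack.privacy_attacks import DLG

    dlg = DLG(max_ite=300,
              lr=0.1,
              federate_loss_fn=nn.CrossEntropyLoss(),
              device='cpu',
              federate_method='FedAvg',   # assumed FL method name
              federate_lr=0.01,
              optim='Adam',
              info_diff_type='l2',
              is_one_hot_label=False)

    # model and original_info come from the FL course (placeholders here)
    recovered_data, recovered_label = dlg.reconstruct(model=model,
                                                      original_info=original_info,
                                                      data_feature_dim=[1, 28, 28],
                                                      num_class=10,
                                                      batch_size=1)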

class federatedscope.attack.privacy_attacks.GANCRA(target_label_ind, fl_model, device='cpu', dataset_name=None, noise_dim=100, batch_size=16, generator_train_epoch=10, lr=0.001, sav_pth='data/', round_num=-1)[source]

The implementation of the GAN-based class representative attack. https://dl.acm.org/doi/abs/10.1145/3133956.3134012

References

Hitaj, Briland, Giuseppe Ateniese, and Fernando Perez-Cruz. “Deep models under the GAN: information leakage from collaborative deep learning.” Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017.

Args:
  • target_label_ind (int): the label index whose representative data is to be generated;

  • fl_model (object): the model used in FL training;

  • device (str or int): the device to run; 'cpu' or the device index to select; default: 'cpu'

  • dataset_name (str): the dataset name; default: None

  • noise_dim (int): the dimension of the noise fed into the generator; default: 100

  • batch_size (int): the number of data generated into training; default: 16

  • generator_train_epoch (int): the number of training steps when training the generator; default: 10

  • lr (float): the learning rate of the generator training; default: 0.001

  • sav_pth (str): the path to save the generated data; default: 'data/'

  • round_num (int): the FL round that starts the attack; default: -1
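
A construction sketch with illustrative values; fl_model is a placeholder for the model used in FL training, and 'mnist' is an assumed dataset name:

    from federatedscope.attack.privacy_attacks import GANCRA

    gancra = GANCRA(target_label_ind=3,     # attack class index 3 (illustrative)
                    fl_model=fl_model,      # placeholder FL model
                    device='cpu',
                    dataset_name='mnist',   # assumed dataset name
                    noise_dim=100,
                    batch_size=16,
                    generator_train_epoch=10,
                    lr=0.001,
                    sav_pth='data/',
                    round_num=-1)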

generate_and_save_images()[source]

Save the generated data and the generator training loss

generator_loss(discriminator_output)[source]

Get the generator loss based on the discriminator’s output

Parameters

discriminator_output (Tensor) – the discriminator’s output; size: batch_size * n_class

Returns: generator_loss

update_discriminator(model)[source]

Copy the model of the server as the discriminator

Parameters

model (object) – the model in the server

Returns: the discriminator
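
A per-round sketch that uses only the methods documented above; server_model is a placeholder for the current global model:

    # Copy the current server model as the discriminator, then dump the
    # generated class representatives and the generator training loss.
    discriminator = gancra.update_discriminator(server_model)
    gancra.generate_and_save_images()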

class federatedscope.attack.privacy_attacks.InvertGradient(max_ite, lr, federate_loss_fn, device, federate_method, federate_lr=None, alpha_TV=0.001, info_diff_type='sim', optim='Adam', is_one_hot_label=False)[source]

The implementation of “Inverting Gradients - How easy is it to break privacy in federated learning?”. Link: https://proceedings.neurips.cc/paper/2020/hash/c4ede56bbd98819ae6112b20ac6bf145-Abstract.html

References

Geiping, Jonas, et al. “Inverting gradients-how easy is it to break privacy in federated learning?.” Advances in Neural Information Processing Systems 33 (2020): 16937-16947.

Parameters
  • max_ite (int) – the max iteration number;

  • lr (float) – the learning rate in optimization-based reconstruction;

  • federate_loss_fn (object) – the loss function used in FL training;

  • device (str) – the device running the reconstruction;

  • federate_method (str) – the federated learning method;

  • federate_lr (float) – the learning rate used in FL training; default: None

  • alpha_TV (float) – the hyper-parameter of the total variation term; default: 0.001

  • info_diff_type (str) – the type of loss between the ground-truth gradient/parameter updates info and the reconstructed info; default: 'sim'

  • optim (str) – the optimization method used in reconstruction; default: 'Adam'; supported: 'sgd', 'adam', 'lbfgs'

  • is_one_hot_label (bool) – whether the label is one-hot; default: False
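
Construction mirrors DLG, with the extra total variation weight alpha_TV and the similarity-based info_diff_type; a sketch with illustrative values:

    import torch.nn as nn
    from federatedscope.attack.privacy_attacks import InvertGradient

    ig = InvertGradient(max_ite=1000,
                        lr=0.1,
                        federate_loss_fn=nn.CrossEntropyLoss(),
                        device='cpu',
                        federate_method='FedAvg',   # assumed FL method name
                        federate_lr=0.01,
                        alpha_TV=0.001,
                        info_diff_type='sim',
                        optim='Adam',
                        is_one_hot_label=False)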

class federatedscope.attack.privacy_attacks.PassivePropertyInference(classier: str, fl_model_criterion, device, grad_clip, dataset_name, fl_local_update_num, fl_type_optimizer, fl_lr, batch_size=100)[source]

This is an implementation of the passive property inference attack (Algorithm 3 in “Exploiting Unintended Feature Leakage in Collaborative Learning”: https://arxiv.org/pdf/1805.04049.pdf).

add_parameter_updates(parameter_updates, prop)[source]
Parameters
  • parameter_updates – Tensor with dimension n * d_feature

  • prop – Tensor with dimension n * 1

Returns:
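
A shape-level sketch of feeding observations to the classifier; the tensors are random placeholders and pia is an assumed PassivePropertyInference instance:

    import torch

    n, d_feature = 32, 1000                        # illustrative sizes
    parameter_updates = torch.randn(n, d_feature)  # n observed parameter updates
    prop = torch.randint(0, 2, (n, 1)).float()     # binary property label per update

    pia.add_parameter_updates(parameter_updates, prop)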

federatedscope.attack.worker_as_attacker

class federatedscope.attack.worker_as_attacker.BackdoorServer(ID=-1, state=0, config=None, data=None, model=None, client_num=5, total_round_num=10, device='cpu', strategy=None, unseen_clients_id=None, **kwargs)[source]

For backdoor attacks, we choose among different sampling strategies: fix-frequency, all-round, or random sampling.

broadcast_model_para(msg_type='model_para', sample_client_num=- 1, filter_unseen_clients=True)[source]

Broadcast the message to all clients or to sampled clients.

Parameters
  • msg_type – ‘model_para’ or other user defined msg_type

  • sample_client_num – the number of clients sampled for the broadcast; sample_client_num = -1 means broadcasting to all clients.

  • filter_unseen_clients – whether to filter out the unseen clients, which do not contribute to the FL process by training on their local data and uploading their local model updates. This split is useful for checking the participation generalization gap reported in [ICLR’22, What Do We Mean by Generalization in Federated Learning?]. You may want to set it to False in the evaluation stage. An example is sketched below.
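
For example, a sketch of broadcasting to a subset of clients during training and to all clients at evaluation; server is an assumed BackdoorServer instance and 'evaluate' is an assumed msg_type:

    # Broadcast to 5 sampled clients during training.
    server.broadcast_model_para(msg_type='model_para', sample_client_num=5)

    # Broadcast to all clients at evaluation time, keeping unseen clients in.
    server.broadcast_model_para(msg_type='evaluate',
                                sample_client_num=-1,
                                filter_unseen_clients=False)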

class federatedscope.attack.worker_as_attacker.PassivePIAServer(ID=-1, state=0, data=None, model=None, client_num=5, total_round_num=10, device='cpu', strategy=None, **kwargs)[source]

The implementation of the batch property classifier (Algorithm 3) in the paper “Exploiting Unintended Feature Leakage in Collaborative Learning”.

References

Melis, Luca, Congzheng Song, Emiliano De Cristofaro, and Vitaly Shmatikov. “Exploiting Unintended Feature Leakage in Collaborative Learning.” 2019 IEEE Symposium on Security and Privacy (SP) (2019): 691-706.

callback_funcs_model_para(message: Message)[source]

The handling function for receiving model parameters, which triggers check_and_move_on (perform aggregation when enough feedback has been received). This handling function is widely used in various FL courses.

Parameters

message – The received message.

class federatedscope.attack.worker_as_attacker.PassiveServer(ID=-1, state=0, data=None, model=None, client_num=5, total_round_num=10, device='cpu', strategy=None, state_to_reconstruct=None, client_to_reconstruct=None, **kwargs)[source]

In a passive attack, the server stores the model and the messages collected from the clients, and performs optimization-based reconstruction such as DLG or InvertGradient.

callback_funcs_model_para(message: Message)[source]

The handling function for receiving model parameters, which triggers check_and_move_on (perform aggregation when enough feedback has been received). This handling function is widely used in various FL courses.

Parameters

message – The received message.

federatedscope.attack.worker_as_attacker.plot_target_loss(loss_list, outdir)[source]
Parameters
  • loss_list – the list of losses regarding the target data

  • outdir – the directory to store the loss

federatedscope.attack.auxiliary

federatedscope.attack.auxiliary.create_ardis_poisoned_dataset(data_path, base_label=7, target_label=1, fraction=0.1)[source]

Create the poisoned FEMNIST dataset with edge-case triggers: 7s from the ARDIS dataset are labeled as 1 (dirty label). The data are loaded from CSV files, and samples are randomly selected from the ARDIS dataset, which consists of 10 classes (digits). fraction: the fraction of sampled data. images_seven_DA: the version of the dataset with multiple transformations applied.

federatedscope.attack.auxiliary.get_data_info(dataset_name)[source]

Get the dataset information, including the feature dimension, the number of total classes, and whether the label is represented in one-hot form.

Parameters

dataset_name – dataset name; str

Returns

data_feature_dim, num_class, is_one_hot_label
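
A usage sketch; 'femnist' is an illustrative dataset name, not a guaranteed supported value:

    from federatedscope.attack.auxiliary import get_data_info

    data_feature_dim, num_class, is_one_hot_label = get_data_info('femnist')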

federatedscope.attack.auxiliary.get_generator(dataset_name)[source]

Get the dataset’s corresponding generator.

Parameters

dataset_name – the dataset name; Type: str

Returns

The generator; Type: object

federatedscope.attack.auxiliary.get_passive_PIA_auxiliary_dataset(dataset_name)[source]
Parameters

dataset_name (str) – dataset name

Returns

the auxiliary dataset for the property inference attack; Type: dict

{'x': array, 'y': array, 'prop': array}
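
A usage sketch; 'adult' is an illustrative dataset name only:

    from federatedscope.attack.auxiliary import get_passive_PIA_auxiliary_dataset

    aux = get_passive_PIA_auxiliary_dataset('adult')
    x, y, prop = aux['x'], aux['y'], aux['prop']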

federatedscope.attack.auxiliary.get_reconstructor(atk_method, **kwargs)[source]
Parameters
  • atk_method – the attack method name; currently supporting “DLG” (deep leakage from gradients) and “IG” (inverting gradients); Type: str

  • **kwargs – other arguments

Returns

the corresponding reconstructor; Type: object
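
A usage sketch, assuming the extra keyword arguments are forwarded to the reconstructor’s constructor (see the DLG and InvertGradient parameters above); dlg_kwargs is a placeholder dict of those arguments:

    from federatedscope.attack.auxiliary import get_reconstructor

    reconstructor = get_reconstructor('DLG', **dlg_kwargs)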

federatedscope.attack.auxiliary.iDLG_trick(original_gradient, num_class, is_one_hot_label=False)[source]

Use the iDLG trick to recover the label. Paper: “iDLG: Improved Deep Leakage from Gradients”; link: https://arxiv.org/abs/2001.02610

Parameters
  • original_gradient – the gradient of the FL model; Type: list

  • num_class – the total number of classes in the data

  • is_one_hot_label – whether the dataset’s label is in one-hot form; Type: bool
Returns

The recovered label by iDLG trick.
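
A usage sketch; original_gradient is a placeholder for the gradient list received from a client, and num_class=10 assumes a 10-class task:

    from federatedscope.attack.auxiliary import iDLG_trick

    recovered_label = iDLG_trick(original_gradient,
                                 num_class=10,
                                 is_one_hot_label=False)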

federatedscope.attack.auxiliary.selectTrigger(img, height, width, distance, trig_h, trig_w, triggerType, load_path)[source]

Return the image with the selected trigger applied; Type: np.array with values in [0, 255] and shape (height, width, channel).

federatedscope.attack.trainer

federatedscope.attack.trainer.hood_on_fit_start_generator(ctx)[source]

Count the FL training round before fitting.

Parameters

ctx – the trainer context

Returns:

federatedscope.attack.trainer.hook_on_batch_forward_injected_data(ctx)[source]

Inject the generated data into the training batch loss.

Parameters

ctx – the trainer context

Returns:

federatedscope.attack.trainer.hook_on_batch_injected_data_generation(ctx)[source]

Generate the injected data.

federatedscope.attack.trainer.wrap_GANTrainer(base_trainer: Type[GeneralTorchTrainer]) → Type[GeneralTorchTrainer][source]

Wrap the trainer for the GAN-based class representative attack.

Parameters

base_trainer – Type: core.trainers.GeneralTorchTrainer

Returns

The wrapped trainer; Type: core.trainers.GeneralTorchTrainer
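
A wrapping sketch; base_trainer is a placeholder for an existing GeneralTorchTrainer instance (the same pattern applies to the other wrappers below):

    from federatedscope.attack.trainer import wrap_GANTrainer

    attack_trainer = wrap_GANTrainer(base_trainer)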

federatedscope.attack.trainer.wrap_GaussianAttackTrainer(base_trainer: Type[GeneralTorchTrainer]) → Type[GeneralTorchTrainer][source]

Wrap the trainer for the Gaussian attack.

Parameters

base_trainer – Type: core.trainers.GeneralTorchTrainer

Returns

The wrapped trainer; Type: core.trainers.GeneralTorchTrainer

federatedscope.attack.trainer.wrap_GradientAscentTrainer(base_trainer: Type[GeneralTorchTrainer]) → Type[GeneralTorchTrainer][source]

Wrap the trainer for the gradient ascent attack.

Parameters

base_trainer – Type: core.trainers.GeneralTorchTrainer

Returns

The wrapped trainer; Type: core.trainers.GeneralTorchTrainer

federatedscope.attack.trainer.wrap_benignTrainer(base_trainer: Type[GeneralTorchTrainer]) → Type[GeneralTorchTrainer][source]

Wrap the benign trainer for the backdoor attack: we just add the normalization operation.

Parameters

base_trainer – Type: core.trainers.GeneralTorchTrainer

Returns

The wrapped trainer; Type: core.trainers.GeneralTorchTrainer