
Code for Paper "Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning"


We provide a simple demonstration of F-FMPA (feel free to migrate it to other FL scenarios). It launches precise model poisoning attacks (MPAs) in federated learning.
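For readers new to MPAs, the sketch below illustrates the general idea in a toy FedAvg round: a malicious client trains on flipped labels and boosts its update so it dominates the server's average. This is a generic, hypothetical illustration (the function names and the naive boosting strategy are ours, not the paper's F-FMPA algorithm); see **FMPA.py** for the actual attack.

```python
# Generic sketch of a model poisoning attack in FedAvg.
# Hypothetical illustration only -- NOT the paper's F-FMPA method.
import torch
import torch.nn as nn

def client_update(model: nn.Module, data, target, lr=0.1):
    """One local SGD step; returns the weight delta (update)."""
    loss = nn.functional.cross_entropy(model(data), target)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return [-lr * g for g in grads]

def poison(update, boost=10.0):
    """Naive boosting: scale the malicious update so it survives
    averaging with benign updates (assumed strategy, for illustration)."""
    return [boost * u for u in update]

def fedavg_step(model: nn.Module, updates):
    """Server side: average the client updates and apply them."""
    with torch.no_grad():
        for i, p in enumerate(model.parameters()):
            p += torch.stack([u[i] for u in updates]).mean(dim=0)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Linear(4, 2)                  # toy global model
    data = torch.randn(8, 4)
    target = torch.randint(0, 2, (8,))

    benign = [client_update(model, data, target) for _ in range(4)]
    # Attacker trains on flipped labels, then boosts its update.
    malicious = poison(client_update(model, data, 1 - target))
    fedavg_step(model, benign + [malicious])
    print("round complete; the boosted malicious update dominated the average")
```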

If you have any questions, feel free to open an issue.

Test:

  • run **FMPA.py** to attack federated learning (see the invocation note below).
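Assuming a standard script entry point, the demo should start with `python FMPA.py`; any command-line options (dataset, number of clients, attacker fraction, etc.) are defined in the script itself.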

Dependencies:

python==3.6.13

pytorch==1.10.2

torchvision==0.9.0

numpy==1.19.5

pandas==1.1.5

pickleshare==0.7.4
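The pure-Python pins can be installed directly, e.g. `pip install numpy==1.19.5 pandas==1.1.5 pickleshare==0.7.4`. For the pinned PyTorch and torchvision versions, the exact wheel depends on your platform and CUDA version, so use the official PyTorch installation selector for those two packages.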

We introduce three attack primitives:

(Figure: illustration of the three attack primitives.)
