
Add Flower Baseline: Ditto #3756

Open
6 of 14 tasks
oscardilley opened this issue Jul 9, 2024 · 11 comments
Labels
feature request This issue or comment suggests an additional feature. good first issue Good for newcomers part: baselines Add or update baseline stale If issue/PR hasn't been updated within 3 weeks.

Comments

@oscardilley

oscardilley commented Jul 9, 2024

Paper

"Ditto: Fair and Robust Federated Learning Through Personalization" by Tian Li, Shengyuan Hu, Ahmad Beirami, Virginia Smith

Link

https://arxiv.org/abs/2012.04221
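
For quick reference, the per-client personalization objective Ditto optimizes (with $w^*$ the solution of the global objective, e.g. via FedAvg, and $\lambda$ the regularization strength controlling how far the personalized model $v_k$ may drift from the global model) is:

$$\min_{v_k}\; h_k(v_k;\, w^*) \;=\; F_k(v_k) \;+\; \frac{\lambda}{2}\,\lVert v_k - w^*\rVert^2$$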

Maybe give motivations about why the paper should be implemented as a baseline.

From what I can see it was proposed for the summer of reproducibility but has not yet been contributed:

#2026 (comment)

I have implemented this as a part of other research and would like to contribute the work.

Is there something else you want to add?

No response

Implementation

To implement this baseline, it is recommended to complete the following items in order:

For first time contributors

Prepare - understand the scope

  • Read the paper linked above
  • Decide which experiments you'd like to reproduce. The more the better!
  • Follow the steps outlined in Add a new Flower Baseline.
  • You can use as reference other baselines that the community merged following those steps.

Verify your implementation

  • Follow the steps indicated in the EXTENDED_README.md that was created in your baseline directory
  • Ensure your code reproduces the results for the experiments you chose
  • Ensure your README.md can be followed by someone who is not familiar with your code. Are all step-by-step instructions clear?
  • Ensure the formatting and typing tests for your baseline run without errors.
  • Clone your repo into a new directory, follow the guide in your own README.md, and verify everything runs.
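
To make the core mechanic concrete, here is a minimal NumPy sketch of the personalized update Ditto's solver performs (gradient step on the local loss plus the proximal term pulling toward the global model). All function and variable names here are my own, and the toy quadratic loss is only for illustration; this is a sketch, not the baseline implementation.

```python
import numpy as np

def ditto_personal_step(v_k, w_global, grad_fn, lam, lr):
    """One personalized step: v_k <- v_k - lr * (grad F_k(v_k) + lam * (v_k - w_global))."""
    return v_k - lr * (grad_fn(v_k) + lam * (v_k - w_global))

# Toy quadratic local loss F_k(v) = 0.5 * ||v - c||^2, so grad F_k(v) = v - c
c = np.array([1.0, -2.0])
grad_fn = lambda v: v - c

v = np.zeros(2)           # personalized model for client k
w = np.array([0.5, 0.5])  # latest global model (held fixed during personalization)
for _ in range(200):
    v = ditto_personal_step(v, w, grad_fn, lam=0.1, lr=0.1)

# Fixed point: grad F_k(v) + lam * (v - w) = 0  =>  v = (c + lam * w) / (1 + lam)
expected = (c + 0.1 * w) / 1.1
```

For this quadratic, the iterates contract toward the closed-form fixed point, which shows how `lam` interpolates between the purely local optimum (`lam = 0`) and the global model (`lam` large).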
@jafermarq
Contributor

Hi @oscardilley, you are right, Ditto was one of the baselines proposed for the Summer of Reproducibility but it wasn't completed. Would you like to add it to Flower Baselines? Do you need any help?

@oscardilley
Author

Hey @jafermarq, yes, I would really like to contribute it as a baseline - I have started reconfiguring it into the required format. I think I am getting on well so far, but is this the best channel to ask any future questions I may have?

@oscardilley
Author

oscardilley commented Jul 17, 2024 via email

@oscardilley
Author

Hi, @jafermarq, please could you advise on this?

@mercurius80

Hi @oscardilley, I am new here but already work in the ML field with Flower. I would like to contribute. Do you need any help? Where can I start?
Cheers

@WilliamLindskog WilliamLindskog added stale If issue/PR hasn't been updated within 3 weeks. part: baselines Add or update baseline feature request This issue or comment suggests an additional feature. and removed new baseline labels Dec 11, 2024
@WilliamLindskog
Contributor

Hi @oscardilley, @mercurius80,

Just checking in here. Are there any updates or is there anything we can do to help?

@WilliamLindskog
Contributor

It seems that in their repo they treat the learning rate as a single value (local and global are the same); I can't find code that distinguishes between the two. 20 clients seems to be the default, and they sample 10 of them (a fraction of 50%). Local iterations seem to be set to 2, and something called finetune-iters is 40. You might have already seen this?
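
Reading those observations back as a config sketch (the key names below are illustrative, not the repo's actual flags):

```python
# My reading of the defaults in the authors' repo (key names are mine, not the repo's flags)
ditto_defaults = {
    "num_clients": 20,     # total clients
    "fraction_fit": 0.5,   # 10 of 20 sampled per round
    "local_iters": 2,      # local training iterations per round
    "finetune_iters": 40,  # "finetune-iters" in their code
    # A single learning rate appears to be shared by the local (personalized)
    # and global updates; the value depends on the dataset, so it is left unset here.
    "learning_rate": None,
}

clients_per_round = int(ditto_defaults["num_clients"] * ditto_defaults["fraction_fit"])
```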

@oscardilley
Author

oscardilley commented Jan 29, 2025

> It seems that in their repo, they treat the learning rate as one (local and global are the same), I can't find code that distinguishes between them both. 20 clients seem to be default but that they sample 10 of them (fraction 50%). Local iterations seem to be set to 2 and something called finetune-iters is 40. You might have already seen this?

Hi @WilliamLindskog, thanks for reaching out. Sorry for the delay on this, I have been busy with work. Good spot on those parameters. Following up on a previous question of mine: for this to be an acceptable reproduction/baseline, how many, and which, of the plots from the paper would it be ideal to reproduce?

I am hoping to have some time to pick this back up in April.

@WilliamLindskog
Contributor

@jafermarq any thoughts on how many plots? The more the merrier, but I'd say Figures 4 and 6 should be included.

@jafermarq
Contributor

@oscardilley @WilliamLindskog with Baselines we typically aim to have the core results reproduced. In the case of Ditto, I see those in Table 1. Having a plot of these (or even additional experiments) is nice, but it is probably better to focus on the table. It should be fine to cover adversary ratios of 20% and 80%. How does that sound?
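
For the adversary ratios, here is a small sketch of how one might deterministically flag a fixed fraction of clients as malicious before applying one of the paper's attacks (e.g. label flipping). All names here are mine, purely for illustration:

```python
import random

def select_adversaries(client_ids, ratio, seed=0):
    """Deterministically mark a fixed fraction of clients as adversarial."""
    rng = random.Random(seed)  # seeded so the split is reproducible across runs
    k = round(len(client_ids) * ratio)
    return set(rng.sample(client_ids, k))

clients = list(range(20))
adv_20 = select_adversaries(clients, 0.2)  # 4 of 20 clients adversarial
adv_80 = select_adversaries(clients, 0.8)  # 16 of 20 clients adversarial
```

Fixing the seed keeps the benign/adversarial split identical between the 20% and 80% experiments, which makes the Table 1 comparisons cleaner.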

@oscardilley
Author

oscardilley commented Jan 29, 2025 via email
