
Running FedMA with large input data shape #5

Open
jefersonf opened this issue Jul 31, 2020 · 0 comments

Hi @hwang595, a few weeks ago I asked some questions in another issue thread about a problem I had when trying to train a model with an input image shape greater than or equal to 224x224. Since then, I have tried reducing the dimensions of my problem to the default size, i.e. 32x32, and it worked well! But when I run with 224x224, I still get stuck at the same training part.

So I'm going to ask my questions here again:

  • Is there a relationship between the training input size and the FedMA communication process? If there is, what can we do about it?
  • When plugging in a different model, which parts of the code do I need to take care of, besides changing, for example, the input dimensions to 1x224x224? (See the sketch below for the kind of change I mean.)
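
To make the second question concrete, here is a minimal toy sketch of the shape dependency I mean (plain PyTorch, not code from the FedMA repo; the class name SimpleCNN and the in_channels/input_size arguments are just illustrative):

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Toy model showing which layer shapes depend on the input size."""
    def __init__(self, in_channels=3, input_size=32, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            # in_channels changes from 3 to 1 for grayscale medical images
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # The flattened size of the last conv output depends on the input
        # resolution: 32x32 -> 64*8*8 = 4096, 224x224 -> 64*56*56 = 200704.
        feat_size = 64 * (input_size // 4) * (input_size // 4)
        self.classifier = nn.Linear(feat_size, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

# Default CIFAR-style setting: 3x32x32 input
small = SimpleCNN(in_channels=3, input_size=32)
print(small(torch.randn(1, 3, 32, 32)).shape)    # torch.Size([1, 10])

# My setting: 1x224x224 grayscale input
large = SimpleCNN(in_channels=1, input_size=224)
print(large(torch.randn(1, 1, 224, 224)).shape)  # torch.Size([1, 10])
```

So besides the first convolution's in_channels, the first fully-connected layer becomes roughly 49x larger (200704 vs. 4096 inputs), and I wonder whether that is what blows up the matching/communication step.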

Note: As I'm working with medical images, resizing them is a critical issue.

Thanks for the great work!
