batch_enhancement #51
Merged
ndem0 merged 2 commits into mathLab:master from batch_enhanc on Dec 12, 2022

Conversation

dario-coscia (Collaborator)

Solving #33

ndem0 linked an issue on Dec 6, 2022 that may be closed by this pull request
pina/pinn.py Outdated
Comment on lines 214 to 215
data_loader = self._create_dataloader()
data_loader_loop = data_loader
Member

Is the dataset creation just a view of the original tensors, or does it implicitly perform new allocations?

Collaborator Author

Can you be more specific?

Member

Is it allocating the same data twice (input_pts and the dataloader), or are they just references to the same memory? Have you checked that?
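
A quick way to check this, as a minimal sketch with stand-in tensors (not PINA's actual code): torch.utils.data.TensorDataset keeps references to the tensors it wraps, so the storage pointers match and no new allocation occurs.

import torch
from torch.utils.data import TensorDataset

pts = torch.rand(100, 2)  # stand-in for input_pts[key]
dataset = TensorDataset(pts)

# Same storage pointer: the dataset references the tensor, no new allocation.
assert dataset.tensors[0].data_ptr() == pts.data_ptr()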

Collaborator Author

From my understanding, the points are allocated in input_pts; then _create_dataloader returns a dict of iterables, each iterable taking a dataset as input (one per key of the dict). A dataset stores the tensors as a private attribute, so it keeps a reference to input_pts[key] (a shallow copy), which is needed for the __getitem__ method.
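
A minimal sketch of that structure, with illustrative keys and batch size (not PINA's exact code): one DataLoader per key, where each TensorDataset only references the corresponding tensor in input_pts.

import torch
from torch.utils.data import DataLoader, TensorDataset

# stand-in for the sampled points, one tensor per key
input_pts = {'boundary': torch.rand(50, 2), 'interior': torch.rand(200, 2)}

# one iterable per key; each TensorDataset only references input_pts[key]
data_loader = {
    key: DataLoader(TensorDataset(pts), batch_size=32)
    for key, pts in input_pts.items()
}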

Collaborator Author

This is how I thought to handle the different datasets (one per key) while creating the dataloader and keeping the same training routine. Note that allocation happens on CPU; the tensors are moved to GPU in batches anyway, so in practice input_pts always stays on CPU (even when training on GPU).
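
A sketch of that pattern (illustrative, not the actual training loop): the full tensor stays on CPU inside the loader, and only each batch is transferred to the device.

import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

pts = torch.rand(1000, 2)                 # allocated on CPU
loader = DataLoader(TensorDataset(pts), batch_size=64)

for (batch,) in loader:
    batch = batch.to(device)              # only the batch moves to GPU
    # ... forward/backward pass on batch ...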

Member

Hmm, I'm lost, since you're saying it's just a shallow copy, and a few lines below you state that there are actually two copies of the same tensor.
From my knowledge, I'm not even sure it's a shallow copy. Maybe we can merge this PR and, as soon as possible, I'll retouch the span_pts method so that it directly creates the DataSet and we always use that data structure. Does that sound good to you?
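
A purely hypothetical sketch of that direction (the real span_pts signature differs; this only illustrates returning a dataset up front rather than a raw tensor):

import torch
from torch.utils.data import TensorDataset

def span_pts(n_points, dim):
    # hypothetical: sample points and wrap them in a dataset immediately,
    # so the same data structure is used everywhere downstream
    pts = torch.rand(n_points, dim)
    return TensorDataset(pts)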

Collaborator Author

Fine by me; we can open it as a new issue and I can help try to fix it.

dario-coscia (Collaborator Author)

Everything should now be in a class; the code is more readable and the names are more self-explanatory.

ndem0 merged commit dbd78c9 into mathLab:master on Dec 12, 2022
dario-coscia deleted the batch_enhanc branch on December 13, 2022 at 09:01
dario-coscia added a commit that referenced this pull request on Mar 12, 2025
Development

Successfully merging this pull request may close these issues: Batching (#33)
2 participants