[Help]: The training memory usage of valle_v2 on the LibriTTS train-360 and train-100 datasets increases (Issue #263)
Why does the CPU memory usage increase after each training epoch? As a result, I have to stop training and resume from a checkpoint every few epochs. Is this caused by the `"persistent_workers": true` setting under `train.dataloader` in the configuration file?
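For context, `persistent_workers` is a standard PyTorch `DataLoader` option: when set to `True`, worker processes are kept alive between epochs instead of being torn down, so any per-worker state (such as caches built up inside the dataset) is retained across epochs. Below is a minimal sketch of that mechanism, assuming the Amphion config maps onto a plain PyTorch `DataLoader`; the `LibriTTSDataset` class and its `cache` attribute are hypothetical stand-ins for illustration, not the actual valle_v2 code:

```python
import torch
from torch.utils.data import DataLoader, Dataset


class LibriTTSDataset(Dataset):  # hypothetical stand-in for the valle_v2 dataset
    def __init__(self):
        self.cache = {}  # per-worker cache; grows inside each worker process

    def __len__(self):
        return 100_000

    def __getitem__(self, idx):
        if idx not in self.cache:
            # Anything cached here stays in the worker's memory. With
            # persistent_workers=True the workers survive across epochs,
            # so this memory is never released at epoch boundaries.
            self.cache[idx] = torch.randn(1, 16_000)
        return self.cache[idx]


# persistent_workers=True keeps workers (and their caches) alive between
# epochs; False (the default) recreates workers each epoch, releasing
# their memory at the cost of slower epoch startup.
loader = DataLoader(
    LibriTTSDataset(),
    batch_size=8,
    num_workers=4,
    persistent_workers=False,
)
```

One way to narrow this down: if the per-epoch memory growth disappears after setting `persistent_workers` to `false` in the config, retained per-worker state is the likely culprit; if it persists, the leak is probably elsewhere, e.g. in the training loop itself.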