How to load the model weights from a checkpoint when we didn't call self.save_hyperparameters()? #19290
-
As the title says, I wonder what I should do to load only the model weights as the starting point for finetuning. Secondly, I rely on the automatic saving mechanism (no callback, no self.save_hyperparameters(), no manual torch.save), and I can find the checkpoint under the log folder. I don't know exactly what the checkpoint contains, but I assume it contains everything, including the model weights. When I execute the following code, it runs smoothly without any error, but I am afraid the pre-trained weights are not actually loaded into modelA and modelB.
Since the PyTorch Lightning documentation describes:
So, will the pretrained weights be overridden by the randomly initialized modelA and modelB weights?
Besides, if I want to finetune, how can I load only the pretrained weights (modelA, modelB) and discard the other information, such as the learning-rate and epoch states? Any suggestion will be appreciated! (PS: the PL documentation on Checkpoint Loading doesn't seem helpful in this case.)
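For reference, here is a minimal sketch of the kind of setup I mean; `LitFinetune`, `pretrained.ckpt`, and the layer sizes are placeholders, not my actual code:

```python
import torch
import lightning.pytorch as pl  # on older versions: import pytorch_lightning as pl


class LitFinetune(pl.LightningModule):
    def __init__(self, lr: float = 1e-4):
        super().__init__()
        self.modelA = torch.nn.Linear(128, 64)
        self.modelB = torch.nn.Linear(64, 10)
        self.lr = lr

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.modelB(self.modelA(x)), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)


# Option 1: rebuild the module from the checkpoint file. Because
# save_hyperparameters() was never called, any required __init__ arguments
# must be passed explicitly; load_from_checkpoint forwards them to __init__.
model = LitFinetune.load_from_checkpoint("pretrained.ckpt", lr=1e-5)

# Option 2: build the module yourself and copy in only the weights.
model = LitFinetune(lr=1e-5)
ckpt = torch.load("pretrained.ckpt", map_location="cpu")
model.load_state_dict(ckpt["state_dict"])

# Either way, only the weights carry over. Calling trainer.fit(model)
# WITHOUT ckpt_path=... starts with a fresh optimizer and epoch counter,
# which is the usual finetuning setup; passing ckpt_path would resume
# the old optimizer/epoch state as well.
# trainer = pl.Trainer(max_epochs=5)
# trainer.fit(model, train_dataloaders=my_finetune_loader)
```

Option 2 never touches the saved hyperparameters at all, which is why it works even without save_hyperparameters().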
Replies: 2 comments 3 replies
-
I have a similar question waiting for answers.
-
I think no.
The LightningModule `load_from_checkpoint` method is like this:
and the description of `strict` is:
> strict: Whether to strictly enforce that the keys in :attr:`checkpoint_path` match the keys returned by this module's state dict.
Since your LightningModule checkpoint (`ckpt`) contains both models (you can print your ckpt to check this), and the default `strict` is `True` …
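To see this, you can print the checkpoint contents directly (a quick sketch; `pretrained.ckpt` is an illustrative path):

```python
import torch

# Inspect what the auto-saved checkpoint actually contains.
ckpt = torch.load("pretrained.ckpt", map_location="cpu")
print(ckpt.keys())                # typically: state_dict, optimizer_states, epoch, ...
print(ckpt["state_dict"].keys())  # e.g. modelA.weight, modelA.bias, modelB.weight, ...

# With the default strict=True, load_from_checkpoint() ends up calling
# load_state_dict(), which raises an error if any of these keys were missing
# or unexpected -- so the weights could not be silently skipped.
```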