Pipeline fails to move to "cuda" if one of the components is a PeftModel #10403
Labels
bug: Something isn't working
needs-code-example: Waiting for relevant code example to be provided
stale: Issues that haven't received updates
When running inference, I loaded the pre-trained weights using the following method:
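A minimal sketch of this step, assuming the adapter is attached with peft's `PeftModel.from_pretrained`; the model name, adapter path, and the Flux transformer class are placeholders used purely for illustration:

```python
import torch
from diffusers import FluxTransformer2DModel
from peft import PeftModel

# Load the base transformer component (placeholder model name).
base_transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
)
# Wrap the base transformer with trained PEFT adapter weights (placeholder path).
transformer = PeftModel.from_pretrained(base_transformer, "path/to/peft-adapter")
```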
and loaded the pipeline in the following way:
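A sketch of this step under the same assumptions as above; the pipeline class is a placeholder for whatever pipeline the report actually uses:

```python
from diffusers import FluxPipeline

# Build the pipeline with the PeftModel-wrapped transformer as one of its components.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,  # PeftModel-wrapped component
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # this is where the ValueError below is raised
```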
However, I encountered an error when running it:
ValueError: It seems like you have activated sequential model offloading by calling `enable_sequential_cpu_offload`, but are now attempting to move the pipeline to GPU. This is not compatible with offloading. Please, move your pipeline `.to('cpu')` or consider removing the move altogether if you use sequential offloading.
Strangely, I have never manually enabled sequential model offloading. I also found that if I skip the PeftModel wrapping and instead directly set `self.transformer = self.base_transformer`, with the subsequent code unchanged, the error does not occur. Has anyone else encountered the same issue?
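For reference, the workaround described above, expressed in the same hypothetical setup as the earlier sketches:

```python
# Passing the un-wrapped base transformer instead of the PeftModel avoids the error.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=base_transformer,  # plain base module, no PeftModel wrapper
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # moves to GPU without raising
```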
System Info
diffusers==0.33.0.dev0
peft==0.14.0
@yiyixuxu @sayakpaul @DN6