When running inference, I loaded the pre-trained weights using the following method:
and loaded the pipeline in the following way:
However, I encountered an error when running it:

> ValueError: It seems like you have activated sequential model offloading by calling `enable_sequential_cpu_offload`, but are now attempting to move the pipeline to GPU. This is not compatible with offloading. Please, move your pipeline `.to('cpu')` or consider removing the move altogether if you use sequential offloading.

Strangely, I never manually enabled sequential model offloading. I also found that if I don't make that call and instead directly set `self.transformer = self.base_transformer`, leaving the subsequent code unchanged, the error above does not occur.

Has anyone else encountered the same issue?
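For context on why the error can appear without ever calling `enable_sequential_cpu_offload`: diffusers decides that sequential offloading is active by inspecting the accelerate hooks attached to the pipeline's modules, so a module that still carries a stale offload hook (for example, a transformer that was wrapped or swapped after hooks were attached) can make `pipe.to("cuda")` raise. The following is a simplified, stdlib-only sketch of that detection logic; the class names mimic accelerate's `AlignDevicesHook`, and this is an illustration of the mechanism, not diffusers' actual code:

```python
class AlignDevicesHook:
    """Toy stand-in for accelerate.hooks.AlignDevicesHook."""
    def __init__(self, offload: bool):
        self.offload = offload


class Module:
    """Toy stand-in for a torch.nn.Module that may carry an accelerate hook."""
    def __init__(self, hook=None):
        if hook is not None:
            self._hf_hook = hook


def is_sequentially_offloaded(pipeline_modules) -> bool:
    # diffusers' .to() performs a check along these lines: if any module
    # carries an offloading AlignDevicesHook, moving the pipeline to GPU
    # is refused with the ValueError quoted above.
    return any(
        isinstance(getattr(m, "_hf_hook", None), AlignDevicesHook)
        and getattr(m._hf_hook, "offload", False)
        for m in pipeline_modules
    )


def pipeline_to_gpu(pipeline_modules) -> str:
    if is_sequentially_offloaded(pipeline_modules):
        raise ValueError(
            "It seems like you have activated sequential model offloading ..."
        )
    return "moved to cuda"


clean = [Module(), Module()]
stale = [Module(), Module(hook=AlignDevicesHook(offload=True))]

print(pipeline_to_gpu(clean))       # succeeds: no offload hooks present
try:
    pipeline_to_gpu(stale)          # stale hook triggers the error
except ValueError as exc:
    print("raised:", type(exc).__name__)
```

If a stale hook turns out to be the cause, removing hooks from the affected model with `accelerate.hooks.remove_hook_from_module(model, recurse=True)` before calling `.to("cuda")`, or moving the pipeline `.to("cpu")` first, may work around it; this is a guess at the root cause, not a confirmed fix.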
System Info
diffusers==0.33.0.dev0
peft==0.14.0
@yiyixuxu @sayakpaul @DN6