Description
When running inference, I loaded the pretrained LoRA weights as follows:
```python
from peft import PeftModel

self.transformer = PeftModel.from_pretrained(
    self.base_transformer,
    lora_model_path,
)
```
and constructed the pipeline like this:
```python
pipe = FluxControlNetPipeline(transformer=transformer, ...)
```
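For context, here is a more complete, self-contained sketch of roughly what my code does. The model IDs, the ControlNet checkpoint, and the LoRA path are placeholders, not my actual setup:
```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline, FluxTransformer2DModel
from peft import PeftModel

# Placeholders for my actual checkpoints and paths.
base_model_id = "black-forest-labs/FLUX.1-dev"
controlnet_id = "InstantX/FLUX.1-dev-Controlnet-Canny"
lora_model_path = "/path/to/lora"

# Load the base transformer and wrap it with the trained LoRA adapter.
base_transformer = FluxTransformer2DModel.from_pretrained(
    base_model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
transformer = PeftModel.from_pretrained(base_transformer, lora_model_path)

controlnet = FluxControlNetModel.from_pretrained(controlnet_id, torch_dtype=torch.bfloat16)

# Build the pipeline around the wrapped transformer; the remaining
# components come from the base checkpoint.
pipe = FluxControlNetPipeline.from_pretrained(
    base_model_id,
    transformer=transformer,
    controlnet=controlnet,
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # <- this is where the ValueError below is raised
```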
However, running this raises the following error:
```
ValueError: It seems like you have activated sequential model offloading by calling `enable_sequential_cpu_offload`, but are now attempting to move the pipeline to GPU. This is not compatible with offloading. Please, move your pipeline `.to('cpu')` or consider removing the move altogether if you use sequential offloading.
```
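Since I never call `enable_sequential_cpu_offload`, I tried to see where diffusers gets that impression. As far as I understand, `pipe.to(...)` looks for accelerate hooks (the `_hf_hook` attribute) on the components' modules, so I used this small diagnostic sketch (names as in the snippet above) to list any hooks attached to the transformer:
```python
from accelerate.hooks import AlignDevicesHook

# List every submodule of the transformer that carries an accelerate hook.
# As far as I can tell, an attached AlignDevicesHook with offloading enabled
# is what makes diffusers' .to() believe sequential offloading is active.
for name, module in transformer.named_modules():
    hook = getattr(module, "_hf_hook", None)
    if hook is not None:
        is_offload_hook = isinstance(hook, AlignDevicesHook) and getattr(hook, "offload", False)
        print(f"{name or '<root>'}: {type(hook).__name__} (offload={is_offload_hook})")
```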
Strangely, I never enabled sequential model offloading anywhere. I also found that if I skip the
```python
self.transformer = PeftModel.from_pretrained(
    self.base_transformer,
    lora_model_path,
)
```
call and instead set `self.transformer = self.base_transformer` directly, leaving the rest of the code unchanged, the error does not occur.
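A possible workaround I am considering is to skip the `PeftModel` wrapper entirely and let diffusers attach the LoRA itself via `load_lora_weights` (names as in the sketch above). Whether `FluxControlNetPipeline` can load this particular checkpoint that way is an assumption on my part:
```python
# Sketch of a workaround: build the pipeline from the plain base
# transformer, then attach the LoRA through diffusers' own loader
# instead of wrapping the transformer with peft's PeftModel.
pipe = FluxControlNetPipeline.from_pretrained(
    base_model_id,
    transformer=base_transformer,
    controlnet=controlnet,
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights(lora_model_path)  # assumes a diffusers/peft-format LoRA
pipe.to("cuda")
```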
Has anyone else encountered this issue?
System Info
diffusers==0.33.0.dev0
peft==0.14.0