
Pipeline fails to move to "cuda" if one of the components is a PeftModel #10403

Open
@Erisura

Description


When running inference, I loaded the pre-trained LoRA weights as follows:

self.transformer = PeftModel.from_pretrained(
    self.base_transformer,
    lora_model_path
)

and constructed the pipeline like this:

pipe = FluxControlNetPipeline(transformer=transformer, ......)

However, I encountered an error when running it:
ValueError: It seems like you have activated sequential model offloading by calling enable_sequential_cpu_offload, but are now attempting to move the pipeline to GPU. This is not compatible with offloading. Please, move your pipeline .to('cpu') or consider removing the move altogether if you use sequential offloading.

Strangely, I never manually enabled sequential model offloading. I also found that if I skip

self.transformer = PeftModel.from_pretrained(
    self.base_transformer,
    lora_model_path
)

and instead set self.transformer = self.base_transformer directly, with the rest of the code unchanged, the error does not occur.
Has anyone else encountered this issue?
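For context on why the error can fire without offloading ever being enabled: diffusers decides a pipeline is sequentially offloaded by probing its component modules for accelerate hook attributes such as _hf_hook. One plausible mechanism for the report above (an assumption, not traced through the diffusers 0.33.0.dev0 source) is that a PeftModel-style wrapper forwards unknown attribute lookups to the wrapped base model, so a probe like hasattr(component, "_hf_hook") can answer differently for the wrapper than for a plain module. A toy sketch of that delegation pattern, with made-up classes standing in for PeftModel and for the offload check:

```python
class BaseTransformer:
    """Toy stand-in for the base model. Suppose some earlier code
    path attached an accelerate-style hook attribute to it."""
    def __init__(self):
        self._hf_hook = object()  # placeholder for e.g. an AlignDevicesHook


class PeftStyleWrapper:
    """Toy stand-in for a PeftModel-like wrapper: attribute lookups
    that fail on the wrapper are forwarded to the base model."""
    def __init__(self, base_model):
        self.base_model = base_model

    def __getattr__(self, name):
        # Only called when normal lookup fails, so a probe for
        # "_hf_hook" reaches through to the wrapped base model.
        return getattr(self.base_model, name)


def looks_sequentially_offloaded(module):
    """Simplified version of the kind of probe a pipeline's .to() runs."""
    return hasattr(module, "_hf_hook")


wrapped = PeftStyleWrapper(BaseTransformer())

# The wrapper itself never had a hook set, but the probe still fires,
# because __getattr__ delegates the lookup to the base model.
print(looks_sequentially_offloaded(wrapped))  # True
```

If this is indeed the code path, the place to look would be where the base transformer acquires its _hf_hook in the first place; but this diagnosis is a guess from the symptoms, not a confirmed root cause.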

System Info

diffusers==0.33.0.dev0
peft==0.14.0

@yiyixuxu @sayakpaul @DN6


Labels: bug (Something isn't working), needs-code-example (Waiting for relevant code example to be provided), stale (Issues that haven't received updates)
