finetune_taiyi_stable_diffusion — simplest approach: can the CLIPTextModel and CLIPTokenizer be trained separately and published on huggingface, then loaded with the code below? #462

Open
gg22mm opened this issue Jun 21, 2024 · 0 comments

gg22mm commented Jun 21, 2024

For finetune_taiyi_stable_diffusion, wouldn't the simplest approach be to train the CLIPTextModel and CLIPTokenizer separately and publish them on huggingface? Then they could be loaded with the following code:

from diffusers import StableDiffusionPipeline
from transformers import CLIPTextModel, CLIPTokenizer

# Option 2: load the Chinese text encoder and tokenizer from the Taiyi repo.
# Note: subfolders inside a Hub repo are selected with the `subfolder` argument,
# not by appending the folder name to the repo id.
text_encoder = CLIPTextModel.from_pretrained(
    "IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1", subfolder="text_encoder"
)
tokenizer = CLIPTokenizer.from_pretrained(
    "IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1", subfolder="tokenizer"
)

# Swap the Chinese text encoder and tokenizer into the standard SD v1.4 pipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    text_encoder=text_encoder,
    tokenizer=tokenizer,
)
pipe.to("cpu")

image = pipe(prompt="一只猫咪").images[0]  # prompt: "a kitten"
image.save("cat2.png")
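For the "publish on huggingface" half of the question, the fine-tuned text encoder and tokenizer can be pushed to the Hub as a standalone repo and then loaded by repo id, just like in the snippet above. A minimal sketch, assuming a fine-tuned text_encoder/tokenizer pair saved locally; the local path and the repo id your-username/taiyi-text-encoder-finetuned are hypothetical placeholders:

from transformers import CLIPTextModel, CLIPTokenizer

# Load the locally fine-tuned weights (hypothetical paths)
text_encoder = CLIPTextModel.from_pretrained("path/to/finetuned/text_encoder")
tokenizer = CLIPTokenizer.from_pretrained("path/to/finetuned/tokenizer")

# Publish both to a single Hub repo (requires `huggingface-cli login` beforehand;
# the repo id here is a placeholder, not an existing repo)
text_encoder.push_to_hub("your-username/taiyi-text-encoder-finetuned")
tokenizer.push_to_hub("your-username/taiyi-text-encoder-finetuned")

# Afterwards anyone can load them by repo id and pass them to the pipeline
text_encoder = CLIPTextModel.from_pretrained("your-username/taiyi-text-encoder-finetuned")
tokenizer = CLIPTokenizer.from_pretrained("your-username/taiyi-text-encoder-finetuned")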
