SafetensorError: Error while deserializing header: HeaderTooLarge #9
Comments
Got the same issue.
@ShaunXZ @Pirog17000 This issue may help. I also hit this problem before and solved it by re-downloading the model. Please check whether the base model and the safetensors file were downloaded correctly; if the file size is too small, the download is probably corrupt. By the way, can you share your Colab link so that I can take a look for you?
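A quick way to sanity-check a downloaded .safetensors file is to inspect its header directly. This is a rough sketch (the function name and size threshold are my own, not part of any script in this thread): the safetensors format starts with an 8-byte little-endian length followed by a JSON header, and an implausibly large length (typical when an HTML error page or a Git LFS pointer was saved in place of the real weights) is exactly what produces HeaderTooLarge.

```python
import json
import struct

def check_safetensors_header(path):
    """Return True if the file starts with a plausible safetensors header.

    The first 8 bytes are a little-endian u64 giving the JSON header
    length; a corrupt download (HTML page, LFS pointer) yields garbage
    here, which safetensors reports as HeaderTooLarge.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        if header_len > 100_000_000:  # implausibly large -> corrupt download
            return False
        try:
            json.loads(f.read(header_len))
            return True
        except (json.JSONDecodeError, UnicodeDecodeError):
            return False
```

If this returns False, re-downloading the file is the likely fix.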
@haofanwang Thank you for your quick response. I double checked the downloaded safetensor file and it seems to have the right size (over 100Mb). Below is the colab used to the test this script: Thanks, |
My issue was resolved by updating diffusers. Since I run it locally, my steps were a plain uninstall followed by a fresh install; no reinstall or upgrade flags helped, only the straightforward uninstall-install. No more issues, works well.
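For reference, the uninstall-install steps described above would look roughly like this in a pip environment (a sketch of the recipe, not the commenter's exact commands):

```shell
# Remove the installed diffusers package completely, then install fresh.
pip uninstall -y diffusers
pip install diffusers

# Confirm the new version is the one being imported.
python -c "import diffusers; print(diffusers.__version__)"
```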
@Pirog17000 Hi, I tried your method in Colab and it still didn't work... Could you take a look at the Colab link above? Thank you!
+1, I'm also seeing this issue 😭 It's able to create the bin, but fails when running pipeline.unet.load_attn_procs(bin_path).
According to: issue3367
I'm hitting this issue as well.
Same issue here, any solution yet?
I think your solution is right and it should work; however, I hit this error:
Hi,
I am trying to convert a LoRA from safetensors format to bin using the script in format_convert.py. The bin file is generated successfully, but it always throws a HeaderTooLarge error when loading it. Could you please help? Thanks in advance!
Below is the script that gives the above error. Env: Google Colab.