Replies: 12 comments 44 replies
-
Your article is simply amazing. I am currently researching how ZLUDA works in the various software I use, and you are like a lighthouse for navigating at night. Thank you.
-
@Grey3016 OSError: [WinError 126] The specified module could not be found. Error loading "E:\Fooocus\python_embeded\lib\site-packages\torch\lib\caffe2_nvrtc.dll" or one of its dependencies. Also, my ZLUDA folder only has cublas.dll, cusparse.dll, and nvrtc64.dll. When these replace cublas64_11.dll, cusparse64_11.dll, and nvrtc64_112_0.dll in "python_embeded\Lib\site-packages\torch\lib", do I need to rename cublas.dll, cusparse.dll, and nvrtc64.dll to cublas64_11.dll, cusparse64_11.dll, and nvrtc64_112_0.dll?
-
I got this:
-
I tested twice: the changes from steps 5-8 get reverted after performing step 9. The order needs to be changed.
-
I spent close to 3 hours doing this on my Windows Boot Camp install. At first the run .bat just opened and closed; then I got it to do something, but there were loads of errors and it just closed again.
-
I tried following the instructions but could not get Fooocus to work with ZLUDA. I have ZLUDA working with Automatic1111 and SD_NEXT, so most of the required stuff should already be there. OSError: [WinError 126] The specified module could not be found. Error loading "E:\Fooocus\python_embeded\lib\site-packages\torch\lib\caffe2_nvrtc.dll" or one of its dependencies.
-
I hope this thread is still alive.
-
I am running the Fooocus application through Google Colab, but the input image feature has not been working for a while.
-
Might you know how to get this working on Linux, by chance? I downloaded Stable Diffusion NEXT with ZLUDA working and it's pretty fast, but I have no idea how to port it to Fooocus.
-
@Grey3016 you say that this guide has been superseded by a dedicated ZLUDA Forge release, but the guide you are referring to is for stock Stable Diffusion and not Fooocus, as far as I can see - if that is what you're talking about. If that's not it, then what page might you be referring to?
-
Fooocus with ZLUDA still works. I updated my guide for the latest version.
-
Firstly, this guide is aimed mainly at current users of ZLUDA on SDNext or elsewhere (or the new fork of Forge with ZLUDA). In my observation, the main reason it fails is incorrect PATH settings - they must be right or everything falls over very quickly.
I would advise using Notepad++ for editing the Python files, as it's far easier to keep the code aligned (misaligned code can also break things).
Given the extra installs needed at the start, it's not practical for me to fork this - I lack the Python skills.
1. Make sure ZLUDA runs OK on SDNext, following https://github.com/vladmandic/automatic/wiki/ZLUDA
2. Download Fooocus and unzip it: https://github.com/lllyasviel/Fooocus
3. Navigate to "Fooocus/ldm_patched/modules/"
4. Open "model_management.py", preferably in Notepad++
5. Find this line (line 216):
FORCE_FP32 = False
6. Change the line to:
FORCE_FP32 = True
(This allows the GPT-2-based Fooocus Expansion V2 to work - alternatively, you can add a startup argument for this.)
7. Find these lines (lines 259-262).
8. Replace those lines by copy/pasting the following over them (this stops errors from CUDA functions that don't work yet, enables the bits that do, and disables PyTorch attention, as ZLUDA doesn't support it). A rough sketch of the kind of change is shown below.
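(The exact replacement block is the one given in the original post; purely as a hedged illustration, here is a minimal Python sketch of the kind of settings such a patch typically applies under ZLUDA - disabling cuDNN and the flash / memory-efficient scaled-dot-product attention backends so PyTorch falls back to the plain math path. Treat the specific calls below as my assumption, not as the author's exact code.)
import torch

# Illustrative only: ZLUDA does not support cuDNN or the flash /
# memory-efficient SDP kernels, so force the plain math attention backend.
torch.backends.cudnn.enabled = False
torch.backends.cuda.enable_flash_sdp(False)
torch.backends.cuda.enable_mem_efficient_sdp(False)
torch.backends.cuda.enable_math_sdp(True)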
9. Uninstall the torch packages that don't work, then install the needed torch requirements - open a cmd window from the main Fooocus folder (the one containing the python_embeded folder). An example of the usual commands is sketched below.
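(A minimal sketch of what this step usually looks like, assuming the standard Fooocus embedded-Python layout and a CUDA 11.8 build of torch - the index URL and the package list are my assumptions based on typical ZLUDA setups, so follow whatever versions the ZLUDA wiki specifies:)
rem Run from the main Fooocus folder so python_embeded resolves correctly
python_embeded\python.exe -m pip uninstall -y torch torchvision torchaudio
python_embeded\python.exe -m pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118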
10. As per the older ZLUDA instructions, replace cublas64_11.dll, cusparse64_11.dll, and nvrtc64_112_0.dll in "python_embeded\Lib\site-packages\torch\lib" with the ones from the ZLUDA folder (replace and rename - check the ZLUDA wiki). A sketch of the copy/rename is shown below.
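(A sketch of the replace-and-rename, assuming the ZLUDA release folder sits next to the Fooocus folders and ships cublas.dll, cusparse.dll, and nvrtc.dll - source file names vary between ZLUDA releases, so match them to what your download actually contains:)
rem Run from the main Fooocus folder; the ZLUDA DLLs take over the names PyTorch expects
copy /y zluda\cublas.dll python_embeded\Lib\site-packages\torch\lib\cublas64_11.dll
copy /y zluda\cusparse.dll python_embeded\Lib\site-packages\torch\lib\cusparse64_11.dll
copy /y zluda\nvrtc.dll python_embeded\Lib\site-packages\torch\lib\nvrtc64_112_0.dll
This is the renaming asked about earlier in the thread: the ZLUDA DLLs are copied over the CUDA file names that PyTorch looks for.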
Start Fooocus by running any of the 3 .bat files - no startup arguments are needed (in particular it should not need
--use-zluda
to start), though you can add Fooocus's own arguments if you wish. It will download the files it wants for each of the 3 .bat files as needed.
PATH checks: open a cmd window and type
zluda
hipinfo
'zluda' should return text about its arguments.
'hipinfo' should return details of the installed HIP files - if it doesn't, you have an issue with your PATH to them.
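(If either command isn't found, one quick check - assuming the ZLUDA launcher is zluda.exe and that hipinfo.exe comes from the AMD HIP SDK bin folder - is to ask Windows where it resolves them from, which shows directly whether those folders are on your PATH:)
where zluda
where hipinfo
Each should print the full path of the executable; an error means the corresponding folder is missing from your PATH.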
This was another proof-of-concept project for me - it's far slower than SDNext on my 7900 XTX, at least by it/s (or use Lightning models): SDNext gives 7.5 it/s, while Fooocus gives 1.2-2.2 it/s depending on the speed setting.
Speed Increase
Use Turbo, Lightning, etc. models.
In step 6, leave it at fp16 (i.e. leave FORCE_FP32 = False) BUT disable Fooocus Expansion V2 (or it crashes).
ALL credit and kudos to Vosen, lshqqytiger, BrknSoul & LeagueRaNi