Is there an existing issue for this?

I have searched the existing issues and checked the recent builds/commits.
While taking some time away from my workstation, I decided to dust off my old notebook (i5-9300H, GTX 1050, 3 GB VRAM, 16 GB RAM) and take it with me. In the process, I quantized some of my favorite SDXL models so I could use them on the notebook. As it turned out, I enjoyed generating images far more than I had expected. It occurred to me that, for people with less than 8 GB of VRAM, which is the norm outside the AI sphere, Fooocus could be the tool that brings the joy of AI image generation without requiring an 8 GB card or a better machine.
With Flux on the scene, it feels like SDXL has been relegated to the second tier, but Flux's resource requirements are quite large. As far as I know, the vast majority of people don't have a GPU with 8 GB of VRAM or more. So why not use this opportunity to make Fooocus a bridge for those people to experience the joy of AI image generation?
A Q5_K_S SDXL fine-tune is 1.76 GB, while a Q4_K_S SDXL fine-tune is only 1.45 GB, making it possible for nearly anyone with a GPU in their machine to run it.
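For reference, here is a minimal sketch (not part of Fooocus itself) of how one could inspect which quantization types a GGUF SDXL checkpoint uses and how much each contributes to the file size. It assumes the `gguf` Python package from the llama.cpp project is installed; the filename is only an example.

```python
# Sketch: summarize per-tensor quantization types and sizes in a GGUF file.
# Assumes `pip install gguf`; the path below is a placeholder, not a real release.
from collections import defaultdict
from gguf import GGUFReader

def summarize_gguf(path: str) -> None:
    reader = GGUFReader(path)
    bytes_per_type = defaultdict(int)
    for tensor in reader.tensors:
        # tensor_type is a GGML quantization enum (e.g. Q5_K, Q4_K, F16)
        bytes_per_type[tensor.tensor_type.name] += int(tensor.n_bytes)
    total = sum(bytes_per_type.values())
    for qtype, size in sorted(bytes_per_type.items(), key=lambda kv: -kv[1]):
        print(f"{qtype:>8}: {size / 2**30:.2f} GiB")
    print(f"{'total':>8}: {total / 2**30:.2f} GiB")

if __name__ == "__main__":
    summarize_gguf("sdxl_finetune-Q4_K_S.gguf")  # example filename
```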