Hello... I was wondering, how is this project different from Big Sleep?
Does it improve on it in some way?
I'm asking because I've been trying to find projects similar to Big Sleep: unfortunately my GPU's VRAM isn't enough to run it, so I've been looking for a different, more "modest" implementation... :)
When I say I'm low on VRAM, I mean desperately low 😅 (2 GB!). BUT! Since I successfully managed to run Deep Daze, which works similarly but uses CLIP with a SIREN in place of a BigGAN, I haven't lost all hope yet...! :)
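(To frame my question, here's my mental model of the recipe both projects seem to share, as a toy sketch. The SIREN and the loop below are simplified placeholders I wrote for illustration, not actual code from either repo:)

```python
# Toy sketch of the loop I *think* both projects share: a generator renders an
# image (a SIREN here, BigGAN in Big Sleep), CLIP scores it against the text
# prompt, and the score is backpropagated into the generator's weights.
import torch
import torch.nn as nn
import clip  # pip install git+https://github.com/openai/CLIP.git

device = 'cuda' if torch.cuda.is_available() else 'cpu'
perceptor, _ = clip.load('ViT-B/32', device=device)
perceptor = perceptor.float()  # avoid fp16 surprises in this toy example

with torch.no_grad():
    text_embed = perceptor.encode_text(clip.tokenize(['a starry night']).to(device))

class Siren(nn.Module):
    """Toy SIREN stand-in: maps (x, y) coordinates to RGB via sine activations."""
    def __init__(self, hidden=64, depth=4):
        super().__init__()
        dims = [2] + [hidden] * depth + [3]
        self.layers = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims, dims[1:]))

    def forward(self, coords):
        x = coords
        for layer in self.layers[:-1]:
            x = torch.sin(30.0 * layer(x))
        return torch.sigmoid(self.layers[-1](x))

side = 64  # deliberately tiny so it fits in a very small VRAM budget
ys, xs = torch.meshgrid(torch.linspace(-1, 1, side),
                        torch.linspace(-1, 1, side), indexing='ij')
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2).to(device)

siren = Siren().to(device)
opt = torch.optim.Adam(siren.parameters(), lr=1e-3)

for step in range(200):
    img = siren(coords).reshape(1, side, side, 3).permute(0, 3, 1, 2)  # NCHW
    # CLIP expects 224x224 input (proper CLIP normalization omitted for brevity)
    img224 = nn.functional.interpolate(img, size=224, mode='bilinear',
                                       align_corners=False)
    image_embed = perceptor.encode_image(img224)
    # maximize cosine similarity between the image and text embeddings
    loss = -torch.cosine_similarity(image_embed, text_embed, dim=-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```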
So yeah, if you have any pointers about your implementation and how it differs from Big Sleep, and most importantly whether there could be a way to tune it to work with such a low amount of VRAM (even at ridiculously low resolutions, it doesn't matter), that would be hugely appreciated!
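For context, this is roughly the kind of knob-turning that got Deep Daze running on my 2 GB card. Just a sketch: the parameter names are the ones I recall from the deep-daze README, so please double-check them against whatever version is installed:

```python
# Rough sketch of how I squeezed Deep Daze into ~2 GB of VRAM.
# NOTE: parameter names are taken from the deep-daze README as I remember
# them — treat them as assumptions and verify against your installed version.
from deep_daze import Imagine

imagine = Imagine(
    text = 'a starry night over a quiet village',  # any prompt
    image_width = 128,              # far below the 512 default; the biggest saving
    num_layers = 8,                 # a shallower SIREN uses much less memory
    batch_size = 1,                 # minimize per-step memory
    gradient_accumulate_every = 4,  # fake a bigger batch without the VRAM cost
)
imagine()
```

Maybe something analogous (lower resolution, a smaller or shallower model) could work for this project too?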
Also, yes, I know I could use Google Colab! But witnessing the ML magic happen right on your own machine makes for a totally different experience... ;)
And yes, I want to get a new, decent GPU as soon as possible! But I'd feel so stupid paying three times its actual cost just because the IT hardware market is fucked up (and keeps staying that way...)
So...
Now you know it all :)