Hello,
Could you please tell me which GPU cards you typically use and what their specifications are?
On this machine I have an RTX 3060 Ti with 8 GB of memory, but that does not seem to be enough to run your unsupervised STDP-based spiking neural network; I keep running out of GPU memory.
https://github.com/BrainCog-X/Brain-Cog/tree/main/examples/Perception_and_Learning/UnsupervisedSTDP
I am working my way through the wonderful BrainCog code and examples. This project shows the most promise of anything I have found for developing a full virtual brain simulation that can hopefully learn incrementally and online. I have investigated MANY other solutions; one with good potential was SOINN by Hasegawa, which uses unsupervised learning to learn incrementally.
Also, I am wondering whether you have considered adding Liquid Time-constant Networks (LTCs), which are reported to be a significant improvement over classic RNNs in that they evolve and self-adjust toward optimal solutions (a rough sketch of the LTC update follows the links below).
https://github.com/raminmh/liquid_time_constant_networks
PAPER: https://arxiv.org/abs/2006.04439
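For context, the heart of an LTC is a fused ODE-solver step that makes the effective time constant input-dependent. Below is a minimal PyTorch sketch of that update based on the paper above; the module name `LTCCell`, the layer sizes, the step size `dt`, and the single tanh layer standing in for f are all illustrative assumptions, not the authors' reference implementation (see the repo above for that).

```python
import torch
import torch.nn as nn

class LTCCell(nn.Module):
    """Sketch of one fused-solver step of a Liquid Time-constant cell:
        x(t+dt) = (x + dt * f * A) / (1 + dt * (1/tau + f)),
    where f = f(x, I; theta) is a bounded nonlinearity."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        # f drives both the liquid time constant and the coupling to A.
        self.f = nn.Sequential(
            nn.Linear(input_size + hidden_size, hidden_size),
            nn.Tanh(),
        )
        self.tau = nn.Parameter(torch.ones(hidden_size))  # base time constants
        self.A = nn.Parameter(torch.ones(hidden_size))    # equilibrium/bias vector

    def forward(self, inputs, state, dt=0.1):
        f = self.f(torch.cat([inputs, state], dim=-1))
        # Fused Euler step; the paper argues this form keeps the state bounded.
        return (state + dt * f * self.A) / (1.0 + dt * (1.0 / self.tau + f))

# Unroll over a toy sequence: 8 samples, 4 input features, 16 hidden units.
cell = LTCCell(input_size=4, hidden_size=16)
state = torch.zeros(8, 16)
for _ in range(20):
    state = cell(torch.randn(8, 4), state)
```

LTCs are unrolled over time like any other RNN cell; the repository linked above contains the authors' own implementation and experiments.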
Thanks and have a great day
Thank you for your interest in our work and for your close attention to BrainCog. If you exceed your GPU's memory capacity, you can reduce the simulation time 'T' or decrease the batch size to an appropriate range. As for LTCs, we will look into them when we have some spare time.
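To illustrate the suggestion above, here is a generic PyTorch-style sketch of the two knobs in question. The names `T` and `batch_size` follow the reply, the default values are hypothetical, and the actual hyperparameter names in the UnsupervisedSTDP example may differ.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Two knobs that usually dominate GPU memory when simulating SNNs:
#   T          - simulation time steps; per-step activations/traces mean
#                memory typically grows roughly linearly with T
#   batch_size - number of samples resident on the GPU at once
T = 50            # e.g. halved from a hypothetical default of 100
batch_size = 16   # e.g. reduced from a hypothetical default of 64

# Stand-in MNIST-shaped data so the snippet runs on its own;
# substitute the dataset the example actually loads.
images = torch.rand(256, 1, 28, 28)
labels = torch.randint(0, 10, (256,))
loader = DataLoader(TensorDataset(images, labels),
                    batch_size=batch_size, shuffle=True)

for x, y in loader:
    # Inputs are typically repeated (or rate-encoded) across the T steps.
    x_seq = x.unsqueeze(0).repeat(T, 1, 1, 1, 1)  # [T, B, C, H, W]
    break  # one batch is enough for the illustration
```

Halving either knob roughly halves the corresponding share of activation memory, so adjusting them together usually brings an 8 GB card within reach.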