Can This Be Used in Real Time? #46

Can this algorithm be used to render a 2D video stream into 3D in real time?

Comments
This was amazing in its time, but if you're starting now it's probably better to look at DenseDepth or MiDaS for that.
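For later readers, here is a minimal, untested sketch of what a per-frame monocular depth loop might look like with MiDaS loaded through torch.hub. The `MiDaS_small` checkpoint, webcam index 0, and display-only output are illustrative assumptions, not this repo's own pipeline:

```python
import cv2
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# MiDaS_small trades accuracy for speed, which matters for live video.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").to(device).eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

cap = cv2.VideoCapture(0)  # assumed: a webcam on index 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        batch = transform(rgb).to(device)
        pred = midas(batch)
        # Resize the prediction back to the input resolution.
        depth = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=rgb.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze().cpu().numpy()
    # Normalize for display; a real pipeline would feed this to a renderer.
    vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
    cv2.imshow("depth", vis)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

Note this only produces a per-frame depth map; turning that into a stereo or mesh render for a headset is a separate step.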
Thank you so much! I'll look into those.
I am only a novice; I checked out DenseDepth, and it's not apparent to me how I would get a 3D video output from it.
Look at RunwayML |
Alright, I'll look at that now. Thanks again for your help!
Do you have an idea which model could help? I wasn't able to find a model that converts video into 3D. The application I am trying to use this for requires live 2D video from a camera to be rendered into 3D and then viewed on a VR headset.
It's possible to do that, but it would mean several fiddly pipeline steps and a massive GPU to get a decent framerate at the end of it. Much easier to use a live feed from a depth camera for that, like a Kinect or RealSense; see the sketch below.
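In case it helps, a minimal sketch of the depth-camera route with pyrealsense2 follows. The stream resolutions, the depth-to-color alignment choice, and the hand-off to a renderer are placeholder assumptions:

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Assumed stream settings: 640x480 depth and color at 30 fps.
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Scale factor that converts raw z16 depth values to metres.
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
align = rs.align(rs.stream.color)  # map depth pixels onto the color frame

try:
    while True:
        frames = align.process(pipeline.wait_for_frames())
        depth = frames.get_depth_frame()
        color = frames.get_color_frame()
        if not depth or not color:
            continue
        depth_m = np.asanyarray(depth.get_data()).astype(np.float32) * depth_scale
        color_img = np.asanyarray(color.get_data())
        # Hand depth_m + color_img to the VR renderer here (not shown).
finally:
    pipeline.stop()
```

The appeal of this route is that depth comes from hardware at sensor framerate, so the GPU budget stays free for the rendering side.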
Hmmm, alright. I should have access to some pretty beefy GPU muscle at my university if I have to choose that route. The whole point of this project is to create a VR microscope, so I'm not sure a depth camera would be an option, because I was going to use a variable-zoom microscope camera. Thank you for the advice!