Is it possible to render small components of large scene in very high detail? #1363
Unanswered · AlbertoOddness asked this question in Q&A
Replies: 1 comment, 1 reply
-
I had some success with setting the "cone angle" under "NeRF training options" to a lower value.
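For anyone who prefers to script this instead of using the GUI, here is a minimal sketch via the pyngp Python bindings. The attribute name `cone_angle_constant`, the paths, and the concrete values are assumptions about what the GUI slider maps to, so verify them against your own build before relying on this:

```python
# Hedged sketch: lower the cone angle from a script instead of the GUI.
# A smaller cone angle makes the ray-marching step size grow more slowly with
# distance from the camera, so far-away detail gets sampled more densely.
# Assumes pyngp is built and exposes `nerf.cone_angle_constant`; if your
# build doesn't, use the GUI slider instead.
import pyngp as ngp

testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_training_data("data/my_scene")            # folder with transforms.json (hypothetical path)
testbed.reload_network_from_file("configs/nerf/base.json")

# Default is on the order of 1/256; try a smaller value. (assumed attribute)
testbed.nerf.cone_angle_constant = 1.0 / 1024.0

testbed.shall_train = True
while testbed.frame():                                 # one training/render iteration per call
    if testbed.training_step >= 20000:
        break
```

The trade-off is speed: denser sampling along each ray makes both training and rendering slower.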
-
Hi everyone!
I've been using instant-ngp for a while with some success. Now I'm trying to do something specific that I haven't seen many examples of, and I'm curious whether it's possible.
I have a scene that in real life is approximately 10 meters long, 4 meters wide, and 2 meters tall. I've captured this scene a few times before and in general it works pretty well. However, there are some small objects in the scene (about 15 cm x 15 cm x 5 cm) with text on them, and I want to make sure the text renders in high enough detail to be readable.
Some of the letters are about 5 cm tall; those come out with decent detail and are very readable. However, anything smaller than that becomes blurry and hard to read.
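A rough back-of-envelope (with assumed, illustrative numbers rather than instant-ngp's exact defaults) suggests why 5 cm letters survive while smaller ones don't: if the whole 10 m scene is squeezed into the unit cube, one cell at the finest effective grid resolution corresponds to a few millimetres of real-world size, so small strokes only span a cell or two.

```python
# Illustrative back-of-envelope, not instant-ngp's exact defaults:
scene_extent_m = 10.0        # longest side of the captured scene
finest_grid_res = 2048       # assumed effective finest hash-grid resolution
cell_mm = scene_extent_m / finest_grid_res * 1000.0
print(f"~{cell_mm:.1f} mm per grid cell")              # ~4.9 mm

for feature_mm in (50, 20, 10, 5):
    print(f"{feature_mm} mm feature spans ~{feature_mm / cell_mm:.1f} cells")
# 50 mm letters cover ~10 cells, while ~5 mm strokes approach a single cell,
# which is roughly where detail turns into blur.
```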
My dataset includes plenty of close-ups of these objects from different angles: 62 images in total, 26 of which are close-ups and the remaining 36 general pictures of the scene. I'm limited to roughly 60-70 pictures at the moment due to hardware constraints.
So what I want to understand is: can instant-ngp do something like this, or is capturing very fine detail in a large scene a limitation of the project? If it isn't, how could I achieve it?
I've tweaked many variables to see how they affect the output and gone through plenty of tutorials. I try to make sure the whole scene is inside the unit cube, or at least the objects I want high detail from. I've tried setting the scale parameter very small, but then the scene fills with "fog", which I imagine is because not everything inside the unit cube is covered by a camera. When I make the scale larger there is less fog and the image is much clearer, but some parts of the scene end up outside the unit cube and aren't rendered well.
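One concrete thing that may help, based on the transforms.json fields documented in docs/nerf_dataset_tips.md (`scale`, `offset`, `aabb_scale`): keep the larger scale that makes the small objects sharp, and raise `aabb_scale` (a power of two; recent versions accept up to 128) so rays can still march through the parts of the room that fall outside the unit cube instead of turning them into fog. Below is a minimal sketch; the path and the concrete numbers are placeholders you would tune for your scene:

```python
# Hedged sketch: edit transforms.json so the high-detail objects fill the unit
# cube while aabb_scale keeps the rest of the 10 m scene renderable.
import json

with open("data/my_scene/transforms.json") as f:       # hypothetical path
    meta = json.load(f)

# Position/scale the world so the ~15 cm objects occupy a good chunk of [0, 1]^3.
# Placeholder values; when these fields are absent, instant-ngp defaults to
# roughly scale=0.33 and offset=[0.5, 0.5, 0.5].
meta["scale"] = 0.8
meta["offset"] = [0.5, 0.5, 0.5]

# Let ray marching extend 16x beyond the unit cube so the surrounding room is
# modeled as real geometry instead of fog. Must be a power of two.
meta["aabb_scale"] = 16

with open("data/my_scene/transforms.json", "w") as f:
    json.dump(meta, f, indent=2)
```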
I'd be very thankful if somebody could explain whether what I want to achieve is possible, or point me at resources where I can learn more about this.
Thanks!