When reading your paper and the earlier pixelSplat, I noticed that the dataset poses are computed with COLMAP-style software, so the poses of different scenes have different scale factors. Is there any measure taken during training to align these scales?
Thanks!
Hi @DavidYan2001, thanks for your interest in our work.
Since RE10K does not provide official near and far planes, we empirically set (near, far) to (1, 100), following our previous work MuRF. More details on the depth range can be found at #11 (comment). In other words, we do not specify a scale factor; we only set the near and far depth planes and let the network implicitly learn the scale factor for each scene.
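To make the role of the fixed near/far planes concrete, here is a minimal sketch of how such planes are commonly turned into depth candidates in multi-view methods, sampling uniformly in inverse depth. This is an illustration of the general technique, not the repository's actual code; the function name and sampling scheme are assumptions.

```python
import numpy as np

def depth_candidates(near=1.0, far=100.0, num=128):
    """Illustrative sketch (not the repo's code): sample candidate
    depths between fixed near/far planes, uniformly in inverse depth
    (disparity), a common choice in multi-view stereo-style models."""
    disp = np.linspace(1.0 / near, 1.0 / far, num)  # disparity grid
    return 1.0 / disp  # back to depth; finer spacing near the camera

d = depth_candidates()  # d spans [1, 100], monotonically increasing
```

Because the same (near, far) interval is used for every scene, the network is free to place each scene's geometry anywhere inside it, which is what lets the scale be learned implicitly per scene.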
Thanks for your answer. One more question, just to make sure: do the poses provided within a single dataset (like RE10K) share the same scale factor? Thanks!
Hi @DavidYan2001, we set (near, far) to (1, 100) for the whole dataset, but the learned depth scale varies across scenes. If you export point clouds from several scenes, you will notice that the learned scales differ from scene to scene.
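One simple way to check this after exporting point clouds is to compute a rough scale statistic per scene and compare them. The sketch below uses the median distance of points from their centroid as the statistic; the function and the synthetic data are illustrative assumptions, not part of the original codebase.

```python
import numpy as np

def scene_scale(points):
    """Rough per-scene scale estimate for an exported point cloud
    (N x 3 array): median distance of points from their centroid."""
    centered = points - points.mean(axis=0)
    return float(np.median(np.linalg.norm(centered, axis=1)))

# Two synthetic "scenes": the same geometry at different scales
# (stand-ins for point clouds exported from two RE10K scenes).
rng = np.random.default_rng(0)
cloud = rng.normal(size=(1000, 3))
s1 = scene_scale(cloud)        # scale of scene A
s2 = scene_scale(3.0 * cloud)  # scene B: same shape, 3x larger
# s2 / s1 is ~3, i.e. the two clouds live at different scales
```

A ratio far from 1 between two scenes' statistics is exactly the "different learned scales" behavior described above.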