Thanks for sharing your excellent work. Could you please provide the TT dataset you used? Also, would you mind sharing how you generated the displayed video?
Thank you for your interest in our work! I have uploaded the TnT dataset to Google Drive; you can access it here. As mentioned in the Limitations section, our method uses a vanilla MLP, which struggles to capture complete geometry in relatively large or complex scenes.
For video generation, we used the approach from the MonoSDF repository, available here. The process involves: 1) obtaining a sparse trajectory; 2) interpolating between these key camera positions to generate a continuous trajectory; 3) rendering images at each interpolated camera position; and 4) compiling the rendered images into a video.
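The trajectory-densification step (1–2 above) can be sketched as follows. This is a minimal illustration, not the actual MonoSDF code: it assumes the sparse trajectory is given as camera centers plus quaternion orientations, linearly interpolates the centers, and spherically interpolates (slerps) the rotations with SciPy.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_trajectory(positions, quats, n_frames):
    """Densify a sparse camera trajectory into n_frames poses.

    positions: (K, 3) camera centers of the sparse trajectory
    quats:     (K, 4) camera orientations as xyzw quaternions
    Returns (n_frames, 3) centers and (n_frames, 3, 3) rotation matrices.
    """
    key_times = np.linspace(0.0, 1.0, len(positions))
    times = np.linspace(0.0, 1.0, n_frames)
    # Linearly interpolate each coordinate of the camera center.
    pos = np.stack(
        [np.interp(times, key_times, positions[:, i]) for i in range(3)],
        axis=1,
    )
    # Spherically interpolate orientations between the keyframes.
    slerp = Slerp(key_times, Rotation.from_quat(quats))
    rots = slerp(times).as_matrix()
    return pos, rots
```

After rendering one image per interpolated pose (step 3), the frames can be assembled into a video (step 4) with any standard tool, e.g. `ffmpeg -framerate 30 -i frame_%04d.png out.mp4` or `imageio.mimsave`.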
For scenes with good 3D meshes, such as Replica, we render them into 2D images using the target camera extrinsics and intrinsics, similar to the approach in this script. For our generated 3D edges, we first save them as point clouds and then render them in the same way.
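Rendering a saved edge point cloud with given extrinsics and intrinsics amounts to a standard pinhole projection. Below is a hedged sketch (my own minimal version, not the script linked above): it assumes world-to-camera extrinsics `[R|t]` and a 3x3 intrinsic matrix `K`, and splats each projected point into a binary mask.

```python
import numpy as np

def project_points(points, K, R, t, hw):
    """Project world-space 3D points into a binary 2D mask (pinhole model).

    points: (N, 3) world coordinates (e.g. an edge point cloud)
    K:      (3, 3) camera intrinsics
    R, t:   world-to-camera rotation (3, 3) and translation (3,)
    hw:     (height, width) of the output image
    """
    cam = points @ R.T + t            # world -> camera coordinates
    cam = cam[cam[:, 2] > 1e-6]       # keep only points in front of the camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]       # perspective division
    h, w = hw
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    img = np.zeros(hw, dtype=np.uint8)
    img[v[valid], u[valid]] = 255     # splat each visible point as one pixel
    return img
```

For a textured mesh rather than a point cloud, a rasterizer such as Open3D's offscreen renderer or pyrender would replace the per-point splatting, but the camera setup is the same.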
For scenes without complete 3D meshes, like TnT, we first select a target RGB image and then adjust the camera parameters to render the generated 3D edges into 2D images, aligning them with the selected RGB image.