Showing 9 changed files with 192 additions and 11 deletions.
---
layout: post
title: Dreamcrafter (In Progress)

show_tile: false
---

<ul class="actions">
<li><a href="research.html" class="button small">Go back and see more research projects</a></li>
</ul>

## Overview
We propose Dreamcrafter, a virtual reality system for 3D content generation and editing assisted by generative AI. Our system harnesses the immersive experience and spatial interactions of VR, coupled with the advanced capabilities of generative AI, to enhance the process of creating and editing 3D environments.

NeRFs and diffusion models offer unparalleled realism and detail in rendering; however, their integration into user-friendly platforms for 3D environment creation is still in its infancy. Editing of radiance fields and generative 3D objects is currently limited to text prompts or constrained 2D interfaces. Current research on NeRFs and diffusion models focuses primarily on enhancing image and reconstruction quality, and we aim to address the noticeable lack of exploration in user interfaces designed for editing and controlling these models and novel 3D representations. Dreamcrafter aims to bridge these gaps with a seamless, intuitive interface for designing, modifying, and generating complex, photorealistic 3D environments.
The core of our approach is a VR-based system that allows users to interact with and manipulate 3D objects and environments in real time. Dreamcrafter comprises two subsystems that leverage novel 3D representations and Stable Diffusion. The Stable Diffusion-powered subsystem assigns semantically mapped spatial tags to 3D primitive objects to generate Stable Diffusion previews of scenes. The second subsystem leverages NeRFs and 3D Gaussian Splatting for rendering and editing photorealistic 3D scenes. Dreamcrafter is designed to be simple to use, lowering the barrier to entry for users without extensive 3D modeling experience, while still producing realistic results.
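
As a rough illustration of the first subsystem, spatially tagged primitives could be flattened into a text prompt for the diffusion preview. The sketch below is hypothetical (the function name, tag format, and the crude left/right mapping are illustrative assumptions, not our actual implementation):

```python
# Hypothetical sketch: turn semantically tagged 3D primitives into a text
# prompt for a Stable Diffusion scene preview. The tag names and prompt
# format are illustrative assumptions, not the Dreamcrafter implementation.
def scene_prompt(tagged_primitives):
    """tagged_primitives: list of (semantic_tag, (x, y, z)) pairs."""
    parts = []
    for tag, (x, _y, _z) in tagged_primitives:
        side = "left" if x < 0 else "right"  # crude spatial mapping
        parts.append(f"a {tag} on the {side}")
    return ", ".join(parts)

prompt = scene_prompt([("wooden cabin", (-2.0, 0.0, 1.0)),
                       ("pine tree", (3.0, 0.0, 0.5))])
```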

Our current system is still in progress.

We developed a few prototype systems within a month.

## Prototype Demos
Here are some demo videos of our prototype systems demonstrating basic versions of key features and interactions.

## Future plans

Here is our in-class assignment short paper.

<ul class="actions">
<li><a href="research.html" class="button small">Go back and see more research projects</a></li>
</ul>
---
layout: post
title: Nerfstudio Contributions

show_tile: false
---

<ul class="actions">
<li><a href="research.html" class="button small">Go back and see more research projects</a></li>
</ul>

Since January 2023 I have contributed to the Nerfstudio API, adding new features and making small improvements across the codebase. I am also acknowledged in the Nerfstudio SIGGRAPH paper.

## Nerfstudio Blender VFX Add-on

<iframe width="560" height="315" src="https://www.youtube.com/embed/A7La8tWp_0I?si=uChvOIFJ7WniBMTY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

I created a Blender add-on that allows NeRFs to be used in visual effects, enabling a pipeline for integrating NeRFs into traditional compositing VFX workflows using Nerfstudio. The approach uses Blender, a widely used open-source 3D creation tool, to align camera paths and composite NeRF renders with meshes and other NeRFs. It allows more controlled camera trajectories through photorealistic scenes, compositing meshes and other environmental effects with NeRFs, and compositing multiple NeRFs in a single scene. The same method of generating NeRF-aligned camera paths can be adapted to other 3D toolsets and workflows, enabling a more seamless integration of NeRFs into visual effects and film production. The add-on also supports Nerfstudio Gaussian splatting.

The exported mesh or point cloud representation is imported into Blender, and a render camera path is generated by transforming the coordinate space of the NeRF scene to the Blender virtual camera, producing aligned camera paths.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">v0.1.16 Released 🎉<br><br>New Blender integration for VFX workflows!<a href="https://twitter.com/hashtag/NeRF?src=hash&ref_src=twsrc%5Etfw">#NeRF</a> <a href="https://twitter.com/hashtag/nerfacto?src=hash&ref_src=twsrc%5Etfw">#nerfacto</a> <a href="https://twitter.com/hashtag/VFX?src=hash&ref_src=twsrc%5Etfw">#VFX</a> <a href="https://twitter.com/hashtag/Blender3d?src=hash&ref_src=twsrc%5Etfw">#Blender3d</a> <a href="https://t.co/uU8QO1AWwU">pic.twitter.com/uU8QO1AWwU</a></p>&mdash; nerfstudio (@nerfstudioteam) <a href="https://twitter.com/nerfstudioteam/status/1618868366072229888?ref_src=twsrc%5Etfw">January 27, 2023</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

I wrote documentation for it [here](https://docs.nerf.studio/extensions/blender_addon.html) and recorded a tutorial video demonstrating basic examples of using the add-on, along with a breakdown of other effects it enables.

<iframe width="560" height="315" src="https://www.youtube.com/embed/vDhj6j7kfWM?si=zmlFcZoxZipyTEqs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

<!--
I also have a blog post style walkthrough of making it.
<ul class="actions">
<li><a href="nerfstudio_vfx_blender.html" class="button small">Read More</a></li>
</ul>
-->

<br>
## 🥽 VR Video Rendering

I implemented VR180 and VR360 (omnidirectional stereo) render cameras to support VR video rendering. This lets users render stereo equirectangular videos to view on VR headsets or post to YouTube. Documentation is [here](https://docs.nerf.studio/quickstart/custom_dataset.html#render-vr-video).
The Blender add-on is used to create the final render path and to scale the NeRF to real-world size, since the scanned NeRF's coordinate system is arbitrary.
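
For intuition, an equirectangular (VR360) camera maps each normalized pixel coordinate to a ray direction on the sphere, and VR180 simply halves the longitude range. The sketch below is a minimal illustration of that mapping, not Nerfstudio's actual render code; the axis convention (negative z forward, y up) is an assumption:

```python
import math

def equirect_ray(u, v, vr180=False):
    # Map normalized image coords u, v in [0, 1] to a unit view ray.
    # Longitude spans 360 deg (180 deg for VR180); latitude spans 180 deg.
    lon = (u - 0.5) * (math.pi if vr180 else 2 * math.pi)
    lat = (0.5 - v) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = -math.cos(lat) * math.cos(lon)
    return (x, y, z)

center = equirect_ray(0.5, 0.5)  # the image center looks straight ahead
```

For the stereo (ODS) case, each eye's ray origin is additionally offset along a circle whose diameter is the interpupillary distance; the direction mapping stays the same.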

<iframe width="560" height="315" src="https://www.youtube.com/embed/ZOQMIXvgLtw?si=ujYTHYzeoT5vVUIT" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

## Additional contributions
I have also made smaller contributions to Nerfstudio, including to [Viser](https://viser.studio/), the new 3D viewer that Nerfstudio uses, as well as adding Nerfstudio support for [Instruct-GS2GS](https://docs.nerf.studio/nerfology/methods/igs2gs.html).

<ul class="actions">
<li><a href="research.html" class="button small">Go back and see more research projects</a></li>
</ul>
---
layout: post
title: Nerfstudio VFX Blender Add-on

show_tile: false
---

<ul class="actions">
<li><a href="experimentationNeural.html" class="button small">Go back and see more research projects</a></li>
</ul>

## Nerfstudio Blender VFX Add-on

<iframe width="560" height="315" src="https://www.youtube.com/embed/A7La8tWp_0I?si=uChvOIFJ7WniBMTY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

## Overview

I created a Blender add-on that allows NeRFs to be used in visual effects, enabling a pipeline for integrating NeRFs into traditional compositing VFX workflows using Nerfstudio. The approach uses Blender, a widely used open-source 3D creation tool, to align camera paths and composite NeRF renders with meshes and other NeRFs. It allows more controlled camera trajectories through photorealistic scenes, compositing meshes and other environmental effects with NeRFs, and compositing multiple NeRFs in a single scene. The same method of generating NeRF-aligned camera paths can be adapted to other 3D toolsets and workflows, enabling a more seamless integration of NeRFs into visual effects and film production. The add-on also supports Nerfstudio Gaussian splatting.

The exported mesh or point cloud representation is imported into Blender, and a render camera path is generated by transforming the coordinate space of the NeRF scene to the Blender virtual camera, producing aligned camera paths.

## Implementation Details

To generate the JSON camera path, we iterate over the scene frame sequence (from start to end at the chosen step interval) and read the camera's 4x4 world matrix at each frame; the world transformation matrix gives the camera's position, rotation, and scale. We then obtain the world matrix of the NeRF representation at each frame and transform the camera coordinates by it to get the final camera world matrix. This lets us reposition, rotate, and scale the NeRF representation in Blender while generating the correct camera path to render the NeRF accordingly in Nerfstudio. We also calculate the camera's FOV at each frame from the sensor fit (horizontal or vertical), angle of view, and aspect ratio. Next, we construct the list of keyframes, which closely mirrors the transformed camera world matrices. Camera properties in the JSON file come from user-specified fields such as resolution (set in Blender's Output Properties) and camera type (perspective or equirectangular). In the JSON file, aspect is set to 1.0, smoothness_value to 0, and is_cycle to false. The render fps is the fps specified in Blender, and the duration is the total number of frames divided by the fps. Finally, we assemble the full JSON object and write it to the user-specified file path.
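
The export direction can be sketched roughly as follows. This is a simplified illustration, not the add-on's actual code: matrices are plain nested lists rather than Blender's `mathutils` types, the FOV helper assumes a single sensor dimension, and the JSON field names follow those mentioned above (the real Nerfstudio schema may differ):

```python
import math

def matmul4(a, b):
    # Multiply two 4x4 matrices given as nested lists.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def fov_deg(focal_mm, sensor_mm):
    # Angle of view from focal length and the fitted sensor dimension.
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def build_camera_path(cam_mats, nerf_mats, fps, width, height,
                      focal_mm=50.0, sensor_mm=36.0,
                      camera_type="perspective"):
    # One keyframe per frame: the camera world matrix transformed by the
    # NeRF object's world matrix, plus the per-frame FOV.
    keyframes = [{"matrix_to_world": matmul4(nerf, cam),
                  "fov": fov_deg(focal_mm, sensor_mm)}
                 for cam, nerf in zip(cam_mats, nerf_mats)]
    return {"camera_type": camera_type,
            "render_width": width,
            "render_height": height,
            "camera_path": keyframes,
            "fps": fps,
            "seconds": len(keyframes) / fps,  # duration = frames / fps
            "aspect": 1.0,
            "smoothness_value": 0,
            "is_cycle": False}
```

Inside Blender the camera and NeRF matrices would come from each object's `matrix_world` at every frame; here they are plain inputs so the sketch stays self-contained.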
To generate a camera from a JSON file, we create a new Blender camera based on the input file and iterate through the camera_path field in the JSON, reading the world matrix from matrix_to_world and the FOV from the fov field. At each iteration, we set the camera to these parameters and insert a keyframe for the camera's position, rotation, and scale, as well as its focal length computed from the vertical FOV.
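
Conversely, a minimal sketch of the import direction (again simplified, without the `bpy` calls that actually create the camera and insert keyframes) might look like:

```python
import json
import math

def focal_from_vfov(vfov_deg, sensor_height_mm=24.0):
    # Invert the vertical angle of view to recover a focal length.
    # The 24 mm sensor height is an illustrative default, not the add-on's.
    return sensor_height_mm / (2 * math.tan(math.radians(vfov_deg) / 2))

def load_keyframes(camera_path_json):
    # Parse a Nerfstudio-style camera path and return one keyframe per
    # entry; in the add-on each entry would drive a keyframe insertion
    # on the Blender camera (transform plus focal length).
    data = json.loads(camera_path_json)
    return [{"frame": i,
             "matrix_to_world": entry["matrix_to_world"],
             "focal_mm": focal_from_vfov(entry["fov"])}
            for i, entry in enumerate(data["camera_path"])]
```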
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">v0.1.16 Released 🎉<br><br>New Blender integration for VFX workflows!<a href="https://twitter.com/hashtag/NeRF?src=hash&ref_src=twsrc%5Etfw">#NeRF</a> <a href="https://twitter.com/hashtag/nerfacto?src=hash&ref_src=twsrc%5Etfw">#nerfacto</a> <a href="https://twitter.com/hashtag/VFX?src=hash&ref_src=twsrc%5Etfw">#VFX</a> <a href="https://twitter.com/hashtag/Blender3d?src=hash&ref_src=twsrc%5Etfw">#Blender3d</a> <a href="https://t.co/uU8QO1AWwU">pic.twitter.com/uU8QO1AWwU</a></p>&mdash; nerfstudio (@nerfstudioteam) <a href="https://twitter.com/nerfstudioteam/status/1618868366072229888?ref_src=twsrc%5Etfw">January 27, 2023</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

I wrote documentation for it [here](https://docs.nerf.studio/extensions/blender_addon.html) and recorded a tutorial video demonstrating basic examples of using the add-on, along with a breakdown of other effects it enables.

<iframe width="560" height="315" src="https://www.youtube.com/embed/vDhj6j7kfWM?si=zmlFcZoxZipyTEqs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

I also have a blog-post-style walkthrough of making it.
<ul class="actions">
<li><a href="nerfstudio_vfx_blender.html" class="button small">Read More</a></li>
</ul>

<ul class="actions">
<li><a href="research.html" class="button small">Go back and see more research projects</a></li>
</ul>