Adding research pages
cvachha committed Dec 29, 2023
1 parent 1b41528 commit 9ee74ab
Showing 9 changed files with 192 additions and 11 deletions.
Binary file added assets/images/dreamcrafter_progress_teaser.png
Binary file added assets/images/nerfstudio_logo.png
Binary file added assets/pdfs/nerf_vfx_abstract.pdf
35 changes: 35 additions & 0 deletions dreamcrafter_progress.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,35 @@
---
layout: post
title: Dreamcrafter (In Progress)

show_tile: false
---

<ul class="actions">
<li><a href="research.html" class="button small">Go back and see more research projects</a></li>
</ul>

## Overview
We propose Dreamcrafter, a virtual reality 3D content generation and editing system assisted by generative AI. Our system harnesses the immersive experience and spatial interactions of VR, coupled with the advanced capabilities of generative AI, to enhance the process of 3D environment creation and editing.

NeRF and diffusion models offer unparalleled realism and detail in rendering; however, their integration into user-friendly platforms for 3D environment creation is still in its infancy. Editing radiance fields and generative 3D objects is currently limited to text prompts or constrained 2D interfaces, and current research on NeRFs and diffusion models focuses primarily on improving image and reconstruction quality. We aim to address the noticeable lack of exploration of user interfaces for editing and controlling these models and novel 3D representations. Dreamcrafter bridges these gaps with a seamless, intuitive interface for designing, modifying, and generating complex, photorealistic 3D environments.
The core of our approach is a VR-based system that lets users interact with and manipulate 3D objects and environments in real time. Dreamcrafter comprises two subsystems built on novel 3D representations and Stable Diffusion. The first assigns semantically mapped spatial tags to 3D primitive objects and generates Stable Diffusion previews of scenes. The second leverages NeRFs and 3D Gaussian Splatting to render and edit photorealistic 3D scenes. Dreamcrafter is designed to be simple to use, lowering the barrier to entry for users without extensive 3D modeling experience while still producing realistic results.


We developed several prototype systems within a month.
## Prototype Demos
Here are demo videos of our prototype systems demonstrating basic versions of the key features and interactions.

## Future plans


Here is the short paper from our in-class assignment.



<ul class="actions">
<li><a href="research.html" class="button small">Go back and see more research projects</a></li>
</ul>
2 changes: 1 addition & 1 deletion gallery.md
@@ -3,7 +3,7 @@ layout: landing
title: Gallery
description: 3D renders, 3D models, 3D Captures
image: assets/images/galleryImage.png
show_tile: true
show_tile: false
---

<!-- Main -->
49 changes: 49 additions & 0 deletions nerfstudio_contributions.md
@@ -0,0 +1,49 @@
---
layout: post
title: Nerfstudio Contributions

show_tile: false
---

<ul class="actions">
<li><a href="research.html" class="button small">Go back and see more research projects</a></li>
</ul>

Since January 2023 I have contributed to Nerfstudio, including new features and small improvements to other parts of the system. I am also acknowledged in the Nerfstudio SIGGRAPH paper.

## Nerfstudio Blender VFX Add-on

<iframe width="560" height="315" src="https://www.youtube.com/embed/A7La8tWp_0I?si=uChvOIFJ7WniBMTY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

I created a Blender add-on that allows NeRFs to be used in visual effects, enabling a pipeline for integrating NeRFs into traditional compositing VFX workflows using Nerfstudio. The add-on uses Blender, a widely used open-source 3D creation tool, to align camera paths and composite NeRF renders with meshes and other NeRFs. It allows for more controlled camera trajectories through photorealistic scenes, compositing meshes and other environmental effects with NeRFs, and compositing multiple NeRFs in a single scene. This approach of generating NeRF-aligned camera paths can be adapted to other 3D toolsets and workflows, enabling a more seamless integration of NeRFs into visual effects and film production. The add-on also supports Nerfstudio Gaussian splatting.

The exported mesh or point cloud representation is imported into Blender, and a render camera path is generated by transforming the coordinate space of the NeRF scene to that of the Blender virtual camera, producing aligned camera paths.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">v0.1.16 Released 🎉<br><br>New Blender integration for VFX workflows!<a href="https://twitter.com/hashtag/NeRF?src=hash&amp;ref_src=twsrc%5Etfw">#NeRF</a> <a href="https://twitter.com/hashtag/nerfacto?src=hash&amp;ref_src=twsrc%5Etfw">#nerfacto</a> <a href="https://twitter.com/hashtag/VFX?src=hash&amp;ref_src=twsrc%5Etfw">#VFX</a> <a href="https://twitter.com/hashtag/Blender3d?src=hash&amp;ref_src=twsrc%5Etfw">#Blender3d</a> <a href="https://t.co/uU8QO1AWwU">pic.twitter.com/uU8QO1AWwU</a></p>&mdash; nerfstudio (@nerfstudioteam) <a href="https://twitter.com/nerfstudioteam/status/1618868366072229888?ref_src=twsrc%5Etfw">January 27, 2023</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

I created documentation for it [here](https://docs.nerf.studio/extensions/blender_addon.html) and a tutorial video demonstrating basic examples of using the add-on, as well as a breakdown of other effects it can achieve.

<iframe width="560" height="315" src="https://www.youtube.com/embed/vDhj6j7kfWM?si=zmlFcZoxZipyTEqs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

<!--
I also have a blog post style walkthrough of making it.
<ul class="actions">
<li><a href="nerfstudio_vfx_blender.html" class="button small">Read More</a></li>
</ul>
-->

<br>
## 🥽 VR Video Rendering

I implemented VR180 and VR360 (omnidirectional stereo) render cameras to support VR video rendering. This allows users to render stereo equirectangular videos to view on VR headsets or post to YouTube. Documentation is [here](https://docs.nerf.studio/quickstart/custom_dataset.html#render-vr-video).
The Blender add-on is used to create the final render path and to scale the NeRF to real-world size, since the scanned NeRF's coordinate system is arbitrary.

<iframe width="560" height="315" src="https://www.youtube.com/embed/ZOQMIXvgLtw?si=ujYTHYzeoT5vVUIT" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

## Additional contributions
I have also made smaller contributions to Nerfstudio, including to [Viser](https://viser.studio/), the new 3D viewer that Nerfstudio uses, and adding Nerfstudio support for [Instruct-GS2GS](https://docs.nerf.studio/nerfology/methods/igs2gs.html).


<ul class="actions">
<li><a href="research.html" class="button small">Go back and see more research projects</a></li>
</ul>
45 changes: 45 additions & 0 deletions nerfstudio_vfx_blender.md
@@ -0,0 +1,45 @@
---
layout: post
title: Nerfstudio VFX Blender Add-on

show_tile: false
---

<ul class="actions">
<li><a href="experimentationNeural.html" class="button small">Go back and see more research projects</a></li>
</ul>

## Nerfstudio Blender VFX Add-on

<iframe width="560" height="315" src="https://www.youtube.com/embed/A7La8tWp_0I?si=uChvOIFJ7WniBMTY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

## Overview

I created a Blender add-on that allows NeRFs to be used in visual effects, enabling a pipeline for integrating NeRFs into traditional compositing VFX workflows using Nerfstudio. The add-on uses Blender, a widely used open-source 3D creation tool, to align camera paths and composite NeRF renders with meshes and other NeRFs. It allows for more controlled camera trajectories through photorealistic scenes, compositing meshes and other environmental effects with NeRFs, and compositing multiple NeRFs in a single scene. This approach of generating NeRF-aligned camera paths can be adapted to other 3D toolsets and workflows, enabling a more seamless integration of NeRFs into visual effects and film production. The add-on also supports Nerfstudio Gaussian splatting.

The exported mesh or point cloud representation is imported into Blender, and a render camera path is generated by transforming the coordinate space of the NeRF scene to that of the Blender virtual camera, producing aligned camera paths.

## Implementation Details

To generate the JSON camera path, we iterate over the scene frame sequence (from start to end, with step intervals) and read the camera's 4x4 world matrix at each frame; the world transformation matrix gives the position, rotation, and scale of the camera. We then obtain the world matrix of the NeRF representation at each frame and transform the camera coordinates with it to get the final camera world matrix. This lets us re-position, rotate, and scale the NeRF representation in Blender and still generate the correct camera path to render the NeRF in Nerfstudio. We also calculate the camera's FOV at each frame from the sensor fit (horizontal or vertical), angle of view, and aspect ratio. Next, we construct the list of keyframes from the transformed camera world matrices. Camera properties in the JSON file come from user-specified fields such as the resolution (set in Blender's Output Properties) and the camera type (perspective or equirectangular). In the JSON file, aspect is set to 1.0, smoothness_value to 0, and is_cycle to false. The render uses the fps specified in Blender, and the duration is the total number of frames divided by the fps. Finally, we construct the full JSON object and write it to the file path specified by the user.
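As an illustration of the steps above, here is a minimal sketch of building the camera-path JSON from per-frame data, written as plain Python rather than `bpy` code so it runs outside Blender. The helper names and the exact matrix composition are assumptions for illustration; the field names (`camera_path`, `matrix_to_world`, `fov`, `aspect`, `smoothness_value`, `is_cycle`) follow the description above, not the add-on's actual source.

```python
import math

def mat_mul(a, b):
    # Multiply two 4x4 matrices given as nested lists.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def vertical_fov_deg(angle_of_view_deg, aspect, sensor_fit="HORIZONTAL"):
    # Convert Blender's angle of view to a vertical FOV (aspect = width/height).
    # With a vertical sensor fit, the angle of view is already vertical.
    if sensor_fit == "VERTICAL":
        return angle_of_view_deg
    half = math.radians(angle_of_view_deg) / 2.0
    return math.degrees(2.0 * math.atan(math.tan(half) / aspect))

def build_camera_path(camera_mats, nerf_mats, fovs_deg,
                      width, height, fps, camera_type="perspective"):
    # Build the camera-path JSON object. Each keyframe's matrix is the
    # camera world matrix transformed by the NeRF object's matrix for that
    # frame (the precise composition/inversion the add-on uses is an
    # assumption here).
    keyframes = []
    for cam, nerf, fov in zip(camera_mats, nerf_mats, fovs_deg):
        world = mat_mul(nerf, cam)
        keyframes.append({"matrix_to_world": world, "fov": fov})
    return {
        "camera_type": camera_type,
        "render_width": width,
        "render_height": height,
        "camera_path": keyframes,
        "fps": fps,
        "seconds": len(camera_mats) / fps,  # duration = frames / fps
        "aspect": 1.0,
        "smoothness_value": 0,
        "is_cycle": False,
    }
```

The resulting object would then be serialized with `json.dump` to the user-specified file path.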

To generate the camera from a JSON file, we create a new Blender camera based on the input file and iterate through the camera_path field of the JSON, reading the world matrix from matrix_to_world and the FOV from the fov field. At each iteration, we set the camera to these parameters and insert a keyframe for the camera's position, rotation, and scale, as well as its focal length computed from the vertical FOV.
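A corresponding sketch of the import direction, again outside Blender: each keyframe's vertical FOV is converted back to a focal length via the pinhole relation f = h / (2 tan(fov / 2)). The 24 mm default sensor height and the helper names are assumptions for illustration; in Blender, these values would be keyframed onto the newly created camera.

```python
import math

def focal_length_mm(vertical_fov_deg, sensor_height_mm=24.0):
    # Pinhole relation between vertical FOV and focal length:
    # f = h / (2 * tan(fov / 2)).
    return sensor_height_mm / (2.0 * math.tan(math.radians(vertical_fov_deg) / 2.0))

def read_keyframes(path_obj):
    # Extract (world_matrix, focal_length_mm) pairs from a parsed
    # camera-path JSON object.
    return [(kf["matrix_to_world"], focal_length_mm(kf["fov"]))
            for kf in path_obj["camera_path"]]
```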



<blockquote class="twitter-tweet"><p lang="en" dir="ltr">v0.1.16 Released 🎉<br><br>New Blender integration for VFX workflows!<a href="https://twitter.com/hashtag/NeRF?src=hash&amp;ref_src=twsrc%5Etfw">#NeRF</a> <a href="https://twitter.com/hashtag/nerfacto?src=hash&amp;ref_src=twsrc%5Etfw">#nerfacto</a> <a href="https://twitter.com/hashtag/VFX?src=hash&amp;ref_src=twsrc%5Etfw">#VFX</a> <a href="https://twitter.com/hashtag/Blender3d?src=hash&amp;ref_src=twsrc%5Etfw">#Blender3d</a> <a href="https://t.co/uU8QO1AWwU">pic.twitter.com/uU8QO1AWwU</a></p>&mdash; nerfstudio (@nerfstudioteam) <a href="https://twitter.com/nerfstudioteam/status/1618868366072229888?ref_src=twsrc%5Etfw">January 27, 2023</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

I created documentation for it [here](https://docs.nerf.studio/extensions/blender_addon.html) and a tutorial video demonstrating basic examples of using the add-on, as well as a breakdown of other effects it can achieve.

<iframe width="560" height="315" src="https://www.youtube.com/embed/vDhj6j7kfWM?si=zmlFcZoxZipyTEqs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

I also have a blog post style walkthrough of making it.
<ul class="actions">
<li><a href="nerfstudio_vfx_blender.html" class="button small">Read More</a></li>
</ul>



<ul class="actions">
<li><a href="research.html" class="button small">Go back and see more research projects</a></li>
</ul>
21 changes: 20 additions & 1 deletion projects.md
@@ -35,9 +35,28 @@ show-title: true
<header class="major">
<h2>Experimentation with NeRFs/Neural Rendering</h2>
</header>
<p>Various experiments with NeRFs, Neural Rendering, Relighting, Virtual Production, AI, and more</p>
<p>Various experiments with NeRFs, Neural Rendering, Relighting, Virtual Production, and more
Will be updated with 2023 projects soon.</p>
<ul class="actions">
<li><a href="experimentationNeural.html" class="button next">Learn more</a></li>
<li><a href="research.html" class="button next">View Research Projects Here</a></li>
</ul>
</div>
</div>
</section>
<section>
<a href="gallery.html" class="image">
<img src="{% link assets/images/galleryImage.png %}" alt="" data-position="center center" />
</a>
<div class="content">
<div class="inner">
<header class="major">
<h3>Gallery</h3>
</header>
<p>3D renders, 3D models, 3D Captures
Will be updated soon with NeRFs taken over the last year.</p>
<ul class="actions">
<li><a href="gallery.html" class="button">View more</a></li>
</ul>
</div>
</div>
51 changes: 42 additions & 9 deletions research.md
@@ -23,7 +23,7 @@ menu-show: true
<!-- Two -->
<div class="inner">

<section id="two" class="spotlights">
<section id="two" class="spotlights" >
<section>
<a href="https://instruct-gs2gs.github.io/" class="image">
<img src="{% link assets/images/igs2gs_face.gif %}" alt="" data-position="center center" valign="center" style="border-radius: 20px"/>
@@ -35,9 +35,9 @@ menu-show: true
</header>
<p style="font-size: 12pt">We propose a method for editing 3D Gaussian Splatting (3DGS) scenes with text instructions, similar to Instruct-NeRF2NeRF. Given a 3DGS reconstruction of a scene and the collection of images used to create it, our method uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit the input images while optimizing the underlying scene, resulting in an optimized 3D scene that respects the edit instruction. We demonstrate that our method can edit large-scale, real-world scenes and accomplish more realistic, targeted edits than prior work.
<br>
Paper coming soon
- Paper coming soon
<br>
Nerfstudio integration supported
- Nerfstudio integration supported
</p>
<ul class="actions">
<li><a href="https://instruct-gs2gs.github.io/" class="button">Project Page</a></li>
@@ -58,14 +58,14 @@ menu-show: true
<header class="major">
<h3>Nerfstudio Blender VFX Add-on</h3>
</header>
<p style="font-size: 12pt">We present a pipeline for integrating NeRFs into traditional compositing VFX pipelines using Nerfstudio, an open-source framework for training and rendering NeRFs. Our approach involves using Blender, a widely used open-source 3D creation software, to align camera paths and composite NeRF renders with meshes and other NeRFs, allowing for seamless integration of NeRFs into traditional VFX pipelines. Our NeRF Blender add-on allows for more controlled camera trajectories of photorealistic scenes, compositing meshes and other environmental effects with NeRFs, and compositing multiple NeRFs in a single scene.This approach of generating NeRF aligned camera paths can be adapted to other 3D tool sets and workflows, enabling a more seamless integration of NeRFs into visual effects and film production.
<p style="font-size: 12pt">We present a pipeline for integrating NeRFs into traditional compositing VFX pipelines using Nerfstudio, an open-source framework for training and rendering NeRFs. Our approach involves using Blender, a widely used open-source 3D creation software, to align camera paths and composite NeRF renders with meshes and other NeRFs, allowing for seamless integration of NeRFs into traditional VFX pipelines. Our NeRF Blender add-on allows for more controlled camera trajectories of photorealistic scenes, compositing meshes and other environmental effects with NeRFs, and compositing multiple NeRFs in a single scene. This approach of generating NeRF aligned camera paths can be adapted to other 3D tool sets and workflows, enabling a more seamless integration of NeRFs into visual effects and film production.
<br>
Shown in CVPR 2023 Art Gallery
- Shown in CVPR 2023 Art Gallery
<br>
arXiv submission in progress</p>
- arXiv submission in progress</p>
<ul class="actions">
<li><a href="https://docs.nerf.studio/extensions/blender_addon.html" class="button">Documentation</a></li>
<li><a href="https://www.youtube.com/watch?v=A7La8tWp_0I" class="button">View Video</a></li>
<li><a href="https://www.youtube.com/watch?v=A7La8tWp_0I" class="button">Video</a></li>
<li><a href="https://drive.google.com/file/d/1y2xVk228dabXHfzNOPyNzSh8YRVYGmSc/view?usp=sharing" class="button">View Abstract</a></li>
</ul>
</div>
@@ -120,8 +120,41 @@ menu-show: true
<h2>Future/Current Research Projects</h2>
</header>
<p>I am also working on a project on capturing, creating, and viewing synthetic NeRF environments in VR for my master's thesis.</p>
<p>Since Jan 2023, I have been contributing features to the Nerfstudio system including the Blender VFX add-on and VR180/Omnidirectional (VR 360) video/image render outputs.</p>

<p></p>
<section id="two" class="spotlights">
<section>
<a href="https://docs.nerf.studio" class="image">
<img src="{% link assets/images/dreamcrafter_progress_teaser.png %}" alt="" data-position="center center" style="border-radius: 20px" />
</a>
<div class="content">
<div class="inner">
<header class="major">
<h3>Dreamcrafter (In Progress)</h3>
</header>
<p style="font-size: 12pt">In my current fifth-year master's program (a one-year graduate program following a four-year undergraduate degree), I am building the initial concept of the VR environment creation system I proposed in 2022. For my VR/AR class, I worked with a team to implement two prototype systems that leverage NeRFs, 3DGS, and Stable Diffusion to create a VR interface for photorealistic 3D content creation. This includes a system to edit existing NeRF/GS scenes through voice, hand controls, and existing diffusion models (such as Instruct-Pix2Pix), as well as a system leveraging ControlNet to create 2D mockups of scenes from 3D primitive objects. I am currently developing the complete system with intelligent natural-language region selection and additional features, working toward a research publication in 2024.</p>
<!--<ul class="actions">
<li><a href="dreamcrafter_progress.html" class="button">Learn More</a></li>
</ul> -->
</div>
</div>
</section>
<section>
<a href="https://docs.nerf.studio" class="image">
<img src="{% link assets/images/nerfstudio_logo.png %}" alt="" data-position="center center" style="border-radius: 20px" />
</a>
<div class="content">
<div class="inner">
<header class="major">
<h3>Nerfstudio Contributions</h3>
</header>
<p style="font-size: 12pt">Since Jan 2023, I have been contributing features to the Nerfstudio system including the Blender VFX add-on and VR180/Omnidirectional (VR 360) video/image render outputs.</p>
<ul class="actions">
<li><a href="nerfstudio_contributions.html" class="button">Learn More</a></li>
</ul>
</div>
</div>
</section>
</section>
</div>
</section>

