page edits
cvachha committed Jan 2, 2024
1 parent 7511358 commit e8d2ad5
Showing 3 changed files with 12 additions and 7 deletions.
12 changes: 8 additions & 4 deletions dreamcrafter_progress.md
@@ -10,7 +10,7 @@ show_tile: false
</ul>

## Overview
For my master's thesis, I am building a system based on the proposal I made in 2022 [here](nerfenvironmentcreation.html). In the first semester (Fall 2023) of my one-year 5th-year master's program, the group I led for our VR/AR course final project implemented two prototype systems that leverage NeRFs, 3DGS, and Stable Diffusion to create a VR interface for photo-realistic 3D content creation. We call our system Dreamcrafter, a virtual reality 3D content generation and editing system assisted by generative AI. Our system addresses a gap in HCI-VR research on generative AI tools by proposing a system that enhances the process of 3D environment creation and editing.

Integration of NeRF/3DGS and diffusion models into user-friendly platforms for 3D environment creation is still in its infancy. Editing radiance fields and generative 3D objects is currently restricted to text prompts or limited 2D interfaces. Current research on NeRFs and diffusion models focuses primarily on improving image and reconstruction quality; we aim to address the noticeable lack of exploration into user interfaces designed for editing and controlling these models and novel 3D representations.

@@ -21,18 +21,22 @@ Our current system is a proof-of-concept prototype which we developed within a m
## Prototype Demos
Here are some demo videos of our prototype systems that demonstrate a basic version of some of the key features/interactions.

<center>
<video id="v0" width="700" controls>
<source src="assets/videos/cs294_137_dreamcrafter_progress_vid.mp4" type="video/mp4" />
</video>
</center>

[Here](https://cvachha.github.io/assets/pdfs/cs294_137_dreamcrafter_VR_final_paper.pdf) is our in-class paper write-up on the system:
<iframe src="assets/pdfs/cs294_137_dreamcrafter_VR_final_paper.pdf" width="100%" height="500px">
</iframe>

Here is a NeRF/GS capture of our class poster demo:
<center>
<iframe src="https://lumalabs.ai/embed/afc9f2d5-a1bc-4681-914a-5dd157938e33?mode=sparkles&background=%23ffffff&color=%23000000&showTitle=true&loadBg=true&logoPosition=bottom-left&infoPosition=bottom-right&cinematicVideo=undefined&showMenu=true" width="600" height="500" frameborder="0" title="luma embed" style="border: none; border-radius: 20px"></iframe>
</center>

## Next steps
We are working towards a publication for UIST 2024. I am currently exploring additional components, such as room-based GPT/LLM interactions for more intelligent selection, and automatic object segmentation from an input NeRF/GS so that specific objects can be edited.


3 changes: 2 additions & 1 deletion nerfstudio_contributions.md
@@ -33,6 +33,7 @@ I also have a blog post style walkthrough of making it.
-->

<br>

## 🥽 VR Video Rendering

I implemented VR180 and VR360 (Omnidirectional stereo) render cameras to support VR video rendering. This allows users to render stereo equirectangular videos to view on VR headsets or post on YouTube. Documentation is [here](https://docs.nerf.studio/quickstart/custom_dataset.html#render-vr-video).
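The core of an omnidirectional-stereo (ODS) render camera is a per-pixel mapping from equirectangular image coordinates to a ray origin and direction, with each eye's origin offset on a small circle. Below is a minimal sketch of that math, not Nerfstudio's actual implementation; the coordinate convention (+y up, longitude 0 along +z) and the `ipd` default are my assumptions:

```python
import math

def equirect_ray(u, v, eye="center", ipd=0.064):
    """Map a normalized equirectangular pixel (u, v in [0, 1]) to an
    ODS ray (origin, direction). eye is "center", "left", or "right"."""
    lon = (u - 0.5) * 2.0 * math.pi   # longitude in [-pi, pi]
    lat = (0.5 - v) * math.pi         # latitude in [-pi/2, pi/2]
    direction = (
        math.cos(lat) * math.sin(lon),
        math.sin(lat),
        math.cos(lat) * math.cos(lon),
    )
    # Each eye's origin sits on a circle of radius ipd/2, offset
    # perpendicular to the viewing longitude; "center" gives a mono 360 ray.
    offset = {"center": 0.0, "left": -ipd / 2.0, "right": ipd / 2.0}[eye]
    origin = (offset * math.cos(lon), 0.0, -offset * math.sin(lon))
    return origin, direction
```

A VR180 render would apply the same mapping over only the front half of the longitude range; the actual Nerfstudio camera code handles this and the stereo stacking for you.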
@@ -41,7 +42,7 @@ The Blender add-on is used to create the final render path and to correctly scal
<iframe width="560" height="315" src="https://www.youtube.com/embed/ZOQMIXvgLtw?si=ujYTHYzeoT5vVUIT" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

## Additional contributions
I have also made smaller contributions to Nerfstudio, including to [Viser](https://viser.studio/), the new 3D viewer that Nerfstudio uses, and adding Nerfstudio support for [Instruct-GS2GS](https://docs.nerf.studio/nerfology/methods/igs2gs.html). I was also part of a group that implemented equirectangular image/video input support for Nerfstudio datasets.


<ul class="actions">
4 changes: 2 additions & 2 deletions research.md
@@ -136,9 +136,9 @@ menu-show: true
<h3>Dreamcrafter (In Progress)</h3>
</header>
<p style="font-size: 12pt">In my current 5th-year master's program (a 1-year graduate program following a 4-year undergraduate degree), I am building the initial concept of the VR environment creation system I proposed in 2022. For my VR/AR class, I worked with a team to implement two prototype systems that leverage NeRFs, 3DGS, and Stable Diffusion to create a VR interface for photo-realistic 3D content creation. This includes a system to edit existing NeRF/GS scenes through voice, hand controls, and existing diffusion models (such as Instruct-Pix2Pix), as well as a system leveraging ControlNet to create 2D mockups of scenes from 3D primitive objects. I am currently developing the complete system with intelligent natural-language region selection and additional features. We are working towards a research publication for 2024.</p>
<ul class="actions">
	<li><a href="dreamcrafter_progress.html" class="button">Learn More</a></li>
</ul>
</div>
</div>
</section>
