updates on research related pages
cvachha committed Dec 30, 2023
1 parent 9ee74ab commit 98ce881
Showing 7 changed files with 32 additions and 13 deletions.
Binary file modified .DS_Store
Binary file not shown.
Binary file modified assets/images/igs2gs_face.gif
Binary file added assets/images/nerfstudio_logo.gif
1 change: 1 addition & 0 deletions dreamcrafter_progress.md
Original file line number Diff line number Diff line change
@@ -10,6 +10,7 @@ show_tile: false
</ul>

## Overview
For my master's thesis, I am building a system based on the proposal I made in 2022 [here](nerfenvironmentcreation.html).
We propose Dreamcrafter, a virtual reality 3D content generation and editing system assisted by generative AI. Dreamcrafter addresses this gap by harnessing the immersive experience and spatial interactions of VR, coupled with the advanced capabilities of generative AI, to enhance the process of 3D environment creation and editing.

NeRF and diffusion models offer unparalleled realism and detail in rendering, however, their integration into user-friendly platforms for 3D environment creation is still in its infancy.
32 changes: 25 additions & 7 deletions experimentationNeural.md
@@ -1,6 +1,6 @@
---
layout: post
title: Experimentation with NeRFs,Neural Rendering, and Virtual Production
title: Experimentation with NeRFs, Neural Rendering, and Virtual Production

show_tile: false
---
@@ -12,23 +12,39 @@ show_tile: false
<div class="content">
<div class="inner">
<header class="major">
<h3>VR NeRF Environment Creation System</h3>
<h3>VR NeRF Environment Creation System Proposal (2022)</h3>
</header>
<p>Outline of current research project on creating a NeRF creation system for VR</p>
<p>Proposal outline, created in 2022, of the current research project on creating a NeRF creation system for VR</p>
<ul class="actions">
<li><a href="nerfenvironmentcreation.html" class="button">Learn more</a></li>
</ul>
</div>
</div>
</section>
<section>
<a href="https://docs.nerf.studio" class="image">
<img src="{% link assets/images/nerfstudio_logo.gif %}" alt="" data-position="center center" style="border-radius: 20px" />
</a>
<div class="content">
<div class="inner">
<header class="major">
<h3>Nerfstudio Contributions</h3>
</header>
<p style="font-size: 12pt">Since Jan 2023, I have been contributing features to the Nerfstudio system, including the Blender VFX add-on and VR180/Omnidirectional (VR 360) video/image render outputs.</p>
<ul class="actions">
<li><a href="nerfstudio_contributions.html" class="button">Learn More</a></li>
</ul>
</div>
</div>
</section>
<section>
<a class="image">
<img src="{% link assets/images/virtualProductionBanner.PNG %}" alt="" data-position="center center" />
</a>
<div class="content">
<div class="inner">
<header class="major">
<h3>Virtual Production Experiments</h3>
<h3>Virtual Production Experiments (2020-2021)</h3>
</header>
<p>A few experiments, including virtual real-time backgrounds and virtual metahuman actors</p>
<ul class="actions">
@@ -44,9 +60,11 @@ show_tile: false
<div class="content">
<div class="inner">
<header class="major">
<h3>NeRF Gallery</h3>
<h3>NeRF Gallery (2022)</h3>
</header>
<p>Renders of select Neural Radiance Fields I captured from Luma Labs AI, NVidia Instant NeRF, and NerfStudio</p>
<p>Renders of select Neural Radiance Fields I captured with Luma Labs AI, NVIDIA Instant NeRF, and Nerfstudio
<br>
Will soon be updated with a selection from my (hundreds of) 2023 captures</p>
<ul class="actions">
<li><a href="nerf_gallery.html" class="button">Learn more</a></li>
</ul>
@@ -59,7 +77,7 @@ show_tile: false
<header class="major">
<h3>Background of Interest</h3>
</header>
<p>Blog style post explaining the timeline of my interest and motivation in NeRFs, lightfields, and neural rendering</p>
<p>Blog-style post, written in 2022, explaining the timeline of my interest and motivation in NeRFs, light fields, and neural rendering</p>
<ul class="actions">
<li><a href="nerfbackgroundinfo.html" class="button">Learn more</a></li>
</ul>
4 changes: 2 additions & 2 deletions nerfstudio_contributions.md
@@ -15,13 +15,13 @@ Since Jan 2023 I have made contributions to the Nerfstudio API system including

<iframe width="560" height="315" src="https://www.youtube.com/embed/A7La8tWp_0I?si=uChvOIFJ7WniBMTY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

I created a Blender add-on that allows NeRFs to be used in visual effects. This enables a pipeline for integrating NeRFs into traditional compositing VFX pipelines using Nerfstudio. This approach leverages using Blender, a widely used open-source 3D creation software, to align camera paths and composite NeRF renders with meshes and other NeRFs, allowing for seamless integration of NeRFs into traditional VFX pipelines. It allows for more controlled camera trajectories of photorealistic scenes, compositing meshes and other environmental effects with NeRFs, and compositing multiple NeRFs in a single scene. This approach of generating NeRF aligned camera paths can be adapted to other 3D tool sets and workflows, enabling a more seamless integration of NeRFs into visual effects and film production. This also supports Nerfstudio gaussian splatting as well.
I created a Blender add-on that allows NeRFs to be used in visual effects, enabling a pipeline for integrating NeRFs into traditional compositing VFX workflows using Nerfstudio. The add-on leverages Blender, a widely used open-source 3D creation tool, to align camera paths and composite NeRF renders with meshes and other NeRFs. It allows for more controlled camera trajectories of photorealistic scenes, compositing meshes and other environmental effects with NeRFs, and compositing multiple NeRFs in a single scene. This approach of generating NeRF-aligned camera paths can be adapted to other 3D tool sets and workflows, enabling a more seamless integration of NeRFs into visual effects and film production. It also supports Nerfstudio Gaussian splatting.

The exported mesh or point-cloud representation is imported into Blender, and a render camera path is generated by transforming the coordinate space of the NeRF scene to that of the Blender virtual camera, yielding aligned camera paths.
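The alignment step described above amounts to a change of basis on camera poses. Below is a minimal sketch in Python with NumPy; the function name, and the assumption that both the Blender camera pose and the imported NeRF object's transform are plain 4x4 matrices, are illustrative rather than the add-on's actual API:

```python
import numpy as np

def blender_to_nerf_pose(camera_world, nerf_object_world):
    """Re-express a Blender camera pose in the NeRF scene's frame.

    camera_world:      4x4 world matrix of the Blender render camera.
    nerf_object_world: 4x4 world matrix of the imported NeRF mesh or
                       point cloud object (it carries the NeRF scene's
                       frame into Blender's world frame).
    """
    # Inverting the object's transform maps Blender-world coordinates
    # back into the NeRF scene's coordinate space; composing it with
    # the camera pose yields one NeRF-aligned camera path keyframe.
    return np.linalg.inv(nerf_object_world) @ camera_world
```

Applying this per animation frame produces the aligned camera path that the NeRF renderer consumes.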

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">v0.1.16 Released 🎉<br><br>New Blender integration for VFX workflows!<a href="https://twitter.com/hashtag/NeRF?src=hash&amp;ref_src=twsrc%5Etfw">#NeRF</a> <a href="https://twitter.com/hashtag/nerfacto?src=hash&amp;ref_src=twsrc%5Etfw">#nerfacto</a> <a href="https://twitter.com/hashtag/VFX?src=hash&amp;ref_src=twsrc%5Etfw">#VFX</a> <a href="https://twitter.com/hashtag/Blender3d?src=hash&amp;ref_src=twsrc%5Etfw">#Blender3d</a> <a href="https://t.co/uU8QO1AWwU">pic.twitter.com/uU8QO1AWwU</a></p>&mdash; nerfstudio (@nerfstudioteam) <a href="https://twitter.com/nerfstudioteam/status/1618868366072229888?ref_src=twsrc%5Etfw">January 27, 2023</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

I created documentation for it [here](https://docs.nerf.studio/extensions/blender_addon.html) and a tutorial video demonstrating basic exmaples using the add-on as well as a breakdown of other effects that can be done with it.
I created documentation for it [here](https://docs.nerf.studio/extensions/blender_addon.html) and a tutorial video demonstrating basic examples using the add-on as well as a breakdown of other effects that can be done with it.

<iframe width="560" height="315" src="https://www.youtube.com/embed/vDhj6j7kfWM?si=zmlFcZoxZipyTEqs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

8 changes: 4 additions & 4 deletions research.md
@@ -35,7 +35,7 @@ menu-show: true
</header>
<p style="font-size: 12pt">We propose a method for editing 3D Gaussian Splatting (3DGS) scenes with text instructions, similar in approach to Instruct-NeRF2NeRF. Given a 3DGS reconstruction of a scene and the collection of images used to reconstruct it, our method uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit the input images while optimizing the underlying scene, resulting in an optimized 3D scene that respects the edit instruction. We demonstrate that our method can edit large-scale, real-world scenes, and accomplish more realistic, targeted edits than prior work.
<br>
- Paper comming soon
- Paper coming soon
<br>
- Nerfstudio integration supported
</p>
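The iterative scheme described above alternates between diffusion-editing the dataset images and re-optimizing the scene on them. A toy sketch of that loop, with `render_fn`, `edit_fn`, and `train_fn` as hypothetical stand-ins for view rendering, diffusion editing, and scene optimization (not the actual implementation):

```python
def iterative_dataset_update(dataset, render_fn, edit_fn, train_fn, rounds=10):
    """Toy sketch of Instruct-NeRF2NeRF-style editing: each round,
    the current scene is re-rendered at every training view, the
    diffusion editor rewrites the stored image toward the text
    instruction, and the scene is re-optimized on the edited images."""
    scene = train_fn(dataset)  # initial reconstruction
    for _ in range(rounds):
        # Replace each dataset image with an edited version of its render.
        dataset = [edit_fn(img, render_fn(scene, i))
                   for i, img in enumerate(dataset)]
        # Re-fit the scene so it absorbs the edits consistently in 3D.
        scene = train_fn(dataset)
    return scene
```

With scalar "images", a mean-fit "scene", and an editor that nudges each render halfway toward a target value, the loop converges to the target, mirroring how repeated edit/optimize rounds pull the 3D scene toward the instruction.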
@@ -131,16 +131,16 @@ menu-show: true
<header class="major">
<h3>Dreamcrafter (In Progress)</h3>
</header>
<p style="font-size: 12pt">In my current 5th-year masters program (1 year graduate program after 4 year undergrad degree), I am attempting to build the initial concept of the VR environment creation system I proposed in 2022. For my VR/AR class I worked with a team to implement two prototype systems which leverage NeRFs, 3DGS, and Stable Diffusion to create a VR interface for 3D photo-realistic content creation. This includes a system to edit existing NeRF/GS scenes through voice, hand controls, and existing diffusion models (such as Instruct-Pix2Pix). We also have a system leveraging ControlNet to create 2D mockups of scenes based on 3D primitive objects. I am currently devloping the complete system with intelligent natural langue region selection and additional features. We are working towards a research publication for 2024.</p>
<p style="font-size: 12pt">In my current 5th-year master's program (a 1-year graduate program following a 4-year undergraduate degree), I am building the initial concept of the VR environment creation system I proposed in 2022. For my VR/AR class, I worked with a team to implement two prototype systems that leverage NeRFs, 3DGS, and Stable Diffusion to create a VR interface for photorealistic 3D content creation. These include a system to edit existing NeRF/3DGS scenes through voice, hand controls, and existing diffusion models (such as InstructPix2Pix), and a system leveraging ControlNet to create 2D mockups of scenes from 3D primitive objects. I am currently developing the complete system with intelligent natural-language region selection and additional features. We are working towards a research publication in 2024.</p>
<!--<ul class="actions">
<li><a href="dreamcrafter_progress.html" class="button">Learn More</a></li>
</ul> -->
</ul>-->
</div>
</div>
</section>
<section>
<a href="https://docs.nerf.studio" class="image">
<img src="{% link assets/images/nerfstudio_logo.png %}" alt="" data-position="center center" style="border-radius: 20px" />
<img src="{% link assets/images/nerfstudio_logo.gif %}" alt="" data-position="center center" style="border-radius: 20px" />
</a>
<div class="content">
<div class="inner">
