
Commit: research edits

cvachha committed Jan 1, 2024
1 parent 6417370 commit 7511358
Showing 2 changed files with 5 additions and 5 deletions.
2 changes: 1 addition & 1 deletion dreamcrafter_progress.md
@@ -10,7 +10,7 @@ show_tile: false
</ul>

## Overview
For my masters thesis, I am trying to create a system based on the proposal I made in 2022 [here](nerfenvironmentcreation.html). In my VR/AR class in my first semester (Fall 2023) of my 5th year masters program (1 yr program), to implement two prototype systems which leverage NeRFs, 3DGS, and Stable Diffusion to create a VR interface for 3D photo-realistic content creation. We call our system Dreamcrafter, a Virtual Reality 3D content generation and editing system Assisted by generative AI. Our system attempts to address the gap in HCI-VR research in generative AI tools by proposing a system that enhances the process of 3D environment creation and editing.
For my master's thesis, I am building a system based on the proposal I made in 2022 [here](nerfenvironmentcreation.html). In the first semester (Fall 2023) of my fifth-year master's program (a one-year program), I led a group for the final project in our VR/AR course, where we implemented two prototype systems that leverage NeRFs, 3DGS, and Stable Diffusion to create a VR interface for photorealistic 3D content creation. We call our system Dreamcrafter, a Virtual Reality 3D content generation and editing system assisted by generative AI. Our system attempts to address the gap in HCI-VR research on generative AI tools by proposing a system that enhances the process of 3D environment creation and editing.

Integration of NeRF/3DGS and diffusion models into user-friendly platforms for 3D environment creation is still in its infancy. Editing of radiance fields and generated 3D objects is currently limited to text prompts or constrained 2D interfaces. Current research on NeRFs and diffusion models focuses primarily on enhancing image and reconstruction quality, and we aim to address the noticeable lack of exploration of user interfaces designed for editing and controllability of these models and novel 3D representations.

8 changes: 4 additions & 4 deletions research.md
@@ -33,7 +33,7 @@ menu-show: true
<header class="major">
<h3>Instruct-GS2GS</h3>
</header>
<p style="font-size: 12pt">Authors: **Cyrus Vachha** and Ayaan Haque (2023)</p>
<p style="font-size: 12pt">Authors: <b>Cyrus Vachha</b> and Ayaan Haque (2023)</p>
<p style="font-size: 12pt">We propose a method for editing 3D Gaussian Splatting (3DGS) scenes with text instructions, similar in approach to Instruct-NeRF2NeRF. Given a 3DGS reconstruction of a scene and the collection of images used to create it, our method uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit the input images while optimizing the underlying scene, resulting in an optimized 3D scene that respects the edit instruction. We demonstrate that our proposed method can edit large-scale, real-world scenes and accomplishes more realistic, targeted edits than prior work.
<br>
- Paper coming soon
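To make the iterative editing concrete, here is a minimal sketch of the dataset-update loop described above. It is not the actual Instruct-GS2GS implementation: `render_view`, `ip2p_edit`, and `splat_optimize_step` are hypothetical stand-ins for rendering, InstructPix2Pix editing, and a 3DGS optimization step, and the update interval is an assumed hyperparameter.

```python
# Hypothetical sketch of the iterative dataset-update loop; not the
# actual Instruct-GS2GS code. All helper functions are stand-ins.

def edit_3dgs_scene(scene, images, cameras, instruction,
                    num_iters=30_000, edit_every=2_500):
    """Iteratively re-edit training images with InstructPix2Pix while
    continuing to optimize the underlying 3DGS scene."""
    dataset = list(images)  # working copy of the training images
    for step in range(num_iters):
        if step % edit_every == 0:
            # Re-edit every training view, conditioning the diffusion
            # model on the current render so that edits stay consistent
            # with the evolving 3D scene.
            for i, cam in enumerate(cameras):
                dataset[i] = ip2p_edit(
                    image=render_view(scene, cam),  # current render
                    original=images[i],             # unedited capture
                    prompt=instruction,
                )
        # Standard 3DGS optimization step against the edited dataset.
        i = step % len(dataset)
        splat_optimize_step(scene, dataset[i], cameras[i])
    return scene
```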
@@ -59,7 +59,7 @@ menu-show: true
<header class="major">
<h3>Nerfstudio Blender VFX Add-on</h3>
</header>
<p style="font-size: 12pt">Authors: **Cyrus Vachha** (2023)</p>
<p style="font-size: 12pt">Author: <b>Cyrus Vachha</b> (2023)</p>
<p style="font-size: 12pt">We present a pipeline for integrating NeRFs into traditional compositing VFX pipelines using Nerfstudio, an open-source framework for training and rendering NeRFs. Our approach uses Blender, a widely used open-source 3D creation tool, to align camera paths and composite NeRF renders with meshes and other NeRFs, allowing for seamless integration of NeRFs into traditional VFX pipelines. Our NeRF Blender add-on allows for more controlled camera trajectories of photorealistic scenes, compositing meshes and other environmental effects with NeRFs, and compositing multiple NeRFs in a single scene. This approach of generating NeRF-aligned camera paths can be adapted to other 3D toolsets and workflows, enabling a more seamless integration of NeRFs into visual effects and film production.
<br>
- Shown in CVPR 2023 Art Gallery
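As an illustration of the camera-path alignment step, below is a minimal Blender Python (bpy) sketch that samples the animated camera at each frame and writes out a Nerfstudio-style camera path. The JSON field names are my approximation of Nerfstudio's camera-path format and may not match the add-on's actual output; the add-on also handles coordinate-convention conversion, which is omitted here.

```python
# Minimal sketch: sample Blender's animated camera per frame and write a
# Nerfstudio-style camera path. Field names approximate Nerfstudio's
# format; coordinate-convention conversion is omitted.
import json
import math
import bpy

scene = bpy.context.scene
cam = scene.camera
keyframes = []

for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    keyframes.append({
        # 4x4 camera-to-world matrix in Blender's coordinate convention
        "camera_to_world": [list(row) for row in cam.matrix_world],
        "fov": math.degrees(cam.data.angle_y),  # vertical FOV in degrees
    })

camera_path = {
    "render_height": scene.render.resolution_y,
    "render_width": scene.render.resolution_x,
    "fps": scene.render.fps,
    "camera_path": keyframes,
}

# Write next to the .blend file for use with Nerfstudio's renderer.
with open(bpy.path.abspath("//camera_path.json"), "w") as f:
    json.dump(camera_path, f, indent=2)
```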
@@ -86,7 +86,7 @@ menu-show: true
<header class="major">
<h3>StreamFunnel</h3>
</header>
<p style="font-size: 12pt">Authors: Haohua Lyu, **Cyrus Vachha**, Qianyi Chen, Balasaravanan Thoravi Kumaravel, Bjöern Hartmann (2023)</p>
<p style="font-size: 12pt">Authors: Haohua Lyu, <b>Cyrus Vachha</b>, Qianyi Chen, Balasaravanan Thoravi Kumaravel, Björn Hartmann (2023)</p>
<p style="font-size: 12pt">The increasing adoption of Virtual Reality (VR) systems in different domains has led to a need to support interaction between many spectators and a VR user. This is common in game streaming, live performances, and webinars. Prior CSCW systems for VR environments are limited to small groups of users. In this work, we identify problems associated with interaction carried out with large groups of users. To address this, we introduce an additional user role: the co-host. They mediate communications between the VR user and many spectators. To facilitate this mediation, we present StreamFunnel, which allows the co-host to be part of the VR application's space and interact with it. The design of StreamFunnel was informed by formative interviews with six experts. StreamFunnel uses a cloud-based streaming solution to enable a remote co-host and many spectators to view and interact through standard web browsers, without requiring any custom software. We present results of informal user testing which provide insights into StreamFunnel's ability to facilitate these scalable interactions. Our participants, who took the role of a co-host, found that StreamFunnel enabled them to add value in presenting the VR experience to the spectators and relaying useful information from the live chat to the VR user.</p>
<ul class="actions">
<li><a href="https://arxiv.org/abs/2311.14930" class="button">View Publication (Arxiv)</a></li>
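The co-host mediation described above can be pictured as a small message relay: spectator chat fans in to the co-host, and only what the co-host chooses to forward reaches the VR user. The sketch below is invented for illustration (it is not StreamFunnel's implementation, which streams video via a cloud service) and assumes a recent version of the Python `websockets` library with its one-argument handler API.

```python
# Toy relay illustrating the co-host role: spectators' messages fan in to
# the co-host; only messages the co-host forwards reach the VR user.
# Invented for illustration; not StreamFunnel's actual implementation.
import asyncio
import websockets

spectators = set()
clients = {}  # role name ("cohost" or "vr") -> connection

async def handler(ws):
    role = (await ws.recv()).strip()  # first message declares the role
    if role in ("cohost", "vr"):
        clients[role] = ws
    else:
        spectators.add(ws)
    try:
        async for msg in ws:
            if ws in spectators and "cohost" in clients:
                await clients["cohost"].send(msg)  # chat fans in to co-host
            elif ws is clients.get("cohost") and "vr" in clients:
                await clients["vr"].send(msg)      # curated relay to VR user
    finally:
        spectators.discard(ws)

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```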
@@ -106,7 +106,7 @@ menu-show: true
<header class="major">
<h3>WebTransceiVR (CHI 22)</h3>
</header>
<p style="font-size: 12pt">Authors: Haohua Lyu, **Cyrus Vachha**, Qianyi Chen, Odysseus Pyrinis, Avery Liou, Balasaravanan Thoravi Kumaravel, Bjöern Hartmann (2022)</p>
<p style="font-size: 12pt">Authors: Haohua Lyu, <b>Cyrus Vachha</b>, Qianyi Chen, Odysseus Pyrinis, Avery Liou, Balasaravanan Thoravi Kumaravel, Björn Hartmann (2022)</p>
<p style="font-size: 12pt">We propose WebTransceiVR, an asymmetric collaboration toolkit which, when integrated into a VR application, allows multiple non-VR users to share the virtual space of the VR user. It allows external users to enter and be part of the VR application’s space through standard web browsers on mobile devices and computers. WebTransceiVR also includes a cloud-based streaming solution that enables many passive spectators to view the scene through any of the active cameras. We conduct informal user testing to gain additional insights for future work.</p>
<ul class="actions">
<li><a href="https://dl.acm.org/doi/abs/10.1145/3491101.3519816" class="button">View Publication (ACM)</a></li>
