From 7511358862e5af1654db563538b4d13a0fad7ee3 Mon Sep 17 00:00:00 2001
From: cvachha
Date: Mon, 1 Jan 2024 04:24:09 -0800
Subject: [PATCH] research edits

---
 dreamcrafter_progress.md | 2 +-
 research.md              | 8 ++++----
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/dreamcrafter_progress.md b/dreamcrafter_progress.md
index c142262..04504ce 100644
--- a/dreamcrafter_progress.md
+++ b/dreamcrafter_progress.md
@@ -10,7 +10,7 @@ show_tile: false

## Overview

-For my masters thesis, I am trying to create a system based on the proposal I made in 2022 [here](nerfenvironmentcreation.html). In my VR/AR class in my first semester (Fall 2023) of my 5th year masters program (1 yr program), to implement two prototype systems which leverage NeRFs, 3DGS, and Stable Diffusion to create a VR interface for 3D photo-realistic content creation. We call our system Dreamcrafter, a Virtual Reality 3D content generation and editing system Assisted by generative AI. Our system attempts to address the gap in HCI-VR research in generative AI tools by proposing a system that enhances the process of 3D environment creation and editing.
+For my master's thesis, I am building a system based on the proposal I made in 2022 [here](nerfenvironmentcreation.html). In the first semester (Fall 2023) of my one-year 5th-year master's program, I led a group that implemented two prototype systems as the final project for our VR/AR course, leveraging NeRFs, 3DGS, and Stable Diffusion to create a VR interface for photorealistic 3D content creation. We call our system Dreamcrafter, a virtual reality 3D content generation and editing system assisted by generative AI. Our system addresses a gap in HCI-VR research on generative AI tools by proposing a system that enhances the process of 3D environment creation and editing.

Integration of NeRF/3DGS and diffusion models into user-friendly platforms for 3D environment creation is still in its infancy. Editing of radiance fields and generative 3D objects is currently limited to text prompts or constrained 2D interfaces. Current research in NeRFs and diffusion models focuses primarily on enhancing image and reconstruction quality, and we aim to address the noticeable lack of exploration of user interfaces designed for editing and controllability of these models and novel 3D representations.

diff --git a/research.md b/research.md
index 3ba4849..c29c15d 100644
--- a/research.md
+++ b/research.md
@@ -33,7 +33,7 @@ menu-show: true

Instruct-GS2GS

-Authors: **Cyrus Vachha** and Ayaan Haque (2023)
+Authors: Cyrus Vachha and Ayaan Haque (2023)

We propose a method for editing 3D Gaussian Splatting (3DGS) scenes with text instructions, in a manner similar to Instruct-NeRF2NeRF. Given a 3DGS reconstruction of a scene and the collection of images used to create it, our method uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit the input images while optimizing the underlying scene, resulting in an optimized 3D scene that respects the edit instruction. We demonstrate that our proposed method can edit large-scale, real-world scenes and achieves more realistic, targeted edits than prior work.
- Paper coming soon
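The iterative dataset update at the heart of this method is easier to see in code. Below is a minimal sketch of the loop, assuming hypothetical `Scene` and `Editor` interfaces (stand-ins for a 3DGS trainer and InstructPix2Pix, not the real Nerfstudio API); the `steps` and `edit_every` constants are placeholders, not the method's actual settings.

```python
# Sketch of the Instruct-NeRF2NeRF-style iterative dataset update used here.
# `Scene` and `Editor` are hypothetical stand-ins, not real library APIs.
from typing import List, Protocol

class Scene(Protocol):
    def render(self, view: int): ...      # render the current 3DGS scene from training camera `view`
    def optimize_step(self) -> None: ...  # one optimization step against the training images

class Editor(Protocol):
    def edit(self, image, original, prompt: str): ...  # InstructPix2Pix-style instructed image edit

def instruct_edit(scene: Scene, images: List, originals: List, editor: Editor,
                  prompt: str, steps: int = 30_000, edit_every: int = 10) -> None:
    """Alternate splat optimization with per-view diffusion edits until the
    scene converges to a 3D-consistent version of the instruction."""
    for step in range(steps):
        if step % edit_every == 0:
            view = (step // edit_every) % len(images)
            # Edit a re-render of the current scene, conditioned on the
            # unedited capture, then swap it into the training set.
            images[view] = editor.edit(scene.render(view), originals[view], prompt)
        scene.optimize_step()  # gradients flow from the partially edited dataset
```

Because each edited view is conditioned on the original capture, the per-image edits stay anchored to the scene, and the ongoing 3D optimization reconciles view-to-view inconsistencies in the diffusion outputs.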
@@ -59,7 +59,7 @@ menu-show: true

Nerfstudio Blender VFX Add-on

-Authors: **Cyrus Vachha** (2023)
+Author: Cyrus Vachha (2023)

We present a pipeline for integrating NeRFs into traditional compositing VFX pipelines using Nerfstudio, an open-source framework for training and rendering NeRFs. Our approach uses Blender, widely used open-source 3D creation software, to align camera paths and composite NeRF renders with meshes and other NeRFs, allowing for seamless integration of NeRFs into traditional VFX pipelines. Our NeRF Blender add-on allows for more controlled camera trajectories of photorealistic scenes, compositing meshes and other environmental effects with NeRFs, and compositing multiple NeRFs in a single scene. This approach of generating NeRF-aligned camera paths can be adapted to other 3D toolsets and workflows, enabling a more seamless integration of NeRFs into visual effects and film production.
- Shown in CVPR 2023 Art Gallery
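The core of the camera-path alignment can be sketched with a short `bpy` script: sample the animated Blender camera over the timeline and write a Nerfstudio-style camera path JSON that the NeRF renderer can replay. This is a hypothetical illustration, not the add-on's actual code, and the JSON field names only approximate Nerfstudio's camera-path format.

```python
# Hypothetical sketch: sample the active Blender camera over the timeline and
# write a Nerfstudio-style camera path JSON. Field names are approximate.
import json
import math
import bpy

def export_camera_path(filepath: str) -> None:
    scene = bpy.context.scene
    cam = scene.camera
    keyframes = []
    for frame in range(scene.frame_start, scene.frame_end + 1):
        scene.frame_set(frame)  # evaluate the animation at this frame
        c2w = [list(row) for row in cam.matrix_world]  # 4x4 camera-to-world
        keyframes.append({
            "camera_to_world": sum(c2w, []),        # flattened row-major
            "fov": math.degrees(cam.data.angle_y),  # vertical field of view
        })
    with open(filepath, "w") as f:
        json.dump({
            "camera_type": "perspective",
            "render_width": scene.render.resolution_x,
            "render_height": scene.render.resolution_y,
            "fps": scene.render.fps,
            "seconds": len(keyframes) / scene.render.fps,
            "camera_path": keyframes,
        }, f, indent=2)
```

Rendering the NeRF along this exported path with the same resolution and field of view as the Blender camera is what lets NeRF frames composite cleanly with Blender's own mesh renders.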
@@ -86,7 +86,7 @@ menu-show: true

StreamFunnel

-Authors: Haohua Lyu, **Cyrus Vachha**, Qianyi Chen, Balasaravanan Thoravi Kumaravel, Bjöern Hartmann (2023)
+Authors: Haohua Lyu, Cyrus Vachha, Qianyi Chen, Balasaravanan Thoravi Kumaravel, Björn Hartmann (2023)

The increasing adoption of Virtual Reality (VR) systems in different domains has led to a need to support interaction between many spectators and a VR user. This is common in game streaming, live performances, and webinars. Prior CSCW systems for VR environments are limited to small groups of users. In this work, we identify problems associated with interaction carried out with large groups of users. To address this, we introduce an additional user role: the co-host, who mediates communication between the VR user and many spectators. To facilitate this mediation, we present StreamFunnel, which allows the co-host to be part of the VR application's space and interact with it. The design of StreamFunnel was informed by formative interviews with six experts. StreamFunnel uses a cloud-based streaming solution to enable a remote co-host and many spectators to view and interact through standard web browsers, without requiring any custom software. We present results of informal user testing, which provide insights into StreamFunnel's ability to facilitate these scalable interactions. Our participants, who took the role of a co-host, found that StreamFunnel enabled them to add value by presenting the VR experience to the spectators and relaying useful information from the live chat to the VR user.
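To make the co-host's mediation role concrete, here is a minimal, hypothetical relay (Python `websockets` library) showing the pattern the abstract describes: spectator chat funnels to the co-host, and only messages the co-host chooses to forward reach the VR user. StreamFunnel itself additionally streams the rendered VR view to browsers through a cloud service; none of the names below come from its actual implementation.

```python
# Hypothetical mediation relay: spectators -> co-host -> VR user.
import asyncio
import json
import websockets

clients = {"spectator": set(), "cohost": set(), "vr_user": set()}

async def handle(ws):
    role = (await ws.recv()).strip()  # first message declares the client's role
    clients.setdefault(role, set()).add(ws)
    try:
        async for raw in ws:
            if role == "spectator":
                # Spectator chat is funneled to the co-host, not the VR user.
                websockets.broadcast(clients["cohost"], raw)
            elif role == "cohost" and json.loads(raw).get("forward"):
                # The co-host curates and relays selected messages into VR.
                websockets.broadcast(clients["vr_user"], raw)
    finally:
        clients[role].discard(ws)

async def main():
    async with websockets.serve(handle, "localhost", 8765):
        await asyncio.Future()  # serve until cancelled

asyncio.run(main())
```

The design point is the funnel itself: spectators never address the VR user directly, so the interaction load scales with one curated channel rather than with the size of the audience.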