From b8a80f5f4fdc9f72140ac8547226c424ebb980f8 Mon Sep 17 00:00:00 2001
From: cvachha
Date: Tue, 2 Jan 2024 01:22:16 -0800
Subject: [PATCH] Update dreamcrafter_progress.md

---
 dreamcrafter_progress.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/dreamcrafter_progress.md b/dreamcrafter_progress.md
index d30c1cf..b9bfddc 100644
--- a/dreamcrafter_progress.md
+++ b/dreamcrafter_progress.md
@@ -10,11 +10,11 @@ show_tile: false
 
 ## Overview
 
-For my masters thesis, I am trying to create a system based on the proposal I made in 2022 [here](nerfenvironmentcreation.html). In my first semester (Fall 2023) of my 5th year masters program (1 yr program), two prototype systems were implemented which leverage NeRFs, 3DGS, and Stable Diffusion to create a VR interface for 3D photo-realistic content creation as part of a group I led for our final project in our VR/AR course. We call our system Dreamcrafter, a Virtual Reality 3D content generation and editing system assisted by generative AI. Our system attempts to address the gap in HCI-VR research in generative AI tools by proposing a system that enhances the process of 3D environment creation and editing.
+For my master's thesis, I am trying to create a system based on the proposal I made in 2022 [here](nerfenvironmentcreation.html). In the first semester (Fall 2023) of my master's program, I led a group in a VR/AR course that implemented a system prototype leveraging NeRFs, 3DGS, and Stable Diffusion to create a VR interface for photo-realistic 3D content creation. We call our system Dreamcrafter, a Virtual Reality 3D content generation and editing system assisted by generative AI.
 
-Integration of NeRF/3DGS and diffusion models into user-friendly platforms for 3D environment creation is still in its infancy. Editing of radiance fields and generative 3D objects is currently limited to text prompts or limited 2D interfaces. Current research in NeRFs and diffusion models is primarily on enhancing image/reconstruction quality, and we aim to address the noticeable lack of exploration in the application of user interfaces designed for editing and controllability of these models and novel 3D representations.
+Integration of NeRF/3DGS and diffusion models into user-friendly platforms for 3D environment creation is still in its infancy. Editing of radiance fields and generative 3D objects is currently restricted to text prompts or limited 2D interfaces. Current research in NeRFs and diffusion models is primarily focused on enhancing image/reconstruction quality, and we aim to address the noticeable lack of exploration of user interfaces designed for editing and controllability of these models and novel 3D representations.
 
-The core of our approach is a VR-based system that allows users to interact with and manipulate 3D objects and environments in real-time. Dreamcrafter involves two subsystems which leverage novel 3D representations and stable diffusion. The stable diffusion powered system assigns semantically mapped spatial tags to 3D primitive objects to generate stable diffusion previews of scenes. Our second subsystem leverages NeRFs and 3D Gaussian Splatting for rendering and editing of 3D photo realistic scenes. Dreamcrafter is designed to be simple to use, lowering the barrier to entry for users without extensive experience in 3D modeling, while still providing realistic output results.
+The core of our approach is a VR-based system that allows users to interact with and manipulate 3D objects and environments in real time to enhance the process of 3D creation and editing. Dreamcrafter comprises two subsystems that leverage novel 3D representations and Stable Diffusion. The Stable Diffusion-powered subsystem assigns semantically mapped spatial tags to 3D primitive objects to generate Stable Diffusion previews of scenes. The second subsystem leverages NeRFs and 3D Gaussian Splatting for rendering and editing photo-realistic 3D scenes. Dreamcrafter is designed to be simple to use, lowering the barrier to entry for users without extensive experience in 3D modeling, while still producing realistic output results. Our current system is a proof-of-concept prototype that we developed within a month.
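
As context for the preview subsystem described in the last paragraph of the patch, here is a minimal sketch of how semantically tagged proxy primitives could be flattened into a Stable Diffusion img2img call. Everything in it is an illustrative assumption rather than the actual Dreamcrafter code: the `TaggedPrimitive` class, the `build_prompt` and `generate_preview` helpers, the left-to-right prompt-ordering heuristic, and the `runwayml/stable-diffusion-v1-5` checkpoint are all hypothetical choices; only the `diffusers` img2img API itself is real.

```python
# Hypothetical sketch of the tag-to-preview idea; not the Dreamcrafter source.
from dataclasses import dataclass

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline


@dataclass
class TaggedPrimitive:
    tag: str        # semantic label the user assigns, e.g. "oak bookshelf"
    position: tuple  # world-space (x, y, z) of the proxy object


def build_prompt(primitives):
    # Order tags left to right so the prompt carries a coarse spatial layout.
    ordered = sorted(primitives, key=lambda p: p.position[0])
    objects = ", ".join(p.tag for p in ordered)
    return f"photorealistic render of a scene containing {objects}, highly detailed"


def generate_preview(primitives, proxy_render: Image.Image) -> Image.Image:
    # img2img keeps the blocked-out layout of the proxy render while
    # Stable Diffusion restyles the surfaces into a photorealistic preview.
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    result = pipe(
        prompt=build_prompt(primitives),
        image=proxy_render,
        strength=0.6,  # <1.0 preserves composition; higher values restyle more
    )
    return result.images[0]
```

A depth- or edge-conditioned ControlNet would likely hold the primitives' layout more faithfully than plain img2img; the sketch uses img2img only because it is the simplest way to condition on a proxy render.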