diff --git a/.DS_Store b/.DS_Store
index e7f0b1a..09ea642 100644
Binary files a/.DS_Store and b/.DS_Store differ
diff --git a/assets/images/igs2gs_face.gif b/assets/images/igs2gs_face.gif
index 9d0568c..1b60800 100644
Binary files a/assets/images/igs2gs_face.gif and b/assets/images/igs2gs_face.gif differ
diff --git a/assets/images/nerfstudio_logo.gif b/assets/images/nerfstudio_logo.gif
new file mode 100644
index 0000000..5644841
Binary files /dev/null and b/assets/images/nerfstudio_logo.gif differ
diff --git a/dreamcrafter_progress.md b/dreamcrafter_progress.md
index e63e9bd..7efaae1 100644
--- a/dreamcrafter_progress.md
+++ b/dreamcrafter_progress.md
@@ -10,6 +10,7 @@ show_tile: false

## Overview

+For my master's thesis, I am creating a system based on the proposal I made in 2022 [here](nerfenvironmentcreation.html).
We propose Dreamcrafter, a Virtual Reality 3D content generation and editing system assisted by generative AI. Our system addresses this gap by harnessing the immersive experience and spatial interactions of VR, coupled with the advanced capabilities of generative AI, to enhance the process of 3D environment creation and editing. NeRF and diffusion models offer unparalleled realism and detail in rendering; however, their integration into user-friendly platforms for 3D environment creation is still in its infancy.
diff --git a/experimentationNeural.md b/experimentationNeural.md
index 39fb262..4e6c46e 100644
--- a/experimentationNeural.md
+++ b/experimentationNeural.md
@@ -1,6 +1,6 @@
---
layout: post
-title: Experimentation with NeRFs,Neural Rendering, and Virtual Production
+title: Experimentation with NeRFs, Neural Rendering, and Virtual Production
show_tile: false
---

@@ -12,15 +12,31 @@ show_tile: false
-VR NeRF Environment Creation System
+VR NeRF Environment Creation System Proposal (2022)
-Outline of current research project on creating a NeRF creation system for VR
+Outline of the proposal, written in 2022, for my research project on creating a NeRF creation system for VR
+Nerfstudio Contributions
+Since Jan 2023, I have been contributing features to the Nerfstudio system, including the Blender VFX add-on and VR180/omnidirectional (VR 360) video/image render outputs.
@@ -28,7 +44,7 @@ show_tile: false
-Virtual Production Experiments
+Virtual Production Experiments (2020-2021)
A few experiments including virtual real-time backgrounds and virtual MetaHuman actors
@@ -44,9 +60,11 @@ show_tile: false
-NeRF Gallery
+NeRF Gallery (2022)
-Renders of select Neural Radiance Fields I captured from Luma Labs AI, NVidia Instant NeRF, and NerfStudio
+Renders of select Neural Radiance Fields I captured from Luma Labs AI, NVIDIA Instant NeRF, and Nerfstudio
+Will soon be updated with a selection of my (hundreds of) 2023 captures
@@ -59,7 +77,7 @@ show_tile: false
Background of Interest
-Blog style post explaining the timeline of my interest and motivation in NeRFs, lightfields, and neural rendering
+Blog-style post, written in 2022, explaining the timeline of my interest and motivation in NeRFs, light fields, and neural rendering
diff --git a/nerfstudio_contributions.md b/nerfstudio_contributions.md
index 03a90eb..c65c8a5 100644
--- a/nerfstudio_contributions.md
+++ b/nerfstudio_contributions.md
@@ -15,13 +15,13 @@ Since Jan 2023 I have made contributions to the Nerfstudio API system including
-I created a Blender add-on that allows NeRFs to be used in visual effects. This enables a pipeline for integrating NeRFs into traditional compositing VFX pipelines using Nerfstudio. This approach leverages using Blender, a widely used open-source 3D creation software, to align camera paths and composite NeRF renders with meshes and other NeRFs, allowing for seamless integration of NeRFs into traditional VFX pipelines. It allows for more controlled camera trajectories of photorealistic scenes, compositing meshes and other environmental effects with NeRFs, and compositing multiple NeRFs in a single scene. This approach of generating NeRF aligned camera paths can be adapted to other 3D tool sets and workflows, enabling a more seamless integration of NeRFs into visual effects and film production. This also supports Nerfstudio gaussian splatting as well.
+I created a Blender add-on that allows NeRFs to be used in visual effects, enabling a pipeline for integrating NeRFs into traditional compositing VFX workflows using Nerfstudio. The add-on leverages Blender, a widely used open-source 3D creation tool, to align camera paths and composite NeRF renders with meshes and other NeRFs. It allows for more controlled camera trajectories of photorealistic scenes, compositing meshes and other environmental effects with NeRFs, and compositing multiple NeRFs in a single scene. This approach of generating NeRF-aligned camera paths can be adapted to other 3D toolsets and workflows, enabling a more seamless integration of NeRFs into visual effects and film production. It also supports Nerfstudio Gaussian splatting.

The exported mesh or point cloud representation is imported into Blender, and a render camera path is generated by transforming the coordinate space of the NeRF scene to that of the Blender virtual camera, producing aligned camera paths.

-I created documentation for it [here](https://docs.nerf.studio/extensions/blender_addon.html) and a tutorial video demonstrating basic exmaples using the add-on as well as a breakdown of other effects that can be done with it.
+I created documentation for it [here](https://docs.nerf.studio/extensions/blender_addon.html) and a tutorial video demonstrating basic examples of using the add-on, as well as a breakdown of other effects that can be achieved with it.
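To illustrate the camera-path alignment described in the diff above, here is a minimal sketch (not the add-on's actual code) of exporting an animated Blender camera into a NeRF's coordinate space. The object name "point_cloud", the JSON field names, and the output path are illustrative assumptions rather than the add-on's real schema.

```python
import json
import math
import bpy

scene = bpy.context.scene
cam = scene.camera
# Assumption: the imported NeRF mesh/point cloud object carries the
# NeRF-scene-to-Blender-world alignment in its matrix_world.
nerf_obj = bpy.data.objects["point_cloud"]
to_nerf = nerf_obj.matrix_world.inverted()  # Blender world -> NeRF scene space

keyframes = []
for f in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(f)
    # Re-express the render camera's pose in the NeRF's coordinate space,
    # which is what keeps the rendered path aligned with the trained scene.
    c2w = to_nerf @ cam.matrix_world
    keyframes.append({
        "camera_to_world": [v for row in c2w for v in row],  # row-major 4x4
        "fov": math.degrees(cam.data.angle_y),
    })

with open("camera_path.json", "w") as fp:  # illustrative output path
    json.dump({"camera_path": keyframes, "fps": scene.render.fps}, fp, indent=2)
```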
diff --git a/research.md b/research.md
index 85783cf..371ef07 100644
--- a/research.md
+++ b/research.md
@@ -35,7 +35,7 @@ menu-show: true
We propose a method for editing 3D Gaussian Splatting (3DGS) scenes with text instructions, similar to Instruct-NeRF2NeRF. Given a 3DGS reconstruction of a scene and the collection of images used to reconstruct it, our method uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit the input images while optimizing the underlying scene, resulting in an optimized 3D scene that respects the edit instruction. We demonstrate that our proposed method is able to edit large-scale, real-world scenes, and is able to accomplish more realistic, targeted edits than prior work.
- - Paper comming soon
+ - Paper coming soon
    - Nerfstudio integration supported

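As context for the research.md entry above: the Instruct-NeRF2NeRF-style loop it describes alternates between editing training images with InstructPix2Pix and continuing 3DGS optimization. Below is a minimal sketch of that iterative dataset update; the diffusers pipeline and checkpoint are real, but `scene`, `dataset`, and their methods are hypothetical stand-ins, not the paper's actual code.

```python
import random
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

def edit_scene(scene, dataset, instruction, steps=30_000, edit_every=10):
    """Hypothetical loop: `scene` is a 3DGS model, `dataset` holds the
    training cameras/images; both APIs are illustrative stand-ins."""
    opt = torch.optim.Adam(scene.parameters(), lr=1e-3)
    for step in range(steps):
        if step % edit_every == 0:
            # Periodically overwrite one training image with an edited version
            # of the current render, pulling the dataset toward the instruction.
            i = random.randrange(len(dataset))
            render = scene.render_pil(dataset.camera(i))
            edited = pipe(instruction, image=render,
                          num_inference_steps=20,
                          image_guidance_scale=1.5).images[0]
            dataset.replace_image(i, edited)
        # Standard photometric loss against the gradually edited dataset.
        j = random.randrange(len(dataset))
        loss = (scene.render(dataset.camera(j)) - dataset.image(j)).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
```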
    @@ -131,16 +131,16 @@ menu-show: true

    Dreamcrafter (In Progress)

-In my current 5th-year masters program (1 year graduate program after 4 year undergrad degree), I am attempting to build the initial concept of the VR environment creation system I proposed in 2022. For my VR/AR class I worked with a team to implement two prototype systems which leverage NeRFs, 3DGS, and Stable Diffusion to create a VR interface for 3D photo-realistic content creation. This includes a system to edit existing NeRF/GS scenes through voice, hand controls, and existing diffusion models (such as Instruct-Pix2Pix). We also have a system leveraging ControlNet to create 2D mockups of scenes based on 3D primitive objects. I am currently devloping the complete system with intelligent natural langue region selection and additional features. We are working towards a research publication for 2024.
+In my current 5th-year master's program (a 1-year graduate program following a 4-year undergraduate degree), I am building the initial concept of the VR environment creation system I proposed in 2022. For my VR/AR class, I worked with a team to implement two prototype systems which leverage NeRFs, 3DGS, and Stable Diffusion to create a VR interface for photorealistic 3D content creation. This includes a system to edit existing NeRF/3DGS scenes through voice, hand controls, and existing diffusion models (such as InstructPix2Pix). We also built a system leveraging ControlNet to create 2D mockups of scenes based on 3D primitive objects. I am currently developing the complete system with intelligent natural-language region selection and additional features. We are working towards a research publication for 2024.
-->
-
+
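To make the ControlNet mockup idea in the Dreamcrafter entry concrete, here is a sketch using the public diffusers depth ControlNet. It assumes a depth render of the placed 3D primitives already exists at the illustrative path `primitives_depth.png`; the model IDs are real Hugging Face checkpoints, and the prompt is only an example.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Depth-conditioned ControlNet: the VR scene's primitive objects are rendered
# to a depth map, which constrains the layout of the generated 2D mockup.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16
).to("cuda")

depth = Image.open("primitives_depth.png")  # illustrative input path
mockup = pipe("a cozy wooden cabin interior, photorealistic",
              image=depth, num_inference_steps=30).images[0]
mockup.save("scene_mockup.png")
```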