I’m trying to integrate Instruct-Pix2Pix edits into a Gaussian Splatting (Splatfacto) training pipeline. Instruct-Pix2Pix itself clearly works: the exported images are correctly transformed according to the prompts (e.g., “turn completely black”). However, these edited images never seem to be used by the Splatfacto training. The logs indicate that Splatfacto is always training on the original, unedited images, no matter what changes I make.
Evidence
Debug prints show that the ground-truth images reaching the loss consistently have min=0.0 and max=1.0 (the full dynamic range of the unaltered frames), rather than the near-zero values an all-black Instruct-Pix2Pix edit would produce.
Even after many steps with a high learning rate and confirmed Instruct-Pix2Pix edits, there’s no visible change in the Gaussian Splatting render.
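To make this concrete, here is a minimal sketch of the kind of debug check I'm running inside the loss computation. The function name and call site are hypothetical; in my setup something like this is called on the ground-truth batch image just before the L1/LPIPS terms in `InstructGS2GSModel`:

```python
import torch

def debug_gt_stats(gt_image: torch.Tensor, step: int) -> tuple[float, float]:
    """Print and return the min/max of the ground-truth image used in the loss.

    Hypothetical debug hook: called on batch["image"] inside get_loss_dict.
    An all-black edited frame should report min=max=0.0; what I actually see
    every step is min=0.0, max=1.0, i.e. the original unedited frame.
    """
    lo, hi = gt_image.min().item(), gt_image.max().item()
    print(f"[step {step}] gt min={lo:.3f} max={hi:.3f}")
    return lo, hi

# Stand-in tensors to illustrate the expected vs. observed readings:
black_edit = torch.zeros(8, 8, 3)      # what the loss *should* see after editing
original = torch.rand(8, 8, 3)         # what the loss actually reports seeing
stats = debug_gt_stats(black_edit, step=0)
```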
What I’ve tried
Confirmed that my code writes the edited images to self.datamanager.cached_train[idx]["image"] (and sometimes also data["image"]).
Made sure that the L1/LPIPS code paths in InstructGS2GSModel are no longer commented out and are being called.
Removed or reduced the usage of deepcopy() and tested both CPU and GPU caching (images_on_gpu, cache_images='cpu' vs. 'gpu').
Forced training on a single image index to see if the update “sticks”.
Tried resetting optimizers, re-initializing learning rates, and varying the number of training steps.
Inserted debug prints everywhere (in the pipeline, datamanager, and model) to confirm that the edited images never actually show up in the loss function.
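For reference, the cache overwrite from the first bullet looks roughly like the sketch below. The helper name and the stand-in cache are hypothetical; the real code assigns into `self.datamanager.cached_train[idx]["image"]`. One thing I've been careful about (and would appreciate a sanity check on) is matching the cached entry's device and dtype and mutating the cached dict in place rather than a copy:

```python
import torch

def write_edited_image(cached_train: list, idx: int, edited: torch.Tensor) -> None:
    """Overwrite a cached ground-truth frame in place (sketch).

    `cached_train` stands in for self.datamanager.cached_train, where
    Splatfacto keeps images as float tensors in [0, 1]. The edited image is
    converted to the cached entry's device/dtype so the assignment is valid
    for both cache_images='cpu' and 'gpu' configurations.
    """
    entry = cached_train[idx]
    entry["image"] = edited.to(device=entry["image"].device,
                               dtype=entry["image"].dtype)

# Minimal usage with a one-frame stand-in cache: replace the frame with an
# all-black edit and confirm the cached dict itself was mutated.
cache = [{"image": torch.rand(4, 4, 3)}]
write_edited_image(cache, 0, torch.zeros(4, 4, 3))
```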
Despite all of these attempts, the model's loss function still sees only the unedited frames. If anyone has suggestions for how to ensure that the Instruct-Pix2Pix–generated images actually become the ground truth for the Gaussian Splatting training, please let me know. I've spent days on this and would appreciate any insights.