Gaussian Splatting has become a popular technique for various 3D computer vision tasks, including novel view synthesis, scene reconstruction, and dynamic scene rendering. However, the challenge of natural-looking object insertion, where the object's appearance seamlessly matches the scene, remains unsolved. In this work, we propose a method, dubbed D3DR, for inserting a 3DGS-parametrized object into a 3DGS scene while correcting its lighting, shadows, and other visual artifacts to ensure consistency. We reveal a hidden ability of diffusion models trained on large real-world datasets to implicitly understand correct scene lighting, and leverage it in our pipeline. After inserting the object, we optimize a diffusion-based Delta Denoising Score (DDS)-inspired objective to adjust its 3D Gaussian parameters for proper lighting correction. We introduce a novel diffusion personalization technique that preserves object geometry and texture across diverse lighting conditions, and utilize it to achieve consistent identity matching between original and inserted objects. Finally, we demonstrate the effectiveness of the method by comparing it to existing approaches, achieving a 2.0 dB PSNR improvement in relighting quality.
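The DDS-inspired optimization above can be illustrated with a minimal NumPy sketch. Everything here is a placeholder: the `denoiser` stub stands in for a pretrained diffusion model's noise prediction, the flat `params` vector stands in for differentiably rendered 3D Gaussian parameters, and the source/target embeddings are hypothetical conditioning vectors. The sketch only shows the core DDS idea: subtracting two score estimates under the same noise cancels the shared bias and leaves an edit direction.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(x, prompt_embedding):
    # Stand-in for a diffusion model's noise prediction eps_theta(x_t, y);
    # here it simply pulls x toward the conditioning embedding.
    return 0.1 * (x - prompt_embedding)

def dds_gradient(rendered, src_embed, tgt_embed, noise):
    # DDS: the difference of two score estimates under the SAME noisy
    # input cancels the shared noise term, leaving the edit direction.
    x_noisy = rendered + noise
    eps_tgt = denoiser(x_noisy, tgt_embed)
    eps_src = denoiser(x_noisy, src_embed)
    return eps_tgt - eps_src

params = rng.normal(size=8)   # stand-in for 3D Gaussian parameters
params0 = params.copy()
src = np.zeros(8)             # "object as naively inserted" condition
tgt = np.ones(8)              # "object under correct scene lighting" condition
lr = 0.1

for _ in range(20):
    noise = rng.normal(size=8) * 0.01
    params -= lr * dds_gradient(params, src, tgt, noise)
```

After the loop, `params` has moved toward the target condition; in the real pipeline the same descent direction is backpropagated through the 3DGS renderer into the inserted object's Gaussian parameters.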
We compare our method (D3DR) against three baselines across synthetic and real-world scenes: CopyPaste (direct background substitution), iGS2GS (Gaussian splatting with diffusion-based editing), and Latent Bridge (diffusion bridge matching in latent space). Our approach produces significantly more consistent relighting, preserving scene-specific shading, shadows, and material appearance under diverse illumination conditions.
@article{skorokhodov2025d3dr,
  title={{D3DR}: Lighting-aware object insertion in {G}aussian splatting},
  author={Skorokhodov, Vsevolod and Durasov, Nikita and Fua, Pascal},
  journal={arXiv preprint arXiv:2503.06740},
  year={2025}
}