Gaussian splatting
Gaussian splatting is a volume rendering technique that renders volume data directly, without first converting the data into surface or line primitives.[1] The technique was originally introduced as splatting by Lee Westover in the early 1990s.[2]
With advancements in computer graphics, newer methods such as 3D Gaussian splatting and 3D Temporal Gaussian splatting have been developed to offer real-time radiance field rendering and dynamic scene rendering, respectively.[3][4]
3D Gaussian splatting
3D Gaussian splatting is a technique used in the field of real-time radiance field rendering.[3] It enables real-time synthesis of high-quality novel views of a scene from a set of photos or videos, a long-standing challenge in the field.
The method represents scenes with 3D Gaussians that retain properties of continuous volumetric radiance fields, integrating sparse points produced during camera calibration. It introduces an anisotropic representation of radiance fields using 3D Gaussians, together with interleaved optimization and density control of the Gaussians. A fast visibility-aware rendering algorithm supporting anisotropic splatting, tailored to GPU execution, is also proposed.[3]
Method
The method involves several key steps:
- Input: A set of images of a static scene together with corresponding camera poses and a sparse point cloud, typically produced by a Structure-from-Motion pipeline during camera calibration.
- 3D Gaussians: Definition of mean, covariance matrix, and opacity for each Gaussian.
- Color representation: Using spherical harmonics to model view-dependent appearance.
- Optimization algorithm: Optimizing the parameters using stochastic gradient descent to minimize a loss function combining an L1 term and a D-SSIM term, inspired by the Plenoxels work.[5]
- Rasterizer: Implementing a tile-based rasterizer that sorts Gaussians quickly and supports an efficient backward pass, enabling fast blending of Gaussian components (a simplified sketch of the blending step follows this list).
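The blending performed by the rasterizer is front-to-back alpha compositing of depth-sorted splats. The following Python sketch illustrates that compositing rule for a single pixel; it uses illustrative names and omits the tile decomposition, GPU parallelism, and gradient computation of the actual CUDA rasterizer.

```python
import numpy as np

def composite_pixel(colors, alphas):
    """Front-to-back alpha compositing of depth-sorted splats at one pixel.

    colors: (N, 3) array of per-splat RGB contributions, sorted near to far.
    alphas: (N,)  array of per-splat opacities, already multiplied by the
            2D Gaussian falloff evaluated at this pixel.
    """
    pixel = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed
    for color, alpha in zip(colors, alphas):
        pixel += transmittance * alpha * color
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # early termination, as in tile rasterizers
            break
    return pixel
```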
The method uses differentiable 3D Gaussian splatting, which is unstructured and explicit, allowing rapid rendering and projection to 2D splats. The covariance matrix of each Gaussian can be interpreted as the configuration of an ellipsoid and decomposed into a rotation matrix R and a diagonal scaling matrix S, giving Σ = RSSᵀRᵀ. The gradients for all parameters are derived analytically, avoiding the overhead of automatic differentiation.
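Under this decomposition, a valid covariance matrix can be built from a unit quaternion and per-axis scales. Below is a minimal NumPy sketch; storing the scales in log space is an assumption borrowed from common practice (it keeps scales positive under gradient updates), and all names are illustrative.

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def covariance(quaternion, log_scale):
    """Build a 3x3 covariance Sigma = R S S^T R^T from rotation and scale."""
    R = quat_to_rotmat(quaternion)
    S = np.diag(np.exp(log_scale))  # diagonal scaling matrix, always positive
    M = R @ S
    return M @ M.T  # symmetric positive semi-definite by construction
```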
The optimization creates a dense set of 3D Gaussians that represent the scene as accurately as possible. Each step of rendering is followed by a comparison to the training views available in the dataset.
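That comparison uses the L1 plus D-SSIM loss described above. The sketch below assumes NumPy images with values in [0, 1] and substitutes a simplified global SSIM for the windowed SSIM used in practice; the weighting λ = 0.2 follows the paper.[3]

```python
import numpy as np

def global_ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified SSIM over the whole image; real implementations
    average SSIM over local (e.g., 11x11 Gaussian) windows."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def loss(rendered, target, lam=0.2):
    """L = (1 - lam) * L1 + lam * D-SSIM, with D-SSIM taken as 1 - SSIM."""
    l1 = np.abs(rendered - target).mean()
    d_ssim = 1.0 - global_ssim(rendered, target)
    return (1.0 - lam) * l1 + lam * d_ssim
```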
Results and evaluation
The authors tested their algorithm on 13 real scenes from previously published datasets and on the synthetic Blender dataset.[6] They compared their method against state-of-the-art techniques such as Mip-NeRF360,[7] InstantNGP,[8] and Plenoxels.[5] The quantitative evaluation metrics used were PSNR, LPIPS, and SSIM.
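For reference, PSNR, the simplest of these metrics, is computed from the mean squared error between the rendered and ground-truth images; a short sketch, assuming images with values in [0, 1]:

```python
import numpy as np

def psnr(rendered, target, peak=1.0):
    """Peak signal-to-noise ratio in decibels; higher is better."""
    mse = np.mean((rendered - target) ** 2)
    return 10.0 * np.log10(peak**2 / mse)
```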
Their fully converged model (30,000 iterations) achieves quality on par with or slightly better than Mip-NeRF360,[7] but with significantly reduced training time (35–45 minutes vs. 48 hours) and faster rendering (real-time vs. 10 seconds per frame). At 7,000 iterations (5–10 minutes of training), their method achieves comparable quality to InstantNGP[8] and Plenoxels.[5]
For synthetic bounded scenes (Blender dataset[6]), they achieved state-of-the-art results even with random initialization, starting from 100,000 uniformly random Gaussians.
Limitations
Some limitations of the method include:
- Elongated artifacts or "splotchy" Gaussians in some areas.
- Occasional popping artifacts due to large Gaussians created by the optimization, especially in regions with view-dependent appearance.
- Higher memory consumption compared to NeRF-based solutions, though still more compact than previous point-based approaches.
- May require hyperparameter tuning (e.g., reducing position learning rate) for very large scenes.
- Peak GPU memory consumption during training can be high (over 20 GB) in the current unoptimized prototype.
The authors note that some of these limitations could potentially be addressed through future improvements like better culling approaches, antialiasing, regularization, and compression techniques.
3D Temporal Gaussian splatting
Extending 3D Gaussian splatting to dynamic scenes, 3D Temporal Gaussian splatting incorporates a time component, allowing for real-time rendering of dynamic scenes at high resolution.[4] It represents and renders dynamic scenes by modeling complex motions while maintaining efficiency. The method uses a HexPlane to connect adjacent Gaussians, providing an accurate representation of position and shape deformations. By maintaining only a single set of canonical 3D Gaussians and predicting their deformation at each timestamp, it models how the Gaussians move and change over time.[9]
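A highly simplified sketch of this canonical-plus-deformation idea follows. It replaces the learned HexPlane features and decoder networks with a placeholder function, so every name here is illustrative rather than the authors' implementation.

```python
import numpy as np

def deform(position, t):
    """Placeholder for the learned deformation field. In the actual
    method, HexPlane features of (position, t) are decoded into
    offsets; here a dummy rigid drift stands in."""
    return np.array([0.1 * t, 0.0, 0.0])  # illustrative motion only

def gaussians_at_time(canonical_means, t):
    """Move one shared set of canonical Gaussians to timestamp t.

    canonical_means: (N, 3) positions of the canonical 3D Gaussians.
    Only positions are deformed here; the method also predicts
    changes in rotation and scale.
    """
    return np.stack([mu + deform(mu, t) for mu in canonical_means])
```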
It is sometimes referred to as "4D Gaussian splatting"; however, this naming convention implies the use of true 4D Gaussian primitives (parameterized by a 4D mean vector and a 4×4 covariance matrix). Most work in this area still employs 3D Gaussian primitives, applying temporal constraints as an extra optimization parameter.
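The distinction can be made explicit with two parameter sets (an illustrative sketch, not any particular paper's data layout):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian4D:
    """A true 4D primitive: space-time mean and full 4x4 covariance."""
    mean: np.ndarray        # shape (4,): (x, y, z, t)
    covariance: np.ndarray  # shape (4, 4), couples space and time

@dataclass
class Gaussian3DTemporal:
    """A 3D primitive whose parameters are constrained over time."""
    mean: np.ndarray        # shape (3,): (x, y, z) in the canonical frame
    covariance: np.ndarray  # shape (3, 3)
    # temporal behaviour comes from an external deformation model,
    # not from the primitive itself
```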
The technique achieves real-time rendering of dynamic scenes at high resolution while maintaining quality. It shows potential for applications in film and other media, although current limitations remain regarding the length of motion that can be captured.[9]
Applications
3D Gaussian splatting has been adapted and extended across various computer vision and graphics applications, from dynamic scene rendering to autonomous driving simulations and 4D content creation:
- Text-to-3D using Gaussian Splatting: Applies 3D Gaussian splatting to text-to-3D generation.[10]
- End-to-end Autonomous Driving: A survey that discusses 3D Gaussian splatting as a data-driven sensor simulation method for autonomous driving, highlighting its ability to generate realistic novel views of a scene.[11]
- SuGaR: Proposes a method to extract precise and fast meshes from 3D Gaussian splatting.[12]
- SplaTAM: Applies 3D Gaussian-based radiance fields to Simultaneous Localization and Mapping (SLAM), leveraging fast rendering and optimization capabilities to achieve state-of-the-art results.[13]
- Align Your Gaussians: Uses dynamic 3D Gaussians for 4D content creation from text.[14]
References
- ^ Westover, Lee Alan (July 1991). "SPLATTING: A Parallel, Feed-Forward Volume Rendering Algorithm" (PDF). Retrieved October 18, 2023.
- ^ Huang, Jian (Spring 2002). "Splatting" (PPT). Retrieved August 5, 2011.
- ^ a b c Bernhard Kerbl; Georgios Kopanas; Thomas Leimkühler; George Drettakis (8 Aug 2023). "3D Gaussian Splatting for Real-Time Radiance Field Rendering". arXiv:2308.04079 [cs.GR].
- ^ a b Guanjun Wu; Taoran Yi; Jiemin Fang; Lingxi Xie; Xiaopeng Zhang; Wei Wei; Wenyu Liu; Qi Tian; Xinggang Wang (12 Oct 2023). "4D Gaussian Splatting for Real-Time Dynamic Scene Rendering". arXiv:2310.08528 [cs.CV].
- ^ a b c Fridovich-Keil, Sara; Yu, Alex; Tancik, Matthew; Chen, Qinhong; Recht, Benjamin; Kanazawa, Angjoo (June 2022). "Plenoxels: Radiance Fields without Neural Networks". 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 5491–5500. arXiv:2112.05131. doi:10.1109/cvpr52688.2022.00542. ISBN 978-1-6654-6946-3.
- ^ a b Mildenhall, Ben; Srinivasan, Pratul P.; Tancik, Matthew; Barron, Jonathan T.; Ramamoorthi, Ravi; Ng, Ren (2020), "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis", Lecture Notes in Computer Science, Cham: Springer International Publishing, pp. 405–421, doi:10.1007/978-3-030-58452-8_24, ISBN 978-3-030-58451-1. Retrieved September 25, 2024.
- ^ a b Barron, Jonathan T.; Mildenhall, Ben; Verbin, Dor; Srinivasan, Pratul P.; Hedman, Peter (June 2022). "Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields". 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 5460–5469. arXiv:2111.12077. doi:10.1109/cvpr52688.2022.00539. ISBN 978-1-6654-6946-3.
- ^ a b Müller, Thomas; Evans, Alex; Schied, Christoph; Keller, Alexander (July 2022). "Instant neural graphics primitives with a multiresolution hash encoding". ACM Transactions on Graphics. 41 (4): 1–15. arXiv:2201.05989. doi:10.1145/3528223.3530127. ISSN 0730-0301.
- ^ a b Franzen, Carl. "Actors' worst fears come true? New 3D Temporal Gaussian Splatting method captures human motion". venturebeat.com. VentureBeat. Retrieved October 18, 2023.
- ^ Chen, Zilong; Wang, Feng; Wang, Yikai; Liu, Huaping (2024-06-16). "Text-to-3D using Gaussian Splatting". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 21401–21412. arXiv:2309.16585. doi:10.1109/cvpr52733.2024.02022. ISBN 979-8-3503-5300-6.
- ^ Chen, Li; Wu, Penghao; Chitta, Kashyap; Jaeger, Bernhard; Geiger, Andreas; Li, Hongyang (2024). "End-to-end Autonomous Driving: Challenges and Frontiers". IEEE Transactions on Pattern Analysis and Machine Intelligence. PP: 1–20. arXiv:2306.16927. doi:10.1109/tpami.2024.3435937. ISSN 0162-8828. PMID 39078757.
- ^ Guédon, Antoine; Lepetit, Vincent (2024-06-16). "SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 5354–5363. arXiv:2311.12775. doi:10.1109/cvpr52733.2024.00512. ISBN 979-8-3503-5300-6.
- ^ Keetha, Nikhil; Karhade, Jay; Jatavallabhula, Krishna Murthy; Yang, Gengshan; Scherer, Sebastian; Ramanan, Deva; Luiten, Jonathon (2024-06-16). "SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 21357–21366. doi:10.1109/cvpr52733.2024.02018. ISBN 979-8-3503-5300-6.
- ^ Ling, Huan; Kim, Seung Wook; Torralba, Antonio; Fidler, Sanja; Kreis, Karsten (2024-06-16). "Align Your Gaussians: Text-to-4D with Dynamic 3D Gaussians and Composed Diffusion Models". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 8576–8588. arXiv:2312.13763. doi:10.1109/cvpr52733.2024.00819. ISBN 979-8-3503-5300-6.