Preprint

Predictive Photometric Uncertainty in
Gaussian Splatting for Novel View Synthesis

Chamuditha Jayanga Galappaththige1,2*, Thomas Gottwald3*, Peter Stehr3*, Edgar Heinert4, Niko Suenderhauf1,2, Dimity Miller1,2, Matthias Rottmann4
1 QUT Centre for Robotics, Australia    2 ARIAM Hub, Australia    3 University of Wuppertal, Germany    4 University of Osnabrück, Germany
* Equal contribution

Abstract

Recent advances in 3D Gaussian Splatting have enabled impressive photorealistic novel view synthesis. However, to transition from a pure rendering engine to a reliable spatial map for autonomous agents and safety-critical applications, knowing where the representation is uncertain is as important as the rendering fidelity itself.

We bridge this critical gap by introducing a lightweight, plug-and-play framework for pixel-wise, view-dependent predictive uncertainty estimation. Our post-hoc method formulates uncertainty as a Bayesian-regularized linear least-squares optimization over reconstruction residuals. This architecture-agnostic approach extracts a per-primitive uncertainty channel without modifying the underlying scene representation or degrading baseline visual fidelity.

Crucially, we demonstrate that providing this actionable reliability signal successfully translates 3D Gaussian Splatting into a trustworthy spatial map, further improving state-of-the-art performance across three critical downstream perception tasks: active view selection, pose-agnostic scene change detection, and pose-agnostic anomaly detection.

Per-pixel uncertainty maps for novel view synthesis in 3DGS
Per-pixel Uncertainty Estimation for Novel View Synthesis. Our post-hoc method generates view-dependent uncertainty maps that closely mirror regions of error in the RGB render. Bright regions indicate high predicted uncertainty, which accurately aligns with true rendering errors across diverse 3DGS scenes.

Highlights

Plug-and-Play

Architecture-agnostic — integrates with any 3DGS variant without modifying the base model or its training pipeline.

Minimal Overhead

Only ~13% additional training time versus vanilla 3DGS, far less than competing approaches that require >100% overhead.

State-of-the-Art UE

3× higher Pearson correlation and <½ the AUSE (DSSIM) compared to the best existing method on Mip-NeRF360.

Downstream Impact

Consistent improvements across active view selection, scene change detection, and anomaly detection.

Method

Our approach learns a per-primitive, view-dependent uncertainty channel directly from training-view reconstruction residuals via a lightweight linear least-squares formulation.

U-3DGS method overview

Method Overview

1. Train 3DGS: a standard 3DGS model is trained to completion. Our method requires no changes to this step.
2. Compute Residuals: per-pixel reconstruction residuals $L_x = (1-\lambda)L^1_x + \lambda L^{\text{DSSIM}}_x$ are computed on all training views.
3. Least-Squares Optimization: a per-primitive uncertainty channel $u_k$ is learned by minimizing $\| y - Au \|_2^2$ with Bayesian $L^2$ regularization.
4. Render Uncertainty: uncertainty maps are rendered for any novel viewpoint: $U(x) = \sum_k u_k(d)\,\alpha_k(x)\,T_k(x)$.
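Steps 2–4 above can be sketched as a small linear-algebra toy problem. Everything here is illustrative, not the paper's implementation: the blending-weight matrix `A` (whose entries stand in for $\alpha_k(x)\,T_k(x)$, normally extracted from the rasterizer) and all dimensions are random stand-ins.

```python
import numpy as np

# Toy sketch of steps 2-4. The blending-weight matrix A (entries standing
# in for alpha_k(x) * T_k(x)) would normally come from the 3DGS rasterizer;
# here it is random so the block is self-contained.
rng = np.random.default_rng(0)
n_pixels, n_primitives = 200, 20

A = rng.random((n_pixels, n_primitives))   # per-pixel blending weights
u_true = rng.random(n_primitives)          # hidden per-primitive uncertainty
y = A @ u_true                             # per-pixel residuals L_x (step 2, noise-free toy)

# Step 3: Bayesian-regularized linear least squares,
# minimize ||y - A u||^2 + lam * ||b - u||^2, solved via normal equations.
lam, b = 0.32, 1.0
u = np.linalg.solve(A.T @ A + lam * np.eye(n_primitives),
                    A.T @ y + lam * b * np.ones(n_primitives))

# Step 4: composite the learned channel into an uncertainty map for a
# (toy) novel view with its own blending weights A_new.
A_new = rng.random((n_pixels, n_primitives))
U = A_new @ u                              # U(x) = sum_k u_k alpha_k(x) T_k(x)
```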

Bayesian-Inspired Regularization

In sparse-view settings, 3DGS can overfit training views, yielding near-zero residuals even in poorly reconstructed regions. We address this by introducing an $L^2$ regularization term that acts as a Gaussian prior — pulling unobserved primitive uncertainties toward a maximal uncertainty level $b$:

$$L' = \sum_x (L_x - U_x)^2 \;+\; \lambda_{\text{reg}} \sum_{k=1}^K \int_{S^2} (b - u_k(r))^2 \,\mathrm{d}r$$

This is equivalent to Bayesian inference with a Gaussian prior centered at maximal uncertainty, ensuring reliable predictions even for highly novel viewpoints.
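The effect of the prior can be checked on a tiny worked example (all numbers illustrative): a primitive whose blending weights are zero in every training view contributes nothing to the data term, so the regularized normal equations return exactly the prior mean $b$ for it.

```python
import numpy as np

# Toy setup: primitive 2 is never observed from any training view
# (its column of blending weights is all zero), so only the Gaussian
# prior centered at b determines its uncertainty.
A = np.array([[0.8, 0.2, 0.0],
              [0.5, 0.5, 0.0],
              [0.3, 0.7, 0.0]])   # 3 pixels x 3 primitives
y = np.array([0.1, 0.2, 0.3])    # per-pixel residuals

lam, b = 0.32, 1.0
K = A.shape[1]
u = np.linalg.solve(A.T @ A + lam * np.eye(K),
                    A.T @ y + lam * b * np.ones(K))
# u[2] == b: the unobserved primitive defaults to maximal uncertainty,
# while the observed primitives are fit to the residuals.
```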

Results

Qualitative Uncertainty Estimation

Our uncertainty maps closely follow the true per-pixel rendering error, outperforming all baselines across Mip-NeRF360, Tanks & Temples, and Deep Blending datasets.

Qualitative comparison of predicted uncertainty maps
Qualitative comparison. Our uncertainty maps align much more closely with the ground-truth error than FisherRF, Manifold, and Var3DGS baselines. Brighter regions = higher uncertainty.

Quantitative Results — Novel View Synthesis UE

We consistently outperform all baselines across all metrics and datasets with significantly lower computational overhead.

Mip-NeRF360

| Method | AUSE L¹ ↓ | AUSE DSSIM ↓ | Pearson L¹ ↑ | Pearson DSSIM ↑ | OH ↓ (%) |
|---|---|---|---|---|---|
| FisherRF | 0.708 | 0.606 | -0.055 | 0.009 | 14.2 |
| Manifold | 0.520 | 0.559 | 0.070 | -0.005 | 30.2 |
| Var3DGS | 0.558 | 0.495 | 0.118 | 0.160 | >100 |
| Ours | 0.328 | 0.214 | 0.369 | 0.547 | 12.8 |

Tanks & Temples

| Method | AUSE L¹ ↓ | AUSE DSSIM ↓ | Pearson L¹ ↑ | Pearson DSSIM ↑ | OH ↓ (%) |
|---|---|---|---|---|---|
| FisherRF | 0.691 | 0.709 | -0.087 | -0.145 | 19.3 |
| Manifold | 0.574 | 0.654 | 0.053 | 0.008 | 23.6 |
| Var3DGS | 0.539 | 0.567 | 0.161 | 0.176 | >100 |
| Ours | 0.299 | 0.233 | 0.427 | 0.571 | 14.0 |

Deep Blending

| Method | AUSE L¹ ↓ | AUSE DSSIM ↓ | Pearson L¹ ↑ | Pearson DSSIM ↑ | OH ↓ (%) |
|---|---|---|---|---|---|
| FisherRF | 0.751 | 0.853 | -0.116 | -0.190 | 19.7 |
| Manifold | 0.503 | 0.548 | 0.095 | 0.074 | 48.1 |
| Var3DGS | 0.588 | 0.671 | 0.106 | 0.006 | >100 |
| Ours | 0.376 | 0.356 | 0.243 | 0.244 | 13.0 |

OH = training overhead as percentage of 3DGS training time. Our method achieves >3× higher Pearson correlation and <½ AUSE (DSSIM) compared to the best baseline (Var3DGS) on Mip-NeRF360, while maintaining the lowest overhead.

UE on Highly Novel Views

Our Bayesian-inspired regularization ensures reliable uncertainty estimates even when the base 3DGS model is trained on only 4 sparse views, outperforming FisherRF consistently across all regularization weights.

Train/test split visualization for sparse-view experiment
Training (red) vs. test (blue) camera distribution in the sparse-view setting.
Sparse UE results L1
AUSE (L¹) vs. regularization weight $\lambda_{\text{reg}}$
Sparse UE results DSSIM
AUSE (DSSIM) vs. regularization weight $\lambda_{\text{reg}}$

Qualitative Effect of Regularization

Panels, left to right: Ground Truth; 3DGS Render; uncertainty with $\lambda_{\text{reg}}=0$ (no regularization); $\lambda_{\text{reg}}=0.32$; $\lambda_{\text{reg}}=10.24$.

Without regularization ($\lambda_{\text{reg}}=0$), the model predicts near-zero uncertainty even for poorly reconstructed regions. Bayesian regularization correctly highlights uncertain areas.

Downstream Task 1: Active View Selection

We use our predicted uncertainty maps to guide next-best-view selection, selecting the candidate view with the highest total predicted uncertainty. Our approach consistently outperforms state-of-the-art baselines on Mip-NeRF360.
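A minimal sketch of this selection rule, with a stubbed `render_uncertainty` in place of the real uncertainty rasterization (the function name and the random maps are placeholders, not the paper's API):

```python
import numpy as np

rng = np.random.default_rng(1)

def render_uncertainty(view_id, shape=(32, 32)):
    """Stand-in for rendering the uncertainty map U(x) of a candidate view."""
    return rng.random(shape)

# Next-best-view selection: pick the candidate whose rendered uncertainty
# map has the highest total predicted uncertainty.
candidate_views = [0, 1, 2, 3, 4]
scores = {v: render_uncertainty(v).sum() for v in candidate_views}
next_view = max(scores, key=scores.get)
```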

Active view selection over iterations
Reconstruction quality (PSNR) over view selection iterations. Our method consistently achieves higher quality with fewer views.
| Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|---|---|---|---|
| FisherRF | 20.266 | 0.593 | 0.363 |
| Manifold | 19.732 | 0.595 | 0.373 |
| Manifold† | 20.088 | 0.611 | 0.350 |
| Ours | 20.676 | 0.615 | 0.344 |

Active View Selection on Mip-NeRF360 (20 selected views). † denotes using Manifold's predicted view order to guide vanilla 3DGS training.

Downstream Task 2: Pose-Agnostic Scene Change Detection

Rendering artifacts in 3DGS-based scene change detection cause false positives. We suppress these by attenuating the change map with our predicted uncertainty: $\tilde{M}^k = M^k \odot (1 - M^k_{\text{unc}})$.
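The attenuation itself is a single elementwise product. A toy sketch with made-up maps (the 0.5 decision threshold is an illustrative choice, not from the paper):

```python
import numpy as np

# Toy change map M and predicted uncertainty M_unc; the high-uncertainty
# pixel at (0, 1) is a rendering artifact, not a true scene change.
M = np.array([[0.90, 0.80],
              [0.10, 0.70]])
M_unc = np.array([[0.05, 0.90],
                  [0.00, 0.10]])

M_tilde = M * (1.0 - M_unc)   # M~ = M ⊙ (1 - M_unc)
changes = M_tilde > 0.5       # illustrative decision threshold
# The artifact pixel (0, 1) drops from 0.80 to 0.08 and is no longer flagged.
```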

Uncertainty-guided scene change detection qualitative results
Qualitative scene change detection. Our uncertainty maps identify rendering artifacts, allowing the system to suppress false positives caused by 3DGS rendering limitations rather than true scene changes.
| Method | mIoU ↑ Base | mIoU ↑ +Ours | Δ% | F1 ↑ Base | F1 ↑ +Ours | Δ% |
|---|---|---|---|---|---|---|
| Feature Diff. | 0.278 | 0.359 | +29.1% | 0.402 | 0.502 | +24.9% |
| MV3DCD-ZS | 0.382 | 0.439 | +14.9% | 0.526 | 0.593 | +12.7% |
| MV3DCD | 0.470 | 0.498 | +6.0% | 0.621 | 0.649 | +4.5% |
| Online-SCD | 0.486 | 0.498 | +2.5% | 0.638 | 0.651 | +2.1% |

Results on the PASLCD benchmark. Our uncertainty guidance consistently improves all baselines.

Downstream Task 3: Pose-Agnostic Anomaly Detection

We apply the same uncertainty-guided attenuation to pose-agnostic anomaly detection: $\tilde{S}^k = S^k \odot (1 - M^k_{\text{unc}})$. This suppresses false anomaly activations in regions where the 3DGS reference model lacks confidence.

| Method | AUROC ↑ Base | AUROC ↑ +Ours | Δ% | AUPRO ↑ Base | AUPRO ↑ +Ours | Δ% |
|---|---|---|---|---|---|---|
| SplatPose | 0.929 | 0.939 | +1.1% | 0.700 | 0.765 | +9.3% |
| SplatPose+ | 0.940 | 0.956 | +1.7% | 0.761 | 0.798 | +4.9% |

Results on MAD-Real benchmark. With our UE, SplatPose matches or surpasses its successor SplatPose+, and our method advances the current state-of-the-art when applied to SplatPose+.

BibTeX

@inproceedings{u3dgs2026,
  title     = {Predictive Photometric Uncertainty in {G}aussian
               {S}platting for Novel View Synthesis},
  author    = {Galappaththige, Chamuditha Jayanga and Gottwald, Thomas and
               Stehr, Peter and Heinert, Edgar and Suenderhauf, Niko and
               Miller, Dimity and Rottmann, Matthias},
  booktitle = {arXiv Preprint},
  year      = {2026},
}