> Unlike existing approaches to 3D Gaussian Splatting, we propose to interpret the training process of placing and optimizing Gaussians as a sampling process. Rather than defining a loss function and simply taking steps towards a local minimum, we define a distribution G which assigns high probability to collections of Gaussians which faithfully reconstruct the training images.
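One natural way to make the quoted distribution concrete (my reading, not stated in the excerpt; $\mathcal{L}$ and $\lambda$ below are assumed symbols for the reconstruction loss and a temperature-like scale) is the usual Gibbs construction, so that low-loss collections of Gaussians receive high probability:

$$
p(\mathcal{G}) \;\propto\; \exp\!\bigl(-\lambda\,\mathcal{L}(\mathcal{G})\bigr)
$$
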
What is the practical difference here? MCMC itself samples more from high-probability regions than low-probability ones (i.e., it still moves toward a local minimum). Is the point that we also sample from the lower end of the distribution? Or is it more about formalizing the previous algorithm so that it is easier to play with the different parameters (e.g., the acceptance threshold)?
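
A minimal sketch of the distinction the question is circling, assuming the sampler is something like stochastic gradient Langevin dynamics (SGLD), one standard way to turn a gradient update into an MCMC-style one; the toy quadratic loss and all names below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(theta):
    # Gradient of a toy quadratic loss L(theta) = 0.5 * ||theta - 3||^2,
    # standing in for the photometric loss over Gaussian parameters.
    return theta - 3.0

theta_gd = rng.normal(size=2)      # plain gradient-descent iterate
theta_sgld = theta_gd.copy()       # Langevin (MCMC-style) iterate
lr = 1e-2

for _ in range(10_000):
    # Gradient descent: deterministic step toward a local minimum.
    theta_gd -= lr * loss_grad(theta_gd)

    # Langevin update: the same gradient step plus scaled Gaussian noise,
    # so iterates are approximate samples from p(theta) ∝ exp(-L(theta))
    # rather than a single point estimate.
    noise = rng.normal(size=theta_sgld.shape)
    theta_sgld += -lr * loss_grad(theta_sgld) + np.sqrt(2.0 * lr) * noise

print("GD iterate:  ", theta_gd)    # converges to the minimizer (3, 3)
print("SGLD iterate:", theta_sgld)  # fluctuates around the minimizer
```

The practical difference is the injected noise term: plain gradient descent collapses to a single local minimum, whereas the Langevin iterates keep fluctuating in proportion to how probable each configuration is, which is what makes it meaningful to talk about sampling from (and tuning) the distribution rather than just minimizing the loss.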