
>Unlike existing approaches to 3D Gaussian Splatting, we propose to interpret the training process of placing and optimizing Gaussians as a sampling process. Rather than defining a loss function and simply taking steps towards a local minimum, we define a distribution G which assigns high probability to collections of Gaussians which faithfully reconstruct the training images.

What is the practical difference here? MCMC itself samples more from high-probability regions than low-probability ones (i.e., it still tends toward a local minimum). Is it just that we also sample from the lower end of the distribution? Or is it more about formalizing the previous algorithm so that it is easier to play with the different parameters (e.g., the acceptance threshold)?
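For concreteness, here is the difference I'm picturing between the two update rules (toy numpy sketch, not the paper's actual code; grad_fn, the step sizes, and the temperature are made up for illustration):

    import numpy as np

    def plain_gd_step(params, grad_fn, lr=1e-2):
        # Ordinary optimization: step straight downhill toward a local minimum.
        return params - lr * grad_fn(params)

    def sgld_step(params, grad_fn, lr=1e-2, temperature=1.0):
        # Langevin-style MCMC: the same gradient step, plus injected Gaussian
        # noise, so the chain keeps exploring the distribution instead of
        # settling at a single minimum.
        noise = np.sqrt(2.0 * lr * temperature) * np.random.normal(size=params.shape)
        return params - lr * grad_fn(params) + noise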




It means you can now apply many tools from the field of statistics to the problem, rather than only gradient-based ones.
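For example, once you treat the collection of Gaussians as samples from a distribution, standard machinery like a Metropolis-Hastings acceptance test becomes available (illustrative sketch, not from the paper; log_prob here is a stand-in for a reconstruction-based density):

    import numpy as np

    def metropolis_accept(log_prob, current, proposal):
        # Classic MH rule: always accept moves that increase log_prob, and
        # accept decreases with probability exp(delta). This is the kind of
        # tunable acceptance machinery statistics gives you once the problem
        # is framed as sampling rather than pure minimization.
        delta = log_prob(proposal) - log_prob(current)
        if np.log(np.random.uniform()) < delta:
            return proposal
        return current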



