Hacker News

No, the main difference is looking at the problem from a new perspective, relating it to a large body of existing work in statistics (see https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo and https://en.wikipedia.org/wiki/Stochastic_gradient_Langevin_d...). This new perspective then lets them add several improvements that lead to significant quality gains, clearly establishing the validity and utility of the new theoretical underpinnings. This will likely be massively influential on the direction of future research in the space.

The actual changes this led to are:

1) As you mentioned, they added noise. But notably, they state it was "designed carefully" to conform to the requirements of SGLD, and they detail how they designed it.

2) They simplified the original operations ("move, split, clone, prune, and add") and their related heuristics into a single operation type. They do so guided by existing knowledge about MCMC frameworks, leading to a simpler model with stronger theoretical underpinnings (a huge win!).

3) Adjustments to how Gaussians are added and pruned, to better fit the new model. This seems more like housekeeping than something novel in and of itself.
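For anyone unfamiliar with SGLD, the generic update that point 1 builds on is just a gradient step plus Gaussian noise whose variance is tied to the step size. This is a textbook sketch, not the paper's carefully designed noise schedule; the function name and shapes are illustrative:

```python
import numpy as np

def sgld_step(params, grad, lr, rng):
    """One stochastic gradient Langevin dynamics (SGLD) update:
    an ordinary gradient-descent step plus Gaussian noise with
    variance 2 * lr. The noise scale being tied to the step size
    is what turns the iterates into (approximate) posterior
    samples rather than a single point estimate."""
    noise = rng.normal(0.0, np.sqrt(2.0 * lr), size=params.shape)
    return params - lr * grad + noise
```

With lr driven to zero the noise vanishes and this reduces to plain gradient descent, which is why a carefully designed noise/step schedule matters.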

Was the noise added to the parameters or to the input images? I assume the parameters, so that they could be fine-tuned via the MCMC.
