Hacker News

> brilliantly realised

Can you say more about this? Nothing about this approach seems very amazing to me. Construct an approximate solution by some numerical method (in this case neural networks), prove that a solution which is close enough to satisfying the equation can be perturbed to an exact solution. Does the second half use some nonstandard method?
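For context, the "perturb a near-solution to an exact solution" step is typically a Newton–Kantorovich-style a posteriori argument. Schematically (constants and norms here are illustrative, not the paper's):

```latex
% Schematic a posteriori existence argument (Newton--Kantorovich style).
% \bar u : the numerical candidate, F(u) = 0 : the equation.
\|F(\bar u)\| \le \varepsilon, \qquad
\|DF(\bar u)^{-1}\| \le K, \qquad
\|DF(u) - DF(v)\| \le L\,\|u - v\| .
% If, roughly, 2 K^2 L \varepsilon \le 1, then a true solution u^* exists with
\|u^* - \bar u\| \le 2 K \varepsilon .
```

The work is in verifying those three bounds rigorously for the actual candidate, which is where the difficulty discussed below comes in.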



This is one of those things that seems easy in retrospect, but wasn't particularly obvious at the time.

1. Proving existence of a solution to a differential equation from a numerical approximation is quite a bit harder than it first appears. For Euler or Navier–Stokes, it seems almost absurd: not only do you need substantial control over the linearisation, you also need a way to rigorously bound the a posteriori error. That is easy for polynomials, but doing it for other models requires serious techniques that have only been developed recently (rigorous quadrature via interval arithmetic).

2. Further to that, neural networks are far from an obvious choice. They cannot be integrated in closed form, and it is certainly not clear a priori that their inductive biases would help in searching for a blow-up solution. In fact, I would initially have said they were a poor choice for the task. You also need to train them to a demanding degree of precision, and that's not easy.

3. The parameterisation of the self-similar solution turns out to be remarkably well suited to a neural network. To be fair, solutions of this type had been considered before, so I'm willing to chalk this one up to luck.
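A toy illustration of point 1's "rigorous quadrature via interval arithmetic" — pure standard library, with outward rounding so the final enclosure is genuinely guaranteed. This is my own minimal sketch, nothing like the machinery actually used in the proof:

```python
# Toy rigorous quadrature via interval arithmetic, pure stdlib.
# Every operation rounds its endpoints outward (math.nextafter), so the
# final interval provably contains the true integral.
import math

def down(x): return math.nextafter(x, -math.inf)
def up(x):   return math.nextafter(x, math.inf)

class I:
    """Closed interval [lo, hi]; arithmetic rounds endpoints outward."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, lo if hi is None else hi
    def __add__(self, o):
        return I(down(self.lo + o.lo), up(self.hi + o.hi))
    def __mul__(self, o):
        ps = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return I(down(min(ps)), up(max(ps)))

def iexp(x):
    # exp is monotone increasing, so the endpoint images bound its range
    return I(down(math.exp(x.lo)), up(math.exp(x.hi)))

def rigorous_quad(f, n=500):
    """Enclose the integral of f over [0, 1] in an interval.

    f evaluated on a whole subinterval encloses every value f takes
    there, so f(sub) * width encloses that piece of the integral;
    summing the pieces encloses the total.
    """
    h = I(down(1.0 / n), up(1.0 / n))          # encloses the exact width 1/n
    total = I(0.0)
    for i in range(n):
        sub = I(down(i / n), up((i + 1) / n))  # encloses the i-th subinterval
        total = total + f(sub) * h
    return total

enc = rigorous_quad(iexp)
# e - 1 = 1.71828... is guaranteed to lie in [enc.lo, enc.hi]
```

This zeroth-order version converges slowly; real computer-assisted proofs use higher-order enclosures (e.g. Taylor models) for tight bounds, but the guarantee has the same shape.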

It's difficult to convey how challenging it is to design a computer-assisted proof of this magnitude from scratch unless you've tried it yourself on a new problem. At the end it all seems completely obvious, but only after exhausting countless dead ends first.
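To make point 2 concrete, here is a toy physics-informed fit: a tiny tanh network is trained so that its equation residual vanishes at collocation points, for the ODE u' = -u with u(0) = 1. All names, sizes, and tolerances are my own illustrative choices, nowhere near the scale or rigour of the actual computation:

```python
# Toy "physics-informed" fit: train a tiny tanh network so the residual
# of u' = -u (with u(0) = 1 hard-wired into the ansatz) vanishes at
# collocation points. Illustrative only.
import numpy as np
from scipy.optimize import minimize

H = 5                                   # hidden width
xs = np.linspace(0.0, 1.0, 30)          # collocation points on [0, 1]

def net(p, x):
    """u(x) = 1 + x * N(x): the ansatz enforces u(0) = 1 exactly."""
    w1, b1, w2 = p[:H], p[H:2 * H], p[2 * H:]
    t = np.tanh(np.outer(x, w1) + b1)   # hidden activations, (len(x), H)
    N = t @ w2
    dN = ((1.0 - t**2) * w1) @ w2       # dN/dx by the chain rule, by hand
    u = 1.0 + x * N
    du = N + x * dN
    return u, du

def loss(p):
    u, du = net(p, xs)
    return np.mean((du + u) ** 2)       # squared residual of u' = -u

rng = np.random.default_rng(0)
p0 = 0.1 * rng.standard_normal(3 * H)
fit = minimize(loss, p0, method="BFGS", options={"maxiter": 2000})

u1, _ = net(fit.x, np.array([1.0]))
# if training converged, u1[0] is close to the true value exp(-1)
```

Getting a residual small enough to feed an a posteriori argument — across a whole self-similar profile, in the right norms — is a far harder version of the same game.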





