That's what the author is saying. Researchers in this field should, for credibility reasons, be solving test problems that can be quickly verified. As to why this isn't done:
(1) They're picking problem domains that are maximally close to the substrate of the computation device, so they can hit maximum problem sizes (like 10^25). For many (all?) fast-verifiable problems they can't currently handle impressively large problem sizes. In the same way that GPUs are only really good at "embarrassingly parallel" algorithms like computer graphics and linear algebra, these quantum chips are only really good at certain classes of algorithms that don't require too much coherence.
(2) A lot of potential use cases are NOT easy to validate, but are still very useful and interesting. Weather and climate prediction, for example. Quantum chemistry simulation is another. Nuclear simulations for the Department of Energy. Cryptography is kinda exceptional in that it provides easily verifiable results.
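To make the asymmetry concrete, here's a toy sketch (numbers kept tiny for illustration; real RSA moduli are ~2048 bits): factoring n is hard, but checking a claimed factorization is a single multiplication. That's what makes cryptographic problems exceptional as benchmarks.

```python
def verify_factorization(n: int, p: int, q: int) -> bool:
    """Checking a claimed factorization is one multiplication -- O(1)
    relative to the (super-polynomial) classical cost of finding p and q."""
    return p > 1 and q > 1 and p * q == n

# A quantum computer running Shor's algorithm would output p and q;
# any classical machine can confirm the answer instantly.
print(verify_factorization(15, 3, 5))  # True
print(verify_factorization(15, 2, 7))  # False
```

A weather forecast has no analogous one-line check: there's no cheap predicate you can run on the output to confirm the computation was done right.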
I would add one more to this, which I would argue is the main reason:
(0) For a quantum algorithm/simulation to be classically verifiable, it needs additional structure; something that leads to a structured, verifiable output despite the intermediate steps being intractable to simulate classically. That additional structure necessarily adds complexity beyond what can be run on current devices.
To pick an arbitrary example I'm familiar with, this paper (https://arxiv.org/abs/2104.00687) relies on the quantum computer implementing a certain cryptographic hash function. This alone makes the computation way more complex than what can be run on current hardware.
Maybe a working quantum algorithm for weather prediction would outperform currently used classical simulations, but I wouldn't expect it to be bang on every time. Inputs are imperfect. So at best you could benchmark it, and gain some confidence over time. It could very well be good enough for weather prediction though.
Also, I doubt that a quantum algorithm is possible that provably solves the Navier-Stokes equations with known boundary and initial conditions. At minimum you need some discretization, and maybe you can get a quantum algorithm that provably converges to the real solution (which alone would be a breakthrough, I believe). Then you need some experimental lab setup with well-controlled boundary and initial conditions that you can measure against.
In any case the validation would be at a very different standard compared to verifying prime factorization. At most you can gain confidence in the correctness of the simulation, but never absolute certainty.
At scale, yes. But this would still be solving toy problems with fewer variables and fewer dimensions.
And they’re not actually solving weather problems right now, I think. That was just an example. What they are actually solving are toy mathematical challenges.