
AI is essentially search. Quantum computers are really good at search.



Unstructured search only gives a √N speedup. You need to find some way to get algorithmically significant interference/cancellation of terms in order for a quantum computer to potentially (!) have any benefit.


Search of what?


Anything. Everything. In domains where the search space is small enough to physically enumerate and store or evaluate every option, search is commonly understood as a process solved by simple algorithms. In domains where the search space is too large to physically realize or index, search becomes "intelligence."

E.g. winning at Chess or Go (traditional AI domains) is searching through the space of possible game states to find a most-likely-to-win path.

E.g. an LLM chat application is searching through possible responses to find one which best correlates with the expected answer to the prompt.
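
To make the chess/Go example concrete, here is a toy exhaustive game-tree search in Python (the game below is a made-up take-away game, not chess, and real engines prune heavily instead of enumerating):

    # Exhaustive negamax on a toy game: players alternately take 1 or 2 stones,
    # and whoever takes the last stone wins. Chess/Go are the same idea at a
    # vastly larger scale.
    def negamax(stones):
        """Best achievable outcome (+1 win, -1 loss) for the player to move."""
        if stones == 0:
            return -1   # the opponent just took the last stone, so we've lost
        return max(-negamax(stones - take) for take in (1, 2) if take <= stones)

    print(negamax(7))   # +1: a winning position for the player to move
    print(negamax(9))   # -1: multiples of 3 are losing positions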

With Grover's algorithm, quantum computers let you find an answer in an unstructured search space with O(sqrt(N)) operations instead of O(N). That's potentially applicable to many AI domains.
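
As a rough illustration of where the sqrt(N) comes from, here is a classical statevector sketch of Grover's iteration on a made-up list of 16 entries with one marked index (numbers are illustrative only):

    import numpy as np

    N = 16
    marked = 11                            # the index we're "searching" for
    state = np.full(N, 1 / np.sqrt(N))     # uniform superposition over N entries

    iterations = int(np.pi / 4 * np.sqrt(N))   # about (pi/4)*sqrt(N) Grover iterations
    for _ in range(iterations):
        state[marked] *= -1                # oracle: flip the sign of the marked amplitude
        state = 2 * state.mean() - state   # diffusion: reflect all amplitudes about the mean

    print(iterations)                      # 3 iterations for N = 16
    print(int(np.argmax(state ** 2)))      # -> 11, found with ~96% probability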

But if you're so narrow minded as to only consider connectionist / neural network algorithms as "AI", then you may be interested to know that quantum linear algebra is a thing too: https://en.wikipedia.org/wiki/HHL_algorithm


Grover's algorithm is useful for very few things in practice, because for most problems we have a better technique, at least heuristically, than blindly checking on the order of sqrt(N) of all possible solutions.

There is, at present, no quantum algorithm which looks like it would beat the state of the art on Chess, Go, or NP-complete problems in general.


The O(sqrt(N)) advantage is easily dominated by the relative ease of constructing much bigger classical computers, though.


Uh, no? Not for large N.

There are about 2^152 possible legal chess states. You cannot build a classical computer large enough to compute that many states. Cryptography is generally considered secure when it involves a search space of only 2^100 states.

But you could build a computer to search through sqrt(2^152) = 2^76 states. I mean it'd be big--that's on the order of total global storage capacity. But not "bigger than the universe" big.


Doing 2^76 iterations is huge. That's a trillion operations a second for two and a half thousand years if I've not slipped up and missed a power of ten.
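
Back-of-envelope check of that figure, using the same round trillion-ops-per-second rate:

    ops = 2 ** 76
    rate = 10 ** 12                        # one trillion operations per second
    seconds_per_year = 3600 * 24 * 365
    print(ops / rate / seconds_per_year)   # ~2,400 years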


Maybe 100 years from now we can do 2^60 quantum ops/sec and solve chess in a day, whereas a classical computer could do 2^36 ops/sec and still take longer than the lifetime of the universe to complete.
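
Checking the arithmetic on those hypothetical rates (both 2^60 and 2^36 are made-up round numbers):

    universe_age_s = 4.35e17                     # ~13.8 billion years in seconds
    print(2 ** 76 / 2 ** 60 / 3600)              # quantum at 2^60 ops/sec: ~18 hours
    print(2 ** 152 / 2 ** 36 / universe_age_s)   # classical brute force: ~2e17 universe lifetimes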


Google's SHA-1 collision took 2^63.1 hash operations to find. Given that a single hash operation takes more than 1000 cycles, that's roughly 2^73 cycles--only less than three doublings away from 2^76.
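
Rough arithmetic behind that, using the >1000 cycles-per-hash figure above:

    import math

    cycles = 2 ** 63.1 * 1000        # collision effort in raw cycles
    print(math.log2(cycles))         # ~73.1
    print(76 - math.log2(cycles))    # ~2.9 doublings short of 2^76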

Cryptographers worry about big numbers. 2^80 is not considered secure.


It's early so I'm thinking out loud here but I don't think the algorithm scales like this, does it?

We're talking about something that can search a list of size N in sqrt(N) iterations. Splitting the problem in two doesn't halve the compute required for each half. If you had to search 100 items on one machine it'd take ~10 iterations, but split over two it'd take ~7 on each, or ~14 in total.


If an algorithm has a complexity of O(sqrt(N)), by definition it means that it can do better if run on all 100 elements than by splitting the list in two and running it on each half of 50.

This is not at all a surprising property. The same thing happens with binary search: it has complexity O(log(N)), which means that running it on a list of size 1024 will take about 10 operations, but running it in parallel on two lists of size 512 will take 2 * 9 operations = 18.

This is actually easy to intuit when it comes to search problems: the element you're looking for is either in the first half of the list or in the second half; it can't be in both. So, if you are searching for it in parallel in both halves, you'll have to do extra work that just wasn't necessary (unless your algorithm is to look at every element in order, in which case it's the same).

In the case of binary search, with the very first comparison, you can already tell in which half of the list your element is: searching the other half is pointless. In the case of Grover's algorithm, the mechanism is much more complex, but the basic point is similar: Grover's algorithm has a way to just not look at certain elements of the list, so splitting the list in half creates more work overall.
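
A quick numerical check of both examples in this sub-thread:

    import math

    # Grover-style cost: ~sqrt(N) queries for a list of size N
    print(math.sqrt(100), 2 * math.sqrt(50))    # 10.0 vs ~14.1

    # Binary search: ~log2(N) comparisons
    print(math.log2(1024), 2 * math.log2(512))  # 10.0 vs 18.0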


That only helps for a relatively small range of N. Chess happens to sort of fit into this space. Go is way out; even sqrt(N) is still in the "galaxy-sized computer" range. So again, there are few problems for which Grover's algorithm really takes us from practically uncomputable to computable.

Even for chess, 2^76 operations is still waaaaay more time than anyone will ever wait for a computation to finish, even if we assumed quantum computers could reach the OPS of today's best classical computers.


No one would solve chess by checking every possible legal chess state -- also, checking 'all the states' wouldn't solve chess; you need a sequence of moves, and that pushes you up to an even bigger number. But again, you can massively prune that, as many moves are forced, or you can check whether you are in a provable end-game situation.



Training an AI model is essentially searching for parameters that make a function really accurate at making predictions. In the case of LLMs, they predict text.
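
A minimal sketch of that framing, with made-up data and learning rate: gradient descent "searching" the one-parameter space of y ≈ w*x for the w with the lowest prediction error.

    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.1, 3.9, 6.2, 7.8]         # made-up data, roughly y = 2x
    w, lr = 0.0, 0.01                 # starting guess and step size

    for _ in range(1000):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad                # step toward lower squared error

    print(round(w, 2))                # ~1.99: the parameter the search "found"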



