It's early, so I'm thinking out loud here, but I don't think the algorithm scales like this, does it?

We're talking about something that can search a list of size N in sqrt(N) iterations. Splitting the problem in two doesn't halve the compute required for each half. If you had to search 100 items on one machine it'd take ~10 iterations, but split over two machines it'd take ~7 iterations on each, or ~14 in total.
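
Back-of-the-envelope in Python, treating the iteration count as exactly sqrt(N) and ignoring constant factors:

    import math

    N = 100
    whole = math.sqrt(N)          # ~10 iterations on one machine
    per_half = math.sqrt(N / 2)   # ~7.07 iterations per machine
    total = 2 * per_half          # ~14.1 iterations of total work
    print(f"{whole:.1f} whole vs {per_half:.1f} each, {total:.1f} total split")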




If an algorithm has complexity O(sqrt(N)), then by definition it does better running once over all 100 elements than running separately over two halves of 50 elements each.

This is not at all a surprising property. The same thing happens with binary search: it has complexity O(log(N)), which means that running it on a list of size 1024 takes about 10 comparisons, but running it in parallel on two lists of size 512 takes 2 * 9 = 18.
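
The same arithmetic, as a quick sketch:

    import math

    n = 1024
    whole = math.log2(n)           # 10 comparisons for the full list
    split = 2 * math.log2(n // 2)  # 2 * 9 = 18 comparisons across two halves
    print(whole, split)            # 10.0 18.0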

This is actually easy to intuit for search problems: the element you're looking for is either in the first half of the list or in the second half; it can't be in both. So if you search both halves in parallel, you'll do extra work that just wasn't necessary (unless your algorithm looks at every element in order, in which case it's the same).

In the case of binary search, the very first comparison already tells you which half of the list your element is in: searching the other half is pointless. In the case of Grover's algorithm the mechanism is much more complex, but the basic point is similar: Grover's algorithm has a way to just not look at certain elements of the list, so splitting the list in half creates more work overall.
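
A rough sketch of the splitting penalty, using the textbook ~(pi/4)*sqrt(N) estimate for Grover's iteration count (that constant is my assumption, not something from this thread):

    import math

    def grover_iterations(n):
        # Textbook estimate: ~(pi/4) * sqrt(n) iterations to find
        # one marked item among n (assumed here for illustration).
        return (math.pi / 4) * math.sqrt(n)

    N = 1_000_000
    whole = grover_iterations(N)
    split = 2 * grover_iterations(N // 2)
    print(f"whole: {whole:.0f}, split: {split:.0f}, ratio: {split / whole:.3f}")
    # ratio -> sqrt(2) ~ 1.414: splitting costs ~41% more total work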



