In general, most of the benefits of randomness are simply the benefits of avoiding a particular low volume failure mode.
Stochastic resonance, for example, is a terrible example of the benefits of randomness. You could do a LOT better with non-random perturbations. Stochastic resonance means random perturbations + the law of large numbers let you get a less quantized version of a signal. But if you use controlled (non-random) perturbations you can get the same result in O(N) samples rather than O(N^2), since the error from a deterministic dither pattern averages down like 1/N instead of 1/sqrt(N).
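To make the O(N) vs O(N^2) point concrete, here's a toy numpy sketch (my own example, with a made-up quantizer and signal value, not from the article): recovering a sub-step value from a coarse quantizer, once with random dither and once with a deterministic ramp.

import numpy as np

rng = np.random.default_rng(0)
true_value = 0.3              # signal smaller than one quantizer step
N = 1000                      # number of perturbed readings

quantize = np.round           # 1-unit quantizer: without dither it always reads 0.0 here

# Random dither (stochastic resonance): error shrinks like 1/sqrt(N)
est_random = quantize(true_value + rng.uniform(-0.5, 0.5, N)).mean()

# Controlled, non-random dither (a uniform ramp): error shrinks like 1/N
ramp = np.linspace(-0.5, 0.5, N, endpoint=False)
est_ramp = quantize(true_value + ramp).mean()

print(est_random, est_ramp)   # both near 0.3; the ramp estimate is far closer for the same N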
Or similarly, consider "computer simulations". You'll nearly always get better results if you use either fixed quadratures (in low dimensions) or quasi-Monte Carlo (in high dimensions). Some months back I drastically sped up our 1-d simulations by switching from Monte Carlo to Simpson's rule.
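For the 1-d case, a quick sketch of the kind of swap I mean (toy integrand of my own choosing, using scipy's simpson):

import numpy as np
from scipy.integrate import simpson

rng = np.random.default_rng(0)
f = lambda x: np.exp(-x**2)     # stand-in integrand; exact value on [0, 1] is about 0.746824

N = 1001
x = np.linspace(0.0, 1.0, N)

mc_estimate = f(rng.uniform(0.0, 1.0, N)).mean()   # plain Monte Carlo, error ~ 1/sqrt(N)
simpson_estimate = simpson(f(x), x=x)              # Simpson's rule, error ~ 1/N^4 for smooth f

print(mc_estimate, simpson_estimate)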
I can think of very few circumstances where randomized algorithms are anything other than a crude hack to get around forming a real understanding of the problem.
> I can think of very few circumstances where randomized algorithms are anything other than a crude hack to get around forming a real understanding of the problem.
Randomization has a huge benefit where the input is adversarial. Think about crypto (primality testing, proofs verifiable with randomized algorithms) and even O(1) data structures like hash tables that can be made more resistant to malicious input by randomizing them.
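A minimal sketch of the hash-table idea (the names are mine, and real implementations like SipHash-keyed dict hashing are more careful): bucket indices depend on a per-process secret key, so an attacker can't precompute a set of keys that all land in one bucket.

import os, hashlib

class RandomizedHashTable:
    """Chained hash table whose bucket index depends on a secret random key."""

    def __init__(self, nbuckets=1024):
        self._key = os.urandom(16)                 # secret, chosen at startup
        self._buckets = [[] for _ in range(nbuckets)]

    def _index(self, key: bytes) -> int:
        # Keyed hash: collisions can't be predicted without knowing self._key
        h = hashlib.blake2b(key, key=self._key, digest_size=8).digest()
        return int.from_bytes(h, "little") % len(self._buckets)

    def put(self, key: bytes, value):
        bucket = self._buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key: bytes, default=None):
        for k, v in self._buckets[self._index(key)]:
            if k == key:
                return v
        return default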
From what I can tell, this is the sole benefit - in principle - of a random number generator. If you assume an adversary who can predict a deterministic world better than you can, then you can remove most of that adversary's advantage by making the world unpredictable.
But that's a very far cry from what this article is pushing. Randomness in this case is not making any system work better as the article discusses early on (and incorrectly hints at in some of the examples). Randomness is just a blind woman shutting off her lights in order to hinder a sighted man trying to harm her.
This is getting rather off-topic from the article, but I think you're making a stronger and more interesting claim than you realize. If randomness doesn't help algorithms in general, then BPP (the class of languages decidable in polynomial time by randomized algorithms with bounded error, more or less) is in P or some other deterministic class, depending on exactly how you define randomness not helping. There's all kinds of interesting research here.
As a concrete example, primality testing has been known to be very efficiently solvable with a randomized algorithm for a long time, but a deterministic polynomial-time algorithm (AKS) was only found in 2002. That algorithm is considerably more complicated than the randomized algorithms.
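To illustrate the gap: the standard randomized test (Miller-Rabin) fits in a few lines, while AKS is far more involved. A minimal sketch:

import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin: a composite n is reported 'prime' with probability <= 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2**r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)    # random base: this is the only source of randomness
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                  # a witnesses that n is composite
    return True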
See http://www.math.ias.edu/~avi/BOOKS/rand.pdf for some more detail. I think there's a lot more research on the topic that I haven't managed to dig up in three minutes of searching.
Edit: another use of randomness: expander graphs are quite useful in a number of contexts. There are easy random constructions that provably produce expanders with high probability, but I'm not sure whether anyone has found a deterministic construction that produces an expander.
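The "easy random construction" really is easy; here's a sketch (using networkx, with the spectral gap as a proxy for expansion):

import numpy as np
import networkx as nx

# A random d-regular graph on n vertices is an expander with high probability:
# its second-largest adjacency eigenvalue stays well below d.
d, n = 4, 1000
G = nx.random_regular_graph(d, n, seed=0)

eigenvalues = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))[::-1]
spectral_gap = d - eigenvalues[1]    # large gap  <=>  good expansion
print(eigenvalues[1], spectral_gap)  # eigenvalues[1] is typically near 2*sqrt(d-1) ~ 3.46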
A Monte Carlo simulation is fundamentally a way to compute an expectation over a probability density - most commonly the integral ∫ f(x) p(x) dx.
For example, suppose you have q(x) = C p(x), with p(x) the true probability density (classical setup for MCMC).
If x ∈ [0,1], then the easiest way to approximate p(x) on a grid is:
import numpy as np
x = np.arange(0, 1, 1/1024.0)    # grid on [0, 1)
px = q(x)                        # unnormalized values q(x) = C p(x)
px /= px.sum()                   # normalize so px sums to 1
Then if you wanted to compute ∫ f(x) p(x) dx, you'd just do:
(f(x) * px).sum()    # ≈ ∫ f(x) p(x) dx
In short, my claim is that computer simulations are typically ways of computing integrals. Monte Carlo is just a very generic brute-force method of doing that. But if you get clever and truly understand your specific problem, you can nearly always come up with a non-generic and non-random way of solving things that's probably a lot faster.
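As a rough illustration of that claim (with a toy q and f of my own choosing), here is the same expectation computed both ways:

import numpy as np

rng = np.random.default_rng(0)
q = lambda x: np.exp(-(x - 0.3)**2 / 0.02)    # unnormalized density on [0, 1]
f = lambda x: np.sin(3 * x)                   # quantity whose expectation we want

N = 1024

# Grid approach (understand the problem): deterministic, error falls like 1/N or better
xg = np.arange(0.0, 1.0, 1.0 / N)
pg = q(xg) / q(xg).sum()
grid_estimate = (f(xg) * pg).sum()

# Generic Monte Carlo: self-normalized estimate from uniform draws, error ~ 1/sqrt(N)
xr = rng.uniform(0.0, 1.0, N)
mc_estimate = (f(xr) * q(xr)).sum() / q(xr).sum()

print(grid_estimate, mc_estimate)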
The discovery of the usefulness of dithering is interesting too:
"One of the earliest [applications] of dither came in World War II. Airplane bombers used mechanical computers to perform navigation and bomb trajectory calculations. Curiously, these computers (boxes filled with hundreds of gears and cogs) performed more accurately when flying on board the aircraft, and less well on ground. Engineers realized that the vibration from the aircraft reduced the error from sticky moving parts. Instead of moving in short jerks, they moved more continuously. Small vibrating motors were built into the computers, and their vibration was called dither..."