A friend suggested to me that to get good OCR results, you should run the document through the scanner/OCR twice and then diff the results. Usually one pass or the other will get it right, and if you run the two results through a difference editor like 'meld', it's quick to fix.
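The diff step itself is easy to script if you'd rather not eyeball the whole file; here's a rough sketch, assuming the two passes were saved as ocr_pass1.txt and ocr_pass2.txt (made-up filenames):

    # Compare two OCR passes and print only the lines where they disagree;
    # those are the spots worth a manual look in meld.
    import difflib

    with open("ocr_pass1.txt") as f1, open("ocr_pass2.txt") as f2:
        pass1, pass2 = f1.readlines(), f2.readlines()

    for line in difflib.unified_diff(pass1, pass2,
                                     fromfile="pass1", tofile="pass2",
                                     lineterm=""):
        print(line)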
That may work in some cases, especially with horrible OCR engines and low-quality scanners, but frankly, when I did my research into this, the results varied extremely little from run to run, and you could usually identify specific artefacts in the source that tripped the engine up (rather than problems with the quality of the scan): e.g. letters that were damaged or had run together, creases in the paper, etc.
With really low-res scanners I can imagine it could make a big difference.
Back in the late '90s I worked for a company that did a lot of OCRing, and they ran the same image through multiple engines and then manually corrected the results. I think they had three engines, all from different companies; each one processed every image and wrote its results into a custom format. Human beings were then employed to manually merge and correct the final text. It worked fairly well, especially considering the hardware/software available at the time.
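To give a rough idea of the merge step (just an illustration of the approach, not the custom format or tooling they actually used): line up the per-engine output and keep whatever a majority of engines agree on, flagging the rest for the human pass. A toy sketch:

    from collections import Counter

    def merge_engines(outputs):
        """outputs: one list of text lines per engine, assumed to line up."""
        merged = []
        for lines in zip(*outputs):
            best, votes = Counter(lines).most_common(1)[0]
            # Keep the majority reading; anything contested goes to a human.
            merged.append(best if votes >= 2
                          else "<<REVIEW: " + " | ".join(lines) + ">>")
        return merged

    # Hypothetical output from three engines for the same two lines of a page:
    engine_a = ["The quick brown fox", "jumps over the 1azy dog"]
    engine_b = ["The quick brown fox", "jumps over the lazy dog"]
    engine_c = ["The qu1ck brown fox", "jumps over the lazy dog"]
    print(merge_engines([engine_a, engine_b, engine_c]))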
The biggest problem was stuffing too many files into an NTFS directory. Apparently, NTFS didn't like tens of thousands of files in one directory. :)
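The usual workaround (again, just the standard trick, not necessarily what we did back then) is to fan the files out into hashed subdirectories so no single directory ever gets huge:

    import hashlib, os, shutil

    def bucketed_path(root, filename):
        """Map e.g. page0001.tif to root/<2-hex-digit bucket>/page0001.tif."""
        bucket = hashlib.md5(filename.encode()).hexdigest()[:2]  # 256 buckets
        return os.path.join(root, bucket, filename)

    def store(root, src):
        dst = bucketed_path(root, os.path.basename(src))
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy2(src, dst)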