Isn't this still similar to extortion? Maintainers aren't creating the problem; they're minding their own business until scrapers come along and flood them with unnecessary requests. The burden seems to sit squarely on the scrapers, who could easily hit the pages far less often, for a start.
Doesn't your suggestion shift the responsibility onto likely under-sponsored FOSS maintainers rather than onto the companies? Also, how do people agree to switch to some centralized repository, and how long would that take? Even if people did move over, would that solve the issue? How would a scraper know not to crawl a maintainer's site? Scrapers already ignore robots.txt, so they'd probably keep crawling even if you could verify that the latest content had been uploaded.
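To make the robots.txt point concrete: it's the only standard signal a maintainer has, and honoring it is entirely voluntary on the crawler's side. Here's a minimal sketch of what a well-behaved crawler does, using Python's stdlib urllib.robotparser (the bot name and URLs are made-up examples):

    # Voluntary robots.txt compliance: a crawler that skips this check,
    # or ignores its result, faces no technical barrier at all.
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://maintainer.example/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt rules

    url = "https://maintainer.example/git/log"
    if rp.can_fetch("SomeScraperBot", url):
        print("allowed to fetch", url)
    else:
        print("disallowed -- but nothing enforces this")

Any centralized-repository scheme would have the same enforcement gap: it can tell cooperative scrapers where to get the content, but it can't stop uncooperative ones.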
Scrapers still have an economic incentive to do whatever is easiest. Providing an alternative that is cheaper than fighting sysadmin blocks would likely steer them toward it, and it would make things less of a cat-and-mouse game for sysadmins.
I was actually thinking of something more general than just code, e.g. something similar to CommonCrawl, but maybe a code-specific archive is what's needed.
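For general web content, that "easier route" already half-exists: CommonCrawl publishes a CDX index you can query, and then range-fetch pages out of its WARC archives, without ever touching the origin site. A rough sketch in Python (the crawl ID is just an example; current IDs are listed at https://index.commoncrawl.org/):

    # Look a site's pages up in CommonCrawl's index instead of crawling it.
    import json
    import urllib.parse
    import urllib.request

    CRAWL = "CC-MAIN-2024-33"  # example crawl ID; check the index site for current ones
    query = urllib.parse.urlencode(
        {"url": "example.com/*", "output": "json", "limit": "5"}
    )
    index_url = f"https://index.commoncrawl.org/{CRAWL}-index?{query}"

    with urllib.request.urlopen(index_url) as resp:
        for line in resp:
            record = json.loads(line)
            # Each record points into a WARC file that can be range-fetched
            # from https://data.commoncrawl.org/ -- no load on the origin.
            print(record["url"], record.get("filename"))

The catch is exactly the one raised above: nothing obliges a scraper to use it, so it only helps if it's genuinely cheaper than crawling.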