I was thinking more of a federated "webring" structure, with some content being present in more than one node, and where maintenance and curation are distributed (and gathered independently) among nodes.
The nation of, say, Japan, has limited interest in funding an American nonprofit today; but they would likely have a great deal of interest in funding an equivalent focused on Japanese content, for example.
Ah, so more like Mastodon or IPFS, but specifically for the purpose of federated archiving.
So now you get into the issue of haves and have nots. Who is allowed to be considered an authorized archivist from a robots.txt perspective? Or what happens if an archivist becomes blacklisted for not respectfully crawling? How do national sanctions affect the Internet Archive of Russia? I imagine there would be a certification process and it would probably cost some money.
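To make the robots.txt question concrete: today, "authorization" in robots.txt is just per-User-agent rules, so a site can only whitelist crawlers by their self-reported names. A minimal sketch of what a site whitelisting one archivist while throttling another might look like (the bot names here are illustrative placeholders, not a proposed standard):

```
# Hypothetical robots.txt: permit one archive crawler fully,
# rate-limit a second, and block everything else.
User-agent: TrustedArchiveBot
Disallow:

User-agent: OtherArchiveBot
Crawl-delay: 10
Disallow: /private/

User-agent: *
Disallow: /
```

Note that User-agent strings are self-reported, so any such "certification" is honor-system unless paired with something stronger (e.g. verified IP ranges), which is exactly where the governance problem bites.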
It's an interesting topic and I'm simply looking at the weak spots. I'm not against the overall concept though.
All legitimate questions, but if we only built perfect systems we would never have had TCP, let alone the pile of hacks we're now using to discuss this topic.
Distributed governance on the internet is a massive issue, and it's effectively unsolved for everything from peering to DNS. In practice, good faith goes a long way, particularly in areas that are largely academic in scope - like archiving.