> And yeah you can forgive all this by quibbling that git was written in a time when internet access was not ubiquitous
I don't think that's right. If anything, Git's remote concepts were built on even more ubiquitous internet access than you are assuming. Git was built in a world where some upstreams were patch series on email mailing lists. It was built in environments where every contributor could relatively easily stand up a small server of their own (even just a temporary server on a personal device with specified time windows), and you might have branch remotes tied to different colleagues' servers in a distributed fashion: the D in DVCS.
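To make that concrete, a peer-to-peer setup looked something like this (the remote names and URLs here are made up for illustration):

    # Add colleagues' machines as peer remotes; no central host involved.
    git remote add alice ssh://alice-box.lab.example/~alice/project.git
    git remote add bob git://bob-laptop.lab.example/project.git

    # Pull each peer's branches independently.
    git fetch alice
    git log --oneline alice/feature-x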
At the time those weren't advanced features for advanced users; they were simple features for flexible source control. There's a sort of simplicity to pull requests in email flows. There's a sort of simplicity in "hey, can you check out my branch and make notes on it? I'll serve it on my lab machine for a couple of hours so you can grab it, here's the URL." In some of those cases you don't even care to remember that remote URL after you've grabbed the branch, because it will be a different IP address and port the next time they bring up that lab machine. (Git was built in a world where there was no "origin" and multiple repos were all valid representations of progress, some of them transient and as-needed; "origin" was a name and concept that came later.)
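Git still ships the tooling for exactly that ad-hoc flow; a sketch of it, with a made-up hostname and path:

    # On the lab machine: serve the repo read-only over the git protocol
    # (port 9418 by default); kill it when the couple of hours are up.
    git daemon --export-all --base-path=/home/me/src --reuseaddr

    # On a colleague's machine: grab the branch straight from the URL,
    # no saved remote needed, then check out what was fetched.
    git fetch git://labbox.example/project refs/heads/wip-notes
    git checkout -b review-notes FETCH_HEAD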
Some of that only exists in a world that assumes internet connectivity is ubiquitous in a fuller sense: not just access, but service hosting and upload capability. The internet has some strange centralizing forces that make access easier but anything other than raw consumption harder.
There are certainly a lot of good reasons for some of that centralization. Whether or not it is "simpler", there's a convenience in everyone sharing big centralized hosts. There's a lot of convenience in "there is mostly only one remote that everyone shares, and it has a high-uptime SLA and a ton of extra collaboration features in one place". There were certainly a lot of centralized version control systems before the DVCS was invented, and beyond convenience those centralizing operations also benefited from a lot of familiarity.
It's interesting to me that in your last paragraph you think the solution is to make git a more "proper" distributed system, when one of the features you find too complex and don't like exists precisely because git was defined and built as a distributed system and just isn't as convenient when working with centralized providers. Git repos support multiple remotes because git was built to be distributed. Git repos require you to fetch each remote explicitly because git was built to be fault-tolerant in a distributed system where remotes may have very different SLAs from each other; losing access to one remote host shouldn't stop you from fetching updates from a different one. The DX there was built for a distributed system. It is mostly where we see everything revolving around some super special "origin" remote that the DX feels overly complicated and maybe missing better "defaults". And it is mostly on today's internet, where running a simple CLI command to spin up a quick code server on a random port on a random machine with an accessible IP address is increasingly hard, that it becomes harder to imagine why people ever needed remotes beyond that special sauce "origin" idea.
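To make the fault-tolerance point concrete (again with hypothetical remote names), each fetch is scoped to one remote, and "origin" is only the default name clone happens to pick:

    # One host being down only fails its own fetch; the others still work.
    git fetch alice
    git fetch carol          # fine even if bob's laptop is powered off
    git fetch --all          # attempts every configured remote in turn

    # "origin" isn't special in the data model; clone lets you name it anything.
    git clone -o torvalds git://kernel.example/linux.git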