A twofer: one question and one comment/prediction.
A. Who are some of the thinkers who predicted that many corporations would arguably become more powerful (in terms of control over resources and people's lives) than most nation states? Are there modern analogues that you (HN reader) would recommend for the next phases of history?
B. While many of the well-worn political economy debates (how and when markets work well, fairness, resilience, and so on) will continue to matter, I think there will be tremendous rethinking of basic assumptions. AI progress from ~2017 to the present has shown that, online at least, it is getting harder to differentiate human from machine intelligence.
Proving one's humanity is thus becoming more expensive, and the proofs remain imperfect. It seems doubtful that most people want to jump through hoops to prove they are human, especially since so many of them apparently want machine agents acting on their behalf.
So this machine/human intelligence distinction may erode. Is this a Faustian bargain? I don't know, but I think it depends on the safeguards and designs we choose.
Meanwhile, machine resources are becoming even more effective at persuading humans. In short, as ML/AI gains more {organizational, market, marketing} influence, we might see a renaissance of sorts when it comes to ...
1. a more informed public (hard to believe, maybe, but note I said informed, not critical or truth-seeking) with regard to key areas of interest. Along with this, though, probably comes an increased risk of consuming confirmatory information, since such information will be explicitly generated for persuasive purposes.
As such, from a systems perspective, humans may be relegated to message propagators rather than agents worthy of fundamental respect. By this I mean the following: most ethicists suggest we value humans as ends in themselves, not merely as means. In other words, we want systems that serve people. Engagement would ideally consist of meaningful dialogue and deliberation (which I define as information-rich, critical, civil, thoughtful discussion where people listen and may at times be persuaded).
Unfortunately, AI advances may intensify a kind of manipulation "arms race," so to speak. It might become more cost-effective to manipulate humans than to gather their input and build consensus thoughtfully and organically. Sadly, I think we've been losing this battle for a long while. But the underlying forces for manipulation seem to be getting stronger, while (a) human nature doesn't seem to be evolving very quickly and (b) our general, socially learned defenses are inadequate. ("Advertising works even if you know that advertising works.")
And, second ...
2. more pervasive and nuanced market mechanisms and the like (price and quality optimization, matching of people to opportunities). This will likely be good for short-term goals and efficiency, but probably indifferent to long-term stability, to say nothing of equity and human rights. Aspects that are not part of the optimization criteria tend to fall by the wayside.
I realize this story probably echoes themes from the general genre of Singularity prognosticators. But all of these changes will have sweeping effects well before we have to concern ourselves with AGI.