This is absolutely a crucial and salient point; call me an optimist, but I'm encouraged by the fact that, as technological advancement has progressed throughout human history, the speed of societal response has accelerated on ever-shorter timelines: steam and mechanization, electricity and mass production, telecommunications and media, digital information, and now artificial intelligence have each seen faster response times than the revolution before.
I think short-term suffering, or at the very least disruption (as we're seeing), is essentially inevitable. But with all of these preemptive frameworks being implemented, or at least discussed (though discussion alone isn't nearly good enough, of course), on unprecedented turnaround times, I really don't foresee a techno-dystopia; then again, perhaps that's just wishful thinking.
Quite honestly, I think a pragmatic place to start, outside of theology and moral philosophy, is to make AI development adhere to a consortium of standards outlined by governments and implemented by boards within industries - like what we see with many engineering professions in the US and other countries.
No, because it’s built on the false premise that the inequality following the industrial revolution ever stopped.[1]
It’s easy for us to be optimists and shrug nonchalantly about the short-term (?) suffering when we don’t face the worst, or even the median, pain that these changes bring. Very strong “you will suffer, but that’s a sacrifice I’m willing to make” vibes.