> 10 years ago, the entire conversation would have seemed ridiculous
Bostrom's book[1] is 11 years old. The Basilisk is 15 years old. The Singularity Summit was nearly 20 years ago. And Yudkowsky was there for all of it. If you frequented LessWrong in the 2010s, most of this is very, very old hat.
It is a bit disquieting, though, that these predictions, instead of being pushed farther away, are converging on a time even closer than originally imagined. Most breakthroughs and doomsday scenarios stay perpetually parked thirty years in the future; this one actually seems to be drawing nearer.
I see them as funhouse mirrors, the kind that reflect your image to make you skinny or fat, except they do it with semantics. Big deal. I've never had an interaction with an LLM that wasn't just repeating what I said more verbosely, or with compressed, fuzzy facts sprinkled in.
There is no machine spirit that exists in a box separately from us; it's just a means for people to amplify and multiply their voice into ten thousand sock-puppet bot accounts. That's all I'm able to grasp, anyway. Curious to hear the experience that's led you to believe something different.
[1]: Bostrom (2014), "Superintelligence: Paths, Dangers, Strategies": https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...
[2]: Ford (2015), "Our Fear of Artificial Intelligence", MIT Tech Review: https://www.technologyreview.com/2015/02/11/169210/our-fear-...