
I don't know if this has been discussed somewhere already, but I find the

> "I am finding this whole thing absolutely fascinating, and deeply, darkly amusing."

plus

> "Again, it’s crucial to recognise that this is not an AI having an existential crisis. It’s a language model predicting what should come next in a sequence of tokens... but clearly a language model that has absorbed far too much schlocky science fiction."

somewhat disingenuous. Or maybe it isn't, as long as these AI systems remain toys.

Because, in the end, it doesn't matter whether these systems are having (or could have) an existential crisis, or whether they are "conscious" at all. Being "just a language model predicting..." does not make them any less brittle or dangerous.

What matters is that if similar systems are plugged into other systems, especially sensors and actuators in the physical world, they will trigger actions on their own call that can harm things, living or not.


