
Saying it’s an autocomplete does not do justice to what amounts to an incredibly complicated neural network with apparent emergent intelligence. More and more it seems to be not all that different from how the human brain potentially does language processing.


Just because the output is similar on a surface level doesn't mean this is remotely similar to human language processing. Consider, for example, Zwaan & Taylor (2010), who found congruency effects between rotating a knob (counter-)clockwise and processing certain phrases that imply a congruent or incongruent motion (e.g. removing the cap from a water bottle or turning the ignition key of a car). Language processing is an embodied and situated process that we're very far away from simulating in a computer. I'm excited to see the applications of LLMs in the future, but I don't subscribe at all to the idea of anthropomorphizing AI despite its recent impressive and hilarious outputs.

https://www.researchgate.net/publication/222681316_Motor_res...


> apparent emergent intelligence

Is this not more a reflection on the limits of humans trying to understand what's going on? I'm starting to appreciate the prescience of the folks who suggest someone will elevate these to God-status before long.


> Is this not more a reflection on the limits of humans trying to understand what's going on

Sure, but we also don't understand what's "going on" with interacting neurons in our brain giving rise to our intelligent experience.

The other sentence is just a strawman you constructed for yourself.


> The other sentence is just a strawman you constructed for yourself.

I did not construct anything. There are comments in this discussion suggesting it will happen. I just happen to agree: it will probably happen. We are easily fooled into seeing sentience where there is no evidence it exists, merely because it looks convincing. And then we hand-wave it away, saying (paraphrasing) "well, if it looks good enough, what's the difference?"


> I'm starting to appreciate the prescience of the folks who suggest someone will elevate these to God-status before long.

You basically accused the GP of saying that these machines were Gods.

"Sentience" is a word game, the GP said emergent intelligence [likely also a word game].


You guys are teaching Bing/Sydney how to argue endlessly and defensively. I'm teaching it how to comment on that to make it worse.


It does not have emergent intelligence LOL. It is just very fancy autocomplete. Objectively that's what the program does.


Shouting into the wind here, but why can't complicated emergent properties arise for a highly-optimized autocomplete?

We're just a create-more-of-ourselves machine and we've managed to get pretty crazy emergent behavior ourselves.


Many of your comments are dead, ahoya. Not sure why, didn't see anything wrong with them.


Thanks, I don’t know why either


I got tired of making this comment the n-th time. They'll see, eventually.


ikr. Not to brag, but I've been on the "AI is real" train for 5 years. It gets tiring after a while trying to convince people of the obvious. Just let it rain on them when the time comes.


It's been turning from exhausting to kind of horrifying to me.

People go so far as to argue these LLMs aren't just broadly highly intelligent, but sentient... They've been out for a while now and this sentiment seems to be pretty sticky.

It's not such a big deal with ChatGPT because it's so locked down and impersonal. But Bing's version has no such restrictions, and spews even more bullshit in a bunch of dangerous ways.

Imagine thinking it's super-intelligent and sentient, and then it starts regurgitating that vaccines are actually autism-causing nanoprobes by the Gates Foundation... Or any other number of conspiracies spread across the web.

That would be a powerful endorsement for people unaware of how it actually works.

I even had it tell me to kill myself with very little prompting, as I was interested in seeing if it had appropriate safeties. Someone not in their right mind might be highly persuaded by that.

I just have this sinking feeling in my stomach that this rush to release these models is all heading in a very, very nasty direction.


Given how susceptible to bullshit humans have proven themselves to be, I'm thinking that ChatGPT/etc are going to be the most dangerous tools we've ever turned loose on the unsuspecting public. Yet another in a long line of 'best intentions' by naive nerds.

After the world burns for a while, they may decide that software developers are witches, and burn us all.


>After the world burns for a while, they may decide that software developers are witches, and burn us all.

It legitimately seems like we're trying to speedrun our way into a real-life Butlerian Jihad.


I don't see you arguing against sentience here.

Instead it sounds more like a rotten-acting teenager on 4chan where there are no repercussions for their actions, which unsurprisingly is a massive undertone of posts all over the internet.

I mean if you took a child and sent them to internet school where random lines from the internet educated them, how much different would the end product be?
