
Yes, it seems like Bing is less effective than ChatGPT at preventing these sorts of devolutions.

Interestingly, this was often also a failure case for (much, much) smaller language models that I trained myself. I wonder what the cause is.
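For anyone who wants to see this firsthand, here's a minimal sketch. I'm assuming the Hugging Face transformers library and the public distilgpt2 checkpoint (my choices, not anything the parent mentioned). Greedy decoding feeds each argmax token straight back into the context, and small models frequently lock into a repetition loop:

    # Demonstrate degeneration under greedy decoding with a small causal LM.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
    model = AutoModelForCausalLM.from_pretrained("distilgpt2")

    ids = tokenizer("I am a good chatbot. I am", return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=60, do_sample=False)  # greedy, no sampling
    print(tokenizer.decode(out[0]))
    # Often prints "I am a good chatbot. I am a good chatbot. ..." on repeat.

Sampling tricks like nucleus sampling were introduced largely to paper over exactly this failure mode.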




The cause seems to be baked into the underlying assumption that language is just a contextualized "stream of consciousness" that sometimes happens to describe external facts. This is, in a sense, the endpoint of post-truth, relativistic thinking about consciousness. It's the opposite of starting with a Platonic ideal model of X and trying to describe it. It fundamentally treats the last shadow on the wall as a stand-in for X and then iterates from that.

The result is a reasonable facsimile of paranoid schizophrenia.
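A toy way to see the "iterating from the last shadow" point: in autoregressive generation, each step conditions only on the transcript so far, so the model's own guesses become the context for every later guess, and nothing re-anchors the process to the world outside the text. Here's a runnable stand-in (a hand-rolled bigram table, not any real LLM):

    import random

    # Stand-in "model": the next word depends only on the last word emitted.
    TABLE = {
        "the": ["plant", "berries"],
        "plant": ["with", "kills"],
        "with": ["red"],
        "red": ["berries"],
        "berries": ["kill", "the"],
        "kills": ["the"],
        "kill": ["the"],
    }

    context = ["the", "plant"]
    for _ in range(12):
        # The model's own output becomes the only input to the next step.
        context.append(random.choice(TABLE.get(context[-1], ["the"])))
    print(" ".join(context))
    # Locally plausible, globally unmoored, e.g.
    # "the plant kills the berries kill the plant with red berries kill the"

Every transition is locally sensible, but because the only input is the previous output, drift compounds instead of being corrected.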


Loved this comment. I too am bearish on the ability of this LLM architecture to evolve beyond a mere chatbot.

That doesn't mean it's not useful as a search engine, for example.


> underlying assumption that language is just a contextualized "stream of consciousness" that sometimes happens to describe external facts

I'm the sort of person who believes this.

This said, I don't think it's 100% true. I just think it's a more useful approach than "starting with a Platonic ideal model of X and trying to describe it". And also... sort of underappreciated?

Maybe it's just the bubble I live in, but at least people around me — and people I see on the internet — seem to construct a lot of their arguments along the lines of "he/she does X because he/she thinks Y or has been told Y". And it feels rather lonely to be the only person who doesn't like this approach all that much, and also doesn't seem to do this kind of thing internally much.

I met someone recently who spent several months working on a Twitter bot that was supposed to reply with fact-checking to Ukrainian war misinformation tweets. It felt like a rather misguided endeavor, but nobody else seemed to agree.

At least with ADHD I can point out "yeah, you have the capacity to decide on the course of action and then implement it, but this is not how I operate at all; and you can read about people like me on Reddit if you like". With this [other thing] there isn't a convenient place to point to.

Eh.


Not sure I fully understand your meaning. I'm not critiquing the use of language for building up abstractions. It's useful for that. Just that removing any underlying reality leaves the abstractions hallucinogenic and meaningless. Language evolved to communicate "this plant with red berries kills you". That involves color, classifications, and an abstract understanding of death; but all of those are rooted somehow in a physically shared reality that was perceptible before we formulated a way to communicate it. If you take that sentence and keep abstracting from it, without the symbols remaining pointers fixed to the physical realities of red, plant, berries, or death, you end up with a hall of mirrors. That's insanity.



