javajosh's favorites

> Something like ChatGPT is hard to train to lie without it being demonstrable with experimental evidence.

See, I consider the functional definition of what current-generation LLMs do to be "lying". Not out of malice or moral failing (of the model or the model's creators), but simply out of a lack of the control/alignment/modulation systems that people typically associate with honesty, integrity, and intention.

The problem-model behind these systems is "predict what comes next". The objective in training is not accuracy of content (by whatever definition...), but verisimilitude of output. An LLM happily improvises on whatever seed you give it. Sometimes this leads to output that aligns with someone's definition of reality. Sometimes it's hallucination or bullshit or garbage or whatever term you want to use for it. You can modulate this with better prompt engineering, but only by degrees.
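To make that objective concrete, here's a deliberately toy sketch (my own illustration, nothing like any real model's training code): the cross-entropy loss on the next token rewards whatever continuation the corpus happens to contain, whether or not it's true.

```python
# Toy sketch of the "predict what comes next" objective (hypothetical, not
# any real LLM's training code). The loss scores only how probable the
# observed next token was -- factual accuracy never enters the calculation.
import math

# Hypothetical toy "model": a probability distribution over the next word
# for a single context.
toy_model = {
    "the moon is made of": {"cheese": 0.4, "rock": 0.4, "plasma": 0.2},
}

def next_token_loss(context: str, observed_next: str) -> float:
    """Cross-entropy for a single (context, next-token) training pair."""
    p = toy_model[context].get(observed_next, 1e-9)
    return -math.log(p)

# Whatever the corpus contains is what gets rewarded:
print(next_token_loss("the moon is made of", "cheese"))  # ~0.92, false but plausible
print(next_token_loss("the moon is made of", "rock"))    # ~0.92, truth doesn't lower it
```

If the training text says "cheese", the gradient pushes the model toward "cheese"; nothing in the objective distinguishes a true continuation from a fluent false one.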

I have no doubt future generations of these systems will start tackling this, but the systems as they stand now are, by their very nature, liars.

edit: And additionally, while I can't necessarily argue against your trust in these systems (especially given the choice you presented... I'd probably choose XGPT too), "neutrality" is defined by a shared trust that I don't see a route to in the current climate.


I feel like the examples of log4j and ua-parser aren't that great, because it would be relatively easy for any other similar lib to take their place; they're mostly straightforward to implement, even though it still takes time.
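To illustrate what I mean by straightforward: the heart of a ua-parser-style library is basically an ordered list of regexes mapped to browser families. This is my own toy sketch, not ua-parser's actual rule set or API.

```python
# Toy illustration of why a ua-parser-style library is conceptually simple:
# try regexes in order and return the first matching browser family.
# (A sketch only -- the real library has a far larger, maintained rule set.)
import re

BROWSER_PATTERNS = [
    (re.compile(r"Firefox/(\d+)"), "Firefox"),
    (re.compile(r"Edg/(\d+)"), "Edge"),         # must come before Chrome
    (re.compile(r"Chrome/(\d+)"), "Chrome"),    # Chrome UAs also mention Safari
    (re.compile(r"Version/(\d+).*Safari/"), "Safari"),
]

def parse_user_agent(ua: str) -> tuple[str, str]:
    """Return (family, major_version) for the first matching pattern."""
    for pattern, family in BROWSER_PATTERNS:
        m = pattern.search(ua)
        if m:
            return family, m.group(1)
    return "Other", ""

print(parse_user_agent(
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
))  # ('Chrome', '120')
```

The hard part is keeping the rule list current, not the code itself, which is why another lib could plausibly take its place.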

But there are some things like Kafka, PostgreSQL, Spring Boot, Tomcat, Apache Math, ZooKeeper, the OpenJDK, and all that which are definitely non-trivial and represent a huge amount of time and effort, and you couldn't just take an extra month or two and have a dev on your team implement a replacement, unlike log4j and ua-parser.

I think those would be better examples to discuss, and my impression has been that those things often have a company behind them offering support or offering them as a service, which in some ways pays for some real devs to contribute to them, but maybe I'm mistaken.

Like for example, the author mentions working on the Go team at Google, and I would consider Go one of those big open source projects that truly are foundational and would be a non-trivial, huge effort to replace. So that shows that the really big pieces do have companies and paid staff behind them.


Is someone testing for HN censors? This wouldn't make it far on Facebook, Twitter, or YouTube.

one of the few bastions of unions left in America
