
It's actually much worse than that, and you're inadvertently downplaying how bad it is.

It doesn't even know mildly obscure facts that are on the internet.

For example, last night I was trying to do something with C# generics, and it confidently told me I could use pattern matching on the type in a switch statement, and threw out some convincing-looking code.

You can't; it's impossible. It was completely wrong. When I told it this, it told me I was right, and proceeded to give me code that was even more wrong.

This is an obscure, but well documented, part of the spec.
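
To illustrate (sketching from memory, with a made-up Describe<T>): C#'s type patterns test the runtime type of a value, so you can't switch over a System.Type object such as typeof(T) and match it against type patterns. Something like this is what the model suggested, and it simply doesn't compile:

  // Sketch of the kind of code the model suggested. It does not compile:
  // type patterns test the runtime type of a value, and typeof(T) is a
  // System.Type object, which can never match a pattern like 'int'
  // (the compiler rejects it with CS8121).
  static string Describe<T>()
  {
      switch (typeof(T))
      {
          case int:     // error: 'Type' cannot be handled by a pattern of type 'int'
              return "int";
          case string:  // same error
              return "string";
          default:
              return "something else";
      }
  }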

So it's not about facts that aren't on the internet; it's just bad at facts, full stop.

What it's good at is facts the internet agrees on. Unless the internet is wrong. Which is not always a good thing, given how confidently it speaks.

If you want to fuck with AI models, ask a bunch of code questions on Reddit, GitHub and SO with example code saying 'can I do X'. The answer is no, but ChatGPT/Copilot/etc. will start spewing out that nonsense as if it's fact.

As for non-programming, we're about to see the birth of a new SEO movement of tricking AI models into believing your 'facts'.



I wonder, though: is the documentation only referenced in a few places on the internet, while there are also many forums with people pasting "Why isn't this working?" problems?

If there are a lot of people pasting broken code, now the LLM has all these examples of broken code, which it doesn't know are broken, and only a couple of references to the documentation. Worse, a well-trained LLM may realise that specs change, and that even documentation may not be considered 100% accurate (since it may be older and out of date).

After all, how many times have you had something updated (an API, a language, a piece of software) but the docs weren't updated? Happens all the time, sadly.

So it may believe that newer examples of code, such as the aforementioned pasted code, are more correct than the docs.

Also, if people keep trying to solve the same issue again, and keep pasting those examples again, well...

I guess my point here is: hallucinations come from multi-faceted issues, one being "wrong examples are more plentiful than correct ones". Or even just "there are a lot of wrong examples".


It's not always the right tool, depending on the task. IMO using LLMs is also a skill, much like learning how to Google stuff.

E.g. apparently C# generics isn't something it's good at. Interesting; so don't use it for that, it's apparently the wrong tool. In contrast, it's amazing at C++ templates, and thus speeds up my productivity. So do use it for that!


> For example, last night I was trying to do something with C# generics, and it confidently told me I could use pattern matching on the type in a switch statement, and threw out some convincing-looking code.

Just use it on an instance instead:

  var res = thing switch {
    OtherThing ot => …,
    int num => …,
    string s => …,
    _ => …
  };
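
Filled in with concrete arms it compiles and runs fine; OtherThing and the values below are made up for illustration:

  // Runnable sketch: pattern matching on an instance's runtime type.
  // OtherThing is a stand-in type invented for this example.
  record OtherThing(string Name);

  static string Describe(object thing) =>
      thing switch
      {
          OtherThing ot => $"OtherThing named {ot.Name}",
          int num => $"the int {num}",
          string s => $"the string \"{s}\"",
          _ => "something else"
      };

  // Describe(42)      => "the int 42"
  // Describe("hello") => "the string \"hello\""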


> As for non-programming, we're about to see the birth of a new SEO movement of tricking AI models into believing your 'facts'.

This is kinda crazy to think about.


If you ask Google Gemini right now for the name of the whale in Half Moon Bay harbor, it will tell you it's called Teresa T.

That was thanks to my experiment in influencing AI search: https://simonwillison.net/2024/Sep/8/teresa-t-whale-pillar-p...



