
ChatGPT likes to make things up.

It told me the other day that on a system with multiple hard drives/SSDs I could set Secure Boot on each drive independently.

Of course this is nonsense, since Secure Boot is a UEFI firmware setting that applies to the whole machine, not to individual drives.
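You can check this yourself: the firmware exposes a single machine-wide SecureBoot variable. A minimal sketch, assuming a Linux box with efivarfs mounted at the usual path (the GUID is the standard EFI global-variable namespace):

    # Reads the machine-wide SecureBoot UEFI variable via efivarfs (Linux).
    from pathlib import Path

    var = Path("/sys/firmware/efi/efivars/"
               "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c")

    if not var.exists():
        print("No SecureBoot variable: legacy BIOS boot or no UEFI")
    else:
        # efivarfs prepends 4 attribute bytes; the payload is one byte.
        enabled = var.read_bytes()[4] == 1
        print("Secure Boot is", "enabled" if enabled else "disabled")

One variable for the whole system, nothing per-drive.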

Whatever, ChatGPT got its engagement metrics.

I'm going to predict that within the next few years someone's going to lose a billion dollars relying on ChatGPT or another LLM.

It's still at the level of an energetic junior engineer: sure, it wants to crank out a lot of code to look good, but you need to verify everything it does.

I was game jamming with a friend last weekend and realized he could write better, lighter, more effective code by hand than I was having Copilot write.

Which sounds safer: an elegant 50-line function, or 300 lines of spaghetti code that appears to work right?

The manager (and above) level is all about AI though: let's cut staff and have AI fill in the gaps!



> I'm going to predict that within the next few years someone's going to lose a billion dollars relying on ChatGPT or another LLM.

I strongly suspect this has already happened, possibly even multiple times. Some inbred oil sheikh fat-fingering some insane sum of money because "the computer told me, and computers are always correct" is a completely believable scenario. Now that I think about it, doesn't The Line fit it perfectly? :)

What will happen in the next few years is probably not just a cash loss, but some large industrial accident with dead people, caused by relying on LLM bullshit. Now that would make headlines (at least until the next Truth Social post).


> Some inbred oil sheikh fat-fingering some insane sum of money because "the computer told me, and computers are always correct" is a completely believable scenario.

No need to degrade people; if anything, the highest class will hop on the AI train. Why trust an expensive human when an LLM will give you unlimited "advice"?

LLMs can serve as a fall guy, but what's that old IBM quote? Something like: a computer can never be held accountable, therefore a computer must never make a management decision.

Then again, LLMs making stupid mistakes is the only thing keeping most of us employed.



