
That's an interesting Altman quote on the site. LLMs cannot be compared to electricity and the Internet. People wanted those. LLMs were an impressive parlor trick at first but disappointing later. Many stopped using them altogether.

Now there is a president who fuels the hype and shakes down rich countries for "AI" investments. The Saudi prince who lost money on Twitter is in on the new grift and praises Musk on Tucker Carlson.

The grift-oriented economy might continue with the bailout of Bitcoin whales through the "sovereign wealth fund" scheme.

That is how the "economy" works. No houses will be built and nothing of value will be created.



People have stopped using LLMs? I wasn't aware of that. Can you share a source for that?


Anecdotal, but this is the exact consensus I saw among my non-tech peers. They find it fun for a few days or weeks, then basically never touch it again once the novelty wears off. The only normies I know still using LLMs are students using them to write papers.


I know a lot of people who went through the "Oh, wow - wait a minute..." cycle. Including me.

They're approximately useful in some contexts. But those contexts are limited. And if there are facts or code involved, both require manual confirmation.

They're ideal for bullshit jobs - low-stakes corporate makework, such as mediocre ad copy and generic reports that no one is ever going to read.


> And if there are facts or code involved, both require manual confirmation.

The hidden assumption here seems to be that the model needs to be perfect before it has utility.


Now you're the bullshit machine. No one said that. We expect basic reliability/reproducibility. A $4 drugstore calculator has that to about a dozen 9s, every single time. These machines will give you a correct answer and walk it right back if you respond the "wrong" way. They're not just wrong a lot of the time; they have no idea even when they're right. Your strawman is of no value here.


There's also a hidden assumption, or perhaps a lack of clear perception of reality: that most jobs on the market strongly depend on factual correctness.

And the assumption that this is any different from the human relationship with empirical truth.


Clearly generative AI can currently only be used when verification is easy. A good example is software. Not sure why you think that I claimed otherwise.


In Similarweb's list of top websites, chatgpt.com is now at no. 6, above x/twitter and yahoo.

Among US iPhone apps, the top two are DeepSeek and ChatGPT.

That doesn't really suggest people have stopped using LLMs.

https://x.com/Similarweb/status/1888599585582370832



