
It’s past the hype curve and into the trough of disillusionment. Over the next 5, 10, or 15 years (who can say?) the tech will mature out of the trough into general adoption.

GenAI is the exciting new tech currently riding the initial hype spike. This will die down into the trough of disillusionment as well, probably sometime next year. Like self-driving, people will continue to innovate in the space and the tech will be developed towards general adoption.

We saw the same during crypto hype, though that could be construed as more of a snake oil type event.



The Gartner hype cycle assumes a single fundamental technical breakthrough, and describes the process of the market figuring out what it is and isn't good for. This isn't straightforwardly applicable to LLMs because the question of what they're good for is a moving target; the foundation models are actually getting more capable every few months, which wasn't true of cryptocurrency or self-driving cars. At least some people who overestimate what current LLMs can do won't have the chance to find out that they're wrong, because by the time they would have reached the trough of disillusionment, LLM capabilities will have caught up to their expectations.

If and when LLM scaling stalls out, then you'd expect a Gartner hype cycle to occur from there (because people won't realize right away that there won't be further capability gains), but that hasn't happened yet (or if it has, it's too recent to be visible yet) and I see no reason to be confident that it will happen at any particular time in the medium term.

If scaling doesn't stall out soon, then I honestly have no idea what to expect the visibility curve to look like. Is there any historical precedent for a technology's scope of potential applications expanding this much this fast?


> If scaling doesn't stall out soon, then I honestly have no idea what to expect the visibility curve to look like. Is there any historical precedent for a technology's scope of potential applications expanding this much this fast?

Lots of pre-internet technologies went through this curve: PCs during the clock-speed race, aircraft before that during the aeronautics surge of the 50s, cars when Detroit was in its heyday. In fact, cloud computing was enabled by breakthroughs in PCs, which allowed commodity computing to be architected in a way that could compete with the mainframes and servers of the era. Even the original industrial revolution was really a roughly 200-year period in which mechanization became better and better understood.

Personally, I've always been a bit confused by the Gartner hype cycle and its usage by pundits in online comments. As you say, it applies to point changes in technology, but many technological revolutions have created academic, social, and economic conditions that led to a flywheel of innovation, up until some point on an envisioned sigmoid curve where the innovation flattens out. I've never understood how the hype cycle fits into that, or why it's invoked so much in online discussions. I wonder if folks with business school exposure can answer this question better.


> If scaling doesn't stall out soon, then I honestly have no idea what to expect the visibility curve to look like.

We are seeing diminishing returns on scaling already. LLMs released this year have been marginal improvements over their predecessors. Graphs on benchmarks[1] are hitting an asymptote.
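For what it's worth, the shape of that asymptote is easy to sketch. A minimal toy model (all numbers here are made up for illustration, not real benchmark data) assumes each model generation closes a fixed fraction of the remaining gap to a saturation ceiling:

```python
# Toy model of benchmark saturation. CEILING and GAIN are hypothetical
# parameters for illustration only, not measurements of any real benchmark.

CEILING = 90.0   # assumed benchmark ceiling (saturation score)
GAIN = 0.5       # fraction of the remaining gap closed per generation

def project_scores(start: float, generations: int) -> list[float]:
    """Project per-generation scores under diminishing returns:
    each generation gains GAIN * (remaining gap to CEILING)."""
    scores = [start]
    for _ in range(generations):
        scores.append(scores[-1] + GAIN * (CEILING - scores[-1]))
    return scores

scores = project_scores(start=40.0, generations=5)
deltas = [round(b - a, 2) for a, b in zip(scores, scores[1:])]
print(deltas)  # successive gains shrink: 25.0, 12.5, 6.25, ...
```

Under this purely illustrative model, per-generation gains halve with every release; real benchmarks flatten for messier reasons, but the shrinking deltas between successive model releases are the telltale sign of a curve bending toward its ceiling.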

The improvements we are seeing are related to engineering and value-added services. This is why "agents" are the latest buzzword that most marketing is clinging to. This is expected, and good, in a sense. The tech is starting to deliver actual value as it matures.

I reckon AI companies can still squeeze out a few years of good engineering around the current generation of tools. The question is what happens if there are no ML breakthroughs in that time. The industry desperately needs them for the promise of ASI, AI 2027, and the rest of the hyped predictions to become reality. Otherwise it will be a rough time when the bubble actually bursts.

[1]: https://llm-stats.com/


The problem with LLMs and other modern large-data-driven statistical approaches is that they try to collapse the entire problem space of general problem solving into a combinatorial search over permutations of previously solved problems. Yes, this approach works well for many problems, as we can see from the results, given the huge amounts of data and processing utilized.

One implicit assumption is that all problems can be solved with some permutations of existing solutions. The other assumption is the approach can find those permutations and can do so efficiently.

Essentially, the true-believers want you to think that rearranging some bits in their cloud will find all the answers to the universe. I am sure Socrates would not find that a good place to stop the investigation.


Right. I do think that just the capability to find and generate interesting patterns from existing data can be very valuable. It has many applications in many fields, and can genuinely be transformative for society.

But, yeah, the question is whether that approach can be defined as intelligence, and whether it can be applicable to all problems and tasks. I'm highly skeptical of this, but it will be interesting to see how it plays out.

I'm more concerned about the problems and dangers of this tech today, than whatever some entrepreneurs are promising for the future.


> We are seeing diminishing returns on scaling already. LLMs released this year have been marginal improvements over their predecessors. Graphs on benchmarks[1] are hitting an asymptote.

This isn't just a software problem. If you look at the hardware side, you see the same flat line (IPC is flat generation over generation). There are also power and heat problems that are going to require some rather exotic and creative solutions if companies are looking to hardware for gains.


The Gartner hype cycle is complete nonsense: a fabricated way of viewing the world that helps sell Gartner's research products. It may, at times, make "intuitive sense", but so does astrology.

The hype cycle has no mathematical basis whatsoever. It's a marketing gimmick. Its only value in my life has been to quickly identify people who don't really understand models or larger trends in technology.

I continue to be surprised (though on introspection I probably shouldn't be) that people on HN treat it as some kind of gospel. The only people who should respect it are others in the research-marketing space, as the perfect example of how to dupe people into paying for your "insights".


Could you please expand on your point about expanding scopes? I am waiting earnestly for all the cheaper services that these expansions promise. You know, cheaper white-collar services like accounting, tax, healthcare, etc. The latest reports showed accelerating services inflation. Someone is lying. Please tell me who.


Hence why I said potential applications. Each new generation of models is capable, according to evaluations, of doing things that previous models couldn't that prima facie have potential commercial applications (e.g., because they are similar to things that humans get paid to do today). Not all of them will necessarily work out commercially at that capability level; that's what the Gartner hype cycle is about. But because LLM capabilities are a moving target, it's hard to tell the difference between things that aren't commercialized yet because the foundation models can't handle all the requirements, vs. because commercializing things takes time (and the most knowledgeable AI researchers aren't working on it because they're too busy training the next generation of foundation models).


It sounds like people should just ignore those pesky ROI questions. In the long run we are all dead, so let's just invest now and worry about the actual low-level details of delivering economy-wide efficiency later.

As capital allocators, we can just keep threatening the working class with replacing their jobs with LLMs to keep wages low, and have some fun playing monopoly in the meantime. Also, we get to hire these super-smart AI researchers (a.k.a. the smartest and most valuable minds in the world) and hold the greatest trophies. We win. End of story.


It's saving healthcare costs for those who solved their problem and never go in, which would not be reflected in services inflation.


Back in my youthful days, educated and informed people chastised those who used the internet to self-diagnose and self-treat. I completely missed the memo on when it became a good idea to do so with LLMs.

Which model should I ask about this vague pain I have been having in my left hip? Will my insurance cover the model service subscription? Also, my inner thigh skin looks a bit bruised. Not sure what’s going on? Does the chat interface allow me to upload a picture of it? It won’t train on my photos right?


> or if it has, it's too recent to be visible yet

It's very visible.

Silicon Valley, and VC money has a proven formula. Bet on founders and their ideas, deliver them and get rich. Everyone knows the game, we all get it.

That's how things were going until recently. Then FB came in and threw money at people, and they all jumped ship. Google did the same. These are two companies famous for throwing money at things (Oculus, the metaverse, G+, quantum computing) and right and proper face-planting with them.

Do you really think that any of these people believe deep down that they are going to have some big breakthrough? Or do you think they all see the writing on the wall and are taking the payday where they can get it?


It doesn't have to be "or". It's entirely possible that AI researchers both believe AI breakthroughs are coming and also act in their own financial self interest by taking a lucrative job offer.


Liquidity in search of the biggest holes in the ground. Whoever can dig the biggest holes wins. Why or what you get out of digging the holes? Who cares.



