Hacker News | sigilis's comments

There are various links in the article that have more information. Clicking these references will give the evidence for bad unit economics claims and whatnot.

As for predicting the moment, the author has made a prediction and wants it to be wrong. They expect the system will continue to grow larger for some time before collapse, and they would prefer that this timeline be abbreviated to reduce the negative economic impacts. They are advising others on how to take economic advantage of their prediction and are likely shorting the market in their own way. It may not be options trading, but making plans for the bust is functionally similar.


The papers he linked all fail to support his claim. The first paper he linked simply counts mentions of the term “deep learning” in papers. The second surveyed people who lived in… Denmark and tried to extrapolate that to everyone globally.

His points are not backed by much evidence.


The first link is a mistake. It's supposed to be the thing being discussed here: https://news.ycombinator.com/item?id=45170164.

The second link seems reasonable to me? Why does a study of 25k workers in Denmark (11 occupations, 7k workplaces) not count as evidence? If there were a strong effect to be found globally, it seems likely it would be found in Denmark too.

Also, what about the other links? The discussions about the strange accounting and lack of profitability seem like evidence as well.

If anything, this article struck me as well-evidenced.


Side note: If you're going to short an AI company (or really, buy put options, so you don't have unlimited downside exposure), I would suggest shorting NVIDIA. My reasoning is that if we actually get a fully automated software engineer, NVIDIA stock is liable to lose a bunch of value anyways -- if I understand correctly, their moat is mostly in software.

Wile E. Coyote sprints as fast as possible, realizes he has zoomed off a cliff, looks down in horror, then takes a huge fall.

Specifically I envision a scenario like: Google applies the research they've been doing on autoformalization and RL-with-verifiable-rewards to create a provably correct, superfast TPU. Initially it's used for a Google-internal AI stack. Gradually they start selling it to other major AI players, taking the 80/20 approach of dominating the most common AI workflows. They might make a deliberate effort to massively undercut NVIDIA just to grab market share. Once Google proves that this approach is possible, it will increasingly become accessible to smaller players, until eventually GPU design and development is totally commoditized. You'll be able to buy cheaper non-NVIDIA chips which implement an identical API, and NVIDIA will lose most of its value.

Will this actually happen? Hard to say, but it certainly seems more feasible than superintelligence, don't you think?


NVIDIA is about the only company making money on the AI bubble; they're not the one I would choose to short.

Tesla is currently trading at 260x earnings, so to actually meet that valuation they need to increase earnings by a factor of 10 pretty sharpish.

They're literally not going to do that by selling cars, even if you include Robotaxis, so really it is a bet on the Optimus robots going as well as they possibly can.

If they make $25k profit per Optimus robot (optimistic), then I think they need to sell about a million per year to make enough money to justify their valuation. And that's for a product that is not even ready to sell, let alone one whose true demand has been established, production ramped up, etc.

For comparison the entire industrial robot market is currently about 500k units per year.

I think the market is pricing in absurdly optimistic performance for Tesla, which they're not going to be able to meet.

(I have a tiny short position in Tesla).
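The back-of-envelope math in that comment can be sketched out; all inputs here are the commenter's rough assumptions (260x multiple, $25k/robot, a million units a year), not real financials:

```python
# Rough check of the valuation argument above.
# Every number is an assumption taken from the comment, not actual data.

pe_ratio = 260             # assumed current price-to-earnings multiple
target_pe = 26             # a more conventional multiple, 10x lower
earnings_growth_needed = pe_ratio / target_pe  # factor of earnings growth

profit_per_robot = 25_000      # optimistic USD profit per Optimus unit
robots_per_year = 1_000_000    # annual unit sales the comment says are needed
extra_profit = profit_per_robot * robots_per_year  # implied annual profit

industrial_robot_market = 500_000  # entire current market, units/year

print(f"Earnings growth needed: {earnings_growth_needed:.0f}x")
print(f"Implied Optimus profit: ${extra_profit / 1e9:.0f}B/year")
print(f"Units vs entire industrial robot market: "
      f"{robots_per_year / industrial_robot_market:.1f}x")
```

Under these assumptions the robot line alone would need to add $25B of annual profit and outsell today's entire industrial robot market twice over, which is the comment's point.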


Tesla has been overpriced for ages though, correct?


Task selection is the tricky bit. It has to actually be important in some dimension. The easiest is something with some amount of social pressure. If someone is waiting for you to do something that you have promised, then it acquires a kind of urgency and importance even if it wouldn’t harm you not to do it in a timely manner.

It’s not fake importance, it’s just taking advantage of the fact that you want to be seen as dependable and effective to other people.


> If someone is waiting for you to do something that you have promised, then it acquires a kind of urgency and importance even if it wouldn’t harm you not to do it in a timely manner.

I don't agree with this though. If someone is waiting for me to do something that I've promised, and I don't do it, I'm going to suffer the harm of stress, guilt, shame, etc. related to breaking my promise and people thinking I'm unreliable. I think this idea only works if we define "harm" in a very narrow sense to exclude the types of harms that come from the "important" task that we're going to deliberately avoid doing.


You are correct. This strategy is not for making you happy with your procrastination. The main goal is to make you an effective human being. As a result, it excludes personal emotional effects from the definition of harm.

Furthermore, what counts as an effective human is also something that you have to define for yourself.

Procrastination is considered a negative trait for a reason.


“Do this or else we’ll make our products less attractive by not making an effort to comply with your law” does not seem like a really compelling “or else”.

iPhone mirroring was cute when I first tried it, but now when I click on a notification on my laptop and it tries to open the mirroring application I am annoyed. It’s the tiniest version of my phone and takes a while to come up, if it doesn’t fail for some reason. I should turn it off.

The other features they mention aren’t very compelling. I’m in the Netherlands now with a US Apple account so I can use them, but don’t care to.


The structure of giant corporations today is like those centralized societies that were so inefficient in your example. The mandates to put AI in everything are one example of out of touch leadership throwing money and effort blindly towards things of dubious value. The sycophantic managers, afraid that they will be eliminated for insufficient fervor for the board’s latest fascination, will seize upon anything to prove themselves loyal and useful to those above them.

By moving the locus of control, whether it be considered the CEO or the shareholders, so far from the actual business, and by implementing mandates based on whatever the current fancy is and meaningless growth targets at such a giant scale, you get the same sort of excesses.

The current system is marked by irrationality and uninformed, ill-considered decision making. With smaller organizations and actual business competition, they would be held to account by their competitors, or simply by running out of money, before something catastrophic happened to the greater economy.


This is an excellent point. Thank you for posting it.

Large monopolistic mega-corporations do tend to have the same issues that one would see in the old 20th century planned economies like the Soviet Union.


Don’t they call all their LLM models Gemini? The paper indicates that they specifically used all the AI models to come up with this figure when they describe the methodology. It looks like they even include classification and search models in this estimate.

I’m inclined to believe that they are issuing a misleading figure here, myself.


They reuse the word here for a product, not a model. It's the name of a specific product surface. There is no single model, and the models used change over time and for different requests.


So it includes both tiny models and large models?


I would assume so. One important trend is that models have gotten more intelligent for the same size, so for a given product you can use a smaller model.

Again, this is pretty similar to how CPUs have changed.


So it's not a specific product doing a specific thing, but the average across different things?


“Gemini App” would be the specific Gemini App in the App Store. Why would it be anything different?


I find myself yelling at AWS pretty often, does that make me old?


The old men yell at physical (bare metal even) clouds, not AWS :)


This statement is not excusing the advertising practices; it is just noting that advertising their product has likely had a positive effect on the returns from the film.

They are calling out the tactics as “in-your-face”. This is hardly an endorsement.


The importance of the system in question is not a factor in whether something is a security bug for a dependency. The threat model of the important system should preclude it from using dependencies that are not developed with a similar security paradigm. Libxml2 simply operates under a different regime than, as an arbitrary example, the nuclear infrastructure of a country.

The library isn't a worm, it does not find its way into anything. If the bank cares about security they will write their own, use a library that has been audited for such issues, sponsor the development, or use the software provided as is.

You may rejoin with the fact that it could find its way into a project as a dependency of something else. The same arguments apply at any level.

If those systems crash because they balanced their entire business on code written by randos who contribute to an open source project then the organizations in question will have to deal with the consequences. If they want better, they can do what everyone is entitled to: they can contribute to, make, or pay for something better.


None of that addresses the point I made. DoS is a security bug. It doesn't matter with whom or where the problem lies.


They pay their lawyers and whoever made this page a lot for the express purpose of credibly arguing that it is very clearly totally legal and very cool to use any IP they want to train their models.

Could you with a straight face argue that the NYT newspaper could be a surrogate girlfriend for you like a GPT can be? They maintain that it is obviously a transformative use and therefore not an infringement of copyright. You and I may disagree with this assertion, but you can see how they could see this as baseless, ridiculous, and frivolous when their livelihoods depend on that being the case.


The context of the scenario, the meme if you will, is that there is no fire. You're just shouting it for the fun of it, to see the crush of panicked bodies running just because you willed it. That is why it was used as an example of speech that is harmful.

Check out https://en.wikipedia.org/wiki/Shouting_fire_in_a_crowded_the...


The context of the scenario is you are a war protestor who's telling people they should resist the draft. Obviously protected political speech, which was outlawed using the "shouting fire in a crowded theater" argument.

