The US is uniquely suited to maximally benefit startups emerging in a new space, but to maximally hinder startups entering a mature one. No smart young person in the US goes into an industry paved over by incumbents, because they wisely anticipate ending up in a field deliberately hamstrung by regulatory capture.
All growth is in AI now because that's where all the smartest people are going. If AI were significantly regulated, they'd go to other industries, and those industries would be growing faster (though likely not as fast as AI is now).
However, there is the broader point that AI is poised to offer extreme leverage to whoever wins the AGI race, which justifies capex spending at such absurd margins.
I think there should be regulation that protects individuals and limits the power of incumbents. There are many regulations that ostensibly protect individuals but only exist to empower incumbents.
1. The first company to get AGI will likely have a multitude of high-leverage problems it would immediately put AGI to task on
2. One of those problems is simply improving itself. Another is securing that company's lead over its competitors (by basically helping every employee at that company do better at their job)
3. The company that reaches AGI with a language-style model will likely do so through a mix of architectural tricks that can be applied to any general-purpose domain, including chip design, tactical intelligence, persuasion, beating the stock market, etc.
The AGI argument assumes there is a 0 -> 1 moment where the model suddenly becomes "AGI" and starts performing miraculous tasks, accompanied by recursive self-improvement. So far, our experience shows that we are getting incremental improvements over time from different companies.
These things are being commoditized, and we are still at the start of the curve when it comes to hardware, data centers, etc.
Arguing for an all-in civilization- or country-level bet on AGI given this premise is either foolish or a sign that you are selling "AGI".
It won't be one miraculous moment. But once a model can perform tasks on its own and verify its outputs better than a domain expert using that AI, the human is no longer the bottleneck, and the model can be deployed at a far larger scale. From the outside that would look much like a 0-to-1 moment, even if the improvement over the previous model is minor and only involves a slight tweak of something that increases reliability.
All of that stuff takes time and resources. Self-improvement may not be easy, e.g. if the system ends up in a local maximum it can't climb out of, and it probably won't be cheap or fast (if it's anything like frontier LLMs, it could take months of computation on enormous numbers of cutting-edge devices, costing hundreds of millions or billions, or it may not even be possible without inventing and mass-manufacturing better hardware). Another company achieving a slightly different form of AGI within a few years will probably be at least competitive, and if they have more resources or a better design they could overtake.
All the major companies are setting up to have the hardware necessary to leverage an AGI-level system when it does emerge. The entire purpose of Stargate for OpenAI or the Memphis supercomputer for X.ai is to ensure there is very little delay between hitting the right marks on an AGI-capable model and deploying its capabilities en masse towards the highest-leverage aims. Certainly there will be bottlenecks, but the advantages will significantly accelerate progress.
Unless AGI includes a speed requirement, AGI is not sufficient to win the market. Take any genius in human history: the impact they had was hugely limited by their lifespan, they didn't solve every problem, and each discovery took them decades. The first AGIs will be the same, hyper-slow for a while, giving competitors a chance to copy and stay in the race.
If Einstein could replicate himself 1000x, perform all his thought experiments in parallel, and not have to eat, sleep, or be limited by his human short-term memory, he would likely have accomplished all he did far faster. AGI will at the very minimum have all the advantages of humans plus all the advantages of computers, which would be a huge automatic boon even without the fact that an AGI at human level would likely be able to scale even further.
Every frontier lab has maybe 100 top-tier researchers, all of whom have limited short-term memory, need to sleep and eat, have biases, and can't rigorously parallelize themselves, run thousands of simultaneous experiments, or modify their own internal thought processes in any quantifiable way. An AGI merely at the level of a mediocre AI researcher would trivially gain all of these advantages, while also simply scaling its own compute (it is unlikely that scaling gains would level out precisely at the level of a smart human).
The argument is something like: an AGI, or its owner, wouldn't want other AGIs to exist, so it would destroy the capabilities of rival AGI efforts before they could mature (through hacking, manipulation, etc.).
Far more is being done than simply throwing more GPUs at the problem.
GPT-5 required less compute to train than GPT-4.5. Data, RL, architectural improvements, etc. all contribute to the rate of improvement we're seeing now.
Because intelligence is so much more than stochastically repeating stuff you've been trained on.
It needs to learn new information, create novel connections, be creative. We are utterly clueless as to how the brain works and how intelligence is created.
We just took one cell, a neuron, made the simplest possible model of it, made some copies of it, and you think it will suddenly spark into life by throwing GPUs at it?
Nope. It can't learn anything beyond its training data, only within the very narrow context window.
Any novel connections come through randomness, hence hallucinations rather than useful connections grounded in background knowledge of the systems or concepts involved.
As for creativity, see my previous point. If I spit out words that happen to go next to each other, that isn't creativity. Creativity implies a goal, a purpose, or sometimes chance, but always systematic thinking with an understanding of the world.
I was considering refuting this point by point, but it seems your mind is already made up.
I feel that many people who deny the current utility and abilities of large language models will continue to do so long after those models have exceeded human intelligence, because the perception that they are fundamentally limited, regardless of whether they actually are or whether the criticisms make any sense, is necessary for some load-bearing part of their sanity.
If AGI is built from LLMs, how could we trust it? It's going to "hallucinate", so I'm not sure that this AGI future people are clamoring for is going to really be all that good if it is built on LLMs.
Humans who repeatedly deny LLM capabilities despite the numerous milestones they've surpassed seem more like stochastic parrots.
The same arguments are always brought up, often as short, pithy one-liners without much clarification. It seems silly that an argument which first emerged when LLMs could barely write functional code is, now that LLMs have reached gold-medal performance on the IMO, still being made with little interrogation of its potential faults, or clarification of the precise boundary of intelligence LLMs will never be able to cross.
> Claim: gpt-5-pro can prove new interesting mathematics.
> Proof: I took a convex optimization paper with a clean open problem in it and asked gpt-5-pro to work on it. It proved a better bound than what is in the paper, and I checked the proof; it's correct.
I have seen no credible explanation on how current or proposed technology can possibly achieve AGI.
If you want to hand-wave that away by stating that any company with technology capable of achieving AGI would guard it as the most valuable trade secret in history, then fine. Even if we assume that AGI-capable technology exists in secret somewhere, I've seen no credible explanation from any organization on how they plan to control an AGI and reliably convince it to produce useful work (rather than the AGI just turning into a real-life SHODAN). An uncontrollable AGI would be, at best, functionally useless.
AGI is, and for the foreseeable future will continue to be, science fiction.
You seem to be making two separate claims: first, that it would be difficult to achieve AGI with current or proposed technology, and second, that it would be difficult to control an AGI, making it too risky to use or deploy.
The second is a significant open problem (the alignment problem), and I'd wager it is a very real risk which companies need to take more seriously. However, whether it is feasible to control or direct an AGI towards reliably safe, useful outputs has no bearing on whether reaching AGI is possible via current methods. Current scaling gains and the rate of improvement (see METR's measurements of the task horizons an AI model can handle reliably on its own) make it fairly plausible, at least more plausible than the blanket denial that AGI is possible, which I see around here backed by very little evidence.
It is a reasonable, common-sense claim that if a company possesses a model that can perform all cognitive tasks as well as a human or better, that model would be more powerful than any other available technology, barring significant limitations on its operation or deployment.
We have an AI promoter here. AGI isn't the future of anything right now. It could be. But so could a lot of things, like vaccine research (we're making promising progress on HIV and cancer). Try telling people in the 1980s-1990s that vaccine researchers would own the future. Sure, it might seem like an obvious outcome, but it wasn't on the horizon for the people in the field at the time (unless your family owned the company).
Even if you could cure cancer or HIV with a vaccine it would have a relatively negligible impact compared to AGI.
There are far more signals that AGI will be achieved by OpenAI, Anthropic, DeepMind, or X.ai within the next 5-10 years than there were for any other hyped breakthrough of the past 100 years that ultimately never came to fruition. That doesn't mean it's guaranteed to happen, but given the multitude of trends showing no signs of stopping, it seems naive in Anno Domini 2025 to discount it as a likely possibility.
It's just as possible that they need to invest more and more for negligible improvements to model performance. These companies are burning through money at an astonishing rate.
And as the internet deteriorates due to AI slop, finding good training material will become increasingly difficult. It's already happening that incorrect AI-generated information is being cited as a source for new AI answers.
They are burning through money, but their revenue is scaling at a similar rate.
I'm sure most companies have understood the "AI outputs feeding AIs" incest issue for a while and have many methods for avoiding it. That's why so much has been put into synthetic data pipelines for years.
Well, not surprising, but the latest LLMs really do get the gist of your joke attempt. Here's a plain, unauthenticated chatgpt reply:
That post — “I rearry rove a ripe strawberry” — is a playful way of writing “I really love a ripe strawberry.”
The exaggerated misspelling (“rearrry rove”) mimics the way a stereotyped “Engrish” or “Japanese accent” might sound when pronouncing English words — replacing L sounds with R sounds.
So, the user was most likely joking or being silly, trying to sound cute or imitate a certain meme style. However, it’s worth noting that while this kind of humor can be lighthearted, it can also come across as racially insensitive, since it plays on stereotypes of how East Asian people speak English.
In short:
Literal meaning: They love ripe strawberries.
Tone/intention: Playful or meme-style exaggeration.
Potential issue: It relies on a racialized speech stereotype, so it can be offensive depending on context.
The same way that nothing can be built anymore in America
Every framework is like a regulation: something that solves an ostensible problem but builds up an invisible rot of inefficiency. The more frameworks and layers upon layers needed to make an application, the slower it becomes, the more small errors get ignored, and the more abstractions obfuscate actual functionality.
Each layer promises efficiency but adds hidden coordination cost. Ten years ago, a web app meant a framework and a database. Now it’s React → Electron → Chromium → Docker → Kubernetes → managed DB → API gateway: seven layers deep to print “Hello, world.”
Every abstraction hides just enough detail to make debugging impossible. We’ve traded control for convenience, and now no one owns the full stack, just their slice of the slowdown.
I’ve recently been dealing with scaling one of those “framework and a database” web apps for a company that’s growing fast and hit scaling limits. You know what the easiest way to scale it is? Containerize it and deploy it on Kubernetes with a managed DB.
If you don’t recognize that, it may be because you don’t work with applications that need that scale. In that case, you might get away with simpler approaches. But if you expect to grow significantly, you can save a lot of money and pain by designing the app to scale well from the beginning.
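For what it's worth, here's a minimal sketch of what that "containerize it and put it on Kubernetes" step can look like, using the official kubernetes Python client instead of raw YAML. The image name, replica count, namespace, and the "webapp-db" Secret holding the managed-DB URL are all hypothetical placeholders, not anything from the comment above; treat it as an illustration under those assumptions, not a recipe.

```python
# Sketch: run N replicas of an already-containerized "framework + database" web app
# on Kubernetes. Assumes the image has been pushed to a registry and a Secret named
# "webapp-db" holds the connection string for the managed database.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config; use load_incluster_config() inside a cluster

app_labels = {"app": "webapp"}

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="webapp"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the horizontal-scaling knob: raise this as load grows
        selector=client.V1LabelSelector(match_labels=app_labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=app_labels),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="webapp",
                        image="registry.example.com/webapp:1.0",  # hypothetical image
                        ports=[client.V1ContainerPort(container_port=8000)],
                        env=[
                            client.V1EnvVar(
                                name="DATABASE_URL",
                                value_from=client.V1EnvVarSource(
                                    secret_key_ref=client.V1SecretKeySelector(
                                        name="webapp-db", key="url"
                                    )
                                ),
                            )
                        ],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Note that the app itself is still just the framework and the database; Kubernetes only adds the replica count and rollout machinery around it, which is why containerizing the existing app is usually the cheapest path to scale.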
Yeah, in the same way that CEOs and founders are given all the credit for their company's breakthroughs, scientists who effectively package a collection of small breakthroughs are given all the credit for each individual advancement that led to it. It makes sense, though: humans prioritize the whole over the pieces, the label over the contents.
This just seems like a "burnout" simulator. What makes it unique to having autism, versus being overworked in a job you hate in an alienating urban environment not congenial to human thriving?
Everyone would rather be cozy on the couch under a warm blanket than wake up at 6:30AM every day before commuting to type meaningless stuff at a computer desk, be exposed to a sensory environment that is far from ideal, and converse with people they would never associate with if they didn't have to. The experience of the wage worker is a universally reviled existence that is far from a unique plight afflicting those with high-functioning autism.
Is the implication that someone without autism could deal with all these stressors effortlessly with no need to think or put any effort in?
I answered this as somebody with 20+ years in this industry. I burned out instantly.
I had my wife, a stay-at-home wife, do it. She still burned out, and she has no reason to. She made it six questions in. She said she wouldn't have chosen half of the optional questions.
Everyone's different. Some people genuinely thrive under the conditions you're describing, others don't like it but are able to put up with it no problem, and others can't stand it but are forced to.
The perspective I've found most useful is this. There is a constellation of correlated "autistic traits", which everyone has to a degree, but which, like most traits, become disabling if turned up too much. "Autism" is a term describing that state. So it is not so much a particular switch that can be turned on or off, nor even a slider between "not autistic" and "very autistic", but more a blurry region at the outskirts of the multidimensional bell curve of the human experience.
People on the furthermost reaches of this space are seriously, unambiguously disabled, by any definition. They're what people traditionally imagine as "autistic". But the movement in psychiatry has been to loosen diagnostic criteria to better capture the grey areas. Whether this is a good or a bad thing is a social question, not a scientific one, in my opinion. Most of us want to live in a society that supports disabled people, but how many resources to allocate to that is a difficult question where our human instincts seem to clash with the reality of living in a modern society.
On your last paragraph: I think this is a serious problem with the discourse around neuroatypicality today. My opinion is that the important thing is that we become more accepting and aware of the diversity of the human experience, and that this is a necessary social force to balance the constant regression to the mean imposed by modernity. If that's the case, then drawing a border around a category of person, staking a territorial claim to a pattern of difficulty that group experiences, and refusing to accept that the pattern exists beyond it is just unfair; it's giving in to defensiveness.
> They're what people traditionally imagine as "autistic". But the movement in psychiatry has been to loosen diagnostic criteria to better capture the grey areas.
There has also been a change that reclassified what we would previously have termed Asperger’s Syndrome as Autism. To be clear, AS was always considered to be a form of, or closely related to, Autism, but that change in language does mean we’ve had a big shift in what counts as Autism medically and in what the public pictures when they think of Autism.
The key difference here is magnitude and mechanism. For autistic people, even "normal" sensory inputs or social interactions can cause physical discomfort and confusion.
If the expectation of the job is to "type meaningless stuff at a computer desk", doesn't this point to a problem with the expectations of the role? I would submit that if the work is truly meaningless, and it often is in my experience, it doesn't need to be done. Of course anyone would choose a pleasurable activity over meaningless, mundane busy work - regardless of their unique expression of the autism spectrum.
I also think that there are many wage workers who do not revile that existence. My intuition says it is more common in "office jobs".
I think the implication is that someone without autism can recover from these stressors more easily. And they tend to be able to absorb these stressors with less of an impact on their mood. People without autism have more control over when their brain is engaged with something, and have to expend less effort when exerting that control. It's not just about physical energy.
The brain of a person dealing with these types of symptoms is kind of like an engine running near red-line 99% of the time. When someone is masking, for every thought they express, there were likely dozens you didn't hear or see expressed over the course of a short social interaction.
Other times, they are caught in mental loops. Reading the same line of text over and over, or replaying someone else's comment over and over, and not comprehending because of an auditory stimulus that is monopolizing the comprehension processes within the brain. When this happens, it's easy for them to miss important context or body language when working with others. That requires even more masking to cover up because it's a social faux pas to admit you missed something important. So then your brain goes into overdrive trying to derive the missed information from followup conversation.
Using sustained, intense thinking to overcome challenges that others don't encounter as often can become the default coping mechanism for this kind of thing. It's not something that is easily noticed, because it's part of masking, but it tends to be more draining than many people realize.
I think we are still in the period where many new jobs are being created due to AI, and AI models are chiefly a labor enhancer, not a labor replacer. It is inevitable, though, if current trends continue (see the METR eval and GDPval), that AI models will become labor replacements in many fields, starting with jobs oriented around close-ended tasks (customer service reps, HR, designers, accountants) before expanding to jobs with longer and longer task horizons.
The only way this won't happen is if at some point AI models just stop getting smarter and more autonomously capable despite every AI lab's research and engineering effort.
AI coding is more like being a PM than an engineer. Obviously PMs don’t know as much about the tech they make as the engineers do, but they are nevertheless useful.
People pattern-match with a very low-resolution view of the world ("web3/crypto/NFTs were hyped and were a bubble, AI is hyped, therefore AI must be a bubble! I am very smart") and fail to reckon with the very real ways in which AI is fundamentally different.
Also, I think people do understand just how big a deal AI is, but don't want to accept it, or at least publicly admit it, because they are scared for a number of reasons, not least of which is human irrelevance.