Helium supply issues are only going to make this worse.
I feel like for the first time in our lives we might have seen peak technology for the next few years. Everyone is going to have to make do instead of depending on ever increasing performance.
> Helium supply issues are only going to make this worse.
I believe helium, although important, constitutes only a small percentage of the cost of semiconductors, so its effect on prices will be less severe. It will be more noticeable in other uses of helium, though - party balloons could get very expensive, etc.
A hospital isn't going to shut down because their MRIs' next helium refill is getting more expensive - they'll pay a fortune for it. For a lot of other applications there are no suitable alternatives either.
The real question then becomes: what's going to happen when there's a 1000x price increase?
Reminds me of a demo my college physics professor did in our first class (presumably to get our attention).
He had two floating balloons, one about twice as big as the other. Pointed a blowtorch at the smaller one and it (of course) popped.
"That one was filled with helium. Now, there's only one gas less dense than helium..." and right as I thought to myself "he's not gonna do what I think he's gonna do", he pointed the blowtorch at the other balloon which exploded into a much larger (and much louder) fireball.
The problem is not that it's finite, the problem is that by the time prices rise enough to discourage people from using it frivolously, you might already be dangerously low on it.
This is a really interesting question. Is it? My intuition would say no since you have no inherent duty to protect or help others. I have no clue though.
There are alternatives to oil for energy, a lot of them. Helium is unique in its place in the universe for the properties it possesses as an element. And once it's gone, it's gone. Hydrogen is similar but extremely flammable, whereas helium is inert.
Helium could be made with nuclear fusion, but a 1-gigawatt fusion plant would only produce about 200 kg of helium per year, so it's still not a viable path to making the quantities of helium we currently use. Current usage is almost 30 million kg per year.
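For what it's worth, the 200 kg/GW-year figure roughly checks out on the back of an envelope. A quick sketch - the D-T reaction and the ~35% thermal-to-electric efficiency are my own assumptions, not from the comment:

```python
# Sanity check of "~200 kg of helium per year from a 1 GW fusion plant",
# assuming D-T fusion and a plant rated at 1 GW *electrical* output with
# ~35% thermal-to-electric efficiency (both assumptions are mine).

MEV_TO_J = 1.602e-13                 # joules per MeV
E_PER_REACTION = 17.6 * MEV_TO_J     # D + T -> He-4 + n releases 17.6 MeV
HE4_MASS_KG = 4.0026 * 1.6605e-27    # mass of one helium-4 atom in kg

SECONDS_PER_YEAR = 3.156e7
electric_watts = 1e9
efficiency = 0.35

thermal_joules_per_year = electric_watts / efficiency * SECONDS_PER_YEAR
reactions_per_year = thermal_joules_per_year / E_PER_REACTION  # one He-4 each
helium_kg_per_year = reactions_per_year * HE4_MASS_KG

print(round(helium_kg_per_year))  # roughly 200 kg
```

Against the ~30 million kg/year we consume, that's about five orders of magnitude short, which is the comment's point.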
The helium that goes into balloons is mostly a byproduct of industrial grade helium production that would otherwise just go to waste. It's not pure enough for industrial uses.
You could always purify it, it's just uneconomic to do so at a smaller scale. But if the price rises enough, that will change and no one will be using helium for party balloons.
This one might last longer. The AI race is on, and the US is trying its best to make it as expensive as possible for China to participate in it. Every dollar China spends on GPUs they get at a markup is one not spent on building navy ships.
If there is an escalation over Taiwan, then that will cause the loss of most of the world's high-grade chip manufacturing capacity. TSMC is busy doing technology transfers into the US, but it is going to take time, those fabs won't have capacity for the whole world, and they still heavily depend on Taiwan-based engineers when something goes wrong, etc.
Just like with COVID you don't know how long this shortage will last.
It will be incredibly hard for China to conquer Taiwan. One hundred kilometers across the straits introduces a brutal geographic hurdle. If anything, the fabs will probably be severely damaged in the war. Plus most senior execs and elite engineers would be moved to US offices in Arizona.
We are going to have that in a couple of months now regardless. So it won't matter if Taiwan's manufacturing base gets disrupted; the supply of new hardware will have already effectively stopped.
Wow, I wasn't aware Samsung, Intel, and SMIC were unable to produce "modern technology." Not everything needs to be on a 3nm TSMC process, believe it or not.
TSMC makes a lot of stuff besides the EUV-scale parts that all the YouTube videos talk about.
Almost everything you own that runs on electricity has some parts from Taiwan in it. TSMC alone makes MEMS components, CMOS image sensors, NVRAM, and mixed-signal/RF/analog parts to name a few.
Also, people seem to assume that TSMC is an autonomous entity that receives sand at one loading dock and ships wafers out at another. That's not how fabs work. Their processes depend on a continuous supply of exotic materials and proprietary maintenance support from other countries, many of them US-aligned. There is no need to booby-trap any equipment at TSMC; it will grind to an unrecoverable halt soon after the first Chinese soldier fires a rifle or launches a missile.
Hopefully Xi understands that. But some say it's a personal beef/legacy thing with him, and that he doesn't even care about TSMC.
Russia wasn't able to take Ukraine even when they were able to just drive their tanks right up to Kyiv. Modern warfare tech just favors the defender too much. China has ninety km of sea to cross before they even get to Taiwan. Missiles and drones have already taken out much of the Russian naval fleet in the Black Sea. China will be losing a lot in the same way if they ever attempt the crossing.
That's what happens when consumer demand rapidly shifts, and businesses start panic-buying and panic-cancelling. As far as I recall, actual chip fab output didn't really change that much.
I asked ChatGPT about this. It says the root cause was the demand collapse at the start of COVID. Fabs stopped producing the many low-end chips required for modern cars and retooled/pivoted to higher-end chips. When the auto manufacturers came back knocking after COVID, the fabs didn't want or need their low-end chip business.
Moore's law only really works when at least part of the world is functioning under practically ideal conditions. Right now that's far from what's happening.
Finally, good efficient code is going to get its moment to shine! Which will totally happen because it's not like 80% of the industry is vibe coding everything, right?
Yeah, I got the AI to convert some code that ran at 30fps in Javascript to C, and it resulted in a program that generated 1 frame every 20 seconds. Then I told it to optimize it, and now it's running at 1 fps. After going back and forth with the AI for hours, it never got faster than 1 fps. I guess I'm "doing it wrong" as the hypesters like to tell me.
> Yeah, I got the AI to convert some code that ran at 30fps in Javascript to C, and it resulted in a program that generated 1 frame every 20 seconds. Then I told it to optimize it, and now it's running at 1 fps. After going back and forth with the AI for hours, it never got faster than 1 fps. I guess I'm "doing it wrong" as the hypesters like to tell me.
Remove the "I actually only want a slideshow" instruction from your prompt :-)
Speedrunning Super Mario World with neural nets is weirdly effective, though. I guess you need a genetic algorithm to refine different approaches rather than a neural net.
Honestly speaking, it has started to look like AI coders could actually do a better job than 80% of app developers at writing efficient apps, just by being set to adhere to best-practice programming conventions by default (notwithstanding their general tendency to try to be too clever instead of writing clear, straightforward code).
This is my theory: we're going to see a lot of languages with straightforward and obvious semantics, high guard rails, terrible DX, and great memory-allocation and performance behavior out of the box. Assembler or worse, but with extremely strong typing bolted on in a way that no human would ever tolerate; basically, something in that vein.
Yeah, actually, I worked with Pascal early in my career and that's kind of the vibe I'm thinking of, though maybe with a stronger, more Ada-esque type system (composite, partial, and range-and-domain types, all that jazz).
I vibe coded a library in Nim the other day (a language I view very much as a spiritual continuation of the Pascal/Modula line), complete with a C ABI.
The language has well-defined syntax and strong types, and I turned the compiler strictness up to the max, treating all warnings as errors, etc. After a few hours I put the agent aside, committed to git, then deleted everything and hand-coded some parts from scratch.
I then compared the results. I found one or two bugs in the AI code, but honestly, the rest of our differences were “matters of taste” (is a helper function actually justified here or not, that kind of thing).
>I feel like for the first time in our lives we might have seen peak technology for the next few years.
This happened for a while with CPUs in 2004 or 2005, IIRC. At the end of the Pentium 4 era clock speeds and TDPs were so high that we hit a wall. Nobody was pushing past 4 GHz even with watercooling (I tried).
Dual-core processors were neither widely available nor mainstream yet, and those that were available had much lower clock speeds. It definitely felt like we hit a lull, or a stagnation, in those years. It picked back up with a fury when Intel released the Core 2 Duo in 2006, though.
Helium is almost all captured from gas wells by cryogenically liquefying the nitrogen out of it. I guess you could technically do that with the fab's air, but it is a LOT of air to liquefy and would likely cost more than even inflated helium prices.
Most helium from most wells is simply vented because it is expensive to separate even with its relatively high concentration, and I imagine even the best-case scenario for capturing it from a fab has an abysmal concentration of helium. But because most of it is vented, it also means that if the capital is put down to build more helium separators on gas wells, it wouldn't take long to increase supply. Short term, for a year or two, it can be a problem, but beyond that it is simply a cost-versus-demand issue. There is neither a technological nor a source limitation; it is purely a capital-investment limitation.
> Helium is almost all captured from gas wells by cryogenically liquefying the nitrogen out of it.
This is wild. I never thought about how they separated gases from natural gas fields. The carbon footprint of each kg of that helium must be astonishingly large.
> Most helium from most wells is simply vented because it is expensive to separate even with its relatively high concentration
I remember a similar situation with neon early in the Ukraine invasion a few years ago. What I expect to happen is some other source coming online that currently doesn't try to capture it for economic reasons.
Helium recovery in scientific settings for cost saving reasons is already done, so it's not like there isn't expertise in using it.
Helium is actually pretty hard to keep ahold of, being a very light and small noble gas. It can diffuse through a surprising amount of materials, flow through far smaller cracks than you would expect, and is quite hard to filter out of a mixture of gases.
Also, superfluid helium (a big chunk of the helium used for refrigeration, e.g. in the LHC) has the weird property of flowing at the same speed through a tiny hole as through a large one, and of coating everything with a molecular film. Superfluid helium is basically a Bose-Einstein condensate at macro scale; totally counterintuitive. Essentially a thermal superconductor, with zero viscosity.
AFAIK they recapture most, but recapturing all simply isn't possible / financially feasible. And they use a lot of helium, so even if they capture most of it, the losses are still higher than the currently available supply.
I love this, the perfect antidote to all the stupid startup-bro grind bullshit posts.
You put in real work to understand the business landscape and typical pain points. With AI, implementing solutions has become much easier but knowing what the problems are and how to solve them hasn't.
Many British people and Australians, even though our eggs are sold at room temperature and unwashed. I don't know why, but for most of us it 'feels wrong' to store eggs anywhere else.
> The authors find that height cannot, in fact, be used to predict changes in GDP. However, GDP can be used to predict changes in height. In other words, the study finds that extreme height is driven by rapid economic growth, but that height cannot be used as an indicator of recessions
AWS actually hosts the models. Security and isolation are part of the proposed value proposition for people and organizations that need to care about that sort of thing.
It also allows for consolidated billing, more control over usage, being able to switch between providers and models easily, and more.
I typically don’t use Bedrock, but when I have, it’s been fine. You can even use Claude Code with a Bedrock API key if you prefer.
I’ve been using Claude Code with Bedrock for the last few weeks and it’s been pretty seamless. The only real friction is authenticating with AWS prior to a session.
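For anyone curious, the setup is roughly this shape. The env var names below are from my recollection of Anthropic's docs, and the profile name and region are placeholders, so double-check against the current documentation before relying on it:

```shell
# Authenticate with AWS first (SSO shown here; any credential method works).
aws sso login --profile my-bedrock-profile

# Tell Claude Code to route requests through Bedrock instead of the
# Anthropic API. The profile and region are illustrative placeholders.
export CLAUDE_CODE_USE_BEDROCK=1
export AWS_PROFILE=my-bedrock-profile
export AWS_REGION=us-east-1

claude
```

The auth step is the friction mentioned above: SSO tokens expire, so you periodically have to re-run the login before starting a session.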
Bedrock runs all their stuff in-house and doesn’t send any data elsewhere or train on it, which is great for organizations that already have data-governance sign-off with AWS.
Anecdotally, I think this is in Claude Code. It's pretty frequent to see it implement something, then declare it "forgot" a requirement and go back and alter or add to the implementation.
Worth noting that Kp, which many talk about in discussions online, is more or less useless for anyone in Australia or the southern hemisphere. Lots of beginner aurora chasers here get tripped up by that.
What is useful is KAus and the G index. KAus is shown on this page, so that's what I'll be tracking.
Keep in mind that anyone posting on a forum (just like this one), or even more so anyone blogging about something, already represents a huge selection bias toward people who believe their opinion needs to be shared.
You don't hear from all the people who don't feel that others must know their opinion.
Lurkers always outweigh posters.
Don't ever make the mistake of believing that a sample of posts is a sample of people.
Humans are tribal, which has both benefits and costs.
In technology, the historical benefits of evangelizing your favorite technology might just be that it becomes more popular and better supported.
Even though LLMs may or may not follow the same path, if you can get your fellow man on-board, then you'll have a shared frame of reference, and someone to talk through different usage scenarios.
You need to be fairly smart to be in tech. People who grew up smart and were told so tend to view it as part of their self-worth. If someone disagrees with such a person later on, their self-worth has been attacked, so of course they are going to lash out.
The worst thing you can say to a dev is they are wrong. Most will do everything in their power to prove otherwise, even on the dumbest of topics.
He’s the top comment on every AI thread because he is a high-profile developer (he co-created Django) and now runs arguably the most information-rich blog that exists on the topic of LLMs.
That’s not really reasonable to assume at all. Five minutes of research would give you a pretty strong indication of his character. The dude does not need to self-aggrandize; his reputation precedes him.
Perhaps. But perhaps this era of AI slop leaves a foul taste in many people’s mouths. I don’t know the reputation; all I see is somebody who felt the need to AI-generate a picture and post it on HN. This is slop, and I personally get bad vibes from people who post AI-generated slop, which leaves me with all sorts of assumptions about their character.
To clarify: they are here to have fun, they liked the joke about cow-ork (which I did too, it was a good joke), and they had an idea for how to build on that joke. But instead of putting in a minor effort (like 5 minutes in Inkscape), they wrote a one-sentence prompt to nano-banana and assumed everybody would love it. Personally, I don’t.
If you can draw a cow and an ork on top of an Anthropic logo with five minutes in Inkscape in a way that clearly captures this particular joke then my hat is off to you.
I'm all in on LLMs for code and data extraction.
I never use them to write text for my own comments on forums or social media or my various personal blogs - those represent my own opinions and need to be in my own words.
I've recently started using them for some pieces of code documentation where there is little value to having a perspective or point of view.
My use of image generation models is exclusively for jokes, and this was a really good joke.
This really is unnecessarily harsh. As someone who's been reading Simon's blog for years and getting a lot of value from his insights and open source work, I'm sad to see such a snap dismissive judgement.
"all sorts of assumptions about [someone's] character" based on one post might not be a smart strategy in life.
I'd say it is necessarily harsh. It is not as if Simon's opinions on AI were really better than those of others here who are as technical as he is.
He is prolific, and being at the top of every HN thread is what makes him look like a reference, but there are 50+ other people saying interesting things about AI who are not getting the attention they deserve, because in every top AI thread we are discussing a pelican riding a bike.
He very obviously disclosed that he had nano banana generate the logo. Using AI to boost himself is a different animal altogether. (The difference is lying)
This is the Internet. Everyone here is an AI running in a simulator like the Matrix. How do I know you're not an AI? How do you know I'm not? I could be! Please, just use an em—dash when responding to this comment to let me know you're an AI.