Is that why he's talking up Dyson spheres in interviews? The guy is a lunatic and a con man, either completely insane or evil, no other option. Here's the stupid quote:
Sam Altman: I do guess that a lot of the world gets covered in data centers over time.
Theo Von: Do you really?
Altman: But I don’t know, because maybe we put them in space. Like, maybe we build a big Dyson sphere around the solar system and say, “Hey, it actually makes no sense to put these on Earth.”
Why is that wrong? If, like Altman, you think that energy is the bottleneck to intelligence, and that social and economic power grows with intelligence, then predicting that intelligence will optimize for energy collection seems reasonable. It isn't evil to predict that. And if he is insane to predict it, then I must be insane for not dismissing it.
Cassandra wasn't evil or crazy, she just had bad news.
But there is no proof that more energy == more intelligence. In some areas I am smarter than the best ChatGPT model, and my energy source is Taco Bell double deckers. Clearly there is a lot of low-hanging fruit for efficiency before you need to encompass the entire sun and suck it dry of energy. It's an absurd thing to suggest. It's exactly the type of thing a con man would suggest: something cool, fantastic, and completely impossible to actually implement.
> you could do ca. 1200 prompts per double decker. Which is a lot.
If humans had evolved to do prompts (while retaining everything else that makes human thought human), that number doesn't sound that big.
OTOH, if LLMs had to do everything humans need energy for, that number would be waaay too big for LLMs.
----
Humans don't even have an efficient energy-input system. How many of those 1.3M joules actually get assimilated? Silicon is far more energy-efficient, because a lot of effort has been put into making it so, and it is fed raw energy. It doesn't need to process food the way humans do; humans already did that for it when they captured the energy as electricity.
----
I'm sure there are more ways of making the comparison fairer, but I doubt your parent was trying to prove their claim with such deep research. So let me try another angle: no human can burn through as much energy as the top hosted LLMs do for one prompt, in the same amount of time.
A million people like you can do a million times more things that require intelligence†, and would consume a million times more Taco Bell double deckers. Ergo, in that limited sense, more energy == more intelligence.
"Over time" clearly means that he's talking about the far future, not the next few years when they're cleaning up the low-hanging fruit. Rather than "absurd ... fantastic and impossible" I would describe the Dyson-sphere outcome as inevitable, unless humanity goes extinct within a few centuries. Maybe you thought he meant next March?
In https://epoch.ai/gradient-updates/how-much-energy-does-chatg..., Josh You, Alex Erben, and Ege Erdil estimate that a typical GPT-4o query uses 0.3 watt-hours. The estimate assumes it's a 400-billion-parameter MoE model in which ¼ of the parameters are activated for a given query, so each token requires 200 gigaflops (100 billion multiplies and 100 billion adds, I guess); that a typical query produces 500 output tokens (about a page of text); and that it runs on 1-petaflop/s H100 GPUs at a utilization rate of 10%, with the overall cluster consuming 1500 watts per GPU at peak and averaging 70% of peak power. This works out to 1.05 kilojoules, or about 0.25 kilocalories, the amount of food energy in one gram of carbohydrates.
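The estimate is easy to reproduce from those assumptions. A quick sketch (the variable names are mine; the numbers are the ones You, Erben, and Erdil use):

```python
# Back-of-envelope check of the Epoch AI per-query energy estimate.
flops_per_token = 200e9       # 100B active params, 2 FLOPs each
tokens_per_query = 500        # about a page of output text
peak_flops_per_gpu = 1e15     # H100, ~1 petaflop/s
utilization = 0.10            # 10% of peak actually achieved
watts_per_gpu_peak = 1500     # cluster power per GPU at peak
avg_power_fraction = 0.70     # cluster averages 70% of peak power

# Time one GPU spends on one query at 10% utilization:
seconds_per_query = (flops_per_token * tokens_per_query
                     / (peak_flops_per_gpu * utilization))
# Energy = average power * time:
joules_per_query = watts_per_gpu_peak * avg_power_fraction * seconds_per_query
kcal_per_query = joules_per_query / 4184  # joules per food kilocalorie

print(seconds_per_query)           # ≈ 1.0 s per query
print(joules_per_query)            # ≈ 1050 J, i.e. 1.05 kJ (≈ 0.3 Wh)
print(round(kcal_per_query, 2))    # ≈ 0.25 kcal
```

One second of one GPU's (amortized) power per query; the 0.3 Wh figure is just 1050 J / 3600.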
So a 320-kcal Double DeckerⓇ Taco™ works out to 1280 GPT-4o queries answered, and a standard 2000 kcal/day diet works out to 8000 GPT-4o queries per day, if we believe You, Erben, and Erdil's estimate. For someone who is looking for GPT-4o-quality output: if you are producing fewer than 8000 pages of writing per day, you are less energy-efficient than GPT-4o.
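Taking the per-query figure at face value (rounded to 0.25 kcal, as above), the taco arithmetic is just division:

```python
# Food-energy comparison: queries per taco and per daily diet,
# using the rounded 0.25 kcal/query figure from the Epoch AI estimate.
kcal_per_query = 0.25
taco_kcal = 320            # Double Decker Taco
diet_kcal_per_day = 2000   # standard daily diet

queries_per_taco = taco_kcal / kcal_per_query
queries_per_day = diet_kcal_per_day / kcal_per_query
print(round(queries_per_taco))   # 1280
print(round(queries_per_day))    # 8000
```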
______
† although there's no guarantee they would, and it's not obvious that they would be a million times more intelligent, or even 10% more intelligent; they might just post a million times more poorly-thought-out sarcastic comments on forums, due to their poor impulse control, resulting in long, pointless arguments where they insult each other and dismiss obviously correct ideas as absurd because they come from people they hate
If he really thinks the shortest path to building a synthetic brain is to build an entire Dyson sphere, I would submit that his bottleneck is the algorithm, not energy.
Because we are talking about likely outcomes, not optimizing for one thing to the exclusion of all else. Even if AGI is right around the corner (which is a pretty low-percentage bet these days), cost alone would make such an outcome unlikely. Altman knows this, but being reasonable rarely sells.