Hacker News

How do you reason about the energy consumption/climate impact of feeding the same question to three models? I'm not saying there is a clear answer here; I'd just be interested to hear your thinking.


How much energy does an AI model use during inference versus a human being?

This is a rhetorical question.

Sure, we aren't capturing every last externality, but optimization of large systems should be pushed toward the creators and operators of those systems. Customers shouldn't have to validate environmental impact every time they spend $0.05 to use a machine.


I actually did the math on this some time last year, for GPT-4 or so, attempting to derive a per-user energy figure. Based on known data, LLM training used many hundreds of times the energy that agriculture and transport consume to feed a human doing equivalent mental work. Inference was much lower. But the climate critique of AI doesn't distinguish between the two.


100x more inefficient than a human on food alone is actually pretty efficient. Consider that humans in the developed world spend far more energy on heating/AC, transportation, housing, lawn care, refrigeration, washers and dryers, etc., so an LLM can probably come out several times more efficient overall.

I don't really understand the critique of GPT-4 in particular. GPT-4 cost more than $100 million to train, but likely less than $1 billion. Even if they pissed out $100 million in pure greenhouse gases, that would be a drop in the bucket compared to, say, 1/1000 of the US military's contribution.


That sounds on the low side?

Does that "hundreds" include the cost of training one human to do the work, or enough humans to do the full range of tasks that an LLM can do? It's not like-for-like unless it's the full range of capabilities.

Given the training gets amortised over all uses until the model becomes obsolete (IDK, let's say 9 months?), I'd say details like this do matter — while I want the creation to be climate friendly just in its own right anyway, once it's made, greater or lesser use does very little:

As a rough guess, let's say that any given extra use of a model is roughly equivalent to turning its API cost into kWh of electricity. So, at an energy cost of $0.10/kWh, GPT-4.1-mini currently works out to about 62,500 tokens per kWh.

IDK the typical speed of human thought (and it probably doesn't map well to tokens), but as a rough guide, I think most people would take around 3 hours to read a book of that length. That means a model burning electricity at about 333 W matches the performance (speed) of a human, whose biological requirement averages 100 W. Except that 100 W is what you get from dividing 2065 kcal by 24 hours, and humans not only sleep but object to working all waking hours 7 days a week. A 40-hour work week is about a quarter of the 168 hours in a week, so those 3 hours of wall-clock work come with about 9 hours of down-time, turning the requirement for 3 hours of work into 12 hours of calories, or the equivalent of 400 W.
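The wattage comparison above can be sketched as a quick back-of-envelope check (the 2065 kcal/day and 3-hour figures are this comment's assumptions, not measured data):

```python
# Model: 1 kWh buys ~62,500 tokens; a human reads that in ~3 hours.
model_watts = 1000 / 3  # 1 kWh spread over 3 h, in watts

# Human baseline: 2065 kcal/day converted to continuous watts.
# 1 kcal = 4184 J; 1 day = 86,400 s.
human_base_watts = 2065 * 4184 / 86_400

# A 40-hour week is ~1/4 of 168 hours, so 3 h of work
# "costs" ~12 h of calories.
human_effective_watts = human_base_watts * 12 / 3

print(round(model_watts), round(human_base_watts), round(human_effective_watts))
# → 333 100 400
```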

But that's for reading a book. Humans could easily spend months writing a book that size, so an AI model good enough to write 62,500 useful tokens could cost the equivalent of 2 months * 2065 kcal/day = 144 kWh, around $14.40 at $0.10/kWh, i.e. in the $230/megatoken price range, and still be more energy efficient than a human doing the same task.
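The writing-a-book comparison works out the same way, under the same assumed figures (2 months at 2065 kcal/day, $0.10/kWh, 62,500 tokens per book):

```python
KWH_PER_KCAL = 4184 / 3.6e6           # 1 kcal ≈ 0.00116 kWh

human_kwh = 60 * 2065 * KWH_PER_KCAL  # 2 months of calories, in kWh
human_cost = human_kwh * 0.10         # at $0.10/kWh
per_megatoken = human_cost / 0.0625   # 62,500 tokens = 0.0625 Mtok

print(round(human_kwh, 1), round(human_cost, 2), round(per_megatoken, 1))
# → 144.0 14.4 230.4
```

So any model priced below roughly $230/megatoken would beat the human on food energy alone for this task, by this estimate.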

I've not tried o3*, but I have tried o1, and I don't think o1 can write a book-sized artefact that's worth reading. But well-architected code isn't a single monolithic function with global state the way a book can be: you can break everything down usefully, and if one piece doesn't fit the style of the rest it isn't the end of the world. So it may be fine for code.

* I need to "verify my organisation", but also I'm a solo nerd right now, not an organisation… if they'd say I'm good, then that verification seems not very important?


I totally agree that the environmental cost SHOULD be pushed towards the creators, but as long as that doesn't happen, is the moral thing as a consumer to just carry on using it? This is not a rhetorical question.

Transporting something by car using fossil fuel usually uses less energy than a human doing the same thing by hand; that doesn't mean fossil fuel is environmentally friendly. LLMs do not decrease the population even if they can do human tasks. If an LLM is used for the good of humanity it is probably a win, but obviously a lot of AI use is not.

I use LLMs as well. I'm just saying, I don't think it is a totally strange question to ponder the energy use of different LLM use cases.


The same way you might reason about the climate impact of having a YouTube video playing in the background, I expect.


I don’t have nearly a luxurious enough life for that to be a blip on my radar of concerns.


Likely how you reason about driving to the beach or flying to a vacation destination. Or playing a game in 4k high quality with ray tracing turned on.


Yeah, I try to avoid those things as much as possible. Eight years since I was last on an airplane. I'm not sure how that is relevant to my question?


It's a tough question and I do things the same way.

I feel like we are in an awkward phase of "we know this has severe environmental impact, but we need to know if these tools are actually going to be useful and worth adopting", so it seems like keeping the environmental question at the forefront will be important as things progress.


I agree, we are totally in an exploratory phase. I think it's always a little scary when so much money and so many big companies are involved, though.


I don’t think about it at all. When they cost money we’ll care more about it.

Until then, the choice is being made by the entities funding all of this.




