> Weather forecasts have always been an inexact science
Weather forecasting is anything but "an inexact science." It's extremely exact up to the limitations and assumptions you impose on your model due to resource constraints.
And yes - I assume that this is what you mean by "an inexact science." But even in 2026 I regularly meet people who think that weather forecasting is the same as astrology, completely ignorant of the massive amount of physical scientific understanding that goes into it.
> Weather forecasting is anything but "an inexact science." It's extremely exact up to the limitations and assumptions you impose on your model due to resource constraints.
It's "extremely exact" but our models are not good enough. So... inexact?
The reality is that we don't have the technology to model the physical world with extreme accuracy. If we did, we would be able to predict the future, and not just weather events. The world's most powerful supercomputers can model atmospheric conditions pretty well, and they've certainly improved over time, but there are still a lot of variables unaccounted for.
This is why I think that ~90% accuracy for a few days in advance[1] is good enough for most people. A smartphone app won't miraculously make this better, no matter how pretty or "fun" it is.
> It's "extremely exact" but our models are not good enough. So... inexact?
That's not the common way that the phrase "inexact science" is used. All modeling involves approximations at some levels, but you wouldn't turn around and call it "inexact science."
> ... but there are still a lot of variables unaccounted for.
Such as... ?
This is the problem with throwing around colloquialisms like "inexact science." What, specifically, is a "variable" that is unaccounted for, and whose inclusion would unlock improved forecast accuracy or push forecasts closer to the predictability limits?
> This is why I think that ~90% accuracy for a few days in advance[1] is good enough for most people. A smartphone app won't miraculously make this better, no matter how pretty or "fun" it is.
I agree, which is why the other portions of your comment come off poorly.
> No. But I'd suspect a tabula rasa approach to weather–particularly given it hasn't been rolled out globally in one go–incorporates satellite data, local measurements, et cetera.
There most likely won't ever be such an effort - even in companies that are targeting verticalization of the "weather supply chain" (proprietary observations + models + decision support tools) - if only because it would be utterly foolish to exclude the vast amounts of data collected by government agencies and the wide variety of players in the weather enterprise. At best, verticalized weather companies can produce niche value over baseline from the single modality of proprietary data they collect.
The infrastructure for observing and forecasting the weather is incredibly sophisticated, and has been evolving for about 150 years at this point. The quality of contemporary numerical weather prediction likely doesn't leave much headroom towards the threshold of fundamental physical limitations on predictability. This is why there are groans and eye rolls from the weather community each time a new player steps forward with yet-another-AI-model-trained-on-ERA5-reanalysis and boasts some comically small improvement in average forecast skill.
With all that being said, there's likely an exciting frontier opening up as the AI models push towards encompassing data assimilation. But the applications that start to become extremely interesting there won't have any noticeable impact on average forecast quality for your typical weather app.
> it would be utterly foolish to exclude the vast amounts of data collected by government agencies
Never suggested this. You use the government data. And you supplement with specialist sources. If you’re near any avalanche areas, for example, your snow forecasts typically have an additional layer of resolution available if you know where to look.
> Do any feel-like estimates take cloud cover into consideration?
No, usually not, because they're usually just simple toys combining a heat index and wind chill scale.
There _is_ an official metric used for estimating heat stress that accounts for cloud cover - the Wet-bulb Globe Temperature (e.g. https://www.weather.gov/tsa/wbgt). This is what is used, for instance, in literature analyzing the impact that future climate change might have on heat stress and mortality risk during heat waves. It's also used by some professional sports programs to monitor risk for crowds and athletes, as well as commonly used by OSHA and other regulatory agencies looking at worker exposure to heat hazards.
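For the curious, the standard outdoor WBGT is just a weighted average of three temperature readings. A minimal sketch in Python (the example inputs are made-up illustrative values; the black-globe reading is what carries the solar/cloud-cover signal):

```python
def wbgt_outdoor(t_wet_bulb, t_globe, t_dry_bulb):
    """Outdoor Wet-Bulb Globe Temperature (deg C).

    Standard weighting: 0.7 natural wet-bulb + 0.2 black-globe + 0.1 dry-bulb.
    The black-globe term captures radiant load, which is how cloud cover
    and direct sun enter the metric.
    """
    return 0.7 * t_wet_bulb + 0.2 * t_globe + 0.1 * t_dry_bulb

# a sunny afternoon: the globe temperature runs well above the air temperature
print(wbgt_outdoor(25.0, 45.0, 32.0))  # ~29.7 C
```

Note that the wet-bulb term dominates, which is why humid days are so much more dangerous than the dry-bulb temperature alone suggests.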
It would just be regurgitating information from numerical forecast models.
If you're in the Northeast and have questions about the significant winter storm that is impending, please check out the National Weather Service's forecast and decision support materials for your community on www.weather.gov.
He has a knack for being scarily prescient. I didn't expect we would seriously be discussing geoengineering in the 2020s (I gave it until at least the early 2030s, given the technical complexity of building the actual delivery system for any planet-scale intervention), but here we are.
> Wait... So, to undo it all we have to do is stop doing it? Doesn't this contradict the statement right before it?
It's not quite that simple.
The intuition that you're subtly relying on is the idea that the response or effect of one of these geoengineering treatments is linear. But unfortunately, that's not something you can assume about a dynamical system. In reality, the climate system can undergo certain types of hysteresis where "undoing" the forcing doesn't revert the initial perturbation, because you're suddenly on a different response curve. Probably the most famous example of this in climate dynamics is the way that the ice-albedo effect sets up a hysteresis in the trajectory towards a "snowball Earth" scenario. Apologies for the lack of links/references; Wiki has decent write-ups on this, and it's typically covered in the first chapter of a climate dynamics textbook.
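If you want to build intuition for this, hysteresis falls out of even a toy zero-dimensional energy balance model with an ice-albedo feedback. A minimal sketch (all parameter values are illustrative round numbers, not tuned to the real Earth):

```python
STEFAN = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
EPS = 0.61         # toy effective emissivity
HEAT_CAP = 1e8     # toy surface heat capacity (J m^-2 K^-1)

def albedo(T):
    # crude ice-albedo feedback: fully icy below 260 K, ice-free above 290 K
    if T < 260: return 0.6
    if T > 290: return 0.3
    return 0.6 - 0.3 * (T - 260) / 30

def equilibrate(S, T, steps=20000, dt=86400):
    # march the energy balance to equilibrium for solar constant S
    for _ in range(steps):
        imbalance = S / 4 * (1 - albedo(T)) - EPS * STEFAN * T**4
        T += imbalance * dt / HEAT_CAP
    return T

# same forcing, different histories -> different stable equilibria
cold = equilibrate(1500, 230.0)  # arrived from a "snowball" state
warm = equilibrate(1500, 310.0)  # arrived from an ice-free state
print(round(cold), round(warm))  # two distinct states, ~257 K vs ~295 K
```

The point is that at the same forcing, where you end up depends on where you came from - exactly the property that makes "just undo the intervention" a non-trivial claim.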
The potential response to suddenly stopping a climate change mitigation strategy has a very well-popularized name: a "termination shock." In fact, Neal Stephenson built his 2021 novel of the same name around exactly this concept.
As a climate scientist, my mental model for the risk of termination shocks and unintended consequences boils down to how fast the response of the climate system is. Marine cloud brightening is "less risky" because the meteorological response to these interventions is extremely fast; a cloud-precipitation system will respond on the order of minutes to hours, and unless the intervention continues unabated, it will clean the air quickly, limiting the response. Stratospheric aerosol injection is more complicated, but we have a very good analogue - very large scale volcanic eruptions like Mt Pinatubo. The response to these sorts of events is measured more on the timescale of 2-5 years, although knock-on effects (such as a shift towards more diffuse solar radiation reaching the surface, which has significant effects on terrestrial and oceanic biogeochemistry) could very much persist longer than that - and don't "snap back" nearly as quickly. A continuous, Pinatubo-like intervention would compound and introduce coupled atmosphere/ocean responses that could take a decade or longer to fully play out. And that's _in addition_ to the near-immediate (1-2 year) response in global average temperature, which would bounce back to most of the pre-intervention level very quickly.
These things are complex. There's a lot we don't know. But, there's also a lot we _do_ know. I would encourage anyone who does not have significant experience in climate dynamics to remain curious and avoid jumping to conclusions based on simple intuition; they're probably wrong.
Thank you for this response. Of those that replied to me, yours seems the most balanced and scientific, and the one I learned the most from. I wish people engaged on HN like you have here more often.
Given your expertise in this, I'm curious what your take is on CO2 capture, not in terms of economic viability, but in terms of climate risk...
For example, if we were to discover a process that removed CO2 from atmosphere and converted it into a product profitably such that there was an economic incentive/positive feedback loop to remove CO2.
My intuition is that if we removed the CO2 too quickly or too much of it we may have unwanted consequences, but if the rate was managed and we slowed down and stopped at a certain equilibrium, would this be a theoretically ideal way to address the problem?
First, what is "too quickly" with reference to CO2 removal from the atmosphere? At present, human civilization emits over 40 gigatons - or 40 trillion kilograms - of CO2 per year. And that increases the atmospheric burden by about 2.5 parts per million per year. So today, before you even start _reducing_ atmospheric CO2, you need to be sucking down at least 40 trillion kilograms of CO2 per year. I struggle to imagine a scenario outside of total science fiction where that would be remotely possible.
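You can sanity-check those numbers with the standard conversion factors (about 2.13 GtC per ppm of atmospheric CO2, and an airborne fraction of roughly half):

```python
# back-of-envelope check on the emissions-to-ppm numbers above
GT_C_PER_PPM = 2.13       # gigatons of carbon per ppm of atmospheric CO2
CO2_TO_C = 12 / 44        # mass fraction of carbon in a CO2 molecule

emissions_gt_co2 = 40     # annual anthropogenic CO2 emissions, Gt
gross_ppm = emissions_gt_co2 * CO2_TO_C / GT_C_PER_PPM
airborne_fraction = 0.48  # roughly half is absorbed by ocean/land sinks
net_ppm = gross_ppm * airborne_fraction

print(f"{gross_ppm:.1f} ppm emitted, ~{net_ppm:.1f} ppm stays airborne per year")
```

The ~2.5 ppm/yr figure is the net number after natural sinks take up roughly half of what we emit - which also means a removal scheme is racing both our emissions and the eventual re-equilibration of those sinks.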
Second, the equilibrium climate response to changes in greenhouse gas forcing takes on the order of decades or centuries to realize. This is because the dynamics of the climate system are heavily buffered. For example, the ocean acts as a giant heat capacitor that slowly interchanges with the atmosphere. Were you to instantaneously halve the CO2 in the atmosphere, you'd likely see a pretty classic exponential decay in global average temperature (and other more nuanced responses); in the present climate, it's not clear that we have passed any specific "tipping points" that would induce the hysteresis I described in the previous comment (in fact - one could read "climate tipping point" as a synonym for dynamical system hysteresis). Theoretically, one could try to "dial in" some particular equilibrium climate state, but it's not obvious over what timescale you'd have to intervene and what the cost of such an intervention would be.
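The "heat capacitor" picture can be made concrete with a one-box energy balance model. A sketch with illustrative round-number parameters (a real response also has much slower deep-ocean modes, which is where the decades-to-centuries part comes from; this toy model only captures the fast mixed-layer decay):

```python
import math

# one-box sketch of the temperature response to instantly halving CO2
LAM = 1.2                 # climate feedback parameter (W m^-2 K^-1), assumed
C = 4.2e8                 # ~100 m ocean mixed layer heat capacity (J m^-2 K^-1)
F = 5.35 * math.log(0.5)  # forcing from halving CO2: about -3.7 W m^-2

tau_years = C / LAM / 3.15e7  # e-folding timescale in years

def dT(t_years):
    # classic exponential relaxation toward the new equilibrium F / LAM
    return (F / LAM) * (1 - math.exp(-t_years / tau_years))

print(f"equilibrium cooling: {F / LAM:.1f} K, e-folding time: {tau_years:.0f} yr")
```

With these (assumed) numbers the mixed layer alone gives roughly a decade-scale e-folding; coupling to the deep ocean stretches the tail of the response out by an order of magnitude or more.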
The cool thing is none of this needs to be purely "theoretical." You could simulate all of this _today_ if you had a setup to run a global climate model. A "4X CO2" experiment where you branch from an equilibrium spin-up climate state and immediately apply a global quadrupling of CO2 has been a completely standard experiment as part of CMIP for over two decades. The opposite experiment is an established protocol for both the Carbon Dioxide Removal Intercomparison Project [1], which features an annual ramp down of CO2 at a 1% per year rate, and the Cloud Feedback Model Intercomparison Project [2], which features a more direct counterpart, with an abrupt decrease of atmospheric CO2 by 50%. There is a large body of literature discussing the results of these classes of experiments, but this is outside of my primary research focus and I can't turn you to particularly good papers off-hand. But they're easy enough to find.
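For reference, the forcing behind these experiment designs comes from the widely used logarithmic CO2 approximation (coefficient 5.35 W/m^2 from Myhre et al. 1998), which also tells you how long the CDRMIP-style 1%/yr ramp-down takes to halve atmospheric CO2:

```python
import math

def co2_forcing(ratio):
    # standard logarithmic approximation for CO2 radiative forcing (W m^-2)
    return 5.35 * math.log(ratio)

print(f"abrupt-4xCO2 forcing:   {co2_forcing(4.0):+.1f} W/m^2")
print(f"abrupt-0.5xCO2 forcing: {co2_forcing(0.5):+.1f} W/m^2")

# time for CO2 to halve under a 1% per year ramp-down
years_to_halve = math.log(0.5) / math.log(0.99)
print(f"1%/yr ramp-down halves CO2 in ~{years_to_halve:.0f} years")
```

The logarithm is why "4xCO2" is simply twice the forcing of "2xCO2", and why the abrupt-halving experiment is roughly the mirror image of a doubling.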
Well, the problem is that what we would need to geoengineer the climate would be equivalent to a continuous, yearly sequence of large volcanic eruptions. So the analogy starts to break down, because the handful of examples we have of these sorts of periods with high volcanic activity were actually pretty bad for civilization at the time:
1. 530s-540s Cluster - contemporaneous historical records from both Far Eastern and Western civilizations clearly illustrate widespread famine due to crop failures, most likely due to the sustained cooling of this period (sometimes called the "Late Antique Little Ice Age"). The famous Plague of Justinian also occurred in this period, and was likely exacerbated by famine. There's also the Norse "Fimbulwinter" mythos - a period preceding Ragnarok - likely inspired by this period.
2. 1250s-1280s Cluster - suspected to have triggered the "Little Ice Age", and linked to contemporaneous crop failures in both South America and Europe. 1258 is known as one of the "Years Without a Summer."
3. 1808-1815 Tambora Cluster - culprit behind the even more well-known "Year Without a Summer" in 1816, which produced one of the more recent great famines in Western Europe, centered in Switzerland. The resulting crop failures and famine led to a wave of civil unrest across Europe.
So yeah - we obviously survived these periods. But I wouldn't exactly cite them as endorsements for any sort of geoengineering activity analogous to volcanism.
At least those show that a stratospheric injection doesn't persist for too long. 200 years of heightened volcanic activity was certainly a problem that eventually resolved itself.
The important, missing detail that breaks down this analogy is that we don't have a reference for a long period of volcanism while anthropogenic emissions of greenhouse gases continue.
This is where the "termination shock" issue comes in. Given current CO2 emission rates, a 50 year geoengineering strategy would mask an additional 100-125 ppm of CO2 added to the atmosphere. If the geoengineering scheme was suddenly stopped, it's not entirely obvious what the response trajectory would be of the climate system.
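Back-of-envelope, assuming a ~420 ppm baseline (a round assumed number) and the usual logarithmic forcing approximation:

```python
import math

# rough size of the forcing "unmasked" by stopping after 50 years
baseline_ppm = 420         # assumed present-day-ish concentration
masked_ppm = 2.5 * 50      # CO2 accumulated while masked: 125 ppm
revealed = 5.35 * math.log((baseline_ppm + masked_ppm) / baseline_ppm)

print(f"~{masked_ppm:.0f} ppm masked, ~{revealed:.1f} W/m^2 of forcing revealed")
```

On these assumptions, abruptly stopping would expose the system to well over a watt per square meter of previously hidden forcing essentially at once - and, per the hysteresis point above, the trajectory from there isn't guaranteed to look like a replay of gradual warming.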
Yeah, stratospheric seeding is mostly a stopgap measure that needs to be used in conjunction with other geoengineering and political projects that strive to reduce/reverse CO2 emissions.
And therein arises the "moral hazard" issue. There is a legitimate concern that geoengineering could abate some of the concern over climate change and lead to further delays in reducing GHG emissions. And this is a serious problem because while we might mask global temperature change with these approaches, they don't help resolve issues like ocean acidification.
Which tuition are you referring to? Nameplate tuition is like the sticker price on a new car; few to no people pay it. Net tuition is the number that actually matters, and it's been largely flat the last 8 years.
I don't know the figures for large universities, but at the small liberal arts college I graduated from and the one I've worked at for the last 15 years, the share of "full pay" students—which, as the name suggests, is the students who pay, or whose families pay, the full sticker price, either directly or through loans—has generally been between 46% and 53%.
Now, if you have figures showing that what you claim is true on the whole across all of US higher education, please, by all means, post the links. I'm genuinely interested to know just how different it is with the larger universities.
So you're saying academics use the same opaque market practices as, e.g. health insurance? Yeah all the more reasons to cut funding. If they have nothing to hide they have nothing to fear with transparency.
You seem to have no interest in transparency or understanding, but answer everything with "cut the universities" no matter what.
If differential pricing based on ability to pay is a reason to destroy something, then we had better destroy 90% of B2B. But it's not a reason, you're just parroting the same desired end result no matter what is actually said about universities.