Progress in information systems cannot be compared to progress in physical systems.
For starters, physical systems compete for limited resources and labor.
For another, progress in software vastly reduces the cost of improved designs, whereas progress in physical systems can enable improved designs while still increasing their cost.
Finally, the underlying substrate of software is digital hardware, which has been improving in both capabilities and economics exponentially for almost 100 years.
The history of information systems, from the first coordination of differentiating cells to human civilization, is one of exponential improvement. Very slow, slow, fast, very fast. (Can even take this further, to first metabolic cycles, cells, multi-purpose genes, modular development genes, etc. Life is the reproduction of physical systems via information systems.)
Same with human technological information systems, from cave painting, writing, printing, telegraph, phone, internet, etc.
It would be VERY surprising if AI somehow managed to fall off the exponential information system growth path. Not industry level surprising, but "everything we know about how useful information compounds" level surprising.
> The history of information systems, from the first coordination of differentiating cells to human civilization, is one of exponential improvement.
Under what metric? Most of the things you mention don't have numerical values to plot on a curve. It's a vibe exponential, at best.
Life and humans have become better and better at extracting available resources and energy, but there's a clear limit to that (100%) and the distribution of these things in the universe is a given, not something we control. You don't run information systems off empty space.
Life has been on Earth about 3.5-3.8 billion years.
Break that into 0.5-0.8, 1 billion, 1 billion, and 1 billion year "quarters", and you will find exponential increases in evolution's rate of change and production of diversity across them, by many, many objective measures.
Now break up the last 1 billion into 100 million year segments. Again exponential.
Then break up the last 100 million into segments. Again.
Then the last 10 million years into segments, and watch humans progress.
The last million, in 100k year segments, watch modern humans appear.
The last 10k years into segments: watch agriculture, civilizations, technology, writing ...
The last 1000 years: an incredible accumulation of technology, math, and the appearance of formal science.
The last 100 years get crazy: information systems appear in labs, then become ubiquitous.
The last 10 years: major changes, with AI starting to have mainstream impact.
The last year: even the basic improvements to AI models in the last 12 months are an unprecedented amount of change per unit of time, looking back.
I am not sure how any of this could appear "vibe", given any historical and situational awareness.
This progression is universally recognized, aside from creationists and similar contingents.
The progression is much less clear when you don't view it anthropocentrically. For instance, we see an explosion in intelligible information: information that is formatted in human language or human-made formats. But this is concomitant with a crash in natural spaces and biodiversity, and nothing we make is as information-rich as natural environments, so from a global perspective, what we have is actually an information crash. Or hell, take something like agriculture. Cultured environments are far, far simpler than wild ones. Again: an information crash.
I'm not saying anything about the future, mind you. Just that if we manage to stop sniffing our own farts for a damn second and look at it from the outside, current human civilization is a regression on several metrics. We didn't achieve dominion over nature by being more subtle or complex than it. We achieved that by smashing nature with a metaphorical club and building upon its ruins. Sure, it's impressive. But it's also brutish. Intelligence requires intelligible environments to function, and that is almost invariably done at the expense of complexity and diversity. Do not confuse success for sophistication.
> The last year: even the basic improvements to AI models in the last 12 months are an unprecedented amount of change per unit of time, looking back.
Are they? What changed, exactly? What improvements in, say, standards of living? In the rate of resource exploitation? In energy efficiency? What delta in our dominion over Earth? I'll tell you what I think: I think we're making tremendous progress in simulating aspects of humanity that don't matter nearly as much as we think they do. The Internet, smartphones, AI, speak to our brains in an incredible way. Almost like it was by design. However, they matter far more to humans within humanity than they do in the relationship of humanity with the rest of the universe. Unlike, say, agriculture or coal, which positively defaced the planet. Could we leverage AI to unlock fusion energy or other things that actually matter, just so we can cook the rest of the Earth with it? Perhaps! But let's not count our chickens before they hatch. As of right now, in the grand scheme of things, AI doesn't matter. Except, of course, in the currency of vibes.
I am curious when you think we will run out of atoms to make information systems.
How many billions of years do you think that might take?
Of all the things to be limited by, that doesn't seem like a near term issue. Just an asteroid or two alone will provide resources beyond our dreams. And space travel is improving at a very rapid rate.
In the meantime, in terms of the efficiency of using Earth's atoms for information processing, there is still plenty of room at the "bottom", as Feynman said. Our crude systems today are limited by their power waste. Small, energy-efficient systems, and more efficient heat shedding, will enable fully 3D chips ("cubes"?) and vastly higher packing densities.
The known limits on information processing per gram of physical matter are astronomical (a quick numeric sketch follows below):
• Bremermann's limit: 10^47 operations per second, per gram.
Other interesting limits:
• Margolus–Levitin theorem: a bound on the maximum rate of quantum state evolution per unit of energy.
• Landauer's principle: the minimum thermodynamic cost of erasing (overwriting) one bit.
• Bekenstein bound: the maximum information that can be contained in a finite region of space with finite energy.
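To make the scale of these limits concrete, here is a minimal back-of-the-envelope sketch in Python. It is purely illustrative and assumes only standard SI constants: Bremermann's limit computed as mass-energy divided by the Planck constant, and Landauer's minimum erasure cost at an assumed 300 K.

```python
import math

# Back-of-the-envelope sketch of two physical limits on computation.
# Purely illustrative; all constants are in SI units.
C = 2.998e8      # speed of light, m/s
H = 6.626e-34    # Planck constant, J*s
K_B = 1.381e-23  # Boltzmann constant, J/K

def bremermann_ops_per_second(mass_kg: float) -> float:
    """Bremermann's limit: maximum computation rate of a self-contained
    system of a given mass, roughly m * c^2 / h bit transitions per second."""
    return mass_kg * C ** 2 / H

def landauer_joules_per_bit(temperature_k: float = 300.0) -> float:
    """Landauer's principle: minimum energy dissipated to erase one bit,
    k_B * T * ln(2)."""
    return K_B * temperature_k * math.log(2)

if __name__ == "__main__":
    # One gram (1e-3 kg) of matter: ~1.4e47 operations per second.
    print(f"Bremermann limit for 1 gram: {bremermann_ops_per_second(1e-3):.2e} ops/s")
    # Erasing one bit at 300 K: ~2.9e-21 joules.
    print(f"Landauer cost per bit at 300 K: {landauer_joules_per_bit():.2e} J")
```

The exact constants and the 300 K temperature are assumptions for illustration; the point is only the orders-of-magnitude gap between these limits and today's hardware.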
Life will go through many, many singularities before we get anywhere near hard limits.
By physical systems, I meant systems whose purpose is to do physical work. Mechanical things. Gears. Struts.
Computer hardware is an information system. You are correct that it has a physical component. But its power comes from its organization (information), not its mass, weight, etc.
Transistors get more powerful, not less, when made from less matter.
Information systems move from substrate to more efficient substrate. They are not their substrate.
They still depend on physical resources and labor. They're made by people and machines. There have never been more resources going into information systems than right now, and AI has accelerated that greatly. Think of all the server farms being built next to power plants.
> The amount of both matter and labor per quantity of computing power is dropping exponentially. Right?
Right. The problem is the demand is increasing exponentially.
It's not as if, when computers got 1000x more powerful, we were able to get by with 1/1000th as many of them. Quite the opposite (or inverse, to be more precise).
Just to go back to my original point: saying that physical systems compete for physical resources, while implying information systems don't, is misleading at best. It's especially obvious right now with all the competition for compute going on.
>[..] to first metabolic cycles, cells, multi-purpose genes, modular development genes, etc.
One example is when cells discovered energy production using mitochondria. Mitochondria add new capabilities to the cell, with (almost) no downsides like weight, temperature sensitivity, or pressure sensitivity. It's almost 100% upside.
If someone had tried to predict the future number of mitochondria-enabled cells from the first one, they could have been off by a factor of 10^20.
I have been writing a story for the last 20 days with exactly that plot; I have to get my stuff together and finish it.
That's fallacious reasoning: you are extrapolating from survivorship bias. A lot of technologies, genes, and species have failed along the way. You are also subjectively framing the progression as improvement, which is problematic as well if you are talking about general trends. Evolution selects for adaptation, not innovation. We use the theory of evolution to explain the emergence of complexity, but that's not the sole direction, and there are many examples where species evolved towards simplicity (again).
Resource expense alone could be the end of AI. You may look up historic island populations, where technological demands (e.g. timber) usually led to extinction by resource exhaustion and consequent ecosystem collapse (e.g. deforestation leading to soil erosion).
Doesn't answer the core fallacy. Historical "technological progress" can't be used as an argument for any particular technology. Right now, if we are talking about AI, we're talking about specific technologies, which may just as well fail and remain inconsequential in the grand scheme of things, like most technologies, most things really, did in the past. Even more so since we don't understand much of anything in either human or artificial cognition. Again and again, we've been wrong about predicting the limits and challenges in computation.
You see, your argument is just bad. You are merely guessing like everyone else.
Information technology does not operate by the rules of any other technology. It is a technology of math and organization, not particular materials.
The unique value of information technology is that it compounds the value of other information and technology, including its own, and lowers the bar for its own further progress.
And we know with absolute certainty we have barely scratched the computing capacity of matter. Bremermann's limit: 10^47 operations per second, per gram. See my other comment for other relevant limits.
Do you also expect a wall in mathematics?
And yes, an unbroken historical record of 4.5 billion years of information systems becoming more sophisticated, with an exponential speed increase over time, is in fact a very strong argument. Changes that took a billion years initially now happen in very short times in today's evolution, and essentially instantly in technological time. The path is long, with significant acceleration milestones at whatever scale of time you want to look at.
Your argument, on the other hand, is indistinguishable from cynical AI opinions going back decades. It could be made any time. Zero new insight. Zero predictive capacity.
Substantive negative arguments about AI progress have been made. See "Perceptrons" by Marvin Minsky and Seymour Papert for an example of what a solid negative argument looks like. It delivered insights. It made some sense at the time.
> Your argument, on the other hand, is indistinguishable from cynical AI opinions going back decades. It could be made any time. Zero new insight. Zero predictive capacity.
> Historical "technological progress" can't be used as an argument for any particular technology.
Historical across billions of years of natural information system evolution: metabolic, RNA, DNA, protein networks, epigenetic, intracellular, intercellular, active membranes, nerve precursors, peptides, hormonal, neural, ganglia, nerve nets, brains.
Thousands of years of human information systems. Hundreds of years of technological information systems. Decades of digital information systems. Now, in just the last few years, progress year to year is unlike any seen before.
Significant innovations are being reported virtually every day.
Yes, track records carry weight. Especially with no good reason to expect a break, and every tangible reason to believe nothing is slowing down, right up to today.
"Past is not a predictor of future behavior" is about asset gains relative to asset prices in markets where predictable gains have had their profitability removed by the predictive pricing of others. A highly specific feedback situation making predicting asset gains less predictable even when companies do maintain strong predictable trends in fundamentals.
It is a narrow, specific, second-order effect.
It is the worst possible argument for anything outside of those special conditions.
Every single thing you have ever learned was predicated on the past having strong predictive qualities.
You should understand what an argument means, before throwing it into contexts where its preconditions don't exist.
> Right now, if we are talking about AI, we're talking about specific technologies, which may just as well fail and remain inconsequential in the grand scheme of things, like most technologies, most things really, did in the past. Even more so since we don't understand much of anything in either human or artificial cognition. Again and again, we've been wrong about predicting the limits and challenges in computation.
> Your argument [...] is indistinguishable from cynical AI opinions going back decades. It could be made any time. Zero new insight. Zero predictive capacity.
If I need to be clearer: nobody could tell when you wrote that just by reading it. It isn't an argument; it's a free-floating opinion. And you have not made it any more relevant today than it would have been through all the decades and technological transitions up until now. Your opinion was equally "applicable" then, and no less wrong.
This is what "Zero new insight. Zero predictive capacity" refers to.