There is an old cliché about stopping the tide coming in. I mean, yeah, you can get out there and try to stop it.
This isn't about fatalism or even pessimism. The tide coming in isn't good or bad. It's more like the refrain from Game of Thrones: Winter is coming. You prepare for it. Your time might be better served finding shelter and warm clothing rather than engaging in a futile attempt to prevent it.
If you believe that there is nobody there inside all this LLM stuff, that it's ultimately hollow and yet that it'll still get used by the sort of people who'll look at most humans, call 'em non-player characters, and meme at them; if you believe that you're looking at a collapse of civilization because of this hollowness and what it evokes in people… then of course you'll try to stop it, and I can't blame anybody for engaging in attempts to prevent it.
You are stating a contradictory position: a person who doesn't believe AI can possibly emerge, yet is actively working to prevent it from emerging. I suggest that such a person is confused beyond help.
Edit: As an aside, you might want to read Don Quixote. [1]
The difference between hype and reality is productivity: LLMs are productively used by hundreds of millions of people. Blockchain is useful primarily in the imagination.
The industry consistently predicts people will do the task quicker with AI. The people doing the task predict they'll do it quicker if they can use AI. After doing the task with AI, they report that they did it quicker because they used AI. People who did it without AI predict they could have done it quicker with AI. But then somebody actually measured how long it takes. It turns out they do it slower if they use AI. This is damning.
It's a dopamine machine. It makes you feel good, but with no reality behind it and no work to achieve it. It's no different in this regard from (some) hard drugs. A rat with a lever wired to the pleasure center in its brain keeps pressing that lever until it dies of starvation.
(Yes, it's very surprising that you can create this effect without putting chemicals or electrodes in your brain. Social media achieved it first, though.)
And I don't understand why most people are divided into two camps, or at least appear to be.
Either it's total shit, or it's the holy grail, here to solve all our problems.
It's neither. It's a tool. Like a shovel, it's good at some things. And like a shovel, it's bad at other things. E.g. I wouldn't use a shovel to hammer in a nail.
LLMs will NEVER become true AGI. But do they need to? No, of course not!
My biggest problem with LLMs isn't the shit code they produce from time to time (I'm paid to resolve messes anyway); it's the environmental impact of MINDLESSLY using one.
But whatever. People like cults, and anti-cults are cults too.
Your concern is the environmental impact? Why pick on LLMs vs Amazon or your local drug store? Or a local restaurant, for that matter?
Do the calculations for how much LLM use is required to equal one hamburger worth of CO2 — or the CO2 of commuting to work in a car.
If my daily LLM environmental impact is comparable to my lunch or going to work, it’s really hard to fault, IMO. They aren’t building data centers in the rainforest.
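A quick back-of-envelope sketch, for anyone who wants numbers. Every figure below is a rough assumption, not a measurement; per-query emission estimates in particular vary by an order of magnitude across sources:

    # Rough CO2 comparison: LLM queries vs. a hamburger vs. a car commute.
    # Every constant here is an assumed ballpark figure, not a measurement.
    CO2_PER_QUERY_G = 4.0     # g CO2e per LLM query (estimates range ~1-5 g)
    CO2_HAMBURGER_G = 3000.0  # g CO2e per beef hamburger (~2-4 kg commonly cited)
    CO2_COMMUTE_G = 8000.0    # g CO2e for a ~20-mile round trip (~400 g/mile)

    print(f"One hamburger ~= {CO2_HAMBURGER_G / CO2_PER_QUERY_G:,.0f} queries")
    print(f"One commute   ~= {CO2_COMMUTE_G / CO2_PER_QUERY_G:,.0f} queries")

Under those assumptions, one burger buys you on the order of 750 queries and one commute about 2,000. Shift the per-query estimate and the ratio moves with it, but it stays in the hundreds-to-thousands range.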
I broadly agree with your point, but would also draw attention to something I've observed:
> LLMs will NEVER become true AGI. But do they need to? No, of course not!
Everyone disagrees about the meaning of each of the three letters of the initialism "AGI", disagrees about the compound as a whole, and often argues it means something different from the simple meaning of those words taken separately.
Even on this website, "AGI" means anything from "InstructGPT" (the precursor to ChatGPT) to "Biblical God" — or, even worse than "God" given this is a tech forum, "can solve provably impossible tasks such as the halting problem".
There are two different groups with different perspectives and relationships to the "AI hype"; I think we're talking in circles in this subthread because we're talking about different people.
> For me, one of the Beneficiaries, the hype seems totally warranted. The capability is there, the possibilities are enormous, pace of advancement is staggering, and achieving them is realistic. If it takes a few years longer than the Investor group thinks - that's fine with us; it's only a problem for them.
> it's the environmental impact of MINDLESSLY using one.
Isn't much of that environmental impact currently from the training of the model rather than the usage?
That's something you could arguably one day just stop doing, if you're ever satisfied with the progress on that front (admittedly, people won't be any time soon).
I'm no expert on this front. It's a genuine question based on what I've heard and read.
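For what it's worth, a rough amortization sketch suggests the answer depends mostly on how heavily the model is used. All numbers here are assumptions; public estimates for frontier-model training energy and per-query inference energy both vary by an order of magnitude:

    # Amortize assumed training energy over an assumed serving load.
    TRAINING_ENERGY_WH = 50e9     # ~50 GWh, a rough figure floated for frontier models
    QUERIES_PER_DAY = 1e9         # assumed serving load
    LIFETIME_DAYS = 365           # assumed model lifetime before replacement
    INFERENCE_WH_PER_QUERY = 0.3  # recent estimates range ~0.3-3 Wh per query

    amortized = TRAINING_ENERGY_WH / (QUERIES_PER_DAY * LIFETIME_DAYS)
    print(f"Training, amortized: {amortized:.2f} Wh/query")
    print(f"Inference:           {INFERENCE_WH_PER_QUERY:.2f} Wh/query")

Under these assumptions the two are comparable, and cumulative inference energy overtakes training once the model has served a couple hundred billion queries; for a lightly used model, training dominates.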
Overinvestment isn't a bug; it's a feature of capitalism. When the dust settles there'll be a few trillion-dollar pots, and hundreds of billions are being spent to get one of them.
Environmental impacts of the GenAI/LLM ecosystem are highly overrated.
"Stopping the tide coming in" is usually a reference to the English king Cnut (or 'Canute') who legendarily made his courtiers carry him to the sea:
> When he was at the height of his ascendancy, he ordered his chair to be placed on the sea-shore as the tide was coming in. Then he said to the rising tide, "You are subject to me, as the land on which I am sitting is mine, and no one has resisted my overlordship with impunity. I command you, therefore, not to rise on to my land, nor to presume to wet the clothing or limbs of your master." But the sea came up as usual, and disrespectfully drenched the king's feet and shins. So jumping back, the king cried, "Let all the world know that the power of kings is empty and worthless, and there is no king worthy of the name save Him by whose will heaven, earth and the sea obey eternal laws."
> They're preparing for it by blocking it off completely.
No, we don't. Quite the opposite. Several dams have been made into movable mechanical contraptions precisely to NOT stop the tide coming in.
A lot of the water management is living with the water, not fighting it. Shore replenishment and strengthening are done by dropping sand in strategic locations and letting the water take care of dumping it in the right spot. Before big dredgers, the tide was used to flush sand out of harbours using big flushing basins. Big canals have been dug for better shipping. Big and small ships sailed, and still sail, on these waters to trade with the world. A lot of our riches come from the sea and the rivers.
The water is a danger and a tool. It's not stopped, only redirected and often put to good use. Throughout Dutch history, those who worked with the water generally have done well. And similarly, some places really suffered after the water was redirected away from them. Fisher folk lost their livelihoods, cities lost access to trade, some land literally evaporated when it got too dry, a lot of land shrunk when water was removed, biodiversity dropped...
Anyway, if you want to use the Dutch waters as a metaphor for technological innovations, the lesson will not be that the obvious answer is to block it. The lesson will be to accept it, to use it, to gain riches through it: to live with it.
The difference is that right now we're looking at a giant onrushing wave and we're considering maybe building a few dinghies to "ride it out".
Please understand. We're not in a position where we have sophisticated infrastructure to carefully control AI development. We have nothing, and the waves are getting bigger every few months.
You're in a position where you're safe enough (after centuries of labor!) that you can decide to not block some amount of incoming water. That is not where we are at with AI. There is no dike.
I understand that you're afraid. I'm not. But that's not what I was responding to. I was just pointing out that your comparison to the Dutch does not bolster your argument, but instead supports the opposite view.
I agree that what I said was literally false. I think the comparison to the Dutch still bolsters my view with the added context.
When you understand tides and local ecosystems and have flood level forecasting, you can choose to operate dikes in a way that allows tidal flow while blunting floods. However, we're currently in a position where in the analogy, we have no dike and people are arguing that dikes are impossible and anyway who's to say that the incoming flood won't be good for houses? In that situation, the first thing you need to do is get the incoming masses of water under control, and that's a thing that humans can do and it's the thing you did. (Unless I'm wrong?)
Edit: Hang on, isn't Amsterdam below sea level? How is that not blocking tidal flow all but completely?
My point is just that tides are within the feasible range of human engineering, whether taming them is a good idea or not. Pragmatic management is not the same thing as unconditional surrender, which the other comment was advocating on the basis of infeasibility, which is doubly wrong.
As things stand, 2 (safe coexistence) is impossible without 1 (slowing it down). There simply is not enough time to figure out safe coexistence. These are not projects of equal difficulty: 1 is enormously easier than 2. And 1 is still a global effort!
You have no evidence for any of your claims (either for "impossibility" or degree of difficulty) and I strongly doubt your rationalization will stand the test of validation in reality.
You are also completely moving the goalposts. My original comment was about the hubris of man in trying to prevent processes that operate at a scale beyond his means. The processes driving forward the march towards AI are beyond your ability to stop. And now you are arguing (again, with no evidence) the relative difficulty of slowing it down (a much weaker claim compared to stopping it) vs. contributing to safe co-existence.
But in the interest of finding some common ground, let me point out: attempting to slow it down is actually getting on board with my project (although in a way I think is ineffective). It starts with accepting that it can't be prevented and choosing a way to contribute to safe coexistence by providing enough time to figure it out.
You know, I think you have no evidence for any of your claims of "impossibility" either. And I'd argue there's a ton of counterevidence where man, completely ignoring how impossible that's supposed to be, effects change on a global scale.
You're comparing two dissimilar things. On the one hand, slowing it down (which, contrary to your claim that I'm moving the goalposts, is at sufficient investment effectively equal to stopping it); on the other, "contributing" to safe co-existence, which is trivially achieved by literally doing anything. I'm telling you that if we merely "contribute" to safe co-existence, we all die. The standard, and it really is the standard in any other field, is proving safe coexistence to a reasonable standard. Which should hopefully make clear where the difficulty lies: we have nothing. Even with all the interpretability research, and I'm not slagging interpretability, this field is in its absolute infancy.
"It can't be prevented" simply erases the most important distinction: if we get ASI tomorrow, we're in a fundamentally different position than if we get ASI in 50 years after a heroic global effort to work out safety, interpretability, guidance and morality.
> I'm telling you that if we merely "contribute" to safe co-existence, we all die.
I hear you. I believe you are wrong.
> it really is the standard in any other field, is proving safe coexistence to a reasonable standard
No it isn't. It often becomes the standard after the fact. But pretty much no invention of man went through a committee first. Can you provide some counter-examples? Did the Wright brothers prove flight was safe before they got on the first plane? Did the inventors of CRISPR technology "prove" it is safe? Or human cloning? Or nuclear fission? Your very argument rests on the mistakes humans made in the past and the out-sized consequences of making the same kinds of mistakes with AI. Your argument must be: we have to do things differently this time with AI because the stakes are higher.
These are old and boring arguments. I've been watching the LessWrong space since it was Overcoming Bias (and frankly, from before). I've heard all of the arguments they have to make.
But the content of this discussion was on inevitability and how to respond to it. The person I replied to suggested that it was a mistake to see the future as something that happens to us. It was a call to agency. I was pointing out that not all agency is equal, and hubris can lead us to actions that are not productive.
It is also the case that fear, just like hubris, can lead us to actions that aren't productive. But perhaps we should just move on from this discussion.
Flight did not have potentially uncontrollable consequences.
> human cloning
No uncontrollable consequences.
> Nuclear fission
To a reasonable standard, yes! I remind you that there was a concern about atmospheric ignition that was reasonably disproven before the first test.
> CRISPR technology
Tbh they should have, and I fully advocate this standard for any sort of live genomic research as well.
Also, just fwiw. I am not scared of AI. I'm not even particularly scared of dying in a global armageddon (as the song says, "we will all go together when we go", and tbh that's genuinely a relief). I just think, fairly dispassionately, that it's going to happen. You can't explain your disagreements with "my opponents are just emotionally affected."
> Your argument must be: we have to do things differently this time with AI because the stakes are higher.
I don't understand what you're saying here. That is in fact my argument. My whole point is just that it's not beyond our means by any means: we have to do it, and we're capable of doing it, so we should do it.