Do we really understand LLMs and deep nets so poorly that we need to be afraid of them? I would love to see this disproved with some open source work on the internals of these models and how they do inference and reason.
And why the current stream of models may never realize an AGI. We have been making progress for a while on displaying human-level intelligence and creativity in confined spaces (be it Chess- or Go-playing models); now that the same is applied to writing, image, or audio/speech generation, we suddenly start developing a fear of AGI.
Is there a phrase for the fear of AI now building up?
Well... words and decades of circumstances. If you removed the circumstances (the religion, the conflict, the money, geography, etc) then the words would be absolutely hollow.
I think we tend to credit words where circumstances are often doing the heavy lifting. For example, try to start a riot with words on Rodeo Drive. Now try to do it in Nanterre. Or better yet, try to start a riot in Nanterre before a 17-year-old was shot by police, vs. after.
You'll get a sense of just how valuable your words really are.
Quite so, which is why retrospective analyses like "The CIA helped start The Paris Review and that made literature friendly to neoliberal ideology" are confections of confirmation bias. Nothing is ever that pat. But tidy little conspiracies are also never the goal. A nudge is all that is realistic to aim for, and a few successes are all you need to shift public perception.
Arming every ambitious cult leader wannabe from some retrograde backwater with an information war WMD deserves some caution.
Reminds me of the idea of a "tipping point." When we hit this point, words can really get people moving. This has been true for big changes like revolutions and movements, like Black Lives Matter, #MeToo, or Fridays for Future.
Words might not do much without the right situation, like the parent mentioned with Rodeo Drive and Nanterre. But they're still important. They can guide people's anger and unhappiness.
In the case of Weimar Germany, the severe economic instability and social discontent following World War I created a fertile ground for radical ideologies to take root. When these conditions coincided with persuasive rhetoric, it catalyzed significant societal change. So, while words can indeed be powerful, they're often most effective when spoken into pre-existing circumstances of tension or dissatisfaction. They can then direct this latent energy towards a specific course of action or change.
That can also be modified with words, though (for both good and bad). Unfortunately, those with expertise in this domain may not have all of our best interests at heart.
> If you removed the circumstances (the religion, the conflict, the money, geography, etc) then the words would be absolutely hollow.
There's also the problem of non-religious faith based belief.
There were plenty of well-off people who flew to Syria to go behead other people.
Anyway this doesn't matter that much. Sure, you can imagine a world totally different from ours where there would be zero differential risk between a chess-playing computer and a language-speaking computer. But we live in this world, and the risk profile is not the same.
It’s interesting how confidently and obtusely people will proclaim categorical knowledge of the future.
It is a little disconcerting that there is a fight between two somewhat cultish sects when it comes to language models. Both sides call them “artificial intelligence”, one side says they’ll save the world, the other side says they’ll end it.
There is very little room to even question "Is this actually AI that we're looking at?" when the loudest voices on the subject are VC tech bros and a Harry Potter fan fiction author who has convinced people that he is prescient.
Trying to impute the motives of one's interlocutor is dumb and boring. How about we discuss the first-order issue instead? Here's my argument for why x-risk is a real possibility:
The issue is that small misalignments in objectives can have outsized real-world effects. Optimizers are constrained by rules and computational resources. General intelligence allows an optimizer to find efficient solutions to computational problems, thus maximizing the utility of available computational resources. The rules constrain its behavior so that, ideally, it provides us more value than it destroys. But misalignment in objectives provides an avenue by which the AGI can destroy value on net despite our best efforts. Can you be sure you can provide loophole-free objectives that ensure only value-producing behavior from the human perspective? Can you prove that the ratio of value created to value lost due to misalignment is always above some suitable threshold? Can you prove that the range of value destruction is bounded, so that if it does go off the rails, its damage is limited? Until we can, x-risk should be the default assumption.
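To make the misalignment point concrete, here is a minimal toy sketch (my own illustration, not anything from this thread; the function names and numbers are invented): an optimizer maximizes a proxy objective that only approximates the value we actually care about, and past a certain point, pushing the proxy higher destroys true value.

    # Toy sketch: proxy objective diverges from true value under optimization.
    import random

    def true_value(x: float) -> float:
        # What we actually care about: peaks at x = 5, falls off afterwards.
        return -(x - 5) ** 2 + 25

    def proxy_objective(x: float) -> float:
        # What we told the optimizer to maximize: grows without bound,
        # so it agrees with true_value only for small x.
        return 4 * x

    def hill_climb(steps: int = 200, step_size: float = 0.5) -> float:
        # Simple hill climbing on the proxy, ignorant of true_value.
        x = 0.0
        for _ in range(steps):
            candidate = x + random.uniform(-step_size, step_size)
            if proxy_objective(candidate) > proxy_objective(x):
                x = candidate
        return x

    if __name__ == "__main__":
        x = hill_climb()
        print(f"optimizer settled at x = {x:.2f}")
        print(f"proxy score: {proxy_objective(x):.2f}")
        print(f"true value:  {true_value(x):.2f}")  # strongly negative: value destroyed

The stronger the optimizer, the further it pushes into the region where the proxy and the true objective disagree; that gap is exactly the loophole the questions above ask you to rule out.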
> Trying to impute the motives of ones interlocutor is dumb and boring.
I know right? You should see the response to my point that nobody has been convinced to fly a plane into a building by an LLM. “Dumb and boring” hits the nail on the head.
> Seeing how confidently and obtusely people dismiss the risks of AI
If or when AI is capable of doing all the great things people proclaim it to be able to do, then it will also be capable of doing immense harm, and we should be putting more work into mitigating that.
Like it really is that simple. AI generally, LLMs specifically, and certainly this crop of LLMs in particular might end up being inert pieces of technology. But to the precise extent that they are not inert, they carry risk.
That's a perfectly sensible position. The optimist position isn't even internally consistent. See Andreessen on Sam Harris's podcast: AI will produce consumer utopia and drive prices down. Also, there are no downside risks because AI will be legally neutered from doing much of anything.
Is it legally neutered or is it transformative? The skeptical case doesn't rely on answering this question: to the extent it's effective, powerful, and able to do good things in the world, it will also be effective, powerful, and able to do bad things in the world. The AI skeptics don't need to know which outcome the future holds.
> The AI skeptics don't need to know which outcome the future holds.
But they need to interpret a benign point about the undisputed fact that an LLM has never convinced anybody to fly a plane into a building as some sort of dangerous ignorance of risk that needs correcting.
Brilliant proof of exactly my point. When it comes to discussing any possible outcome other than utopia, suddenly the power of these tools drops to zero! Remarkable :)
You responded to my previous comment, which called the utopians as cultish as the fanfic folks.
When it comes to discussing any possible outcome that isn't the opinion of [cult x], the only reason the other person disagrees is that they are in [cult y].
What? I never proposed that dichotomy and don't believe in it. You did and you're tripping over it a bit. You can just discard that model and engage with the substance of the topic, you know!
The substance of this discussion is that a language model has never convinced anyone to crash a plane into a building, and your position is that pointing out that fact ignores the material reality that some theoretically advanced technology (which could maybe be adjacent to language models) might at some point convince a hypothetical person to crash a plane into a building.
That is to say, the substance here is an emotional doomsday reaction to a benign statement of undisputed fact — a language model has never convinced a person to crash a plane into a building.
Watching a cell divide into two cells: “Why isn’t anyone talking about the possibility that this is Hitler??”
Why aren't you mentioning that an LLM has already convinced someone to commit suicide?
Why are you being deliberately obtuse about the unknowability of the exponential curve that the enthusiasts salivate over on one hand, while on the other confidently asserting optimism, just as you're doing?