That's an argument-from-authority fallacy. It doesn't matter how many citations you have; you either have the arguments for your position or you don't. In this particular context, ML as a field looked completely different even a few years ago, and the most cited people were able to come up with new architectures, training regimes, loss functions, etc. But those things do not inform you about the societal dangers of the technology. Car mechanics can't solve your car-centric urbanism or traffic jams.
In many ways, we're effectively discussing the accuracy with which the engineers of the Gutenberg printing press would have been able to predict the future of literature.
Right, but the question is whether the engineers who intimately understood the function of the press itself were the experts who should have been looked to in predicting the sociopolitical impacts of the machine and the ways in which it would transform the media with which it was engaged.
I'm not always that impressed by the discussion of AI or LLMs by engineers: they indisputably have great things to say about how the systems operate, but less so when they step outside their lane to predict broader impacts or how recursive content refinement is going to manifest over the next decade.
The question is whether the machine will explode, not what its societal impacts will be; that's where the miscommunication is. Existential risks are not societal impacts, they are detonation probabilities.
That's exactly what I am saying. Since humanity has to bet the lives of our children on something very new and unpredictable, I would bet mine on the top 3 scientists and not your opinion. Sorry. They must, by definition, make better predictions than you and me.
Would you bet that Oppenheimer would have, by definition, made better predictions about how the bomb was going to change the future of war than someone who understood the summary of the bomb's technical effects from the scientists but had also studied and researched the geopolitical and diplomatic changes resulting from advances in war technology?
There's more to predicting the impact and evolution of technology than simply the mechanics of how it is being built today and will be built tomorrow (the area of expertise where they are more likely to be accurate).
And keep in mind that Hinton's alarm was sparked by the fact that he was wrong about how the technology was developing: seeing an LLM explain a joke it had never seen before, a capability he specifically hadn't thought it would develop. So it was his failure to predict how the tech would develop that led him to start warning about how the tech might develop.
Maybe we should be taking those warnings with the grain of salt they are due coming from experts who were broadly wrong about what was going to be possible in the near future, let alone the far future. It took everyone by surprise - so there was no shame in being wrong. But these aren't exactly AI prophets with a stellar track record of prediction even if they have a stellar track record of research and development.
We disagree on what the question is. If we were talking about whether an atomic bomb could ignite the atmosphere, I would ask Oppenheimer and not a politician or sociologist. If we don't agree on the nature of the question, it's impossible to have discourse. It seems to me that you are confusing x-risk with societal downsides. I, and they, are talking about extinction risks. It has nothing to do with society. Arms, bioweapons, and hacking have nothing to do with sociologists.
And how do you think extinction risk for AI can come about? In a self-contained bubble?
The idea that AGI poses an extinction risk like an atomic chain reaction igniting the atmosphere, as opposed to posing a risk more like multiple nation states pointing nukes at each other in a chain reaction of retaliation, is borderline laughable.
The only way in which AGI poses risk is in its interactions with other systems and infrastructure, which is where knowledge of how an AGI is built is far less relevant than other sources of knowledge.
An AGI existing in an air-gapped system that no one interacts with can and will never bring about any harm at all, and I seriously doubt any self-respecting scientist would argue differently.
There are a great many books and articles about the subject. It's like me asking, "WTF, gravity bends time? That's ridiculous lol." But science doesn't work that way. If you want, you can read the bibliography. If not, you can keep arguing like this.
The printing press's impact was in ending the Catholic Church's monopoly over information, and thereby "the truth". It took 400 years for that process to take place.
The Gutenberg Era lasted all the way from its invention to (I'd say) the proliferation of radio stations.
Yes, very good! All the more so because today's machine is one that potentially gains its own autonomy, that is, has a say in its future and ours. And all the more so because this autonomy is quite likely not human in its thinking.
Right. We should develop all arguments from commonly agreed, basic principles in every discussion. Or you could accept that some of these people have a better understanding and did put forth some arguments, and that it's your turn to rebut those arguments or to point to arguments that do. Otherwise, you'll have to find somebody to trust.
It's not about societal changes. It's about calculating the risk of an invention. Let me give you an example:
Who do you think can better estimate the risk of an engine fire in a Red Bull F1 car: the chief engineer or Max the driver? It is obviously the creator. And we are talking about invention safety here. VCs and other "tech gurus" cannot comprehend exactly how the system works. Actually, the problem is that they think they know how it works when the people who created it say there is no way for us to know, and that these models are black boxes.
But Bayesian priors also have to be adjusted when you know there's a profit motive. With a lot of money at stake, the people seeing $$$ from AI have an incentive to develop, focus on, and advance low-risk arguments. No argument is total; what aspects are they cherry-picking?
I trust AI VCs to make good arguments less than AI researchers.
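To make that concrete with a toy example: in odds-form Bayes, the same reassuring "low risk" argument should move your belief less when the source would make it regardless of the truth. A minimal Python sketch with made-up numbers (the likelihood ratios here are illustrative assumptions, not measured values):

    # Toy odds-form Bayes update: how much should a reassuring "low risk" argument
    # move you when the arguer profits from reassurance? Numbers are made up.

    def posterior(prior_odds: float, likelihood_ratio: float) -> float:
        """Posterior P(no serious risk) from prior odds and a likelihood ratio."""
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    prior_odds = 1.0          # start agnostic: 1:1 odds that "AI poses no serious risk"
    lr_disinterested = 3.0    # a neutral expert reassures mostly when it's actually true
    lr_profit_motive = 1.2    # an invested VC would likely reassure either way

    print(posterior(prior_odds, lr_disinterested))  # ~0.75
    print(posterior(prior_odds, lr_profit_motive))  # ~0.55

Same argument, different evidential weight once you price in who benefits from making it.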
The potential miscalculation is thinking deep neural nets will scale to AGI. There are also a lot of misnomers in the area: even the term "AI" claims these systems are intelligent, but that word implies intelligibility or human-level understanding, which they are nowhere near, as evidenced by the existence of prompt engineering (which would not be needed otherwise). AI is rife with overloaded terminology that prematurely anthropomorphizes what are basically smart tools, which are smart thanks to the brute-forcing power of modern GPUs.
It is good to get ahead of the curve, but there is also a lot of hype and overloaded terminology that is fueling the fear.
Why couldn't deep neural nets scale to AGI? What is fundamentally impossible about neural nets + tooling accomplishing the suite of tasks we consider AGI?
Also, prompt engineering works on humans too. It's called rhetoric, writing, persuasion, etc. Just because LLMs' intelligence is different from humans' doesn't mean it isn't a form of intelligence.
Speaking as a former cognitive neuroscientist, our current NN models are large, but simpler in design relative to biological brains. I personally suspect that matters, and that AI researchers will need more heterogeneous designs to make that qualitative leap.
Hinton's done work on neural nets that are more similar to human brains, and so far it's been a waste of compute. Multiplying matrices is more efficient than a physics simulation that stalls out all the pipelines.
Fair point. I guess nobody knows yet, and it's also worth a shot. In the context of AI alignment, I don't see strong evidence to suggest deep neural nets, transformers, and LLMs have any of the fundamental features of intelligence that even small mammals like rats have. ChatGPT was trained on data that would take a human several lifetimes to learn, yet it still makes some rudimentary mistakes.
I don't think more data would suddenly manifest all the nuance of human intelligence... That said, we could be just one breakthrough away from discovering some principle or architecture, although I think that will come as a theory first, not so much a large system that suddenly wakes up and has agency.
Certainly there is no need whatsoever for AGI to exist in order for an autonomous agent with alien/inhuman intelligence or narrow capabilities to turn our world upside down.
That's true, although it's much less substantiated because we have no idea what form that would take and how much effort is needed to get there. So far we have no evidence that LLMs, transformers or any other NN architecture have some of the fundamental features of human intelligence. From my own perspective, one of the first features we will need to see to be on the right track is self-supervised learning (with very minimal initial conditions). This seems inherent to all mammals yet the best AI so far requires huge amounts of data to even get coherent outputs.
Google Scholar's numbers are horrible and should never be relied on. It is extremely easy to game its citation counts, and its crawls are too wide and find too many false-positive citations.
Judging someone by how many citations they have is like saying The Pope is right about everything because millions of cardinals and priests will refer to his proclamations in their masses.
There seems to be soooo much background feeling that the big official AI companies are in control of things. Or should be. Or can be.
Can they? When plenty of groups are more interested in exploiting the advances? When plenty of hackers will first try to work around the constraints, before even using the system as-is? When it's difficult even to define what AGI might look like or how we'd test for it? When it might depend on a small detail or helper?
Did you accidentally click on a different article? It literally uses the word "cult" five times and does not demonstrate any knowledge whatsoever of the main arguments around AGI danger and AGI alignment.
That's Hinton, Bengio and Sutskever.
Their voices should carry heavier weight than Andreessen and other VCs with vested interests and no real relevance to AI.