I just now saw the README; it touts that the tool may perform actions one may not want, without one's authorization. A literal attempt at an uncontrolled, chaotic AI.
The LangChain/Auto-GPT toolchains being built have real bugs and issues.
Hallucination, inconsistent output, and an outright inability to act on certain tasks.
At a meta level: AGI unleashed is only AGI unleashed into our messy real world of fraud, inaccurate data, and now massive amounts of content generated by other LLMs.
I've been involved with LLMs for the past two years, and what I can say is that we have no idea what this technology can do and no way of monitoring when it gains new abilities. We're racing towards autonomous everything, and we're too slow and blind to even detect hidden exponential developments.
Question: what do we do if/when it gets to that point?
Tech keeps advancing. Most people just seem to say "it's not there yet," while the entire tech industry is now focused on getting us "there" with absolutely no idea what the consequences are. I don't find this intelligent at all, ironically.
I like the idea of progress, but I'm starting to feel like enough is enough without at least some clear idea of where we want it to end. I really, really don't want to see Terminators in my lifetime any more than I want to see human cloning, which is banned.
This IMO is the point where tech starts to go from cool and helpful to potential sci-fi disaster.
How would it get to that point? There's no connection between having an internet connection and ending the human race.
These are all sci-fi stories that come with unexamined assumptions that something "smart" and "optimal" is going to be invented that's so good at its job that you can ask it to do X and it's going to do completely impossible thing Y without running out of AWS credits first.
(I personally think that humans are not "optimal" and that an AGI will also not be "optimal" or else it wouldn't be "general". More importantly, I don't think AGIs are going to be great at their jobs just because they have computer brains, and this is clearly an old SF movie trope.)
This is such an amazingly shortsighted, naive, and flawed way of thinking that I'm having a really hard time not sniping.
A large number of people are very concerned about this, and rightfully so, because so many people don't get the risks (including you, it seems).
These people largely aren't fearmongers either; they are experts in the field, serious engineers.
"A computer can't do this," you say... and that's how it has always been, right up until it can, and then whole ecosystems shift seemingly overnight. This will be no different.
Let me ask you: where's the risk management? If you've dealt with anything dangerous, you've dealt with risk management. Where is it for this? Can we even evaluate a problem like this? Our main form of interaction is code; we work in seconds while it ticks in nanoseconds. By the time it receives input from us, it could have predicted and nullified our attempts to do anything, if it were sentient.
Right now it's very simple: there is almost no risk management. You, and the smartest people in the world trying to tackle this problem, are clawing blind in the dark. You don't know it, but they do, and the ones with true intelligence are scared shitless. That's why so many people are going on record (normally a career killer), trying to prevent you from driving everyone over that proverbial cliff; only it's more like a dam.
For you and most other people who don't work with this stuff, it's an out-of-context problem that will never happen, and that's fine for small things that don't cascade.
People are traditionally very bad at recognizing cascading failures before they actually happen. This is like a dam with a crack running through it that almost no one has noticed, and your home is right underneath it; in this case, everyone's home is underneath it.
What could possibly go wrong with giving someone, really anyone, who doesn't recognize the risks the ability to potentially end everything if the digital dice line up just right?
Literally everything is networked. Globally.
It doesn't even need to be a Battlestar Galactica-type apocalypse, though that pilot is a fairly realistic picture of how it might go down if it became sentient. It could also happen without sentience, by the slow Ayn Rand/John Galt route where societal mechanics do most of the work: disrupt the economic cycle between factor and non-factor markets to a sufficient degree, and people will do the rest. There are plenty of examples in the historical record where we were able to restart; what about the dark areas for which we have no history? Without modern technology, we can't grow enough food to feed half the world's current population.
When the stakes are this high and the risk management is so nonexistent, everyone, including policymakers, should be scared shitless and do something about it. Look at how the Manhattan Project was handled: it was run with more risk management and care, relative to its destructive potential, than either bio or cyber is today.
Our modern society is almost fully dependent on technology for survival. What happens when that turns against you, or simply ceases to function?
Currently, 50% of AI researchers think there's a 10% chance or higher that human civilization will be wiped out in the near future as a result of our inability to control AI.
Another commenter posted a great YouTube overview that sums up and covers the broad points in a one-hour presentation.
I suggest you watch it to catch up. You'll notice one particular thing is absent: AGI-sentience doomsday isn't discussed. Though it's a valid risk case too, it's not what most experts are concerned with. What does concern the experts is the lack of risk management and the exponential-on-exponential growth.
With that kind of growth, it's not enough to keep pace; you have to accurately predict where it will be and somehow exceed it, two almost impossible problems.
I highly suggest you review the video, and take the time needed to process what the experts are saying before discounting and minimizing something so impactful.
And 50% of A.I. researchers certainly didn't take that survey.
I also see no reference to the "near future" in the question.
The question would include an A.I. destroying the human race 7000 years from now.
But I was mainly responding to your comment that "one might even go so far as to argue a literal public attempt at ending the human race."
Unless GPT-4 specifically is believed to be a threat to the human race, your comment was hyperbole.
I'll take a look at that video.
Edited to add: the video quotes the survey you were clearly referring to, and it says nothing about time scale. There's no claim that the "near" future is involved.
Also, the question is vague and certainly isn't asking if a chatbot will destroy the human race.
> I'd be interested in what survey you are referring to.
It's the same one mentioned in the video.
> But I was mainly responding to your comment ...
> Unless GPT-4 specifically is believed to be a threat to the human race ...
That's flawed logic: a false dichotomy, and it also begs the question of who decides it is a threat.
As for whether it's dangerous: the model discussed in that video shipped and was deployed publicly before anyone knew it had embedded knowledge of research-grade chemistry capable of some horrific things, all without the knowledge of the people who designed it. That capability was only discovered after the fact, and that is pretty disturbing.
With dangerous and existential threats, the standard isn't "safe until deemed unsafe"; it's "unsafe by default until deemed safe." That's how you limit tragedies.
We can disagree, but if we do, I sincerely hope you do not touch this stuff.
You've changed your argument, which makes me skeptical of your credibility.
Not everyone is equally educated; the two are not mutually exclusive.
People can say either. Educated, rational, and reasonable people would say yes to both if they do the risk-management analysis and understand the factors driving how it will be used.