>For example, connecting an LLM to the internet (like, say, OpenAssistant) when the AI knows how to write code (e.g. malware) and, at least in principle, hack basic systems seems like a terrible idea.
Sounds very cyberpunk, but in reality current AI is more like an average Twitter user than a super-hacker-terrorist. It just reacts to inputs and produces (text) output based on them, and that's all it ever does.
Even with a way to gain control over a browser, somehow compile code and execute it, it is still incapable of doing anything on its own without being instructed - and that's not because of some external limitation, but because the way it works lacks the ability to run on its own. That would require running in an infinite loop, which would in turn require the ability to constantly learn, memorize things, and understand their chronology. Currently that's not plausible at all (at least with the models that we, as the public, know of).
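To make the point concrete, here is a minimal sketch of the kind of autonomous loop such a system would need but current chat models lack. All names are hypothetical; `fake_model` is just a stand-in for an actual model call, since the point is the loop structure (persistent memory, chronology, repeated self-driven steps), not the model itself:

```python
def fake_model(prompt: str, memory: list[str]) -> str:
    # Hypothetical stand-in: a real LLM would generate the next action here.
    return f"observed {len(memory)} past events"

def agent_loop(goal: str, max_steps: int = 3) -> list[str]:
    memory: list[str] = []           # persistent, ordered memory (chronology)
    for step in range(max_steps):    # the "infinite" loop, bounded for the demo
        action = fake_model(goal, memory)
        memory.append(f"step {step}: {action}")  # constant memorization
    return memory

log = agent_loop("do something")
```

A chat model on its own has none of this scaffolding: each reply is a single pass from input to output, with no loop and no memory that persists between calls.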