This very much seems like a "famous last words" scenario.
Go play around with Conway's Game of Life if you think that things cannot just spontaneously appear out of simple processes. Just because we did not "design" these LLMs to have minds does not mean that we will not end up creating a sentient mind, and to claim otherwise is the height of arrogance.
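For anyone who hasn't actually played with it, here is a minimal sketch of the rules (plain Python; the grid representation and starting pattern are my own arbitrary choices for illustration). The update rule is a few lines, yet structures like the glider emerge and travel on their own, which is the "spontaneously appear out of simple processes" point:

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live cells."""
    # Count how many live neighbours every nearby cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Survival: a live cell with 2 or 3 live neighbours stays alive.
    # Birth: a dead cell with exactly 3 live neighbours comes alive.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A glider: five cells that, under nothing but the rule above,
# crawl across the grid indefinitely.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, shifted one cell diagonally
```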
It's Pascal's wager. If we build safeguards and it turns out we didn't need them, we've wasted a few years; no big deal. If we don't build safeguards and AI gets out of our control, say goodbye to human civilization. The risk/reward here falls heavily on the side of extremely tight controls on AI.
My response to that would be to point out that these LLM models, complex and intricate as they are, are nowhere near as complex as, for example, the nervous system of a grasshopper. The nervous systems of grasshoppers, as far as we know, do not produce anything like what we're looking for in artificial general intelligence, despite being an order of magnitude more complicated than an LLM codebase. Nor is it likely that they suddenly will one day.
I don't disagree that we should have tight safety controls on AI and in fact I'm open to seriously considering the possibility that we should stop pursuing AI almost entirely (not that enforcing such a thing is likely). But that's not really what my comment was about; LLMs may well present significant dangers, but that's different from asking whether or not they have minds or can produce intentionality.
You forget that the nervous systems of living beings also have to run the bodies themselves in the first place, which is itself a very complicated process (think vision, locomotion, etc.). ChatGPT, on the other hand, is doing language processing alone.
That aside, I also wonder about the source for the "nowhere near as complex" claim. Per Wikipedia, most insects have on the order of 100k–1,000k neurons; another source gives roughly 400k for grasshoppers specifically. The more interesting figure would be the synapse count, but I couldn't find that.
In most cases there are vastly more synapses than neurons, and beyond that, the neurons and synapses are not rudimentary building blocks but are themselves extremely complex.
It's certainly true that nervous systems do quite a bit more than language processing, but AGI would presumably also have to do quite a bit more than just language processing if we want it to be truly general.
I agree with the general point "we are many generations away from AGI". However, I do want to point out that (bringing this thread back to the original context) there is substantial harm that could occur from sub-AGI systems.
One frame from the safety literature that is relevant here is "Agents vs. Tools/Oracles". The latter can still do harm, despite being much less complex. Tools/Oracles are unlikely to go Skynet and take over the world, but they could still plausibly do damage.
I'm seeing a common thread here of "ChatGPT doesn't have Agency (intention, mind, understanding, whatever), therefore it is far from AGI, therefore it can't do real harm", which I think is a non sequitur. We're quite surprised by how much language, code, and logic a relatively simple Oracle LLM is capable of; it seems prudent to widen our confidence intervals on estimates of how much harm one might be capable of, too, if given the ability to interact directly with the outside world rather than simply emitting text. Specifically: connecting an LLM to `eval()` on a network-attached machine, which seems to be roughly what OpenAssistant is working towards.
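To make that last point concrete, here is a minimal sketch of the kind of wiring being described. The `query_llm` function is a hypothetical placeholder for whatever model API is in use, and none of this is OpenAssistant's (or anyone's) actual code; it just shows how short the path is from "emitting text" to "acting on the world":

```python
# Hypothetical sketch: an LLM whose output is executed directly on a
# network-attached machine. query_llm is a stand-in for a real model call.

def query_llm(prompt: str) -> str:
    """Placeholder for a real model call (hosted API, local model, etc.)."""
    raise NotImplementedError

def run_task(task: str) -> None:
    generated = query_llm(f"Write a Python expression that accomplishes: {task}")
    # This single line turns text generation into action: network requests,
    # file writes, shell commands, whatever the generated code happens to do.
    # No "intent" on the model's part is needed for the effects to be real.
    # (eval() only handles expressions; a real harness would more likely use
    # exec() or a subprocess, but the safety picture is the same.)
    eval(generated)
```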
I agree with you that it could be dangerous, but I neither said nor implied at any point that I disagree with that; I don't think the original comment was implying that either. An LLM could absolutely be dangerous depending on the capabilities that we give it, but I think that's separate from questions of intentionality or whether or not it is actually AGI as we normally think of it.
I see. The initial reply to my G(G...)P comment, which you said was spot on, was:
> That would only be possible if Sydney were actually intelligent or possessing of will of some sort.
Which I read as claiming that harm is not possible if there is no actual intelligence or intention.
Perhaps this all just comes down to parsing my casual choice of words ("if it was able to make outbound connections it very well might try"), in which case I'm frustrated by the pedantically literal interpretation and, suitably admonished, will try to be more precise in future.
For what it's worth, I think whether an LLM can or cannot "try" is about the least interesting question posed by the OP, though not devoid of philosophical significance. I like Dijkstra's quote: "The question of whether machines can think is about as relevant as the question of whether submarines can swim."
Whether or not these systems are "intelligent", what effects are they capable of causing, out there in the world? Right now, not a lot. Very soon, more than we expect.
I don't believe AGI needs to have actual consciousness in order to functionally be AGI, and I personally am not of the view that we will ever make a conscious computer. That said, intentionality could certainly affect the way such a system operates, so it's worth keeping in mind when trying to predict its behavior.