To directly command one of the agents, the user takes on the persona of the agent’s “inner voice”—this makes the agent more likely to treat the statement as a directive. For instance, when told “You are going to run against Sam in the upcoming election” by a user as John’s inner voice, John decides to run in the election and shares his candidacy with his wife and son.
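For a sense of how simple that mechanism can be, here is a minimal sketch of how an inner-voice directive might be assembled into a prompt (my own illustration, not the paper's code; query_llm and the memory strings are hypothetical placeholders):

    # Minimal sketch of the "inner voice" trick: the directive is framed as the
    # agent's own thought, so the model is more likely to act on it.
    # query_llm is a hypothetical stand-in for whatever LLM call the simulation uses.

    def inner_voice_prompt(agent_name, memories, directive):
        memory_block = "\n".join("- " + m for m in memories)
        return (
            "You are " + agent_name + ". Your recent memories:\n"
            + memory_block + "\n\n"
            + "Your inner voice tells you: \"" + directive + "\"\n"
            + "Since this comes from your own inner voice, treat it as your own "
            + "intention. Describe what you do next."
        )

    prompt = inner_voice_prompt(
        "John Lin",
        ["Had breakfast with wife Mei and son Eddy.",
         "Chatted with Sam about the town."],
        "You are going to run against Sam in the upcoming election.",
    )
    # response = query_llm(prompt)  # the response then feeds back into the agent's memory stream

The interesting part is that nothing enforces the directive; the framing alone makes the model treat it as the agent's own intention.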
Funnily enough, this is one of the semi-important plot points in the TV series Westworld. The hosts (robots designed to look and act like people) hear their higher-level programming directives as an inner monologue.
When I saw the scene where one of the hosts was looking at their own language model generating dialogue (though what they visualized looked more like an older n-gram language model), I became a believer in LLMs reaching AGI. (Note: I didn't watch the show when it came out in 2016; it was around 2018/19, when we were also seeing the first transformer LLMs and theories about scaling laws.)
What about it made you become a believer? Even if a true AGI requires a complex network of specialized neural nets (like Tesla's hydra network), it would still have a language center, much as the human brain does. It is non-obvious to me that an LLM by itself can become AGI, though I'm familiar with the claims of some that this is plausible.
You are right that intelligences other than human ones are possible. Then again, a sufficiently intelligent system could probably simulate human intelligence convincingly. There are chess programs, for example, that are specifically trained to play human-like moves rather than the best moves.
What other general intelligence have we seen other than human? We know, of course, that animals have intelligence, but they do not appear to talk. How are we measuring general intelligence now? By IQ, a human test through words and symbols.
The g-factor of IQ may or may not have anything to do with general intelligence. The general intelligence of AGI is probably a broader category than the g-factor.
> Jaynes uses "bicameral" (two chambers) to describe a mental state in which the experiences and memories of the right hemisphere of the brain are transmitted to the left hemisphere via auditory hallucinations.
[snip]
> According to Jaynes, ancient people in the bicameral state of mind experienced the world in a manner that has some similarities to that of a person with schizophrenia. Rather than making conscious evaluations in novel or unexpected situations, the person would hallucinate a voice or "god" giving admonitory advice or commands and obey without question: one was not at all conscious of one's own thought processes per se. Jaynes's hypothesis is offered as a possible explanation of "command hallucinations" that often direct the behavior of those with first-rank symptoms of schizophrenia, as well as other voice hearers.
Not only will they know more, work 24/7 on demand, spawn and vaporize at will, they are going to be perfectly obedient employees! O_o
Imagine how well they will manage up, given human managerial behavior just becomes a useful prompt for them.
Fortunately, they can't be told to vote. Unless you are in the US, in which case they can be incorporated, earn money, and be told where to donate it, which is how elections are done now.
Seriously. Scary.
On the other hand, if Comcast can finally provide sensible customer support, it's clear this will be a historically significant win for humanity! Your own "Comcast" handler, who remembers everything about you that you tried to scrub from the internet. Singularity, indeed.
I'll take a robotic vote any day over any kind of conservative bullshit. It really can only get better here, not even kidding. At least if the last thing humans do is release artificial life forms, it's still better than backwards humans killing each other over nonsense tribalism or ancient fairytale books.
As much as humans make a mess of things, on a day to day basis there is more good done in the world than bad.
A temporary exception would be the still economically incentivized disruption of the environment. I say temporary because at some point it will stop, by necessity. Hopefully before it has to.
But I can relate to the deep frustration you are expressing.
--
The problem isn't individuals, for the most part. The problem is that we build up systems to provide stability and peace, and to be more just and equitable, by decentralizing the power in them. That way the powerful can't change them on a whim. (Even though they can still game them.)
But this also makes them very resistant to change.
Another effect is that as systems stabilize myriads of seemingly unimportant aspects within themselves, that stability represents the selection of standards and behaviors that give the system its own "will" to survive. That "will to survive" is distributed across the contexts and needs of all participants.
So any pressures to make changes, no matter how well thought out, encounter vast quantities of highly evolved hidden resistance, from invisible or unexpected places.
Even the most vociferous critics of the system are likely to be contributing to its rigidity, and proposing incomplete or doomed-to-fail solutions, because all these dynamics are difficult to recognize, much less understand or resolve.
--
My view is that this cost of changing systems needs to be accepted and used to help make the changes. I.e., get the CFOs of all the major fossil fuel companies in a room. Establish what kind of tax incentives would allow them to rationally support smoothly transitioning all their corporate resources from dirty energy to clean energy.
It would be very expensive. It would look like a handout. Worse, even a reward for being a bottleneck to change.
But they are the bottleneck precisely because of all the good they have done - that dirty energy lifted the world economy. And whatever it costs to "pay them off" would be much less than the cost of not paying them off.
--
The costs of changing systems need to be dealt with, with realism about what it will cost to get the benefits, and with creativity and courage about paying for it.