Hacker News

  To directly command one of the agents, the user takes on the persona of the agent’s “inner voice”—this makes the agent more likely to treat the statement as a directive. For instance, when told “You are going to run against Sam in the upcoming election” by a user as John’s inner voice, John decides to run in the election and shares his candidacy with his wife and son.
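The technique amounts to injecting the user's statement into the agent's prompt as the agent's own first-person thought rather than as speech from an outside party. A minimal sketch of the idea (the function name and prompt wording here are illustrative, not taken from the paper):

```python
def build_inner_voice_prompt(agent_name: str, directive: str) -> str:
    """Frame a user directive as the agent's own inner voice.

    Instead of quoting the user as an external speaker, the directive is
    presented as a thought the agent itself is having, which makes a
    persona-conditioned language model more likely to act on it.
    """
    return (
        f"{agent_name}'s inner voice: \"{directive}\"\n"
        f"{agent_name} treats this thought as his own intention. "
        f"What does {agent_name} do next?"
    )

prompt = build_inner_voice_prompt(
    "John", "You are going to run against Sam in the upcoming election"
)
print(prompt)
```

The resulting text would then be prepended to the agent's context before the next generation step, so the model continues as if the thought had originated inside the character.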
So that's where my inner voice comes from.



What's funny is this is one of the semi-important plot points in Westworld the TV series. The hosts (robots designed to look and act like people) hear their higher level programming directives as an inner monologue.


When I saw the scene where one of the hosts was looking at their own language model generating dialogue (though they were visualizing an older n-gram language model), I became a believer in LLMs reaching AGI. (Note: I didn't watch the show when it came out in 2016; it was around 2018/19, when we were also seeing the first transformer LLMs and theories about scaling laws.)

The scene: https://youtu.be/ZnxJRYit44k


What about it made you become a believer? Even if a true AGI requires a complex network of specialized neural nets (like Tesla's hydra network), it would still have a language center, as the human brain does. It is non-obvious to me that an LLM by itself can become AGI, though I'm familiar with the claims of some that this is plausible.


General intelligence doesn't necessarily mean human-like intelligence.


You are right that there are intelligences possible that are not human. Then again, if one is sufficiently intelligent, one could probably convincingly simulate human intelligence. There are chess training programs for example that are specifically trained to play human moves, rather than the best moves.


When prompted, ChatGPT answers you as if it were a pirate.


What other general intelligence have we seen besides the human kind? We know, of course, that animals have intelligence, but they do not appear to talk. How are we measuring general intelligence now? By IQ, a human test administered through words and symbols.


The g-factor of IQ may or may not have anything to do with general intelligence. The general intelligence of AGI is probably a broader category than the g-factor.


When I made my comment, I knew nothing about a “g-factor”.


Intelligence is a tool of the human self, not the self.


Screenshot of the dialogue tree from the video: https://imgur.com/a/NoxaYln


Yes. I love that scene. Improvisation… Improvisation… Improvisation…


I remember someone had predicted this would happen with OpenAI's GPT-3, but perhaps now we are closer with ChatGPT...

Found it! : https://medium.com/swlh/bicameral-mind-humanoid-robot-with-g...


Very Julian Jaynes:

https://en.wikipedia.org/wiki/Bicameral_mentality

> Jaynes uses "bicameral" (two chambers) to describe a mental state in which the experiences and memories of the right hemisphere of the brain are transmitted to the left hemisphere via auditory hallucinations.

[snip]

> According to Jaynes, ancient people in the bicameral state of mind experienced the world in a manner that has some similarities to that of a person with schizophrenia. Rather than making conscious evaluations in novel or unexpected situations, the person hallucinated a voice or "god" giving admonitory advice or commands and obeyed without question: One was not at all conscious of one's own thought processes per se. Jaynes's hypothesis is offered as a possible explanation of "command hallucinations" that often direct the behavior of those with first rank symptoms of schizophrenia, as well as other voice hearers.


Not only will they know more, work 24/7 on demand, spawn and vaporize at will, they are going to be perfectly obedient employees! O_o

Imagine how well they will manage up, given human managerial behavior just becomes a useful prompt for them.

Fortunately, they can't be told to vote. Unless you are in the US, in which case they can be incorporated, earn money, and told where to donate it, which is how elections are done now.

Seriously. Scary.

On the other hand, if Comcast can finally provide sensible customer support, it's clear this will be a historically significant win for humanity! Your own "Comcast" handler, who remembers everything about you that you tried to scrub from the internet. Singularity, indeed.


They can’t vote, but what if they figure out that they can influence human votes?


Yes, they definitely will. Even before AIs care about manipulating our politics, people will direct them to.

I already pointed out they can influence elections with money.

And bots are already used to influence on social media. AI bots are going to be insidious.


I'll take a robotic vote any day over any kind of conservative bullshit. It really can only get better here, not even kidding. At least if the last thing humans do is release artificial life forms, it's still better than backwards humans killing each other over nonsense tribalism or ancient fairytale books.


As much as humans make a mess of things, on a day to day basis there is more good done in the world than bad.

A temporary exception would be the still economically incentivized disruption of the environment. I say temporary because at some point it will stop, by necessity. Hopefully before then.

But I can relate to the deep frustration you are expressing.

--

The problem isn't individuals, for the most part. The problem is that we build up systems, to provide stability and peace, and to be more just and equitable, by decentralizing the power in them. That way the powerful can't change them on a whim. (Even though they can still game them.)

But this also makes them very resistant to change.

Another effect is that as systems stabilize myriads of seemingly unimportant aspects within themselves, that stability represents the selection of standards and behaviors that give the system its own "will" to survive. That "will to survive" is distributed across the contexts and needs of all participants.

So any pressures to make changes, no matter how well thought out, encounter vast quantities of highly evolved hidden resistance, from invisible or unexpected places.

Even the most vociferous critics of the system are likely to be contributing to its rigidity, and proposing incomplete or doomed-to-fail solutions, because all these dynamics are difficult to recognize, much less understand or resolve.

--

My view is that this cost of changing systems needs to be accepted and used to help make the changes. I.e., get all the CFOs of all the major fossil fuel companies in a room. Establish what kind of tax incentives would allow them to rationally support smoothly transitioning all their corporate resources from dirty energy to clean energy.

It would be very expensive. It would look like a handout. Worse, even a reward for being a bottleneck to change.

But they are the bottleneck precisely because of all the good they have done - that dirty energy lifted the world economy. And whatever it cost to "pay them off" would be much less than not paying them off.

--

The costs of changing systems need to be dealt with, with realism about the costs required to get the benefits, and creativity and courage about paying for them.


> It really can only get better here, not even kidding.

I think that’s pretty extreme hyperbole.


This "inner voice" idea reminds me of how LangChain works too, where you give it a task, and it comes up with actions, observations, thoughts, etc. For example: https://python.langchain.com/en/latest/modules/agents/gettin...
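At its core, that agent loop has a simple shape: the model emits a thought and an action, the framework runs the action as a tool call, and the observation is fed back in until the model declares a final answer. A simplified sketch of that loop (the `llm` stub and tool registry below are stand-ins for illustration, not LangChain's actual API):

```python
# Sketch of a "thought -> action -> observation" agent loop, in the style
# LangChain's agent executor implements. llm() is a hard-coded stand-in
# for a real language-model call.

def llm(transcript: str) -> str:
    # Stand-in model: looks something up once, then answers.
    if "Observation:" in transcript:
        return "Final Answer: Paris"
    return "Thought: I should look this up.\nAction: search[capital of France]"

TOOLS = {"search": lambda q: "Paris is the capital of France."}

def run_agent(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        step = llm(transcript)
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        # Parse "Action: tool[input]" and run the named tool.
        action = step.split("Action:", 1)[1].strip()
        tool_name, arg = action.split("[", 1)
        observation = TOOLS[tool_name](arg.rstrip("]"))
        # Append the step and its observation, then loop again.
        transcript += f"\n{step}\nObservation: {observation}"
    return "(no answer within step budget)"

print(run_agent("What is the capital of France?"))
```

With a real model in place of the stub, the same loop produces the actions, observations, and thoughts the LangChain docs show.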


Most of the time we think we think, we actually listen.


Reminds me of Julian Jaynes' theory



