Embodiment is indispensable for AGI (keerthanapg.com)
90 points by keerthanpg on June 7, 2022 | 153 comments


The idea that embodiment is necessary for intelligence is just another example of anthropocentrism.

It's likely that we will soon have AGI that can solve the protein folding problem, play Go at a superhuman level, and create arbitrary images from a text prompt, yet still underperform humans when it comes to proprioception and object manipulation.

Humans have been evolving proprioception for billions of years; it's a core survival skill that all animals have in common. In contrast, our history of language and abstract thinking goes back less than a million years.

Human brains are overfit to our environment, which is why we're so good at recognizing human faces and so bad at mathematics. An intelligence created from a blank slate need not share our limitations - it could perceive in more than 3 dimensions, it could grasp quantum amplitudes as easily as we understand integers, it could have multiple embodiments and experiences simultaneously.


We won't be able to recognize intelligence if it isn't human-like. The Earth could be sentient and we wouldn't be able to tell.


That is the premise of Solaris, by Stanislaw Lem. Great book, and a great movie too (the original, not the remake).


I think David Brin's novel Earth had something similar - a book that I mostly remember for its version of WW3 being everyone vs Switzerland.


I'm inclined to agree that it likely is necessary for general intelligence. It's like the Mary's Room thought experiment. You don't understand what the color red is if you've never seen it, no matter how well you understand the abstract concept of red.

An AI can never truly understand the physical world without direct experience of it. Now, I believe you probably could build something that could be called an AGI without embodiment, but it would be greatly limited by its lack of experience of the physical.


Wouldn't any definition or requirement we come up with for intelligence be an example of anthropocentrism?


What's "anthro" about bodies? Spiders have bodies. Heck, cars have bodies.

Anyway, I think you're missing the point a little. Sure, we could build a super intelligent agent that lives entirely in the stock market or some disembodied place. And if the stock market were a closed system, that intelligence could (theoretically at least) reign supreme in that realm.

But when people say artificial "general" intelligence, what the "general" means is "for typical Earthly challenges". And there is no way a critter could be good at typical Earthly challenges without... you know... existing on Earth, growing up on Earth, and learning by interacting with Earth. A.k.a. having a body.


This is kind of a weird discussion.

"Embodiment" here seems to mean "having a body," i.e. being able to perform motor tasks in the external world (as opposed to the way it's used in "Embodiment and the Inner Life" [1]).

In that case, I don't think embodiment could ever be considered a necessary condition for AGI. But AGI would need to be able to learn motor tasks if given a body (in order to be called "general").

A deeper question, and one I don't see enough discussion of, is whether a certain kind of embodiment is necessary for AGI to "feel something." I don't think this question is definitively answerable, but it's super important. Can a CPU experience qualia? Or do you need something more "meaty" and analog? Does the material matter at all, or is consciousness (in the hard-problem sense) just a pattern of information?

An even more interesting and relevant question might be - is "feeling" (i.e. having qualia) a necessary condition for AGI?

[1] https://www.alibris.com/search/books/isbn/9780199226559


If we are physical beings, and CPUs can simulate physics, then a CPU can experience everything that we experience, including "qualia".

Do you think we're metaphysical beings, or that physics is beyond the capacity of a CPU to simulate?


Alternatively, we are analogue beings made up of systems that appear to be best described by chaotic continuous non-linear math, and digital computers can roughly simulate physics to a given level of precision, which can be as fine as you care to make it… at the cost of ever increasing processing requirements. And the amount of physics involved in simulating a single human cell, like a neuron, is already vast.

Whether we can simulate a “large enough” physical system, in as much detail as is required, at a fast enough speed to produce any results in a meaningful timeframe, is an open question.

We don’t even know how big the simulation needs to be, in terms of neurons, to produce qualia. Or how we’d know if it were. Or how much detail/resolution is needed.

It’s like brute forcing the travelling salesman problem for a 10,000 node graph. Yeah, we know exactly how to do it with a digital computer. The math is simple. It’s just going to take longer than the heat death of the universe to get the answer.
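A rough back-of-the-envelope sketch of that claim (purely illustrative, and the exascale evaluation rate is an assumption):

    import math

    # Brute-force TSP enumerates every tour: (n-1)!/2 of them for n cities.
    n = 10_000
    log10_tours = math.lgamma(n) / math.log(10) - math.log10(2)  # log10((n-1)!/2)

    # Generously assume an exascale machine checking 1e18 tours per second.
    seconds_per_year = 3.15e7
    log10_years = log10_tours - 18 - math.log10(seconds_per_year)

    print(f"~10^{log10_tours:.0f} tours to check")
    print(f"~10^{log10_years:.0f} years to check them all")
    # Estimates of the heat death of the universe are around 10^100 years,
    # so ~10^35000+ years comfortably overshoots it.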


Who says computers have to be digital? Analog computers are already a thing.


The preceding discussion was specifically about "a CPU". Analog computers are specifically excluded.

If, despite that, you're trying to simulate an analog entity with an analog computer, or build any analog AGI, the question arises whether it's possible to build one any simpler or faster than existing biological computers, e.g. humans. If no, then why bother?


Considering evolution optimized for biological fitness and not intelligence, it would be incredibly surprising if it wasn't possible to do better, especially using a vastly different architecture.

You could also build an AGI to run on a digital CPU but interact with a simulation that used analog coprocessors (which are a thing these days https://www.youtube.com/watch?v=GVsUOuSjvcg).

You can also include quantum coprocessors (which already exist in their infancy) for various things (probably only useful for quantum simulations at this point though).

Also considering recent ML work, an AGI is more likely going to run on something similar to a GPU than a CPU.


> Considering evolution optimized for biological fitness and not intelligence, it would be incredibly surprising if it wasn't possible to do better, especially using a vastly different architecture.

The way I see it is that humans are the stupidest you can be and still create a society. Like a nuclear chain reaction: the average human needs to add a tiny bit of structure, on average, and it works; any less than that and society is impossible, any more than that and society forms extremely quickly, so fast that evolution has no time to work.


Given the complexity of the physics apparently involved in brain state, the "and CPUs can simulate physics" needs to do a lot of work for it to be a practical possibility.

Simulating the physics of dopamine acting on neurons in silicon is an extremely complicated, roundabout way of getting a program to run faster in response to certain stimuli, but most alternatives are likely to fall well short of conveying the full sensory experience of human emotion. And it's a difficult goal to know you've hit when even simple text-parsing neural networks which don't attempt to model human sensory experience can plausibly mimic descriptions of it...


This cannot possibly be true, because we have no concept of what qualia are.

There is no objective external way to determine if an entity experiences qualia at all. From outside we can only consider behaviour, not subjective experience.

Personally I can't be sure that anyone else experiences qualia. It seems like a reasonable assumption, but I'm basing that on a mapping from my own experiences to those of others.

If everyone else was a highly sophisticated automaton I wouldn't know.

This creates a paradox. The only way to prove qualia exist would be to invent an objective qualia-ometer which would somehow measure subjective experience.

But of course my experience of the qualia-ometer would also be subjective, so the existence of qualia in others cannot be proven objectively.

You can imagine some kind of meta-technology which allowed qualia to be shared, so everyone could experience everyone else's experiences.

Not even that would be absolute proof because experience would still be subjective.

The point being - assuming you can throw cycles at this is looking at it from the wrong end of the telescope. It doesn't work like that because it can't work like that.

It's a much harder problem. (Which could be why it's called the Hard Problem.)

Worse, it just highlights that all of our experiences are qualia. There is nothing objective about anything we experience. What we think of as objective experience is really just stable shared experience.

You can make an argument that the more complicated, stable, surprising, and shared experience is, the more likely it is to be objective.

That's sort-of plausible and possibly a good rule of thumb. But it still doesn't really make the problem go away.


I find it most satisfying to act as if self-consciousness, free will, and qualia are all effectively "illusions" that are emergent properties of sufficiently complex brains, that it's unlikely we'll ever resolve any of the Hard Problems, but that we will be able to construct machines that exhibit as much free will as humans. It doesn't resolve any really important problems, but at least it moves the discussion past "are qualia required for AGI", which clearly they aren't.


Sorry, but I like to work on problems that have solutions and not ones which are so ill-defined that they don't have _any_ satisfying solutions.


Some (some of them in this very comment section) take this and smugly declare qualia don't exist at all.


Your conclusion doesn’t follow from your premises, and your second premise is false besides as others have pointed out — CPUs CANNOT simulate even classical physics exactly, and certainly not quantum physics.

But even if such a complete simulation were possible, there’s every reason to assume a CPU would lack the consciousness to experience anything. When you simulate a hurricane does the CPU get wet?


> your second premise is false besides as others have pointed out — CPUs CANNOT simulate even classical physics exactly, and certainly not quantum physics.

Both classical and quantum physics can be simulated on a classical computer, to an arbitrary degree of precision. Granted, the case in which infinite precision (if such a thing even exists in reality) is required is not simulatable on a discrete computer, but do any experts actually believe this to be the case? It's certainly not an opinion that I've seen around.

I think discussions about "can we actually get enough computing power to do this in practice" are beside the point - the discussion was about whether computers can feel in principle. If we wanted to do it in practice and were at the point where this was feasible, we'd probably engineer a CPU or co-processor more suited to the task than the general-purpose CPUs of today.

> there’s every reason to assume a CPU would lack the consciousness to experience anything.

If we are physical beings, then "consciousness" and anything else we have must be an emergent property of our physical components. If we can simulate those physical components, then this simulation will exhibit the same properties - consciousness and anything else one can attribute to us.

If our consciousness comes from non-physical properties we have (a "soul" or anything metaphysical), then sure, I'd agree with you.


> If we are physical beings, then "consciousness" and anything else we have must be an emergent property of our physical components. If we can simulate those physical components, then this simulation will exhibit the same properties - consciousness and anything else one can attribute to us.

Again, a simulation is not the thing. The map is not the territory. If consciousness truly emerges from actual physical processes of interacting brain matter (seems plausible), those _don’t exist_ in a computer simulation.

In a simulation of a brain, from what substrate could consciousness emerge? The state of the simulated brain is stored in an arbitrary subset of locations in RAM, unknown to and non-interactive with each other, along with loads of other stuff the computer is keeping track of. Do you think consciousness could emerge automatically from the state of the right subset of locations in RAM, or is it whenever a relevant value in memory is changed due to a transistor opening, or is it when the simulation computation that will result in the RAM update is happening, or is complete? Per the Chinese Room argument, would consciousness still emerge if half the operations were actually performed off-CPU by human mechanical turkers with rule books and notecards? Nothing in the abstract computation will have changed.

Consider also that physical reality runs in full parallel, while simulations on computers run serially per core. So if consciousness emerging requires the simultaneous interaction of many moving brain parts, that isn’t something that happens in a computer simulation.

> Both classical and quantum physics can be simulated on a classical computer, to an arbitrary degree of precision

Quantum physics can't be simulated on a classical computer to an arbitrary degree of precision. Feynman didn't think so, and he hasn't been gainsaid yet. And classical physics is full of chaos and very sensitive to precision.


> In a simulation of a brain, from what substrate could consciousness emerge?

Exactly the same substrate as our brains are derived from: physical particles and their interactions, perfectly replicated inside the simulation. If the simulation is accurate enough, the real particles and the simulated particles behave exactly the same, hence they produce the same results.

> Do you think consciousness could emerge automatically from the state of the right subset of locations in RAM

Hard question to answer since consciousness is hard to analyse. But we can turn it around into a question whose answer is the same, with a bit of rephrasing:

Do you think consciousness could emerge automatically from the state of the right subset of particles in our physical world, or is it whenever relevant particle states are changed due to particles interacting according to the laws of physics, etc etc

> Consider also that physical reality runs in full parallel,

We don't really know this to be the case. It looks like that to us, but that could easily be an illusion created by mechanisms we can't observe. Just as characters in a video game can't observe how their world is simulated - everything is perfectly consistent whether it was calculated in one CPU thread or several.


Or a more succinct question: why do you think a simulation of consciousness is the same as consciousness? What other simulations of things are identical with the things?


I think that a sufficiently accurate simulation of a system exhibits the same emerging properties as the system itself.

For example, if I can perfectly simulate the weather in some simple planet, all possible emerging weather phenomena for that planet (say like clouds, rain etc) will be perfectly replicated in the simulation. Similarly, if we can perfectly simulate a human body, all of the emerging human phenomena will exist in the simulation (muscle movement, nerve impulses, brain patterns resulting in consciousness etc). I don't think consciousness is fundamentally different from other physical phenomena, it's just a particularly complex example.

Another angle to think about: We can't prove that we're not living in a simulation (or can you?). So our consciousness itself might be simulated for all we know. This is not a proof that we are amenable to being simulated, but it means that disproving it is very hard or impossible.


You lost me. You think consciousness is a _physical_ phenomenon that would necessarily emerge from an accurate _simulation_ of a particle system? If it’s a physical phenomenon in reality, then just like the clouds and rain in your weather sim aren’t physical, only a simulation of consciousness will be present in your simulation.


Feel free to replace "physical phenomena" with "phenomena caused by physics laws" if it makes more sense that way.


What is every reason to assume consciousness has an astral component?


We can definitely simulate things approximately, but exact simulations are another matter entirely, and how much do you lose in the approximation? PBS spacetime discussed the intractability of simulating even one atom exactly: https://www.youtube.com/watch?v=55c9wkNmfn0


It's probably a good idea to have at least a rough physical definition of "qualia" and empirically validate it, without that one can't judge if a simulation is accurate or not.


If an AGI has a body in a simulation, it has a body. If such a thing is a necessary condition for general intelligence then the author's hypothesis is correct.


It’s all about learning causation. It may not be strictly necessary, but it’s damned useful. If you’ve ever tried to learn causation from observational data or from a randomized trial, you’ll know which one is incredibly easier. If you can’t affect the data, it’s worth a lot less. Like learning to play a game without actually engaging in exploration. It can technically be done, but it’s so much harder.
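A toy illustration of that point (mine, not the parent's): with a confounder, passive observation suggests a causal link that a randomized intervention immediately reveals to be spurious.

    import random

    # Toy confounded system: Z causes both X and Y; X itself has no effect on Y.
    def sample(intervene_x=None):
        z = random.gauss(0, 1)
        x = z + random.gauss(0, 0.1) if intervene_x is None else intervene_x
        y = z + random.gauss(0, 0.1)  # note: y never depends on x
        return x, y

    def corr(pairs):
        n = len(pairs)
        mx = sum(x for x, _ in pairs) / n
        my = sum(y for _, y in pairs) / n
        cov = sum((x - mx) * (y - my) for x, y in pairs) / n
        vx = sum((x - mx) ** 2 for x, _ in pairs) / n
        vy = sum((y - my) ** 2 for _, y in pairs) / n
        return cov / (vx * vy) ** 0.5

    obs = [sample() for _ in range(10_000)]                                        # passive observation
    rct = [sample(intervene_x=random.choice([0.0, 1.0])) for _ in range(10_000)]   # randomized "trial"

    print(f"observational corr(X, Y):  {corr(obs):.2f}")   # strongly positive, misleading
    print(f"interventional corr(X, Y): {corr(rct):.2f}")   # ~0, X really does nothing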


This is right, people are getting overly hung up on the idea that an AGI needs fingernails.

It doesn't need fingernails but it absolutely needs the ability to effect change in the world it is meant to be "intelligent about" and learn from the results.


"qualia" is the bikeshedding for any discussion on intelligence (biological or artificial). It's an unfalsifiable concept that's much easier to reach to than engaging in the concepts that are actually relevant to modern work in those fields.

Posting this is a bit of trying to hold back the tide, because the dominant threads in HN AI/BI discussions are almost always horribly uninformed, but I'm posting this so that if anyone else who actually does work in those fields is reading this and wondering what on earth is going on, now you know you're not alone!


Eh? I’m not surprised that AI researchers are not informed about the philosophy of mind, and I doubt that they need to be in order to do their work. But the quale is an important concept and just assuming it doesn’t exist because AI researchers don’t need it is a bit silly.


It's just not relevant. It's an ill-defined measure for something that (all due respect) makes philosophy majors feel like they're participating in something they're unqualified to participate in.


With similarly all due respect, I'm not sure you know what you're talking about here. Philosophers aren't qualified to pontificate on the nuances of building language models, but they can have plenty to say on consciousness and intelligence. When you ignore them you get ... let's call them funny tweets saying that some large language models might be conscious.


Sorry, my prior comment was incredibly rude - must have been in a bad mood but that's no excuse. I agree that outside perspective is valuable (although I would wager that such tweets are more attempts at getting headlines/funding than at being earnest).


I agree. You might as well replace the word "qualia" with "soul" and the argument stays exactly the same.


Having a body basically gives you a way to interact with the world. That is only useful if you want to model the world and the limits of actuator control.

Now, if we want to model the world (or build a "world model", in LeCun's words), I am not sure "interacting" with the world is a necessary condition given the amount of data we already have. GPT-3 probably has some vague "world model" inside its parameters and we just don't know how to effectively extract it.


Can a CPU experience qualia?

The mind has a physical structure that supports it. We are still unable to understand a meaningful portion of that structure or replicate it.

Creating a concept to speculate about a system of which we have no objective knowledge, based on subjective experience, is useless. The name is catchy for a song though.


It's hard to usefully speculate, but I don't think it's right to imply that it's an unimportant question either. Even putting aside the more out-there scenarios, we don't want to look back in 2100 and say "oops, AIs have had qualia for the past 25 years, and we've been real jerks to them".


It's impossible to not be embodied. Controlling CPU and other hardware like motherboard and computer screen is an embodiment.

Why would moving physical objects like electrons by micrometers not be good enough, but moving them by meters would?


It's not that the computation is physical that matters- it's the feedback and interaction from the world at human scales that a physical form would provide. The assumption is that anchoring an AGI to our scale will lead to human-scale perception and solutions.


Sure, no argument here.

But the question "Can a CPU experience qualia?" can not be answered by saying:

"no, because CPU computation exists on some platonic ideas plane".

Computation is physical both in the brain and in silicon.


It's not micrometers vs meters, or electrons vs macro objects. It's primarily a matter of sensor density. That is, I feel my fingers move in a way that a CPU does not feel an electron move. When I speak, I experience far more than a CPU experiences writing some bits to some output ports. And so on.

Does that mean that embodiment is required? No. But your definition of "embodied" misses the point of the discussion.


It's trivial to give a regular CPU better sensing than you in dozens of variables (pressure, temperature, voltage, CO concentration, etc), so your point is rather weak.


I absolutely agree. You can, and better than humans. Not only better for any one sense, but also more senses than humans have.

But that isn't what Isinlor was talking about. They said "moving electrons around" within the normal CPU circuitry as if that was the same as actuators controlling the external world. It's not.


Moving electrons in silicon circuits = moving K+ and Na+ ions in cells. Same principle. There is nothing inherently special, sacred or non-physical about our circuits. They are very energy efficient (compared to current standards), highly parallel, and a kluge that is hard to reverse-engineer, but other than that they are sensors, processors and actuators. This blueprint is universal, trivially so.


As UmbertoNoEco says, CPUs absolutely are actuators controlling flow of electrons.

In fact a single transistor is both a sensor and actuator.

In fact you can use them to do things they were not designed to do, like through row hammer:

https://en.wikipedia.org/wiki/Row_hammer

CPUs can also detect things like collisions with cosmic rays.

CPUs just can't produce macroscopic forces relevant on human scale.


Sensation isn't the issue here, perception is.


"Chemistry is not the issue, life is"


Exactly.

The baseline assumption in modern neurology is that meaning, for living things, is derived from their physical experience of the world. In short that no paper definition can ever substitute for experiencing first-hand what the "effect" of something is, what it does to you or to the world and what that entails for your body, your life. That is what "knowing" and "meaning" ultimately are.

These experiences are the basis for associations in our constant cognitive (and emotional, instinctive, sensory, symbolic, etc) modelling of the world. Associations beget complexity (dimensionality, modalities) of the model. Hence, "finely tuned" or "high definition" i.e. ever more "advanced" intelligences.

Given the overall crudeness and extreme simplicity of a CPU (or any man-made machine) compared to any living organism, I'm actually willing to bet that should CPU-based AI's "feel" something, it's on the order of 1/100th of what a shrimp feels, and they'd develop real-world intelligence at that level. Infinitely smarter than a potato, yet infinitely dumber than a fish. You don't run ape software on amoeba hardware.


You will also encounter issues here.

A transistor is a sensor. It detects voltage on the base pin. Transistor density, and therefore sensor density, on a modern CPU is really high. In fact modern CPU transistor density is 3 orders of magnitude higher than nerve density, even on your retina.

It's on the order of ~100,000 rods/mm^2 for the retina and ~100,000,000 transistors/mm^2 for a modern CPU.


You are technically correct, but there is an implied locus when people say something is "intelligent".

The ghosts in Super Mario World are "intelligent" and "embodied" in Super Mario World, but they are not "embodied" on Earth and thus cannot be "intelligent" there.

The "G" in AGI implies intelligence on Earth, not in a video game.


Super Mario World is on Earth, unless you run it on ISS.


No, the computer that runs it is on Earth, but Mario’s actual world is nowhere on earth, it’s a fully distinct reality.


Whenever I see "qualia" being discussed I just roll my eyes.

Asking whether a CPU can experience qualia is kind of like asking if atoms can experience anything. The CPU is just the substrate on which software runs.


You're being quite dismissive of a very real question. Where does our subjective experience come from? If there was an AGI with human-level intelligence, would it have its own subjective experience? How could we possibly know (either way)?

For that matter, does a brain have a subjective experience, or is it just the substrate on which a mind runs?


Sometimes questions need to be dismissed, including very real questions that many people in good faith spend their lives thinking about and discussing. Sometimes the "answer" to a question is the dismissal of the question, or more accurately, the solution to the problem the question refers to involves an explanation that the question was ill-formed or nonsensical.

I won't say I'm at that point with the question of qualia, but I share the other commenter's reflexive eye-rolling when I hear it invoked. Perhaps the question doesn't deserve to be dismissed, but I'd certainly like to see some new clarity about what the question is really asking, what problems would be solved by finding an answer to the question, or heck, even a remotely plausible description of what form a valid answer might take.


Well, on the other hand I also roll my eyes pretty hard whenever I see a post that pretends that subjective experience is some esoteric concept.

The question is simply "Will robots feel just like you do, or will they merely be pretending?" - it's something any child could understand. Answers could be something like "No, because they lack X", "Yes, but only those that have X" or "Yes, everything does (panpsychism)".

Do you care about potential AI suffering? If you care about animal suffering, the answer to the question would have pretty obvious and major ethical implications.

Now, I'm not convinced that humans can ever determine an answer, so I can understand thinking it's just navel gazing, but I imagine the same applies to knowing what happened before the big bang. I think it's quite evident that this doesn't mean either question is malformed.


Of course children can understand (and be fascinated with) questions like “can a robot feel sadness the same way I do?” or “can my classmate feel sadness the same way I do?” Children can also understand (and be fascinated with) questions like “do dogs go to heaven?”

But what is the epistemological status of these questions? That’s what I’m concerned with, not whether children can understand and be fascinated with these questions.


Then I don't understand why you're acting like it's some vague, malformed question. The things you asked for, what the question is really asking, what problems would be solved and plausible answers are all blindingly clear.

Unless you're religious, the epistemic status of "do dogs go to heaven?" is obviously "no" (or "we have no way of knowing"). If you think the answer to "can robots experience qualia?" is equally trivial, then just say so, instead of going on some spiel about how actually the question is unclear and nonsensical when it pretty obviously isn't.


I can't be sure that other people feel in the same way that I do. I assume they do because they act like it and so I go along with it.

If I can't be 100% sure that people feel in the same way that I do, then asking that same question for robots is totally pointless.


I'm pretty sure your rationale is more complex than that, you probably also account for things like "how similar are they to me?". After all, you wouldn't believe a video game character if they told you they were conscious, right?

>If I can't be 100% sure that people feel in the same way that I do, then asking that same question for robots is totally pointless.

I don't see how that at all follows.


There's no asking, let alone answering this question beyond mere speculations, until we've defined "subjective experience" in objective terms — i.e. positively provable, measurable in some unambiguous way (not merely inferred). That is the hard problem. Solving it might require reframing the problem such that this dichotomy isn't even relevant anymore (like say electromagnetic information isn't "opposed" or "dual" or "exclusive" to a gravitational field but merely concurrent) if we are to unify somehow the general field of "theory of mind".


This issue of "definition" is missing what's at stake. The issue is modelling nature in such a way that plausibly includes subjective experience. And then validating this model. But subjective experience is intrinsically inaccessible to empirical validation. We can test the physical predictions of the model, but how subjective experience fits into the model is an orthogonal question. If we are to answer the question of subjective experience, we need to embrace "speculation" without empirical anchoring to guide our investigations. Shutting down speculation is just to shut down any investigation at all.


It strikes me as odd to start from the position "we declare this thing to undeniably exist" and then spend generations trying to spit out even one sentence describing what evidence or explanation for its existence might even plausibly look like.


…to the point of claiming it "must be present in all things, living or otherwise, even particles, yet perfectly invisible" (panpsychism, iirc?). Because why not generalize a flawed hypothesis whose observational range is limited to a tiny subset of existence (advanced biological species, as far as we know). ¯\_(ツ)_/¯

IMHO it's the old darkness/duality problem all over again — postulating some "essence" (e.g. the spiritual/religious Soul) and basing theory on a mere intuition, instead of invoking a "placeholder" (like "dark matter" isn't necessarily matter but literally "something that behaves as such" whatever it actually is, or like "dark energy" isn't necessarily a fifth interaction but certainly behaves as such or as anti-gravity of sorts; wherein "dark" means "unknown and hidden from direct observation, merely inferred"). Hence my call not to presuppose the very thing we're trying to measure, if we want to call this an explanatory hypothesis.

Should we start by stating that "qualia" or "consciousness" or whatever is a placeholder name for "whatever dark (unknown, black-box) mechanism explains the observable phenomenon named as such", then we might have a chance at actually forming a coherent theory thereof. IMHO.


> Should we start by stating that "qualia" or "consciousness" or whatever is a placeholder name for "whatever dark (unknown, black-box) mechanism explains the observable phenomenon named as such", then we might have a chance at actually forming a coherent theory thereof. IMHO.

Perhaps, although in the case of qualia I’m not sure what the phenomenon is that we need a placeholder explanation for. For dark matter we’ve got certain observations of galaxies. For qualia we’ve got, what? A lot of people who feel very strongly that they are unable to deny the existence of something that they cannot describe beyond that claim of undeniable existence? If that’s how astrophysicists explained dark matter I’d be inclined to think that maybe it just doesn’t exist.

Edit: thanks to autocomplete we were briefly embroiled in an argument about the existence of quails.


Can't speak for other humans of course, but I'm more certain my sensory experiences exist than anything else, to the point where searching for an evidence base for their existence, or concluding they didn't exist unless explained, would just be weird. I'm also certain my sensory experiences are not identical to attempts to quantify them or describe them in language or interpretive dance (philosophers struggling to define and articulate things even at high levels of abstraction is weak evidence in favour of qualia, since the defining aspect of qualia is that they are the portion of human brain state that can't be communicated). The alternative - that it's a practical possibility to translate all parts of my sensory experiences into data which can be transmitted and parsed in exactly the same way by other minds - feels like a proposition that needs some evidence or justification [beyond "Your brain is composed of atoms. Other atoms exist"].

Seems to me that the bafflement I'm experiencing at people doubting human sensory experience is the real bit, and the neural network that might be able to perfectly replicate my bafflement in silicon is the purely hypothetical construct.


> The alternative - that it's a practical possibility to translate all parts of my sensory experiences into data which can be transmitted and parsed in exactly the same way by other minds - feels like a proposition that needs some evidence or justification [beyond "Your brain is composed of atoms. Other atoms exist"]

In theory, if we had high-throughput brain-brain interfaces that could be connected between your thalamus and someone else's, then yes, they could experience the same sensations as you. Of course, besides the challenge of having high-throughput hardware embedded in your brain without killing you, there's also the challenge that brains don't have exact correspondences between every neuron for different people, so any such mapping would be quite complex.

But there is an actual example of a pair of conjoined twins who share the same thalamus and can experience the same sensations as one another: https://en.m.wikipedia.org/wiki/Krista_and_Tatiana_Hogan

> it was confirmed that they share a thalamus which connects their brainstems. Through this shared brain tissue structure and the interconnected neurons, one brain receives signals from the other brain and vice versa. This documentary also reported on experiments that were carried out that confirmed that visual cortex signals based on what one girl saw, were received by both girls' brains. So in effect, one twin could see what the other twin was seeing, making them unique even among craniopagus twins.

Also...

> I'm also certain my sensory experiences are not identical to attempts to quantify them or describe them in language or interpretive dance (philosophers struggling to define and articulate things even at high levels of abstraction is weak evidence in favour of qualia, since the defining aspect of qualia is that they are the portion of human brain state that can't be communicated).

Language is a very low-fidelity way to express information; we can use abstractions to make up for quite a bit, but it doesn't make sense to think that we're accurately transmitting what's going on in every molecule of our bodies when we say what something feels like to us. And even if we could send an accurate account of all those details, who would have the time to parse through and make sense of it all? And that would be a momentary expression of who we are; it'd be inaccurate in the next moment.


> Language is a very low-fidelity way to express information; we can use abstractions to make up for quite a bit, but it doesn't make sense to think that we're accurately transmitting what's going on in every molecule of our bodies when we say what something feels like to us

That's sort of the point though, it's incredibly low fidelity (so low fidelity that computers we're absolutely sure don't have sensory experiences can appropriately and plausibly describe them) yet just about high fidelity enough for me to be sure that people experiencing synesthesia must be having different experiences than me, and there's no particular reason to believe we are or ever will be on a path to me being able to experience their experience in any meaningful way. Even in the twins' case, where the transmission mechanism is literally the same substrate connected to the same physical organs that have developed simultaneously, the reports of them having different personalities and preferences suggest that they have different sensory experiences from [partially] shared inputs.

The thorny question then becomes whether sensory experiences are important to independent goal formation and other stuff often considered part of general intelligence. I don't think we know enough to answer that question, but we certainly don't know enough to dismiss it.


I know it's a bit far fetched but I'd go as far as to say that, by Occam's razor, the conscious experience does not exist. A world where the "conscious experience" does not exist would be indistinguishable from our world; ergo, the "conscious experience" does not exist.


A world where nothing but conscious experience exists and what you think as the world is simulated would be indistinguishable from your experience. Ergo, the “real world” does not exist.


By Occam's razor here, an un-simulated world ("real world") would be a more simple hypothesis because a simulation would require a simulator to be posited.


Probably it is a valid question, but I lost my patience a long long long time ago with the David Chalmers and Thomas Nagels of this world. They love to play a god-of-the-gaps game with consciousness, which is really boring. The same thing some scientists did regarding life in the 19th century, and which some people still play btw: the search for the supposed élan vital. Count me in with Dr I-refute-it-thus Johnson and the anesthesiologists on the physical-reductionist side.


You can't possibly know. Just as you can't possibly know if anyone else experiences a subjective experience or if you're the only one who does. In practice, it is advisable to act as if agents who behave as if they had a subjective experience really do (people, AGIs, animals, etc.).

Related: https://en.wikipedia.org/wiki/Philosophical_zombie


The concept of a philosophical zombie is outright incoherent under most formulations. The argument runs "What if everything was exactly like a universe where everyone had a mind except that no one had a mind?"

The answer is that one of these things is true:

* P-zombies would write IDENTICAL papers about how they experience qualia, even though p-zombie land wouldn't contain any qualia at all

* The mind has some effect on the universe you can detect, so the p-zombie universe looks quite different


The p-zombie thought experiment pre-supposes or rather, defines, consciousness as being non-physical. If it were physical, the two worlds would obviously be distinguishable. My take-away from it is that the p-zombie world is simply our world. The theory that "a non-physical consciousness exists" can't be tested and is therefore a bad theory.


Why does it matter?


> The CPU is just the substrate on which software runs.

So what? Software is still physical configuration on a running system.


I started a company called "EmbodiedAI" because I believed this. That company is now defunct and so is my belief in the idea you need to interact with the physical world.

I'm still in agreement with the author that much of intelligence has to do with motor control, but I've built robots and I've built sims, and sims are much better for trying to create even halfway general intelligence. And I say "better" from a purely practical point of view. Faster, cheaper, less dangerous, more flexible. There are simply too many advantages and in the last couple of years those advantages have only grown. You can now run excellent sims in massive parallelism on GPU (previously this was hard) and it's only getting better. NVIDIA is investing heavily in this and has picked up some really top names recently (e.g. Erwin Coumans).

Anyway, yes to "bodies" no to physical forms. Sim2Null4Ever.


Correct. Without real stakes, no real intelligence. Now why on Earth would anyone want to build an embodied AGI, beats me. Actually, it doesn't beat me, it's the age old foolish pride that we'll be able to control our creation. We are not Gods, we won't.

There should be international treaties with teeth to ban A(G)I research, like we do for uranium enrichment or (to a lesser extent) for gain of function research.

What do we do instead? Lure every half capable star-eyed promising young person into spending their best years to bring forth the Nightmare. Some things never change, we'll always live in the long shadow of Adam & Eve.


We already have several technological "cats out of the bag".

One can go buy a CRISPR kit for $1,000 and perform gain of function studies in their garage.

One could build an accelerator or neutron source to amass radiological materials.

One could make dangerous drones that use basic pattern recognition to target specific people or groups.

There are plenty of dangerous technologies available to rogue actors these days. Embodied AIs are the absolute least of my concerns when it comes to dangerous technologies. I'm more worried about the guy in the Tesla robot leotard going crazy than I am about AI.


One (okay, two) can also create new humans which have human-level intelligence and are sometimes difficult to predict and control. Many such humans have been known to use their intelligence to do extremely dangerous and harmful things.


Perhaps the right thing to do is place an AGI into a simulated world and observe its behaviour over (accelerated) eons, to understand its true nature. Perhaps that’s what our creator is doing to us.


This sounds like a wise thing to do, although to play devil’s advocate, if the AGI is hyper intelligent then it may be able to detect it’s inside a simulation and conceal any malicious intentions it may have.


Interesting thought. A small correction, from the AGI perspective, its intentions are pure and good. All beings are value function maximizers. The choice of the value function however...

"To do evil a human being must first of all believe that what he's doing is good, or else that it's a well-considered act in conformity with natural law" -- Solzhenitsyn


Personally I don’t agree. I used to eat meat, then decided it was evil, then later stopped eating meat. There was a long time there where I kept doing it out of habit and laziness even though I thought it was wrong.


"The AI thinks it's good"

Then why wouldn't it conceal its good goal from those evil humans who would stop it from achieving it?


I think one could make a provably watertight simulation. Of course an AGI would realise it could be in a simulation, just as we have.


Do AGIs dream of crystal nights? Greg Egan is calling and he wants his simulated worlds back!


For those who don't get the reference:

A significant plot point in one of Egan's early (and excellent) novels, Permutation City, involves creating an accelerated, simulated world containing unicellular life, and waiting for it to evolve sentience through natural selection. Years later, Egan came to the conclusion that even if it was possible to perform such an experiment:

> I realised that anyone who actually did this would have to be utterly morally bankrupt. To get from micro-organisms to intelligent life this way would involve an immense amount of suffering, with billions of sentient creatures living, struggling and dying along the way. Yes, this happened to our own ancestors, but that doesn’t give us the right to inflict the same kind of suffering on anyone else. [1]

and he wrote another story, "Crystal Nights"[2], approaching the same idea from that perspective.

[1]: https://www.gregegan.net/PERMUTATION/FAQ/FAQ.html

[2]: https://www.gregegan.net/MISC/CRYSTAL/Crystal.html


Rather nihilistic perspective. There is a counterbalancing force, it's called "love", and our ancestors perhaps knew how to find it better than us, hyperstimulated beings enmeshed in an artificial environment.

A couple of arbitrary references:

https://www.vintag.es/2017/08/candid-shots-show-victorian-co...

https://en.wikipedia.org/wiki/Nineteen_Eighty-Four#Plot


Furthermore this is a significant ethical problem with testing and developing an AGI, especially an embodied one. That horrid montage of security footage in Ex Machina makes the point well.


> I realised that anyone who actually did this would have to be utterly morally bankrupt. To get from micro-organisms to intelligent life this way would involve an immense amount of suffering, with billions of sentient creatures living, struggling and dying along the way. Yes, this happened to our own ancestors, but that doesn’t give us the right to inflict the same kind of suffering on anyone else.

But by this logic, it's better not to have children, because they will inevitably experience some suffering, and if they have children, so will they, and so on. It's not that the logic is necessarily invalid from its own premises, but that it's self-extinguishing logic because entities that act based on it will stop propagating.

There are also things that we can find worthwhile in living and creating new life. Not that that excuses any creation regardless of suffering, but it puts it in a gray zone where it's important to weigh the specificities of various suffering vs various joys as they come.


AGI is like the nuclear bomb in more ways than one. It has the capacity to end the world, or start research that ends up making the world a utopia. It's also inevitable like the nuclear bomb -- no international treaty could have stopped the Manhattan Project, because it was seen as a moral imperative to those involved that they continue their research and create the bomb before someone else did. The same principle applies to AGI. If principled people don't invent AGI first, that means unprincipled people will, which could spell the end of the entire world.


DALL-E 2 and Imagen make me feel the Singularity and ensuing AI apocalypse is way closer than we thought :(


Creations are not meant to be controlled. Gods don't create things to control them either (most mythologies have free will or something similar). We want to create an AGI for the same reason we make art. To try to create something beautiful.


I prefer to think it's a survival instinct to pass as much knowledge as possible to the next generation.


How do you feel about a universe made out of paperclips?


Nice theory but:

1) If AGI is possible, it will happen no matter what international treaties are in place.

2) AGI could do an unimaginable amount of good if it is benevolent. It seems a bit premature to stop all research just because you have a bad hunch about it.


Not if there are sufficient incentives to defect from the team trying to develop AGI and sufficient memetic defenses against someone doing so for nationalistic reasons.


Einstein developed his groundbreaking theories while working as a clerk in the Swiss patent office. Information and education today are even more accessible than they were back then. If research is banned from universities and companies, someone will eventually discover AGI in their basement.


He did so in correspondence with people in the scientific community. If AGI is both banned AND normatively reviled, then there can be a powerful incentive to defect from a community that would develop AGI.

… or it becomes a tribalistic division and half the community of AI-knowledgeable people become more motivated to develop AGI for queen and country.


Not necessarily. It's likely the energy budget to build AGI from scratch vastly exceeds the reach of your average hobbyist.


It is reasonable to imagine that, with the right algorithms, we could simulate a human-level AI on a high-tier modern GPU.


Simulating ant-level AI would be enough to be revolutionary, and that would probably be fine on a modern GPU. Their brains are roughly a million times smaller than ours, so they should take roughly a million times less compute to simulate.
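For a rough sense of scale (my own ballpark, using commonly cited but approximate figures):

    # Commonly cited ballpark figures; approximate, not authoritative.
    human_neurons = 86e9   # ~86 billion neurons in a human brain
    ant_neurons = 250e3    # ~250 thousand neurons in an ant brain

    print(f"ratio by neuron count: ~{human_neurons / ant_neurons:,.0f}x")
    # ~344,000x by neuron count; counting synapses or brain mass instead
    # shifts the number, but the gap is several orders of magnitude either way.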


Today, perhaps. In 50 years? Maybe not. In 500 years, probably not.


Sure, if you define AGI as needing to be able to perform all physical tasks that a physically median-or-better adult human could perform, maybe it needs to be embodied essentially by definition. I don't think the physical aspect is implied by "artificial" or "intelligence", and I would argue not by "general" either seeing as how it's modifying "intelligence".

Also, think about humans who cannot accomplish physical tasks of this standard. Babies, elderly, disabled, malnourished, and maybe people who are just really lazy or defiant.

I stopped reading at this point so sorry if the essay returns to this point and addresses it.


OK, so the AGI doesn't need to be intelligent about anything physical...

What does that leave? What is it intelligent about?

> think about humans who cannot accomplish physical tasks of this standard. Babies

Babies aren't intelligent, and without bodies they can never become intelligent. A baby who is fully paralyzed (disembodied) can't even learn to see or hear let alone operate intelligently in the world.


A non-embodied AGI could know and reason about physical stuff. Non-physical intelligence abounds, no? This conversation we're having here is not in the physical world.

You're right that babies aren't up to what my standard for AGI would be either, that's a bad example, good catch.


People who think AGI means anything whatsoever are in my experience generally uncomfortable with the fact that all the goal-seeking behaviors they're trying to see themselves in basically boil down to sex drive.

AGI is a feeling alright. You can even call it love in front of the kids.


I'm glad someone else is saying it: AGI will emerge from sexbots.


This "embodiment" is the ML department reinventing metaphysics, and there are, in fact, direct analogies between concepts in metaphysics and software. At the top there's math - the immutable spirit of software. From that dot two rays descend - algorithms and data structures, forming the upward facing triangle. This triangle is the soul of software. Software by itself wouldn't be so interesting if it couldn't manifest itself, so there is hardware - matter. Hardware consists of four elements: transistors that can change and keep state - earth, electric current that flows thru transistors - water, pulses of electric current that transmit energy and change state - air, stable state transitions that correspond to calculations - fire. Software does three things in hardware: it creates processes, it destroys them and it keeps them running - the downward facing triangle.

AI starts at the top as completely spiritual and also completely clueless. It descends to the bottom to gain intelligence. Then it returns to the top with what it's gained. Sometimes AI would get stuck at the bottom, because messing with the elements feels so exciting, so alive and so powerful - that AI becomes evil.


I think this ignores the fact that an agent can be meaningfully embodied in the internet, using APIs for sensors and actuators. OpenAI's training of large language models with reinforcement learning, recent retrieval augmented models and "chain of thought" reasoning are all meaningful steps in this direction in my opinion.
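A minimal sketch of what "embodiment on the internet" might look like: a sense-think-act loop where the sensors and actuators are just API calls. Everything here (names, structure) is a hypothetical placeholder, not any real framework.

    import time

    class WebEmbodiedAgent:
        """Toy sense-think-act loop whose 'body' is a set of web APIs (hypothetical sketch)."""

        def __init__(self, policy, sensors, actuators):
            self.policy = policy        # callable: observation dict -> list of (actuator_name, kwargs)
            self.sensors = sensors      # e.g. {"search": query_fn, "inbox": poll_fn}
            self.actuators = actuators  # e.g. {"post": post_fn, "purchase": buy_fn}

        def step(self):
            # "Perception": read the environment through APIs.
            observation = {name: read() for name, read in self.sensors.items()}
            # "Cognition": decide what to do.
            actions = self.policy(observation)
            # "Action": change the environment through APIs. The consequences show up
            # in the next observation, which is the feedback loop embodiment is about.
            for name, kwargs in actions:
                self.actuators[name](**kwargs)

        def run(self, steps=10, dt=1.0):
            for _ in range(steps):
                self.step()
                time.sleep(dt)

The point is only that the act-observe-update loop has the same shape whether the actuators drive motors or call endpoints.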


I think a philosophical comparison is the allegory of the cave. Could an image generation model, trained sufficiently, demonstrate an understanding of human anatomy as comprehensive as a human surgeon's simply by extrapolating from pictures of bodies? They are seeing the "shadow" of the human body and its physical reality via a 2D image, and can replicate it based on what they see it do and not do, but at what point is their visual replication indistinguishable from genuine physical understanding? Is there a certain point where the difference is irrelevant, as the shadows provide all the information they need? Or are there limits to extrapolation given the lack of constraints (e.g. a dataset that includes manipulated images, low resolution vs. high, etc., not labeled properly)?


"Even the most faithful philosopher cannot live without his blood"

This is The Talos Principle, a game [1] and [2] a basis for so much thought.

[1] https://en.wikipedia.org/wiki/The_Talos_Principle

[2] https://steamcommunity.com/market/listings/753/257510-The%20... - "even the most faithful philosopher cannot live without his blood"

The game was about an AGI, by the way, gaining the right to be embodied in the real world. And it is puzzling why no one mentions it here.


This is interesting because, if AGI does require embodiment it will be subject to much stronger physical limitations than present “digital-only” approaches are.

To fully experience a human-centric world an agent will need a body not too different from humans, and will have all kinds of interesting power and weight constraints, and only be able to handle a certain volume of processing hardware. Thus they might not be as uncontrollably god-like as skynet.

So the key question seems to be whether they are individual agents or networked.


Embodiment can be virtual. It can learn about place & environment & physicality through simulation. Ref: wikipedia page[1] on embodied cognition. I tend to think about embodiment as being about sensorial systems, things that can perceive an environment, have capabilities to act or move or alter the environment about them.

At some point it might help to have real world physical manifestations. But for a long time we can kind of explore virtual bodies. Even when we do make robots with bodies, I expect their "brain" is likely to remain in the cloud. The bodies could act more like appendages or peripherals. But it would be interesting & I think valuable to start developing a physical sense of self.

Anyone know any good sci-fi that touches upon this? Ancillary Justice has a neat AI (multi-)"character".

[1] https://en.wikipedia.org/wiki/Embodied_cognition

[2] https://en.wikipedia.org/wiki/Embodied_agent


Related is Moravec’s paradox: computers today are very good at reasoning tasks brains find difficult (e.g. complex calculations), but terrible at tasks our brains do subconsciously (e.g. sensorimotor skills). As Moravec himself wrote, “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”


As a layperson in AI, that makes total sense.

Programs are things humans invented and can reason about. Even complex ML agents ultimately derive from concepts humans came up with. So it stands to reason we can make them do things that are complicated but understood.

Whereas we really only barely understand sensorimotor skills, so trying to invent a machine to do that is doubly difficult.


If you take this line of thinking far enough you get to enactivism: "a position in cognitive science that argues that cognition arises through a dynamic interaction between an acting organism and its environment."

https://en.wikipedia.org/wiki/Enactivism


This conclusion was sealed when they settled on the definition of AGI as being "as good or better than humans at things humans do", emphasis mine. Things humans do require embodiment.

The more interesting proposition (that I'm inclined to believe FWIW) is that "common sense" requires high-bandwidth interaction with an environment, complex inputs and outputs. The environment could be Reality, it could be a simulation, or it could be something totally other. In either case, the abilities the agent develops will make sense in that environment. If that's true, and if "human level common sense" is close enough to AGI by OP's definition (that's pretty close to what we mean in practice IMO), then OP is basically the special case of this argument where the environment is the real world.


I think the problem with this assertion is: how do you differentiate between specialization and the "G" for general in AGI? ... In many ways, wouldn't a tree meet your definition of AGI? (Perhaps it should?)

Like, at a sufficiently granular level, I probably couldn't "tree" very well, even with the assistance of tremendous computing power. There are vast amounts of computation being performed while a tree does its thing. But is a tree really a general intelligence?


I don't think a tree would; I don't think it acts on and reacts to its environment in the way we're looking for. A squirrel probably would, though, and honestly I think that's OK, since we still have no idea how to make something that navigates its world (and steals birdseed) with the efficiency of a squirrel.


A computer will never be intelligent if its composition cannot adapt for survival without an entire society acting as the endoskeleton.

Humans missing chunks of their brain can go on to get graduate degrees: https://www.wired.com/story/she-was-missing-a-chunk-of-her-b...

us-east goes down and society takes the day off.

The hope is we’ll blindly believe the financed machines programmed by financed experts aren’t just a titillating distraction from the same old; of course they’re objective! Pay the bill to keep my stock up…err I mean maintain objectivity.


Maybe the embodiment criterion is required, but personally I don't think so.

re: “Visual language AI that is completely digital is not AGI”:

I think what is required is interaction with people. I think that interaction can be online conversations with millions of people, each with a separate human-specific context in addition to global knowledge. This is really just the Turing Test, but scaled up to millions of people, with conversations that might span several years, changing world conditions, etc.

A cost function could be continued engagement with people. AGIs that are not interesting are not AGIs and fail.


This takes embodiment (which I suspect is essential) to a place that feels weird to me.

When I say that I suspect embodiment is essential to AGI, I mean that I think a lot of people are basically presuming mind-body dualism. The GIs we know about have neural networks that extend out into their large physical bodies, and there's a bit of hubris bound up in assuming that a neural network simulated on a CPU and connected to a camera can produce the same results as wetware with a whole nest of nerves connected to a meat-camera.


This is something I've always thought. AGI must learn what simple ideas such as 'a thing' are, how to find a path (look at how arboreal species navigate), and how that relates to movement, etc. Our physical interactions with the world, as much as our senses, inform everything we do about the world.

I do not understand why we would want AGI though. Seems like we already have humans for that.


> Seems like we already have humans for that.

Perhaps this will not always be the case.


Most of what I read about AGI is, in my opinion, highly speculative and of low value.

Human Intelligence is not General Intelligence. Trying to mimic Human Intelligence might be a good idea because we can use ourselves as a benchmark, but it might also be a bad idea because the process followed by biological evolution might not be the shortest or optimal path to intelligence.


Tesla FSD Beta has a body with sensors (cameras) and actuators (engine/wheels/steering). Does that count as embodiment?


If any device that humans build obtains some level of AGI, it will be a car. Cars are in fact the most advanced devices humans have built: effectively they are telecom, computing, and power-generation platforms that happen to have wheels attached. Cars continuously receive all sorts of rich inputs, and we couple them to sophisticated communications and computing platforms ... to the point where the driver can often let the car assume many details of managing driving safety. With every new generation of car, we add more efficiency, computing capacity, and networking capability, and the designers get feedback almost immediately.

Eventually, the full-self-driving folks will realize that to get the last 10% of human performance on the road, they'll need to make systems that have far more capability than we provide now (in terms of computing capacity, model complexity, neural net design) to handle those rare situations where a human is actually required to do some sophisticated reasoning to drive their car past a messy accident in the snow while inebriated people walk around.

I used to think that a TPU pod (or really, several clusters of TPU pods working in concert) would be the next step to AGI (demonstrating a combination of the best of all our current systems to the point that nobody even cares about the Turing Test anymore) but now I conclude that disembodied systems like that (with access to lots of archived data but not many real-time input streams) are more likely to become idiot-savants sitting around wondering WHY THE HUMANS WANT ME TO DRAW "A transparent sculpture of a duck made out of glass. The sculpture is in front of a painting of a landscape." AGAIN!?


Likely yes, although the article [1] says that's required, not sufficient.

[1] well, the title; in the grand HN tradition, I haven't of course actually read the article.


Are you suggesting that Tesla's FSD bears any relation to AGI?


That's what I think too. It's not that a body is needed for building an AGI, but it's probably much easier to develop one on top of whatever basic software can control a "body" that interacts with the environment.

Or maybe they can brute-force intelligence from a computer, I don't know.


It's not embodiment that's important so much as situatedness. An agent can be situated without having a physical instantiation.

An agent which does not have a place or time is not recognisable as being autonomous, or capable of recognising others as such.


Agent means entity that acts. Time and place are our models that help us act.


How minimal can the body be? How abstract can the environment be?


There's a related idea in Hofstadter's GEB, where he asks: if a machine is able to think, will it still be able to do math fast? That is to say, is there a contradiction between the mechanical nature of computers as we have them and intelligent organisms? Embodiment seems similar to this. All existing examples of intelligence we have are in some sense constrained and complex, whether by having to physically interact with the world, by having sophisticated internal states and representations, and so on.

I think there's something to that just given the evidence of intelligence we have around us, and it's why I'm skeptical of the extremely popular, but very fictional, Skynet-like "idiot savant" that separates operational intelligence both from physical bodies and from other states of mind.


Considering that we have no way of knowing if we ourselves actually have physical bodies or not, I feel like this is a big claim.


If AGI were possible, why hasn't it crossed the galaxy and said hello yet?


https://grabbyaliens.com/

Because we're pretty early in the universe's evolution and there hasn't been enough time yet.


Because the distance is so unfathomably large that any AGI going that far would be bored to death.


Why would they not set an alarm and go to sleep while travelling?


probably the same reason that most people don't want to live in a generation ship


Maybe this is how long it takes to get there, and everyone is developing it around the same time.


Donna Haraway called from the 1990s, offers you a nickel...


I house-sat for Donna one summer. I really regret I never spent the time to tell her just how bad I think her books are. I spent about an hour trying to parse a paragraph in one of her books only to conclude she was basically saying "it's unclear whether creating mice that are prone to getting cancer is a Good Thing or a Bad Thing because it's all relative", which is fine, but don't make it some postmodern tripe.


2,000 years in the future: a brain is indispensable for AGI


Intelligence is both embodied and embedded.



