
> Transparency is a consequence of how light interacts with objects. There is no "transparent gold". And being transparent is not something we can program gold to do.

Your remark made me research this, and apparently transparent gold exists, we can create it, and you can buy it [1]. The trick appears to be making the gold thin enough.

> Line up everything in the known universe that is conscious and you will find that it engages in a highly specific biochemical activity that requires a whole set of highly specific chemical interactions.

Line up everything in the known universe that engages in this highly specific biochemical activity, and you'll find a whole lot of things that most people wouldn't call "conscious". So the biochemical activity can't be enough on its own to create "consciousness". If you compare those "unconscious" organisms with "conscious" ones, you'll find that the "conscious" organisms have a large number of cells with this biochemical activity, arranged in an intricate network structure.

Now why do you think it's not this structure that creates "consciousness"? Why do the details of the biochemical activity matter beyond being a particular physical realization of some differential equations? Why would a machine made out of silicon, when it precisely emulates the behavior of that biochemical activity, be unable to replicate its large-scale properties (including "consciousness")?

[1] http://www.reynardcorp.com/products/optical-components/coati...



Sure, I meant we cannot impart the property of transparency to a gold bar by "programming" it. Programming isn't a magic spell that can rewrite the causal interactions that take place in the universe.

> and you'll find a whole lot of things that most people wouldn't call "conscious"

I disagree. I cannot think of anything with a nervous system, engaging in the particular neurochemical reaction I'm talking about, that is not conscious. No example comes to mind.

I don't mean any old reaction, such as the digestion of wheat. I mean the specific kind that defines neurological systems.

> why do the details of the biochemical activity matter beyond being a particular physical realization of some differential equations

Because the "details" we're talking about are the causal effects.

> precisely emulates the behavior of that biochemical activity

For the same reason you cannot program a gold bar into transparency, or program a table into an elephant.

Or to put it another way, a program which "precisely emulates the behavior of gold" doesn't turn a machine into gold.

A program which "precisely emulates the behavior of digestion" does not digest pizza.

A program which "precisely emulates the behavior of" consciousness isn't conscious.

By "emulation" you mean "imitation in form", since it is only the FORMS the program and the nervous system share (or gold, whatever). I.e., they can both be described, at an extreme level of abstraction, in the same way. But the universe isn't abstract. It isn't a form.

A program simulating a gold bar may be an instance of some equation that a gold bar is -- but possesses none of the properties that make it gold.

A machine which imitates some highly abstract equational description of thought is as close to thinking as a bird is to an aeroplane. The bird's heart will burn as much jet fuel as your machine will think.

The universe isn't taking place in the abstract; it's taking place in the concrete. You have no soul. Your mind is not a program. Your consciousness is not ideal. It isn't a number, a pattern, a structure, an equation. These are descriptions. Your mind is something your body is doing, and so it is for every known thing that has a mind.

To speak as if the mind could be abstracted into a description that may be realized in silicon is to believe in an almost magical power of electricity.


> Or to put it another way, a program which "precisely emulates the behavior of gold" doesn't turn a machine into gold.

> A program which "precisely emulates the behavior of digestion" does not digest pizza.

> A program which "precisely emulates the behavior of" consciousness isn't conscious.

If my goal is probing the behavior of gold, I don't care if it's real or emulated.

If my goal is having a list of chemicals pizza turns into, I don't care if the digestion is real or emulated.

If my goal is having a conversation, I don't care if they're a p-zombie.

Tell me if I'm missing anything here:

Either a computer can perfectly imitate a neuron's behavior, or it can't.

If it can, then a giant computer should be able to make either a conscious being or a P-zombie. (Do you make a distinction there? If you do, can you justify it?)

If you say it can't, then I accuse you of magical thinking. There is no evidence of physical interactions that cannot be emulated.

> To speak as if the mind could be abstracted into a description that may be realized in silicon is to believe in an almost magical power of electricity.

Here's the thing. The idea of emulating a person does not depend on abstracting the mind. The 'proof of concept' is just building a computer so big that you emulate an entire nervous system, fully intact, every single atom.


A simulation of digestion does not digest pizza.

> Either a computer can perfectly imitate digestion, or it can't.

Yeah, it can't.

What you seem to think is that "imitation in form" means being the same as. A photograph of me is an imitation of the form of me; it isn't me. It doesn't have a pulse.

The universe is not an idealised abstract world of Forms, in which anything with the same abstract pattern is equivalent.

Abstract patterns exist only from our point of view. They are our way of describing things.

To say a computer imitates anything is only to say that when we look at it, we can use it to inform ourselves. Simulating digestion is a means of learning about digestion because we design it to correlate across systems.

Without a point of view to impose correspondence, the two systems are alike in no respect. "Being a simulation of" is like Beauty. It's not a process in the universe; it's not a property of objects.

A piece of silicon with an oscillating electric field across it remains the same regardless of whether it is running the "digestion simulation" or the "consciousness simulation". It is only a matter of electrical frequencies and amplitudes.

This acquires meaning to us when we look at them. In the case of digestion, one 01010 pattern means "too acidic"; in the case of a lifelike chat bot, "01010" means "hi!".
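To make the interpretation point concrete, here's a toy sketch in Python (my own illustration; the bit pattern and both "readings" are invented): the very same raw bits yield an acidity number or a greeting depending entirely on how we choose to decode them.

```python
# One arbitrary 16-bit pattern, sitting in memory.
raw = 0b0110100001101001

# Interpretation 1: a sensor value in a "digestion simulation".
acidity_reading = raw                              # the integer 26729

# Interpretation 2: two ASCII characters in a "chat simulation".
greeting = raw.to_bytes(2, "big").decode("ascii")  # the string "hi"

# The bits never changed; only our point of view did.
print(acidity_reading, greeting)
```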

The cinema screen is not a train. It's a movie. It's just light hitting a canvas. It won't hurt you. 01010 isn't acid. It isn't thought. It's just a voltage fluctuation; it can't talk to you.


> The cinema screen is not a train. It's a movie.

Ooh, this is a good example!

When I watch a movie, I don't care if the camera was pointed at real events, or if it was CGI.

I care about the actual movie experience. It's the same either way.

When I have a telephone call with someone, I don't care if the microphone is pointed at flesh. I care about the conversation.

> What you seem to think is that "imitation in form" means being the same as. A photograph of me is an imitation of the form of me; it isn't me. It doesn't have a pulse.

Actually my thought process is not that at all. My goal is to create an identical photograph with my computer. Or more specifically, 60 of them per second, plus audio.

Do you think that's possible or impossible?

If it's possible, then I have something that claims to be real, that claims to feel, that responds as intelligently as a human in every way... That's good enough for a lot of people to call it conscious. It's definitely good enough to drive a car, or translate a book, or fill out a captcha, or do whatever "AGI" is supposed to do.

If it's not possible, then why? If everything is a purely physical process, can't a computer use math to figure out where atoms would go, and where photons would bounce?


> When I watch a movie, I don't care if the camera was pointed at real events, or if it was CGI.

We call something conscious because it actually thinks, not because it can fool human beings into believing it does.

The foolishness of a human audience is not the measure of all things. Whether Neo exists or not matters.

We have moral obligations to thinking things; we have none to non-thinking things, and none to fictional things.


What evidence do you have that you are conscious?

What if your neurons are running a biological program that merely emulates thinking, and fools humans into believing that it thinks?

But all of that is philosophy, right?

In terms of external behavior, you're admitting this machine could match a human?

Because that's enough for AGI. You don't need "genuine consciousness".


They are not "running a biological program". Acid melting metal isn't a "program"; it's a chemical reaction.

Consciousness is what I am doing. That's my definition of the term. It's incoherent to ask "what's my evidence for it".

> In terms of external behavior, you're admitting this machine could match a human?

No. I am only saying that a stupid human being, using only stupid human faculties of speaking to something, might be fooled, by their stupidity, into thinking they were speaking to something as stupid as they are.

A dog which observes a toy dog walking around may bark at it as if it were another dog. But that toy dog isn't another dog: it doesn't think, it has no emotions, experiences no pain, has no goals, no interests, no skills, etc.

No piece of silicon will ever amount to more than a toy dog. Neuroscientists will not be fooled by it. Increasingly sophisticated tests of cognition and goal-directed behaviour won't be fooled by it.

People on the street might be. But their foolishness is no guide to the temperature of the sun, the cause of the tides, nor whether the toy dog thinks.


So you think no silicon computer could ever correctly calculate the way atoms move, and photons bounce?

Because that's the only way for it to fail cognition tests! It has to calculate different photons than you'd get from a real body.

And I feel like the idea that some physical processes just can't be simulated is magical thinking.


Does simulating a gold bar turn the simulation into gold?

I feel I'm going round in circles here. A description of a system, even if that description is "active" (i.e., it evolves over time), is not the same as the system.

I can take a photograph of a mountain. My hand has no mountain in it, only a photograph.

I can describe any system with mathematics. I can give every atom in your body a number for all of its properties, and a location with respect to the centre of the earth and then append all those numbers together to form a massive number X.

Over time, as your position and properties change, the number describing you changes: X2 differs from X.

If I start by writing X down, then X2, then X3, etc. then the ink on the page describes you as a changing system.

The ink isn't you. It isn't alive. It's ink.

That is all a machine is. We have replaced ink with electricity and paper with silicon, but it is only correlated with a changing number.

We perform the translation from this number to something else in the world. The person who reads the ink on the paper is the one who knows what X becoming X2 means as it pertains to you. The paper and ink do not mean anything alone; they are not you and share none of your properties.
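The X, X2, X3 picture can be sketched in a few lines of Python (a toy illustration of my own; the "particles" and encoding scheme are invented): the state of a system is appended into one big number, and stepping the system just produces the next number.

```python
# A toy "system": the positions of three particles.
state = [3, 1, 4]

def encode(particles):
    # Append each position (two digits) into one big number X.
    return int("".join(f"{p:02d}" for p in particles))

def step(particles):
    # The system evolves: each particle drifts one unit.
    return [p + 1 for p in particles]

X = encode(state)         # 30104 -- a description of the system
X2 = encode(step(state))  # 40205 -- a description of its next state
```

The sequence X, X2, X3... tracks the particles, but the numbers are the ink: nothing in them moves.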

When a chat bot says "hi" it is just a change to an LCD screen caused by an electrical fluctuation caused by a prior electrical fluctuation. It is just a new line of ink on the page.

There is nothing thinking, saying, speaking, or understanding "hi". It is just a change to some liquid crystal which fools you as you read it. YOU impart meaning on it when you look at it. It isn't a person, it's just ink.

The most ambitious we can be about machines is that they fool people into thinking they are having a conversation with someone. I.e., that paper shreds with ink phrases on them tumble out of a folder in just the right order, as if a conversation were happening. But it's a tumbling of electricity; there is no conversation.


> If I start by writing X down, then X2, then X3, etc. then the ink on the page describes you as a changing system.

> The ink isn't you. It isn't alive. It's ink.

> That is all a machine is. We have replaced ink with electricity and paper with silicon, but it is only correlated with a changing number.

Okay, good. Now here is the important part.

I am not making the claim that this machine is a real person.

I am making the claim that if you video-call the machine, it will produce lights and sounds that are identical to video calling a real person.

If the simulation is accurate, isn't that a straightforward consequence?

You keep saying that the machine won't be real AND that it will fail sophisticated cognition tests.

Those are not the same issue.

I'm not arguing that it's real. I'm arguing that it will pass every cognition test you can throw at it.

Is there a flaw in my very short chain of logic?


There are tests to distinguish gold from silicon. And so there will be for however your machine works. I cannot say now what those tests will be, any more than you can make such a machine.

Something not being the thing it is simulating entails there will be tests to distinguish it.

The tests cannot be "cognitive" because there is no cognition going on.

A simulation of an animal does not live in the world, experience and understand it -- nor render the world intelligible to itself and others. A simulation of an animal is just a current flowing around a wire.

I would guess the tests will always, actually, be quite simple -- though perhaps not expressible as a single conversation, because, somewhat like a chess engine, it has every infinite sequence of conversations available to it.

Perhaps, in a novel social situation adapting to evolving minds as they respond to each other. Suppose this is recorded today: https://www.youtube.com/watch?v=99IitWYZ0aU#t=60s

and your machine is sat in the audience.

The question is then: "What does Kenneth think about the audience's laughter?"


> There are tests to distinguish gold from silicon. And so there will be for however your machine works.

Unless something was coded wrong, gold inside the simulation will give the same numbers as gold outside, so you can't tell which is which.

Let me be clear here. You don't touch the machine directly, you get the numbers the simulation spits out. The 'proof of concept' interaction is that you are on a video call with the atoms and photons inside the simulation. The video looks like a person, and your job is finding any difference between it and a video call with a real person.

> Something not being the thing it is simulating entails there will be tests to distinguish it.

That implies that the atoms and photons are being simulated incorrectly. Are you saying it's not possible to have an accurate simulation of atoms and photons?

> A simulation of an animal does not live in the world

The simulated atoms interact with a simulated cage, so they should give the same results despite being mere electricity.

> I would guess the tests will always, actually, be quite simple -- perhaps not expressible as a single conversation -- because somehow like a chess game it has every infinite sequence of conversations available to it.

> Perhaps, in a novel social situation adapting to evolving minds as they respond to each other.

> The question is then: "What does Kenneth think about the audience's laughter?"

The simulated atoms will change simulated orientation the same way real atoms would. If you look for physical evidence of learning you'll find the same molecules moving the same way. Why would the simulation fail this test?


> You don't touch the machine directly, you get the numbers the simulation spits out.

So you're restricting the use of this machine to a situation designed to fool human beings?

The criterion for general intelligence is actual intelligence, not putting it in ideal conditions and seeing if people are fooled.

> Why would the simulation fail this test?

It seems like you think this machine is going to simulate the entire universe, evolve its model of the universe and therefore perfectly predict its next state -- and on the basis of this prediction provide an answer.

Sure, perhaps I will concede: if you are able to simulate the universe in infinite detail you might be able to perfectly predict its next state.

This isn't the ambition of anyone, however. And it has nothing to do with AI. If the precondition of AI is "a perfect simulation of everything", then that's close enough -- for me -- to call it impossible.

Even so, in this sense, we do not "simulate gold". What it means to "simulate gold" is to take some small number of aspects, model them with equations, and run those equations.

A video game that allowed you to perform mass spectrometry on any possible compound, along with everything else you could possibly do to everything, would, I think, be a video game which requires a whole other universe to exist.

And so, it seems your argument is that "when scientists can model the universe in infinite detail so as to perfectly predict its next state, we will have AI!" (and, as far as the quantum state of brains goes, it's close enough to infinity to model all of that).

OK, sure. I don't know how scientists are going to build a universe simulator without a "theory of everything", or how, even with such a theory, a machine could predict the next state of a large system in sub-infinite time. Processing merely some particle collisions at the LHC takes months.

I cannot see how a machine is going to actually track the evolving entangled state of an audience of human beings.

"Infinitely precise information about the universe", I think, actually requires you to be the universe. That's speculation, perhaps, but I would be surprised if the universe could be described in less volume than it occupies, and if an infinity of precision (i.e., perfect parity in every simulated result) were actually possible without the target system.

A machine can only simulate what is known. The actual behaviour of the universe is much larger than what is known. As soon as we discover something new, then we have a test to prove the machine is a machine.


> The criterion for general intelligence is actual intelligence, not putting it in ideal conditions and seeing if people are fooled.

Having to do it across a wire is "ideal conditions"?

The initial comment was about "artificial general intelligence". Every single one of those problems can be done across a wire.

Every word that has ever been spoken, every gesture that has ever been made, you can do across a wire.

It's good enough to solve any practical problem in the world. It just won't be "real".

> It seems like you think this machine is going to simulate the entire universe, evolve its model of the universe and therefore perfectly predict its next state -- and on the basis of this prediction provide an answer.

No, it's going to simulate a tiny cubicle with a person inside.

I guess you could call it a simulated universe, but the universe is only two cubic meters.

> I cannot see how a machine is going to actually track the evolving entangled state of an audience of human beings.

Put cameras in the seats in the theater. One copy of the feed goes to a real person, the other goes to the machine. Both can track the evolving state of the audience fine. Neither one should be expected to perfectly simulate the rest of the audience.

> I would be surprised if the universe could be described in less volume than it occupies.

Don't worry, I don't expect the machine to be smaller than two cubic meters!

> A machine can only simulate what is known. The actual behaviour of the universe is much larger than what is known. As soon as we discover something new, then we have a test to prove the machine is a machine.

That's fair. So version 1.0 will have slightly-wrong physics. Do you think that will necessarily make the simulation go awry? Do you think we'll never know enough about physics to simulate a small box with a person in it?


> Do you think we'll never know enough about physics to simulate a small box with a person in it?

In the sense you mean simulation -- i.e., describing some system in all required detail -- we can barely simulate a few atoms, let alone a room with a human being in it.

I'm not sure this is even a question of knowing the physics. The problem is that even an atom has an infinite density of "descriptive information"; i.e., in order to describe it in toto, we would be calculating forever.

This is not what anyone in AI is even trying to do, by the way. This isn't machine learning. This isn't AI.

I'm not convinced simulation at this depth will ever be achieved; I cannot imagine it could ever be performant. Every single causal interaction taking place over a second is an entire universe in itself. To have this one second described in simulation is a vast undertaking, let alone a conversation.

Maybe I would agree that this system would be "good enough": if it could predict an appropriate response by simulating a target human being to this depth, all the way down to how dopamine bonds to receptors in the frontal lobe, etc., then sure, I could see that it would be close enough.

However, this isn't what anyone means when we say something is "simulated". They mean that a single aspect alone is idealised into a single equation, treated under perfect conditions without any other factor being relevant, and then a calculation involving this equation is run.

People in AI are not even considering animal consciousness as being a relevant thing to simulate (even though that's what consciousness is). They think it is just a matter of some idealized formal structure.

If they realised that it would require an electronic system to calculate every descriptive quantity of every particle of some animal, computational general-AI research projects would be binned for the next millennium at least.

In the case of AI, no one is trying to "simulate a human being" in the sense you describe. They are trying to find an extremely simplified, highly idealized equation to describe thinking.

They are trying to model intelligence as if the salient features of animal consciousness were not biological but equational. "Good Bye" follows "Hello" because insert program...

No, "Good Bye" follows "Hello" because people who speak English have lived a life of speaking it, in which experiences have been acquired in response to the world. I.e., their brains have developed under sociolinguistic conditions: with light and sound bouncing off their bodies and the bodies of those around them, such that their neurological structure has evolved to causally associate "hello" with a kind of social circumstance, and "goodbye" likewise.

There is nothing apart from this connected social-neurological system that constitutes why "goodbye" follows "hello". That is how it comes to be. Any rule or system which appeals to an equation that isn't modelling this entire process to its full depth is just "accidentally correlated" with English -- and will be trivially easy to expose.

And so on for every aspect of consciousness.


>> and you'll find a whole lot of things that most people wouldn't call "conscious"

> I disagree. I cannot think of anything with a nervous system, engaging in the particular neurochemical reaction I'm talking about, that is not conscious. No example comes to mind.

Is a jellyfish conscious? Does it have the particular neurochemical reaction you are talking about?

> A machine which imitates some highly abstract equational description of thought is as close to thinking as a bird is to an aeroplane. The bird's heart will burn as much jet fuel as your machine will think.

This is actually a good analogy for our disagreement. Your definition of "thought" seems to inherently depend on the implementing substrate; and if it doesn't burn jet fuel, a bird doesn't really "fly".

But for me the substrate is irrelevant; I don't care whether a machine "really thinks", so long as it can solve any problem which I might have to "think" about otherwise.


> I don't care whether a machine "really thinks", so long as it can solve any problem which I might have to "think" about otherwise.

OK, well then your calculator satisfies your definition of "thinking".

I'm concerned to know whether a machine is doing what my dog is doing, or what I am. And mostly, when people become hysterical or TED-talky (which is the same thing most of the time) about AI, they are presenting an "I, Robot" future where androids dream of electric sheep.

> and if it doesn't burn jet fuel, a bird doesn't really "fly"

When I think "I'd like my pen", and subsequently my arm moves to get my pen, my thinking is causally connected to my arm moving. My arm moving is something chemical that my muscles do; in order to be connected at all with my thinking, my thinking has to be something broadly chemical too.

The plane doesn't move air out of the way because it's flying. It does that because it's burning fuel (etc.). "Flying", as a description of what the bird and the aeroplane are both doing, isn't actually any physical process at all. It is a pattern they both very abstractly follow, one that we have invented. In this sense nothing in the universe actually flies: the bird does its thing, the aeroplane does its thing, and from our point of view they are both abstractly similar.

It's our point of view which makes them similar, though. The airplane isn't distressed by burning too much fuel. The bird is.





