The Singularity Summit 2007: AI and the Future of Humanity - Sept 8 & 9 (singinst.org)
8 points by jey on Aug 27, 2007 | 18 comments




I went last year. Lots of fun. Cory Doctorow was probably the best speaker. I hadn't read BoingBoing much before that, and now I'm a rabid fan.

I think the discussion with the Luddites was fairly useless, though it shows a fair amount of due diligence: dissenting voices were given a podium far larger than they "deserve" given the popularity of their opinions. I did enjoy that the Luddite gave his talk via teleconference.

The format last year was pretty bad at times: the open-mic questions wasted a lot of time, written questions were submitted but never used, and there were no breakout sessions.

Also, the moderator, Peter Thiel, is as boring as watching paint dry, and didn't actually do any reasonable moderation of the discussion.


Prediction: Strong AI will not exist until human consciousnesses can be copied to computers.

http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.ht...


Searle is old hat. He's just playing word games. He assumes that "really understanding" something means something deep that he has left unspecified. The beautiful irony of his argument is that Searle's brain itself is a freakin' Chinese Room, and this is basically a Chinese Room claiming that it itself cannot be sentient! :-)

What's so special about human consciousness, anyway? Or at least please provide some sort of rationale for your claim.

Here's a kinda long talk that explains why Strong AI is feasible (skip over all the introductions at the beginning): http://ia310111.us.archive.org/2/items/FutureSalon_02_2006/ Same talk in Google Video format (lower quality): http://video.google.com/videoplay?docid=-821191370462819511


I probably shouldn't have posted that link, because it doesn't provide any evidence for my claim; it should be seen as something separate for discussion. And I agree with your refutation of Searle's claims... no one said anything when I posted that paper a week ago: http://news.ycombinator.com/item?id=43996

My belief that consciousness uploads will come before AI doesn't have much of a basis (or any basis) in fact. My intuition tells me that reverse engineering the human brain is easier than making a recursively self-improving AI.

My claim comes from the atheist's fear of my own mortality: I have to cling to my faith. I also like telling myself that my ancient, infinitely dimensional consciousness is choosing to live out a life in a computer simulation as a successful entrepreneur living through the peak of American civilization.

So there you go. My claim is completely irrational, which means you can't refute it! :)

I'm going to watch that video now.


Oh ok. :) But as a thought experiment, let's warp back in time 150 years. Some random guy you're talking to at the local pub claims that within 50 years, man will build a heavier-than-air flying machine. Would you intuitively predict that this is going to happen? Or would your intuition suggest that we'd have to wait until we can build a mechanical replica of a bird? What if someone said that within 150 years you'd have a little machine you could hold in your hand that could bring up all of the world's information, and let you talk to anyone on the planet in real time?

Human intuitions are great for hunting, gathering berries, navigating social situations, and other situations present in the ancestral environment, but evolution hasn't tuned us to be able to predict the future with our intuition. Just how bad are the cognitive biases employed by humans? Pretty bad: http://www.singinst.org/Biases.pdf

Anyway, not trying to pick a fight here -- I appreciate your reply above. Hope you enjoy that video.


I know I'm probably just bringing up an issue that has been beaten to death somewhere in a hideously long thread, but consciousness makes me think strong AI is impossible. This is my line of thought:

1. Consciousness is essential to human intelligence, because our great mental advances come from our awareness, and objective description, of our own mind and thought processes.

2. A consciousness is by definition irreplicable, since two different consciousnesses are essentially different.

3. If strong AI is possible (and computational), and there is no essential difference between copies of the same code, then multiple identical copies of the same AI mind can be made.

4. If 1 and 2 hold, then 3 is false. Therefore, strong AI is impossible.

If any of these assumptions need elaboration, let me know. And please, by all means, direct me to that hideously long thread instead of answering my argument in detail. I don't like to waste people's time. Much obliged.


Regarding point 2: why can't you replicate consciousness? Of course we can't today, but I don't understand why it is inherently impossible.

Just because my consciousness is different from everyone else on the earth doesn't mean future technology couldn't copy my mind to a computer fifty times over. It will be very controversial when technology progresses far enough to allow consciousness copies.


At least in terms of thought experiments, it makes no sense to me. Think about it like this: if I can copy my consciousness and place it in a different environment, then I should be aware of that other environment at the same time I'm aware of this one. But for that to happen, information would have to be transported between the two locations instantaneously. While pretty awesome if true, the laws of physics don't allow for this, as far as I know.


A copy of your consciousness is just that, a copy. Imagine that the entire body is copied while we're at it.

So then there are two separate copies of you, interacting with the world. There could be 100 copies. They start from the same set of initial conditions, but they won't all see or do the same things; the different consciousnesses are still products of their environments.


So if I said I was going to give you immortality by copying your consciousness, but you yourself wouldn't be aware of it, wouldn't you feel cheated?


rms hit on point #2, so I'll respond to point #1:

a) Why should an AI be architecturally similar to a human? Mind design space is much larger than the space of possible human minds: there are many more ways to build an intelligent system than to copy a human. Humans were incrementally designed by evolution and are by no means the only possible design.

b) What's consciousness? Is it real? It seems likely that consciousness is simply an illusion of sorts; after all, isn't it more useful for you to model yourself as being "in control" of your actions? Look up the experiments with split-brain patients; you'll see that one hemisphere will simply dream up explanations for what the other hemisphere is doing! Your brain is a chunk of matter operating according to the physical laws of our universe; it's a pile of biochemical processes. You understand yourself as "being in control" because that's a useful model of your own behavior, but really you're just matter running according to those laws. (I'm not saying we shouldn't act as if we have "free will"; I'm just saying it may be a meaningless abstraction.)

c) Even if we assume consciousness is some property needed for an intelligent system, why couldn't we implement consciousness in something other than a human? Isn't a human an existence proof that it's possible to implement an intelligent system using just matter?


Cool, thanks for the reply.

Well, if consciousness/free will is an illusion, doesn't the notion of illusion require a consciousness to be having the illusion? And if it's an illusion, why does it make sense to say we have to act as if we can choose our actions?

Yeah, I know the arguments in the end tend to boil down to: we're just matter, therefore all intelligence is AI, QED.

But, can we go the other way? AI is logically impossible, therefore we're not just matter. That's why I'm interested in the purely logical case for AI here.

As for whether it is meaningful to say we aren't just matter, it seems so on the surface of things. Most of the concepts we use to communicate aren't material, especially if consciousness is an illusion; i.e., there is no such physical thing as "red." Yet we still seem to communicate quite effectively, even better than when restricting ourselves to purely physical terms.


The best summation of Searle's argument I've heard went something like "When you run a simulation of a rain storm, nothing gets wet. So if you run a simulation of a brain, does any 'thinking' happen?"

Of course it's just word games. But saying that a machine 'thinks' is equally a word game. We use 'thinking' to describe certain processes that happen in human brains, so to say that machines 'think' is really just a sort of metaphor.

And if you think this is overly pedantic when discussing the possibility of the Singularity, consider this question. If someone told you that your brain could be 'uploaded' into a computer and run as a program that for all intents and purposes would be 'you' except much smarter and immortal, would you be willing to go through with this process if it were irreversible and your current brain were destroyed as part of the process?

Just what is 'you' anyway?


This is a good but flawed argument. Intelligence is fundamentally an information processing function. [Simplistically,] your brain receives bits of information from its sensory inputs, crunches on them, and outputs some bits of information. It also stores some bits, creating what we call memory.

The bits don't care whether they're processed in a silicon-based computer or a meat-based computer; the computations and transformations are what matter. Again, the "simulations aren't wet" line relies on a hidden assumption, a false distinction created by Searle: he distinguishes between "real thinking" and "fake thinking," then, with that assumption in hand, constructs an example in which the "real" vs. "fake" distinction seems clear.

Here's the real question: When you run a simulation of a Turing machine, does any computation happen?

The answer is obviously "yes".
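
To make that concrete, here's a minimal sketch of a Turing machine simulator (my own illustrative Python; the function and variable names are made up for the example, and the transition table is the classic 2-state busy beaver):

    # A minimal Turing machine simulator. The transition table maps
    # (state, symbol) -> (symbol to write, head move, next state).
    def run_tm(transitions, state="A", head=0, max_steps=1000):
        tape = {}                          # sparse tape; blank cells read 0
        for _ in range(max_steps):
            if state == "HALT":
                break
            symbol = tape.get(head, 0)
            write, move, state = transitions[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        return [tape[i] for i in sorted(tape)]

    # The 2-state busy beaver: from a blank tape it writes four 1s
    # in six steps and halts.
    bb2 = {
        ("A", 0): (1, "R", "B"),
        ("A", 1): (1, "L", "B"),
        ("B", 0): (1, "L", "A"),
        ("B", 1): (1, "R", "HALT"),
    }

    print(run_tm(bb2))   # -> [1, 1, 1, 1]

The simulated machine isn't "pretending" to compute; the four 1s really do end up on the tape.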

NOTE: I mean "information" here in the sense of Claude Shannon's Information Theory. I'm not saying that literal 1 and 0 bits like in a computer are used by the human brain. http://en.wikipedia.org/wiki/Information_theory
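
To be concrete about the substrate-independence: Shannon's measure is defined purely over probability distributions, so the physical medium never enters the formula. A toy sketch of my own, in Python:

    import math

    # Shannon entropy in bits. Only the probabilities of the outcomes
    # appear; the medium encoding them (neurons, silicon, ink) does not.
    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit
    print(entropy([0.9, 0.1]))   # biased coin: ~0.47 bits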

And I'm really curious: what, in general terms, is so special about brains? Do they contain souls? Do they tap into some mystical forces of Consciousness as in Cartesian dualism? As far as I'm aware, brains are intelligent systems built out of matter. So let's figure out how to build intelligent systems out of matter, and build one.


"Intelligence is fundamentally an information processing function."

You can certainly define "intelligence" this way. But does it make sense to say "the human brain is an information processing function"? It's a non-sequitur. A brain can process information. A machine can process information. Does that make the brain and an information processing machine QUALITATIVELY the same thing?

"Here's the real question: When you run a simulation of a Turing machine, does any computation happen?"

If a simulation of a human brain receives a simulated stimulus, say the appearance of his beloved, does the simulation's heart beat faster? Does the simulation maybe start sweating a little? Does the simulation experience all the little physiological changes that the non-simulated human experiences?

A human brain is part of an organism, and I don't think it's a useful metaphor to call a human being a computation engine.

I suppose you could theoretically simulate everything about the world in which real human beings exist and then at that point claim the simulation is qualitatively the same as what is simulated. But just how exact does your simulation have to be to make that claim? Down to the quantum level? Will even the singularity produce that kind of computing power in the near future? Even then, are the things in the simulation qualitatively the same as the things in our world?

Notice all the question marks? Searle is engaging in philosophy, and I think many quantitative types have a hard time accepting that there may be questions that do not have easy quantitative answers. Which is my point. I don't have the answers, but find Searle's objections worth pondering.


Leave it to news.ycombinator to post the answer to my rhetorical question before I finished typing it:

http://arxiv.org/PS_cache/quant-ph/pdf/0110/0110141v1.pdf

Answering:

"But just how exact does your simulation have to be to make that claim? Down to the quantum level? Will even the singularity produce that kind of computing power in the near future? Even then, are the things in the simulation qualitatively the same as the things in our world?"


The technology to simulate a human brain, and to scan the "state" of a brain into that simulation, looks like it will arrive sooner than a general AI. So it may be pragmatism that causes it to happen that way.





