How Long Before Superintelligence? (1997) (nickbostrom.com)
59 points by joaorico on Feb 5, 2016 | 60 comments


The thing people don't understand is that in order to simulate human intelligence, you have to be able to simulate TWO things:

1) A human brain

2) An entire human development up to the age of intelligence you are looking for

The first one is not the harder of the two.

Now, many AI researchers believe they can cut corners on the whole simulating an entire human lifetime thing, and that they can use a more impoverished simulation and make up for it on volume... say, just flashing a billion images in front of the AI hoping that's enough to form the specific subset of intelligences you are hoping for. Or letting the AI read the entire internet. But at this point it's an open question whether that could even theoretically lead to generalized intelligence.


In order to simulate human-level intelligence, the machine doesn't necessarily need to be modeled on the human brain. It doesn't even need to use neural nets.

In order to simulate a human (which by definition only has human intelligence) then only your point 1 is true. Point 2 is not, because the standard way to know we've got 1 right is uploading/emulation: take existing humans, scan their brains at a sufficient level of detail (which will probably require destructive scanning), and use that as the base information of the simulation.


The neural nets still need experiences: training on a realistic simulation of the events they need to understand. Historical data does not train the networks the same way that interactive learning does.

There was an old study, in an earlier time of ethical strictures, where they took two kittens, paralyzed one, and strapped it onto the other one and let them run around and do kitten stuff. The paralyzed kitten saw everything the intact kitten did, felt the same breeze on its fur, but was completely blind to the universe. Without interaction, you cannot learn.

In order to provide that interactive environment you either need a robot body or a really rich virtual environment for your AI to grow up in.

At that point, they are developmentally limited to human timescales. No hyper-accelerated intelligence exponential.


> An entire human development up to the age of intelligence you are looking for

I've also found the lack of development time consideration a weakness in the AGI literature I've read. We can create intelligent humans now, but it takes not only reproduction, but immersion in our world and societies for something like 12-18 years for a fully intelligent agent to emerge. Maybe if some future DeepMind algorithms could inhabit a complex world like you find in Skyrim for a decade or two, you'd get something interesting.


You must be reading different AGI literature than me. More generally, embodied vs. unembodied AGI has been a common discussion theme for a long time, just as much as the question of the right mix of supervised/unsupervised learning and a priori knowledge. For instance, here's a random Goertzel paper from 12 years ago: http://www.goertzel.org/papers/PostEmbodiedAI_June7.htm Perhaps the missing focus on timetables for embodied agents to reach human level or beyond is annoying (such a timetable is not present in that paper), but to have that timetable problem at all would be great, since we'd already have solved the much harder problem of creating something that can learn and generalize extremely well within an environment, even if it takes 12-18 years to reach a sufficient level to be interesting. No matter how many years it takes, so long as we're certain it will get there in some number of years, the problem is greatly simplified to one of hardware and software optimization. So not that interesting by itself.


The brain is a mechanism that turns intelligence into action, language or memory. But who has ever said that intelligence is generated in the brain?


Pretty much everyone?


[Regarding power of artificial intelligence] "...If Moore's law continues to hold then the lower bound will be reached sometime between 2004 and 2008, and the upper bound between 2015 and 2024."

I guess his prognostication here depends on super-powerful computing and brain-emulation software. China's Tianhe-2 has already hit 3.3^15 ops; Bostrom anticipated 10^14 - 10^17 ops as the runway. Now, I am not sure what the state of brain emulation is at the moment, but it looks like our biggest snag is there. Researchers are having a hard time bubbling up new paradigms for artificial intelligence software. Anyone have any insight into that?


Off by a few orders of magnitude. Tianhe-2 hit 33 pflops, or 3.3*10^16 flops, approximately 1/3 of the upper bound. Brain simulation is a snag, but it isn't our only snag.

Like you said, it's a general algorithm issue. We do not remotely understand the brain well enough to simulate it. We have very little idea of what an intelligent algorithm (other than brain sim) would look like.

Also, all of these estimates are based on flops and none of them consider bandwidth. We are a few orders of magnitude lower in gigabits/s than we are in flops. I personally think that is where the bottleneck is. 100 billion neurons with a 100 gigabit/second pipe could interact once per second and then only at the level of a toggle switch. Granted not all neurons have to interact with one another, but we are significantly behind in bandwidth and structural organization.
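A rough back-of-envelope version of that bandwidth argument (a minimal sketch only; the 100 gigabit/s pipe and the one-bit "toggle switch" signal are the assumptions stated above):

    # Back-of-envelope: how often could 100 billion neurons exchange even
    # a single bit over a 100 gigabit/s interconnect?
    neurons = 100e9            # ~10^11 neurons in a human brain
    link_bits_per_s = 100e9    # assumed 100 gigabit/s aggregate pipe
    bits_per_interaction = 1   # a bare "toggle switch" signal

    interactions_per_s = link_bits_per_s / (neurons * bits_per_interaction)
    print(interactions_per_s)  # 1.0 -> roughly once per second, as claimed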

Bandwidth is intimately tied to processing capacity. I don't think the bandwidth will be there until 2045-2065, and like you say, we have serious software/algorithm/understanding deficiencies to resolve before then. I would be very surprised if we get general AI before 2065, if ever. I do not expect it in my lifetime and would be pleasantly surprised if it happened.


Oops, excuse my mistaken quote of the Tianhe flops.

Regarding the bandwidth bottleneck, it's fascinating to see that as one hardware problem is overcome, the next one looms even greater. The same is happening with the software: as machine learning etc. advances (as contentious as that statement may be to people deep in the industry), the coming hurdles look even more intimidating.

The algorithms that need to be developed to reach the milestones of intelligence are incredibly difficult. What excites me is evolutionary algorithms that may be harnessed to reach those milestones. This may be a brute-force method, and researchers would have to know what to tell the algorithms to select for at first, but with increasing computational power, the cost of running huge numbers of these algorithms in parallel could become negligible. If you see this comment, dhj, have you considered evolutionary computation in your predictions? I'd be interested in what you think, as your clarification of the bandwidth problem was enlightening to me.


I agree that some form of evolutionary algorithm will be our path to intelligent software (or a component of it). However, as genetic algorithms are currently implemented, I would say the following analogy holds: neural_net : brain :: evolutionary_algorithm : evolution ...

In other words, GAs/EAs are a simplistic and minimal scratching of the surface compared to the complexity we see in nature. The problem is twofold: 1) we guide the evolution with specific artificial goals (get a high score, for instance), and 2) the ideal "DNA" of a genetic algorithm is undefined.

In evolution we know post-hoc that DNA is at least good enough (if not ideal) for the building blocks. However, we have had very little success with identifying the DNA for genetic algorithms. If we make it commands or function sets we end up with divergence (results get worse or stay the same per iteration rather than better). The most successful GAs are where the DNA components have been customized to a specific problem domain.
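To make both problems concrete, here is a deliberately minimal GA sketch (purely illustrative; the bit-string "DNA" and the hand-picked fitness target are exactly the kind of programmer-chosen artifacts described above):

    import random

    TARGET = [1] * 32                 # (1) an artificial goal picked by the programmer

    def fitness(genome):              # "get a high score" == match the target
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):
        return [1 - g if random.random() < rate else g for g in genome]

    # (2) the "DNA" here is an arbitrary bit string; nothing guarantees this
    # encoding contains good building blocks the way biological DNA does.
    population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]

    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        population = parents + [mutate(random.choice(parents)) for _ in range(40)]

    best = max(population, key=fitness)
    print(fitness(best), "out of", len(TARGET))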

Regarding target/goal selection, that is a major field of study in itself: reinforcement learning. What is the best way to identify reward? In nature it is simple -- survival. In the computer it is artificial in some way: survival is an attribute or dynamic interaction selected by the programmer.

I believe that multiple algorithmic techniques will come together in a final solution (GA, NN, SVM, MCMC, kmeans, etc). So GA is still part of a large and difficult algorithmic challenge rather than a well defined solution. The algorithmic challenge is definitely non-exponential -- there are breakthroughs that could happen next year or in 100 years.

The bandwidth issue is the main reason I would put AGI at 2045-2065 (closer to 2065), but with the algorithmic issue I would put it post 2065 (in other words, far enough out that 50 years from now it could still be 50 years out). Regardless of the timeframe, it is a fascinating subject and I do think we will get there eventually, but I wouldn't put the algorithmic level closer than 50 years out until we get a good dog, mouse or even worm (c.elegans) level of intelligence programmed in software or robots.


I wonder how that compares to a single AWS data centre.


Good question. In 2013 they hit 0.5 pflops (0.5*10^15) by putting together 26,496 cores in one of their data centers. So I expect they have scaled proportionally and would be around 1-1.5 pflops. That would put them at #50-80 on top500.org. Bandwidth-wise they are probably at 10-50 gigabit/s, which is where 10G ethernet is and Infiniband FDR starts -- a lot of systems in that range use those technologies for communications (with custom and higher-bandwidth options in the top 10).
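Sketching that scaling guess as arithmetic (the 2.5x core-count growth since 2013 is purely my assumption):

    pflops_2013 = 0.5                  # Amazon's 2013 Top500 run
    cores_2013 = 26_496
    gflops_per_core = pflops_2013 * 1e6 / cores_2013    # ~19 gflops per core
    # assumption: roughly 2-3x more cores of similar per-core speed by 2016
    estimated_pflops_2016 = gflops_per_core * cores_2013 * 2.5 / 1e6
    print(round(gflops_per_core, 1), round(estimated_pflops_2016, 2))  # 18.9 1.25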

Current Top500: http://top500.org/list/2015/11/

Amazon 2013: http://arstechnica.com/information-technology/2013/11/amazon...

EDIT: As far as a whole data center is concerned, I'm not sure it would be a direct comparison, as bandwidth would not be as high between cabinets. Amazon using their off-the-shelf tech to make a supercomputer is probably a better indication of how they compare. Of course, at 26,496 cores that may be a data center!


Those estimates assume neurons are all you need to simulate. In the brain, neurons interact with glia, and this interaction looks pretty important.

http://www.scientificamerican.com/article/the-root-of-though...


This is very reminiscent of the hype around DNA sequencing.

It turns out that genetics is vastly more complicated than the old "gene = expression" model. By the time you've added epigenetics, environmental control of expression, proteome vs genome, and all kinds of other details, you get something we're barely beginning to understand even now.

"neurons = intelligence" looks like another version of the same thing. My guess is neural nets will turn out to be useful for certain kinds of preprocessing, just as they are in the brain. But GI is going to have to come from something else entirely.


Besides, there's this:

> it seems that speed rather than memory would be the bottleneck in brain simulations on the neuronal level

Currently, memory access is the bottleneck of most applications that need big amounts of data.

Anyway, as we get to his prescribed 100 Tops, we are seeing more and more applications like image identification and self-driving cars.


Right. I mean, it could be that the bottom of the exponential curve looks nothing like the top. We might just be beginning to see the lift off the x-axis in the form of practical AI applications ("soft takeoff"). Who knows what it will be like when these technologies begin to be stacked.


>Researchers are having a hard time bubbling up new paradigms

One of the most promising approaches at the moment seems to be DeepMind trying to reverse engineer the human brain. Demis Hassabis, their main guy, did a PhD in neuroscience with that in mind, and they are currently trying to replicate something like the hippocampus.

("Deep Learning is about essentially [mimicking the] cortex. But the hippocampus is another critical part of the brain and it’s built very differently, a much older structure. If you knock it out, you don’t have memories. So I was fascinated how this all works together. There’s consolidation [between the cortex and the hippocampus] at times like when you’re sleeping. Memories you’ve recorded during the day get replayed orders of magnitude faster back to the rest of the brain.")


Modern ML is glorified statistics, and our current chips are completely the wrong architecture for doing statistics (since they try to achieve maximum precision and zero stochasticity at all stages).


There's a more recent article in the New Yorker that follows Mr. Bostrom around a bit and is a good general read: http://www.newyorker.com/magazine/2015/11/23/doomsday-invent...


One sentence in that article made me do a double take: "He was learning to code." Past tense, but it's about what Bostrom was doing at the time of the interview (last year, well after the publication of "Superintelligence").

So the most influential AI doomsayer on the planet has been writing about artificial minds for a couple of decades, without even knowing enough to get a computer to say "Hello, world"? OK...


I believe a key factor in AI doomsayers' thinking is the interconnectedness, complexity and automation of an AI-driven world. The actual presence of intelligence is a red herring: we already have problems with complex automatic systems failing catastrophically: https://en.wikipedia.org/wiki/Northeast_blackout_of_2003


Actually, that article is fluff reporting that doesn't convey Bostrom's central arguments, and spends a lot of time on his lifestyle. I walked away from it thinking that neither the reporter, nor Mr. Bostrom, knew what they were talking about.


That took some... balls, back in 1997.

There were a lot of strong AI sceptics, who went on and on: oh, computers can calculate, but can they play chess? Oh, chess was easy, how about understanding what this picture is about? Driving cars? Talking like humans? Oh, they can talk now, but do they really _think_?

Reality happens faster than anybody imagined. Except a few visionaries like Bostrom.


Bostrom is not a visionary. He's just a philosopher.

Nietzsche, Ecce Homo "Philosophy, as I have so far understood and lived it, means living voluntarily among ice and high mountains — seeking out everything strange and questionable in existence, everything so far placed under a ban by morality."

I would argue that if philosophy isn't producing Bostroms then it's not doing its job right.


Wait, so Steve Jobs wasn't a visionary. He was just a salesman exploring unexploited markets as any salesman does? Albert Einstein wasn't a visionary. He was just a physicist that explored unconsidered theories as any physicist does?

It seems to me that in philosophy there is just as much "groupthink" as in almost any human endeavor; there is maybe a little less in the hard sciences, where the systems give you feedback about whether an idea is correct or not.


You're just biased towards the so-called hard sciences. There is nothing about them specifically that prevents groupthink, as you call it. Science deals with what is, not with what ought to be or shouldn't be, and so on. Sure, for a scientist the universe kicks back, but that has nothing to do with how a scientist chooses what to work on in the first place, or what preconceived notions and frameworks that scientist is operating under. I could give countless examples of scientists comfortably working within ideological frameworks or reasoning using incorrect theories.

The whole point of philosophy is that it is meant to encourage freedom of thought and free-thinking individuals. That's its job spec. I disagree that “there is just as much "groupthink" as in almost any human endeavor” -- if that really is the case then philosophy is failing at what philosophy _ought_ to be succeeding at.

I'm not saying Bostrom isn't an extraordinarily good philosopher; I'm saying that seeing the big picture and going against conventional wisdom and intuition goes with the territory. Don't imagine that I'm running Bostrom down: I very much enjoy reading the guy and listening to his thought processes, and I find him to be a very rigorous thinker.

Maybe it's a small quibble, of course he can be both.


While that's unkind to philosophers, Bostrom is mostly summarizing Moravec's thinking, which was published in the 1990 book Mind Children. Moravec is a tech guy who built a robot for his 1980 PhD at Stanford and went on to be a professor of robotics.


Let's look at the most important section of the paper. He estimates the processing power of the brain:

The human brain contains about 10^11 neurons. Each neuron has about 5 • 10^3 synapses, and signals are transmitted along these synapses at an average frequency of about 10^2 Hz. Each signal contains, say, 5 bits. This equals 10^17 ops. The true value cannot be much higher than this, but it might be much lower.

In other words, there are 5 * 10^14 synapses in the brain, and each synapse transmits up to 100 signals per second, and we can probably encode each signal with 5 bits. That's ~10^17 bits per second.

So, uh... does anybody else notice that that's not an estimate of processing power?

That's an estimate of the rate of information flow between neurons, across the whole brain.

The level of confused thinking here is off the charts. Does this guy not understand that in order to simulate the brain, you not only have to keep track of information flows between neurons, you also need to simulate the neurons themselves?

That's not merely a flaw in his argument. It indicates that he has no idea what he's talking about, at all.

Needless to say, this paper and its conclusions are complete nonsense.


Neural nets require a couple of FLOPs per synapse. The processing power required is a direct function of the number of synapses. Each neuron is essentially applying a particular logical op, and counting the neurons and their inputs gives you the number of ops. I don't get why this seems so objectionable.

Sure, real neurons in the brain might be doing something a couple of orders of magnitude more complicated than the nodes in an ANN, so you could tack on another 10^2 factor to those estimates if you like. But fundamentally, counting synapses is a reasonable way to get a Fermi estimate of the brain's processing power, and Bostrom's estimates are not significantly different from those others have arrived at by similar methods.
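Written out, the Fermi arithmetic from the quoted passage looks like this (these are the paper's numbers, not mine):

    neurons = 1e11            # ~10^11 neurons
    synapses_per_neuron = 5e3
    firing_rate_hz = 1e2      # average signalling frequency
    bits_per_signal = 5

    synapses = neurons * synapses_per_neuron           # 5 * 10^14 synapses
    ops_per_second = synapses * firing_rate_hz * bits_per_signal
    print(f"{ops_per_second:.1e}")                     # 2.5e+17, i.e. order 10^17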


You’re right. I didn’t read the paper very carefully, and was myopically focused on the emulating-a-real-brain AI strategy. As in, let’s slice up a real human brain, map the neurons and synapses, and then simulate them as faithfully as possible.

To do that you need a great deal of fidelity in your simulations of neurons, which are enormously complex. But there is an argument to be made that neuronal complexity is incidental to the brain’s overall “computational capacity”; that you could replace the neurons in a human brain with much simpler nodes and still end up with a functional intelligence, after sufficient rewiring.

I don’t think that claim is obvious, but it’s definitely possible. And if it’s true, you can have human-level intelligence for 10^19 ops, given suitable software.

So I apologize for my post. It was over the top and unfair.

All that said, I still disagree with Bostrom’s conclusions. I think he enormously understates the difficulty of creating intelligent software, if we’re not just copying an existing brain.


I formally studied biology not CS, partly out of an interest in AI.

Everyone who thinks superintelligence or even just human or higher-animal level intelligence is right around the corner needs to study genomics, proteomics, molecular biology, and neuroscience. Study them with an open mind and think about what's really going on.

A neuron is not a switch. A neuron is an organism. It contains a gene regulatory network more complex than the entire network topology of Amazon's entire web services stack, and that's just looking at the aspects of gene regulation and enzyme (a.k.a. nanomachine) operation that we understand. There are about 100 billion of these in the brain and every one of them is running in parallel and communicating constantly. There are also about 10 glial cells for every one neuron, and glia are involved in neural computation in ways we know are there but don't yet fully understand. (Seems to be related to longer term regulation of synapse behavior, etc.) Each glial cell also contains a massive gene regulatory network and so on.

The CS and AI fields suffer from a lot of Dunning-Kruger effect when they talk about biology. The level of processing power and parallelism going on in the brain of a living thing is simply mind-numbing. It's as incredible as the sense you get of the scale of the universe when looking at the Hubble Deep Field.

Our present-day computers are toys. We are not even close. It would at least take advances equivalent to the ones that took us from vacuum tube ENIAC to here.

Edit: I don't write off superintelligence categorically though. I think we could achieve forms of it not through pure AI but by deeply augmenting biological intelligence. Genetic and biochemical performance enhancement could also play a role. Imagine having more working memory, perfect motivational control, the ability to regulate your own desire/motivational structure, and needing only a few hours of sleep. Cyborg superintelligence is a possibility in the foreseeable future and it does raise issues similar to those the superintelligence folks raise. So I don't dismiss an intelligence explosion. I just very strongly doubt it would be purely solid state.


>The CS and AI fields suffer from a lot of Dunning-Kruger effect when they talk about biology

I'm sure this is right, but what about the reverse -- how much do you know about AI?

AI need not be as complex as natural intelligence to be more intelligent. A lot of the complexity in the natural world is due to the blind and haphazard nature of engineering by natural selection. Do we understand, completely, at a molecular level, the physical and control systems of bird and insect flight? Or how fish swim? Probably not. But by understanding the principles and applying a certain amount of engineering brute force, we've produced machines that by many sensible measures out-fly and out-swim natural machines.


>Do we understand, completely, at a molecular level, the physical and control systems of bird and insect flight? Or how fish swim? Probably not.

That's an excellent point. But at the same time, we do have some level of understanding of the mechanics of swimming and flying. The same really can't be said of intelligence.


That depends what you mean by intelligence.

We understand enough to build computers that win at chess, to build computers that run financial trading algorithms, to build Google.

I agree that intelligence is in some ways harder to fully define than flight, but that doesn't mean that we don't have any understanding of any parts of it.


Google's definition (which is a good starting point) of intelligence: "the ability to acquire and apply knowledge and skills."

As far as I know, we have very little if anything in the way of software that accomplishes general learning (not limited to a specific domain).


Of course we are far from reaching human level yet, but generalised Moore’s Law means the number of years until we reach human level is not that far away.

There is of course the issue that since brains evolved rather than being designed that they can be inefficient in their processing. Look at how poor humans are at arithmetic - we need to divert a huge fraction of our processing power to do what a computer designed for arithmetic can do very efficiently.


Is Moore's Law still a thing?

I don't doubt we can go far beyond present compute power since I am far beyond present compute power and I am reading this. But is the economic driver there?

At the endpoint most people use PCs, tablets, and phones to browse the web, write e-mails, play games that are already pretty good, etc. In the cloud we can always just make data centers larger.

There's obviously always a push for speed and density, but is that push still powerful enough to pump the billions upon billions that will be required to make leaps into areas like 3d circuits, photonics, quantum computing, etc.? At what point does the economic driver drop below the threshold needed to overcome the next hurdle?

First we flew in balloons. Then we flew in fixed wing airplanes. Then we motorized them even more and fought wars with them. Then we built jets. Then we broke the sound barrier. Then we went to orbit. Then we built the SR-71 blackbird and pioneered stealth. Then we landed on the moon.

Then nothing happened in aerospace until Elon Musk, and he's just getting back to where NASA should have been in the 80s. Meanwhile the Concorde is still cancelled and commercial flights are no faster than they were in the 70s.

I'm a bit concerned that computing is about to do what aerospace did. I take some of the breathless hype you hear today as a contrarian indicator for this, since before aerospace went comatose we saw this:

http://i.kinja-img.com/gawker-media/image/upload/t_original/...

I hope not but history does rhyme and economies are more powerful than wishes (or even governments).


I'm not sure how seriously we should take Moore's Law when it comes to these things. It applies pretty well so far to the development of silicon-based microprocessors, but at some point, we're going to come up against some hard physical limits on those. Once that happens, we may be stuck until we can come up with something fundamentally new.

We already seem to be up against some limits in a way as far as single-threaded processing power - it doesn't seem to be going up all that fast in the last few major cycles of processor development.


This is why I said generalised Moore's law, not Moore's law. We are pretty much at the limit of current designs, but there is still plenty of room for parallelising computation.

I do agree we are going to need something new to get to human level.


> It contains a gene regulatory network more complex than the entire network topology of Amazon's entire web services stack...

Maybe it's not a fair comparison but I decided to look it up:

The human genome has about 3.2 billion base pairs, which is about 6.4 Gbits = 800 MB. The size of linux-4.4.1.tar.gz is about 83 MB. So, in a sense, the human genome is only about ten times the compressed size of the Linux kernel, never mind everything on top of that.
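The arithmetic, for anyone who wants to check it (kernel tarball size as quoted above):

    base_pairs = 3.2e9
    genome_mb = base_pairs * 2 / 8 / 1e6      # 2 bits per base pair -> ~800 MB
    kernel_mb = 83                            # linux-4.4.1.tar.gz, as above
    print(genome_mb, round(genome_mb / kernel_mb, 1))   # 800.0 9.6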


> A neuron is an organism. It contains a gene regulatory network more complex than the entire network topology of Amazon's entire web services stack, and that's just looking at the aspects of gene regulation and enzyme (a.k.a. nanomachine) operation that we understand.

Can you give some more details about this? How are you quantifying the complexity of a neuron and of the AWS stack?


Interestingly, recent research suggests synaptic variability does come to about 5 bits.

http://www.salk.edu/news-release/memory-capacity-of-brain-is...

“We were amazed to find that the difference in the sizes of the pairs of synapses were very small, on average, only about eight percent different in size. No one thought it would be such a small difference. This was a curveball from nature.” Because the memory capacity of neurons is dependent upon synapse size, this eight percent difference turned out to be a key number the team could then plug into their algorithmic models of the brain to measure how much information could potentially be stored in synaptic connections.

It was known before that the range in sizes between the smallest and largest synapses was a factor of 60 and that most are small. But armed with the knowledge that synapses of all sizes could vary in increments as little as eight percent between sizes within a factor of 60, the team determined there could be about 26 categories of sizes of synapses, rather than just a few. “Our data suggests there are 10 times more discrete sizes of synapses than previously thought.” In computer terms, 26 sizes of synapses correspond to about 4.7 “bits” of information. Previously, it was thought that the brain was capable of just one to two bits for short and long memory storage in the hippocampus.
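For reference, the "4.7 bits" figure is just the information content of distinguishing 26 roughly equally likely size categories:

    import math
    size_categories = 26
    print(round(math.log2(size_categories), 1))   # 4.7 bits per synapse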


Bostrom's paper is based on Hans Moravec's thinking and Moravec's paper is pretty well argued http://www.transhumanist.com/volume1/moravec.htm

Bostrom as a philosopher may be fuzzy on processing power but Moravec who was actually building robots has a pretty good grasp.


In a recent estimate of the bits per synapse they found it to be an order of magnitude higher than previous estimates: http://www.eurekalert.org/pub_releases/2016-01/si-mco012016....


Yup. I want to again quote a paper recommended to me on HN a while back [1]

However, the relevance of Turing model is questioned even in case of present-day computing [33] [34]. Indeed, any computing machine that follows a Turing model would be highly inefficient to simulate the activity of biological neurons and experience an increased slowdown. Since the super-Turing computing power of the brain has its origins in these ‘strong’ interactions that occur inside neurons, current models have missed the most important part. Simply, Nature doesn’t care if the N-body problem has analytical solutions [36] or can be simulated in real time on a Turing machine [37].

...

While previous models have attempted to represent Hamiltonians using Turing machines [35] the paper [1] shows that the Hamiltonian model of interaction can represent itself a far more powerful model of computation. Turing made an important step forward; however, there is no need to limit natural models of computation to Turing models. In this sense, the new framework of computation using interaction is universal in nature and provides a more general description of computation than the formal Turing model. In other words God was unaware of Turing's work and has put forward a better model for physical computation in the brain.

http://arxiv.org/ftp/arxiv/papers/1210/1210.1983.pdf


Seems AGI is the rage these days. David Deutsch has an article and outlines a good point: we won't have AGI before a good theory of consciousness. Some philosopher will first need to explain consciousness in detail (more so than Dan Dennett, who already did an amazing job), then neuroscientists might have to prove that theory right, AND THEN AI researchers will be able to take a stab at it. So I don't think it will just pop into existence by running some neural network training over and over again.


More likely the engineers will build AGI first and the philosophers try to explain it after.


AI is the wrong way to go looking for superintelligence.

Far more realistic is developing means of organizing humans together effectively enough to achieve superintelligent levels of collaboration.

I think before 2025 is quite reasonable given this approach.


Humans have been trying collaboration for a while and the results have been patchy.


Computers are still far too slow to exhibit any kind of real-time intelligence. I suspect we still need three orders of magnitude of improvement.


I'm more concerned with when I can expect to see the regular variety of intelligence.


Yup, the XKCD translations still hold: https://xkcd.com/678/


Nick Bostrom is a peddler of the apocalypse who has made his name by spreading fear about a fairy creature called superintelligence. He's convinced people to go looking for weapons of mass destruction in snippets of math and code. But the WMDs aren't there any more than they were in Iraq. Nice work if you can get it.


> The human brain contains about 10^11 neurons. Each neuron has about 5 * 10^3 synapses, and signals are transmitted along these synapses at an average frequency of about 10^2 Hz. Each signal contains, say, 5 bits. This equals 10^17 ops.

This kind of nonsense is why no one should take Bostrom seriously. We did not then, and do not now, even begin to know how to write software to "simulate" a human brain, or whether such a task is even possible with modern-day tools. Multiplying random neurobiology stats times 0.5 bits pulled out of your ass == AI in 2004?

We have "AI" that can drive a car or a copter, play Chess or Go, translate speech to text, do image recognition ... but what we mean by human intelligence is something different. And I see no evidence anyone has made much progress developing a truly biological-like AI even at the level of say, a mouse. Which according to Bostrom's math ought to be doable in a 2U chassis by now, right?

If someone does succeed in writing mouse-AI or dog-AI, I'd believe that could be scaled up to human-level intelligence very rapidly. But it's clear to me there's (at least one) fundamental breakthrough missing from the current approach, because my dog can't play chess or drive a car, but he has a degree of sentience and awareness (and self-awareness) that no 2016 AI even approaches.


I don't know where in the article you see Bostrom predicting AI in 2004. That's a lower bound on when we might achieve the needed hardware capacity.

Bostrom's predictions about hardware advancements have more or less come to pass (closer to the upper than lower bounds, granted). I think it's a common opinion among AI researchers that the level of computing power available today is plausibly sufficient for human-level AI, if only we knew how to build the software.

The closest Bostrom gets to a prediction on software is that the "the required neuroscientific knowledge" to do brain simulation "might be obtained" by 2012. Which has turned out to be optimistic, but is very far from the strawman position of "hardware capability implies software capability" that you seem to be painting him with.


http://www.openworm.org/

We are on the cusp of revolutionary general nematode level intelligence. Soon we'll be able to upload nematode minds to the cloud and they can live forever in the nemamatrix.


"You get used to it. I…I don’t even see the code. All I see is blue squiggle, green squiggle, translucent squiggle."


There seems to be a large group of CS folks who are very out of touch with the fact that we currently are able to build computational systems that replicate regions of cortex (visual cortex, auditory cortex). It's not general intelligence or superintelligence -- really dumb animals can see and hear. And there's a lot more to cognition than simply converting external signals into meaningful representations. But I think it's pretty arguable that deep neural nets (or maybe networks of DNNs) DO represent an architecture sufficiently powerful for strong/general AI to develop. And if you can venture far enough to imagine computational systems on non-von-Neumann architectures, you might conclude that the level of abstraction Bostrom is operating at to describe computation isn't total nonsense.


>We have "AI" that can drive a car or a copter, play Chess or Go, translate speech to text, do image recognition ... but what we mean by human intelligence is something different.

Says who? Who says that "general" intelligence doesn't just mean doing the same glorified statistics with larger search spaces, leading to greater difficulty weighting-and-searching those spaces with statistical data, leading to a greater difficulty writing learning systems that perform well without discovering new statistical principles?



