The end of AI winter? (machineslikeus.com)
83 points by jacquesm on Oct 1, 2009 | 66 comments



> machine translation, data mining, industrial robotics, logistics, speech recognition, banking software, medical diagnosis and Google's search engine, to name a few.

Do these things really directly descend from pure AI research? Or are they really the result of a bunch of clever, yet extremely specialized algorithms, independently developed, combined with an incredible increase in hardware power? Not to say such things aren't impressive (Google clearly has changed the world), but is it clear that pure AI research has paved the way?

From an outside perspective it seems the "God of the gaps" argument - AI is what AI researchers haven't done yet - is used as a smokescreen to cover up that AI research hasn't really done much in the last 30 years. (Counterexamples without hand-waving?) And not only that, but it consistently, wildly, and systematically makes incredible predictions that don't come true. For example, Kurzweil is clearly a genius, but also wildly deluded and wrong about his time-frames.

Some of you will down-mod me for saying this, and the same people won't admit my side was correct when, 20 years from now, automated translation tools still suck and you're still making the same old argument that AI is whatever we haven't done yet... (but in ten years you'll be able to upload your brain!)


> From an outside perspective it seems the "God of the gaps" argument - AI is what AI researchers haven't done yet - is used as a smokescreen to cover up that AI research hasn't really done much in the last 30 years.

Here is what you don't understand: There are very, very, very few people who do research on "AI". People research medical diagnosis, or search, or data mining, or speech recognition, or vision, or chess. If you go to a conference on AI, these are the people you will meet. If you look at who the NSF is funding with their "robust intelligence" area, these are the people you will find. You can only say that "AI research hasn't done much in the last 30 years" if you also say "no one has worked on AI in the last 30 years".

update: if you want to see what people in the mainstream (Kurzweil is not mainstream!) are actually working on, try browsing the IJCAI proceedings:

http://ijcai.org/papers09/contents.php


The trouble with that argument is this: if you had asked anyone (anyone!) in any AI-related field in 1980 where AI would be in thirty years, what would they have said?

Surely not "Well, for instance one of the greatest achievements might be that we will work on chess algorithms, and we will see some incremental improvements resulting from tweaking certain heuristics and more intelligent pruning through more specialized algorithms and hardcoded chess knowledge. The programs still won't be able to learn in any interesting sense, but with the help of several orders of magnitude of hardware speed, they will be 200 Elo stronger than the best human!"


My point is that there IS no serious "pure" AI research these days. Your image of lots of pure AI researchers wasting their time is a fantasy-- those people don't exist. People work on applications.


"Those people don't exist" is a bit of an exaggeration. Eliezer Yudkowsky exists, for example.

http://news.ycombinator.com/user?id=eyudkowsky

His research is in making AGI (artificial general intelligence) not go skynet by building in morals. That seems fairly pure to me.


Yes, "Those people don't exist" isn't technically correct; perhaps it should have been "Those people don't exist in significant numbers."

The point is that people who call themselves AI researchers are, the vast majority of the time, not working on AGI but on weak AI.


And yet there are numerous people who work on pure math and theoretical physics.


The DARPA car racing thing seems to be a breakthrough. Granted, 30 years ago people would have expected more. But it is more than just a chess algorithm.


DARPA Grand Challenge? How's that a breakthrough in AI? Sure, it uses a lot of results of "AI" research, but there's nothing revolutionary about that.


My understanding is that they made a big leap during the challenge - from catastrophic performance in the first run, to several successful drivers in the second run.

Revolutionary or not, I don't think it was trivial to make an autonomous vehicle.


We have cars that drive themselves!


>> if you had asked anyone (anyone!) in any AI-related field in 1980 where AI would be in thirty years, what would they have said?

Uhm, Hans Moravec isn't exactly "anyone". :-)

http://en.wikipedia.org/wiki/Moravec%27s_paradox

Edit: To be clear, Moravec shows that you misrepresent the AI field of that time. But sure, he wasn't the mainstream.


Successes in speech recognition, machine translation, medical diagnosis, and data mining have all descended from sound theoretical research in statistics and information theory.

For instance, machine translation (as we know it today) was originally inspired by the "noisy channel" model for speech recognition, which is based on Bayes' rule. In speech recognition, the probability of some words given an input waveform is proportional to the probability of someone saying those words times the probability of hearing that waveform given those words. The same model led to statistical machine translation: if I speak to you in French, what I'm really doing is speaking to you in English, but the "channel" is so noisy that it comes out sounding like French.
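
To make that decision rule concrete, here's a minimal sketch in Python. The probability tables and values are toy numbers I made up purely for illustration; a real system would estimate a language model P(E) and a channel/translation model P(F|E) from large corpora.

    import math

    # Toy "source" model P(E): how plausible an English sentence is a priori.
    language_model = {"the cat": 0.6, "cat the": 0.1}

    # Toy "channel" model P(F|E): how likely the French output is given the English.
    channel_model = {("le chat", "the cat"): 0.7, ("le chat", "cat the"): 0.2}

    def decode(french, candidates):
        # Bayes' rule with the constant P(F) dropped: argmax_E P(E) * P(F|E),
        # computed in log space to avoid underflow on longer sentences.
        def score(english):
            p_e = language_model.get(english, 1e-9)
            p_f_given_e = channel_model.get((french, english), 1e-9)
            return math.log(p_e) + math.log(p_f_given_e)
        return max(candidates, key=score)

    print(decode("le chat", ["the cat", "cat the"]))  # -> "the cat"

The same scoring rule covers speech recognition if you swap the French sentence for an acoustic waveform.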

Today, the parallels are less clear (for instance, German and English have substantially different word order, and in speech you don't usually have reordering in the channel--though there's actually some cool new research in MT to bring it closer in line with modern speech processing!).

Yes, there are huge amounts of specialization and hacks for each of these fields, but they are (mostly) based around core good statistical ideas.

In fact, some people are worried that the AI community is so focused on log-linear models and the like that we're in some kind of local minimum, and that we're unlikely to work our way out any time soon.

That said, Kurzweil is still wildly deluded, as you suggest.


Do these things really directly descend from pure AI research? Or are they really the result of a bunch of clever, yet extremely specialized algorithms, independently developed, combined with an incredible increase in hardware power?

Is there a meaningful difference?

There's a good case to be made that the human mind (AKA "natural intelligence") is "a bunch of clever, yet extremely specialized algorithms, independently developed, combined with an incredible increase in hardware power".


The difference is techniques that are developed for special purposes, but turn out to be useful for more general cases, versus techniques that become more and more specialized, resulting in a decreased possibility of ever using them for anything but increasingly specialized areas.

Perhaps the human brain's neural wiring that was required for throwing could have been, and was, subverted into something else. Better negamax alpha-beta algorithms for chess (which require the programmer to know more and more about chess to make any further progress) will never be useful for anything but chess.
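
(For readers who haven't seen the algorithm being dismissed here, a minimal game-agnostic sketch follows. legal_moves(), apply(), and evaluate() are hypothetical placeholders; all of the hardcoded chess knowledge the parent is talking about would live inside them.)

    def negamax(position, depth, alpha, beta):
        # Fixed-depth search; evaluate() scores a position from the
        # point of view of the side to move.
        if depth == 0 or not legal_moves(position):
            return evaluate(position)
        best = float("-inf")
        for move in legal_moves(position):
            # Negamax trick: the opponent's best score is the negation of ours.
            score = -negamax(apply(position, move), depth - 1, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break  # alpha-beta cutoff: the opponent won't allow this line anyway
        return best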

What seems to have died is the dream of any kind of useful generalized intelligence. Any kind!


I think the problem with building generalized intelligence is that it needs to be built on a general platform, something like the x86 architecture in flexibility but designed to be self-maintaining and self-upgradeable to the highest degree possible.

This means you kind of have to start from scratch as it is not backwards compatible (the idea of a superuser is baked into most archs), which is an enormous amount of effort.

I'm thinking about trying to start a "reboot computing" campaign to get people to think about how we could improve computing if we didn't have backward compatibility to worry about (different security archs, self-maintenance etc).


Please continue to think along those lines; that's a very interesting idea, and some people should actually go and do it.

Check out the fleet architecture while you're at it.


What seems to have died is the dream of any kind of useful generalized intelligence

Ever read Steven Pinker's "How the Mind Works" and "The Language Instinct"? He makes good arguments that the human brain doesn't have "generalized intelligence", it has a lot of specific modules, and is less like a single organ for thinking, more like a system of organs that work together.

Ever considered that progress in "generalised" AI may come about when there are enough "specific" AI modules developed that can be joined up?


>>What seems to have died is the dream of any kind of useful generalized intelligence. Any kind!

This looks strange, could you elaborate?

Right now, we seem to be just a few years away from a new age in robotics. The robots will have some self-learning, but at first they won't be much smarter than insects.

For instance, there are cheap systems that can (roughly) understand what they see. And yes, the robot vision systems are specially built for that -- but the same functionality in animals has afaik also lots of specially built hardware.

Does it really matter if we have to specially build systems, if we can, e.g., make system-building systems as smart tools?

Edit: Some syntax and word choices, etc. Also, on consideration, I make the same point as the GP (StrawberryFrog), but he does it better.

Edit 2: Hmm... Another argument, then: Even if generalized learning will work in practice, it will probably be inferior to networked systems where problems are automatically found and then solved (and updated) from a central location -- like bugs in operating systems. Since everything will be on the net soon, all future generations of robots will probably work like this.


The comparison to insects is very interesting. Now that you've said it, it's easy to make a connection between stupid bugs that fly into lights, windows, and walls, and simplistic Quake bots or automated vacuum cleaners.


Not new. I quoted Moravec earlier in this thread. Check him out and his arguments about how closely connected computer speed is to complex AI behavior.

(I don't know how correct it is, but Moravec made the predictions decades ago and they seem to follow the development curves quite well.)


You're saying a lot of things here but one of your questions is worth answering:

>Do these things really directly descend from pure AI research?

With the exception of Google's search engine [with whose internals I am not familiar], I can answer an emphatic "Yes". And so would any knowledgeable current or past researcher in those fields. The early AI researchers did a hell of a lot of good work and much of it remains relevant.

As you demonstrate, many if not most people have no idea of what was actually done back then, much less the lineage of their income-tax software or the control system for their digital camera.

Despite funding cuts AI continued to be an interesting and productive field, and remains so today.


I agree. Also, even if all those things were the result of AI research, that wouldn't imply AI research made any progress towards inventing an AI, just that it's useful for other stuff.


A lot of "narrow AI" (what real AI researchers spend their time doing) has made its way into a lot of products. However I doubt this is what you were thinking of as pure AI.

There's more, though. Neuroscience continues piecing together how brains work. I've heard that the brain's embodied algorithms are recognizable from eg computer vision research. This seems to imply that there is more of a natural ramp-up from "narrow AI" into "humanlike AI" than at first it would appear.


Norvig has made this clear: we don't have any better algorithms, only more data.


EURISKO is a development in the last 30 years (though just barely).

http://en.wikipedia.org/wiki/Eurisko


"pure AI reasearch? Or are they really the result of a bunch of clever, yet extremely specialized algorithms,"

This presumes a dichotomy between "pure AI research" (whatever that is) and "clever algorithms".


The problem with AI is that people try to call simple heuristics and learning algorithms AI, while what we're actually seeing is an overglorified Eliza.

The term "intelligence" sets the expectation of "universal learning", not just solving problems we previously thought to be hard. And the research necessary to accomplish that probably isn't even in the same direction as these fraud AI algorithms. The Biological Computer Laboratory (which died because AI took all the funding) under von Foerster probably had a better shot at solving these problems than the AI Lab under Minsky ever had.

This overselling soaked up the funding with empty promises and killed more basic long-term research. Let's hope serious researchers find a way to get their research funded again, despite the AI shills.


> This overselling soaked up the funding with empty promises and killed more basic long-term research. Let's hope serious researchers find a way to get their research funded again, despite the AI shills.

I think this is uncalled for. What makes those who worked on some of these AI problems any different from the founders of an unsuccessful startup? Both have a belief that a particular idea/plan will work and both seek to convince others to join/fund them.

Nobody really knew that many AI problems would be so tough. The people who worked on them expected success. Only through their failures did we know for sure that the problems were a lot harder than we thought.


I agree it was worth a shot, but it's still a shame they took away the funding from people with more promising approaches and stigmatized the field for decades.


It isn't actually clear whether intelligence is anything other than some emergent phenomenon of suitably connected 'simple heuristics' and 'learning algorithms'. If you ask me, we also are overglorified Elizas.


How do you feel about "we also are overglorified Elizas"?

(Sorry, couldn't resist.)


There is one big difference: we can reason about reasoning (meta-reasoning); Eliza can't.

It is not a difference in degree, it is a difference in kind, and it seems to be unique to higher primates (you can teach a dog to do tricks, but it won't spend its time coming up with new tricks). One of the most interesting science videos I have ever seen is one where a bonobo learns to write a symbol that had been on a computer it used to express its emotions.


Your statement about dogs is untrue. Dogs, and a lot of other animals, have the ability to think creatively, i.e. to come up with new stuff, new tricks. A lot of animals also have self-awareness* (e.g. dolphins, elephants, magpies). Of course, we as humans excel at thinking, since it's one of our greatest advantages, but thinking and self-awareness are found in other animals, so there must be a way of reproducing them.

What spawns creative thinking and self-awareness is of course a mystery, but I think it's a consequence of creating a complex enough system and creating a system that's built on ordered chaos.

* http://en.wikipedia.org/wiki/Self-awareness#Self-awareness_i...


We like to think we can reason about reason, but to some extent we are simply regurgitating prior knowledge, or trying some random combination of old concepts and getting lucky. If this were not the case, we wouldn't have had to wait until the 1900s for Gödel's theorem -- we would each discover it independently during childhood. Gödel was standing on the shoulders of giants, just as we are when we use his work, or set theory, or mathematical notation, or rules of logic, or language. In a way, our intelligence is the byproduct of a stochastic program that has been running for thousands of years consuming billions of terabytes of memory. It seems rather cruel to expect Eliza to match us under her more limited conditions.

And it is a difference in degree. A dog frequently comes up with new, previously unseen behavior -- burying certain objects, shredding toilet paper, etc. The problem is that most behaviors are not seen as useful by humans, hence they go unrewarded as "tricks". But there are also examples of dogs learning useful behavior on their own, from learning to bark something that sounds like "I love you", to learning to knock down an owner who is about to have a seizure, to learning to fetch the leash when it wants a walk.

Defining "intelligence" as "doing something that humans can do that animals and computers can't" seems somewhat self-centered (although I admittedly cannot think of a better one off the top of my head).


That difference in kind is probably an illusion created by the magnitude of the difference in degree.

From what I've gathered, the consensus, both in neuroscience and in philosophy of mind, is that consciousness is totally emergent from the "simple" building blocks; there's no specific component unique to higher primates that explains the difference.


Evidence Points To Conscious 'Metacognition' In Some Nonhuman Animals: http://news.ycombinator.com/item?id=830430


There has been some research in formal and universal models of AI. See below for an example.

Universal AI: http://www.hutter1.net/ai/uaibook.htm
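
If memory serves, the central action-selection rule in that book (the AIXI model) looks roughly like this, where U is a universal Turing machine, q ranges over programs of length l(q), and m is the planning horizon (treat the details as approximate):

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           \big[ r_k + \cdots + r_m \big]
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

In words: weight every program consistent with the interaction history by 2 to the minus its length (a Solomonoff/Occam prior) and pick the action that maximizes expected future reward. It's uncomputable, but it's about as "pure" and "universal" as AI research gets.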


Why care about this faddishness?

Anyone with a desktop computer (or pen and paper, for that matter) can do cutting-edge research in GAI. So it's not popular with the VCs and the military-industrial corporate-welfare system.

The point is if you're interested, just do the research. You don't need $$$. If you want $$$, over-promise about crappy little web apps, and get funding that way.


I am confident that ongoing over-promising will make AI winter a continuing reality.


> will make AI winter a continuing reality.

This will be a good thing. Theoretical development thrived under the AI “winter”.

I am going to throw the rooster in the hen house and say it: it seems that a large part of AI is thinking up new functions with parameters to be tuned. These parameters are called something exotic, and the words "neural" and "network" are used liberally.
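
To illustrate (a deliberately silly toy I made up, not anyone's actual research): fit y = w*x to data by nudging the one parameter w downhill on squared error. A "neural network" is the same recipe with vastly more parameters and a fancier function.

    # Made-up (x, y) pairs roughly following y = 2x.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

    w = 0.0               # the parameter to be tuned
    learning_rate = 0.01

    for _ in range(1000):
        # gradient of sum((w*x - y)^2) with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data)
        w -= learning_rate * grad

    print(w)              # converges to roughly 2.0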


"It seems that a large part of AI is thinking up new functions with parameters to be tuned."

Doesn't that encompass all of Computer Science?


The problem with AI development seems to be the assumption that it's going to be a useful business tool. This is sort of like assuming that an artificial organism created in the lab will make a good secretary.


And 20 years after the introduction of the personal computer into the world of business, hardly anyone has a secretary anymore, although the very elite do have "executive assistants" who may be thought of as doing some of the same work functions.

I'm not being facetious in bringing this up, just attempting to point out that a computerised version of intelligence may not look much like the activities it replaces.

I think Dijkstra said it best: "The question of whether computers can think is no more interesting than the question of whether submarines can swim."


Not sure if this is what you're trying to say, but I see it as similar to some MBA somewhere claiming that QuickSort is a business tool. Sure it is... in the loosest sense of the term 'business tool.' AI is a generic term that has many different meanings to many different people. Many people still think of AI as a computer program/robot with sentience.


I think another problem is the expectation of what AI will look like at all. We think of intelligent human-like robots like something from science fiction, when in reality AI could just be a bit of code running on a standard machine with specific and likely specialized inputs and outputs.

Think of an AI machine working with atmospheric data as one "sense" combined with seismic data and some others, with the directed goal of predicting certain types of disasters (tsunamis...?).

Free will and emotions are other attributes we would likely not give these machines, so the worry of self-interest may not exist either, which would aid in making them good at something useful for us.


I think the best definition of "free will" basically boils down to "unpredictable in detail absent simulation". This sounds weird at first, but the results probably match your intuitive understanding of free will. Given that, I think that any general intelligence approaching human level will have free will.

That doesn't mean that it won't have certain goals, though it remains to be seen whether it will be possible to design a clean goal system with a top-level goal (see also "Friendliness"). Humans clearly do not have this kind of goal system.


Would a good Pseudo-Random number generator satisfy your definition?

(By the way, humans, when asked directly for random numbers, are terrible at the task.)


Well, it satisfies the "free" part, but probably not the "will" part, which implies reasons for actions. I'm not actually sure humans have free will under my definition, but I think we probably do. It could be that some algorithm that's much simpler than an actual human could predict in detail what a given human will do without simulation, in which case I'd be forced to admit that humans don't have free will by my definition.


I took a class with leading AI researcher Patrick Winston in the spring... it really is one of the most fascinating fields right now. If there is to truly be an "AI spring" then it will likely require massive collaboration between not just computer scientists, but researchers in a wide span of fields including neuroscientists, general biologists, psychologists, and even philosophers.

Free will and emotions are other attributes we would likely not give these machines, so the worry of self-interest may not exist either, which would aid in making them good at something useful for us.

The question is, is it even possible to mimic human intelligence without emotion or free will? Who is to say they aren't wholly dependent on one another?

And if any group of people finds out how to imbue a machine with free will, I'd bet my life they'll go through with it.


There's nothing particularly magical about emotion - it's just a low level computation by our subconscious that we don't have conscious access to. Happiness, anger, fear, etc are all responses to the environment we evolved because they produce (or at least, did at one time) useful actions. It's the mechanism the brain used to make predictions based on data before it developed a consciousness.

An AI would make predictions based on data just like a human would, but the mechanism it used to do it would certainly be different.


Indeed. So goes the James-Damasio argument, at any rate.

Perhaps the parent was actually referring to the qualia component of emotion--or indeed qualia in general--which is much more difficult to explain.


It will be much more than just a business tool. But it won't be today.


We do have search engines, which are pretty sweet and vaguely related to artificial intelligence. The future of AI will probably be similar - we'll get more cool stuff, but it will not be Data from Star Trek, it will be something we didn't expect.


I don't see search engines as being 'vaguely related to artificial intelligence' at all.


Apparently Google's Peter Norvig does, e.g. http://video.google.de/videoplay?docid=-6754621605046052935&... (he starts talking about the relationship between Google and AI at 06:40).


I don't think Norvig thinks differently; he's just using the term "AI" differently. "AI" in the context of the OP is "strong AI" or AGI (artificial general intelligence). Dr. Norvig takes the term "AI" to also encompass "machine learning" in the sense of developing algorithms that learn within limited problem domains. When Norvig means AGI, he says it explicitly (see the video at around 7:30 to 8:30), and he also says specifically that Google is not interested in general intelligence.


Well, he's obviously much more qualified than I am to make that observation. But that doesn't make him right automatically, or does it?

AI has a certain level of expectation associated with it, and I do not see Google matching that level in any of their released products.

Computers to date do what we tell them to do. As soon as that changes we call it a bug, not a manifestation of intelligence. As long as that view persists I would argue that we have not yet achieved 'AI'.


Dr. Norvig is also one of the authors of a very well-regarded textbook on the current state of the AI field: http://aima.cs.berkeley.edu/


Why not? Suppose you had an agent you could tell "find me the best papers on subject so-and-so"; would you not consider that intelligent?


Absolutely, that'd be great. Where do I sign up for this service?


It seems to me that the core dilemma in the AI community has always been: is intelligence a "systems" problem or a "general" problem? It seems people first tried a series of general approaches and did not make much progress. Now that people are taking a series of bottom-up approaches in individual domains, they are making a lot more progress. So maybe the AI winter is still on for the "general" approach, but the thaw is well on its way for the "systems" approach.


"DARPA has also supported programs on the Semantic Web with a great deal of emphasis on intelligent management of content and automated understanding. However James Hendler who was the manager of the DARPA program at the time has expressed some disappointment with the outcome of the programe."

The Semantic Web is not a good use case, reputation-wise, for modern AI.


Wintermute?


No. Two words: Semantic Web.


Can't seem to see this at work. Saving to see later at home.





