The difference is between techniques developed for special purposes that turn out to be useful in more general cases, and techniques that grow more and more specialized, with a shrinking chance of ever being useful for anything but their own narrow area.
Perhaps the human brain's neural wiring that was required for throwing could have been, and was, subverted into something else. Better negamax alpha-beta algorithms for chess (that require the programmer to know more and more about chess to make any further research progress) will never be useful for anything but chess.
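For anyone who hasn't seen it: the negamax/alpha-beta skeleton itself is completely game-agnostic -- all the chess knowledge lives in the evaluation function and move generation, which is exactly where the specialization piles up. A minimal Python sketch (children() and evaluate() are hypothetical hooks you'd supply per game):

    def negamax(node, depth, alpha, beta, color, children, evaluate):
        # color is +1 for the side to move, -1 for the opponent
        kids = children(node)
        if depth == 0 or not kids:
            return color * evaluate(node)
        best = float("-inf")
        for child in kids:
            best = max(best, -negamax(child, depth - 1, -beta, -alpha,
                                      -color, children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # beta cutoff: the opponent would never allow this line
        return best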
What seems to have died is the dream of any kind of useful generalized intelligence. Any kind!
I think the problem with building generalized intelligence is that it needs to be built on a general platform: as flexible as the x86 architecture, but designed to be self-maintaining and self-upgradeable to the highest degree possible.
This means you kind of have to start from scratch, since it won't be backwards compatible (the idea of a superuser is baked into most archs), and that's an enormous amount of effort.
I'm thinking about trying to start a "reboot computing" campaign to get people to think about how we could improve computing if we didn't have backward compatibility to worry about (different security archs, self-maintenance, etc.).
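As a toy illustration of one such "different security arch": capability-based designs drop the superuser entirely, so code can only touch what it has explicitly been handed. A hedged Python sketch (all names made up for illustration):

    class ReadCap:
        """Unforgeable token granting read access to exactly one file."""
        def __init__(self, path):
            self._path = path

        def read(self):
            with open(self._path) as f:
                return f.read()

    def word_count(cap):
        # This job can read only the file it was handed -- there is no
        # "if user == root" escape hatch anywhere in the system.
        return len(cap.read().split())

    # The caller decides exactly what authority to delegate:
    # word_count(ReadCap("/var/log/syslog"))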
>>What seems to have died is the dream of any kind of useful generalized intelligence.
Ever read Steven Pinker's "How the Mind Works" and "The Language Instinct"? He makes good arguments that the human brain doesn't have "generalized intelligence": it has a lot of specific modules, and is less a single organ for thinking than a system of organs that work together.
Ever considered that progress in "generalised" AI may come about when there are enough "specific" AI modules developed that can be joined up?
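Purely as a hedged sketch of what "joining up" specific modules could look like, here's a thin router over narrow (hypothetical) modules, each competent at exactly one thing:

    class VisionModule:
        def handles(self, task):
            return task["kind"] == "image"
        def run(self, task):
            return f"objects found in {task['data']}"

    class SpeechModule:
        def handles(self, task):
            return task["kind"] == "audio"
        def run(self, task):
            return f"transcript of {task['data']}"

    def dispatch(task, modules):
        # No single module is general; any apparent generality
        # comes from the ensemble.
        for m in modules:
            if m.handles(task):
                return m.run(task)
        raise ValueError("no module can handle this task")

    print(dispatch({"kind": "image", "data": "cat.jpg"},
                   [VisionModule(), SpeechModule()]))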
>>What seems to have died is the dream of any kind of useful generalized intelligence. Any kind!
This looks strange -- could you elaborate?
Right now, we seem to be just a few years away from a new age in robotics. The robots will have some self-learning, but at first won't be much smarter than insects.
For instance, there are cheap systems that can (roughly) understand what they see. And yes, those robot vision systems are specially built for that -- but the same functionality in animals also runs, afaik, on lots of specially built hardware.
Does it really matter if we have to specially build systems, if we can e.g. make system-building systems as smart tools?
Edit: Some syntax and word choices, etc. Also, on consideration, I make the same point as the GP (StrawberryFrog), but he does it better.
Edit 2: Hmm... Another argument, then: even if generalized learning works in practice, it will probably be inferior to networked systems where problems are automatically found, solved, and then pushed out from a central location -- like bug fixes in operating systems. Since everything will be on the net soon, all future generations of robots will probably work like this.
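A rough sketch of that loop (the server URL and patch format are invented for illustration) -- each robot polls a central server for behavior fixes the way an OS checks for security updates:

    import json
    import urllib.request

    def apply_patch(patch):
        # stub: a real robot would hot-swap the fixed behavior here
        print("applied", patch["version"])

    def check_for_patches(current_version,
                          server="https://example.com/patches"):
        # one poll of the (hypothetical) central patch server
        with urllib.request.urlopen(f"{server}?since={current_version}") as r:
            patches = json.load(r)
        for p in patches:
            apply_patch(p)
            current_version = p["version"]
        return current_version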
The comparison to insects is very interesting. Now that you mention it, it's easy to see the connection between stupid bugs that fly into lights, windows, and walls, and simplistic Quake bots or automated vacuum cleaners.
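That insect-grade behavior is famously cheap to reproduce -- Braitenberg's "vehicles" show that an agent whose wheels are driven directly by crossed light sensors will steer straight into the lamp, with no reasoning anywhere. A minimal sketch:

    def steer(left_light, right_light, base_speed=1.0):
        # crossed excitatory wiring: light on the left speeds up the
        # right wheel, turning the agent toward the light source
        left_wheel = base_speed + right_light
        right_wheel = base_speed + left_light
        return left_wheel, right_wheel

    print(steer(left_light=0.9, right_light=0.1))  # veers toward the light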
Not new. I quoted Moravec earlier in this thread. Check him out, and his arguments about how closely computer speed is tied to complex AI behavior.
(I don't know how correct it is, but Moravec made the predictions decades ago and they seem to follow the development curves quite well.)