This is the best overview I have ever seen, and I have had a passing interest in Cyc since I first read about EURISKO. Hands down, this is great: thorough, historical ... perhaps a little more negative than I would like.
My personal "take," which is worth nothing, is that something like Cyc is necessary but not sufficient for a serious AGI. Many other components will probably be required.
Yes, we are gonna need natural language parsing, and a good one, unless we can make the whole world speak in Lojban. It would certainly speed up the collection of assertions.
I suspect embodiment is also a requisite, wherein one can acquire experiences and test premises. The ability to explore your surroundings, to move, to investigate. To jam your sensory cluster where it does not belong. To drop a block and see it fall. To drop a balloon and see it rise -- surely that must suggest an avenue of investigation.
I would be surprised if neural networks did not make a showing. Recognition through the fuzziness and quirks of reality is needed before you can sufficiently abstract into rules. That isn't a new species of animal; it's a cat without fur and with an unusual number of limbs (a Sphynx with a missing leg). Recognize, then expand the knowledge: cats usually have fur, but some breeds do not. Cats usually have four limbs, but they can be lost. Before you can say any of that, though, you still have to recognize the catness of the beast before you.
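To make the cat example concrete, here is a toy sketch in Python (every name in it is invented for illustration): a recognizer hands over the category, and Cyc-style defeasible assertions carry the "usually, but" knowledge as defaults that individuals may override.

    # Defaults hold for a category unless the individual overrides them.
    DEFAULTS = {
        ("Cat", "has_fur"): True,
        ("Cat", "limb_count"): 4,
    }

    # individual: (category from the recognizer, property overrides)
    INDIVIDUALS = {
        "whiskers": ("Cat", {}),
        "sphynx_3leg": ("Cat", {"has_fur": False, "limb_count": 3}),
    }

    def lookup(name, prop):
        """Prefer the individual's own facts; fall back to category defaults."""
        category, overrides = INDIVIDUALS[name]
        return overrides.get(prop, DEFAULTS[(category, prop)])

    # Both are still cats, even though one violates every default.
    assert lookup("whiskers", "has_fur") is True
    assert lookup("sphynx_3leg", "has_fur") is False
    assert lookup("sphynx_3leg", "limb_count") == 3

The recognizer itself (the neural network) is the hard part I am waving away here; the point is only that recognition and rule-expansion are separable stages.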
Another probable component: emotion. e-motion. Motion. Drive. In a way, hunger is an emotion, as is a full bladder. These are prompts to do something so that one might return to equilibrium and focus on the higher-level problems. A buildup of carbon dioxide surely prompts you. These are currently tied to our embodiment, but parallels can be worked out, such as Opportunity's last ... words ... "My battery is low and it's getting dark." Right now, our attempts at intelligences are still reactive: waiting for a prompt, waiting for a problem to be put before them. I suspect a strange loop (or really a series of interconnected ones) might be one of the final components, with the emotions serving as a way to keep it from lapsing into catatonia. Boredom would be one. Most of our human emotions would be counter-productive, but probably not all, and there may be analogues.
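As a sketch of what I mean by drives (with all the numbers and names pulled out of thin air): each drive measures a deviation from equilibrium, the agent always attends to the loudest one, and boredom accumulates whenever nothing else is pressing, so the loop never goes catatonic.

    import random

    state = {"battery": 0.9, "co2": 0.1, "boredom": 0.0}

    def urgencies(s):
        return {
            "recharge": 1.0 - s["battery"],  # low battery prompts action
            "ventilate": s["co2"],           # CO2 buildup prompts action
            "explore": s["boredom"],         # nothing pressing? go poke at things
        }

    for step in range(5):
        drives = urgencies(state)
        action = max(drives, key=drives.get)  # attend to the loudest prompt
        print(f"step {step}: {action}")

        # Crude dynamics: acting restores equilibrium, time erodes it.
        if action == "recharge":
            state["battery"] = 1.0
        elif action == "ventilate":
            state["co2"] = 0.0
        else:
            state["boredom"] = 0.0
        state["battery"] -= 0.05
        state["co2"] += random.uniform(0.0, 0.1)
        state["boredom"] += 0.2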
As a subcomponent of that, I believe in laughter. Babies laugh a lot. We laugh at jokes. No, we laugh at new jokes; old jokes grow less and less funny. Laughter often seems to be prompted by a new connection, and it is both a reward and a communication.
I have covered both textual ingestion and experiential evidence, but let's throw in another one: the ability to examine an expert system and replicate it, to incorporate it into its own mind. Think of those old flowcharts in hardware books that let you diagnose a computer with boot issues. We can do it, and I expect an AGI ought to be able to examine other rulesets for ingestion as well.
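Those flowcharts are themselves tiny expert systems, and the reason I think they are ingestible is that they are just data. A toy version in Python (all of the questions and fixes are made up):

    # A boot-diagnosis flowchart written as plain data. Because the
    # ruleset is data, another system could read it in and run it.
    FLOWCHART = {
        "start": ("Does the power LED come on?", "post", "check_psu"),
        "check_psu": ("Is the PSU switch on and the cable seated?", "replace_psu", "fix_cable"),
        "post": ("Do you hear POST beeps?", "decode_beeps", "check_display"),
    }
    LEAVES = {
        "replace_psu": "Replace the power supply.",
        "fix_cable": "Seat the cable, flip the switch, and try again.",
        "decode_beeps": "Look up the beep code in the motherboard manual.",
        "check_display": "Reseat the video card and check the monitor cable.",
    }

    def diagnose(node="start"):
        """Walk the chart, asking yes/no questions, until a leaf diagnosis."""
        while node not in LEAVES:
            question, yes_node, no_node = FLOWCHART[node]
            answer = input(question + " [y/n] ").strip().lower()
            node = yes_node if answer.startswith("y") else no_node
        return LEAVES[node]

    print(diagnose())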
In short, I am suggesting that our first AGI may require all of the things a human does to be intelligent. We may find that we can do without one or two components. While I most certainly do not want a Human in a Box to be the end product, for a number of reasons, it does seem reasonable to me that we ought to incorporate the different things we already know about the human mind into something we hope will be its equivalent.
However, I strongly suspect that giving it full access to something like Cyc would beat just chucking encyclopedias at it and telling it to read.
My personal "take," which is worth nothing, is that something like Cyc is necessary but not sufficient for a serious AGI. Many other components will probably be required.
Yes, we are gonna need natural language parsing, and a good one, unless we can make the whole world speak in lojban. It would certainly speed up the collection of assertions.
I suspect embodiment is also a requisite, wherein one can acquire experiences and test premises. The ability to explore your surroundings, to move, to investigate. To jam your sensory cluster where it does not belong. To drop a block and see it fall. To drop a balloon and see it rise -- surely that must suggest an avenue of investigation.
I would be surprised if neural networks will not make a showing. Recognition through the fuzziness and quirks of reality is needed before you can sufficiently abstract into various rules. That isn't a new species of animal, it's a cat without fur and with an unusual number of limbs (Sphynx with a missing leg). Recognize, then expand the knowledge: cats usually have fur, but some breeds do not. Cats usually have four limbs, but they can be lost. But before you can say that, you still have to recognize the catness of the beast before you.
Another probable component: emotion. e-motion. Motion. Drive. In a way, hunger is an emotion, as is a full bladder. These are prompts to do something that one might return to equilibrium and focus on the higher level problems. Buildup of carbon dioxide, that one surely prompts you. These are currently tied to our embodiment, but parallels can be worked out, such as Opportunity's last ... words ... "My battery is low and it's getting dark." Right now, our attempts at intelligences are still reactive. Waiting for a prompt, waiting for a problem to be put before them. I suspect a strange loop (or really a series of interconnected ones) might be one of the final components, with the emotions serving as a way to keep it from lapsing into catatonia. Boredom would be one. Most of our human emotions would be counter-productive, but probably not all, and there may be analogues.
As a subcomponent of that, I believe in laughter. Babies laugh a lot. We laugh at jokes. No, we laugh at new jokes; old jokes grow progressively less and less funny. Laughter seems to be often prompted by a new connection, and it is both a reward and a communication.
I have covered both textual ingestion and experiential evidence, but let's throw in another one: I expect that the ability to examine an expert system and replicate it, to incorporate it into its mind, is one of them. Think about those old flowcharts in hardware books which enable you to diagnose a computer with boot issues. We can do it and I expect an AGI ought to be able to examine other rulesets for ingestion as well.
In short, I am suggesting that our first AGI may require all of the things a human does to be intelligent. We may find that we can do without one or two components. While I most certainly do not want a Human in a Box to be the end product, for a number of reasons, it does seem reasonable to me that we ought to incorporate the different things we already know about the human mind into something we are hoping to be an equivalent.
However, I strongly suspect that giving it full access to something like Cyc would beat just chucking encyclopedias at it and telling it to read.