
I think that's a fair bet. The new wave of "probabilistic everywhere" NLP models, though even the very simplest strictly dominate older grammatical methods, are often incapable of taking advantage of the structure of language and topic that humans exploit out of habit. It's a cutting-edge accomplishment when an NLP algorithm learns to predict long-range word pairs, such as that you will almost certainly see "law" or "marriage" somewhere in a sentence containing the word "annulled", even if the local neighborhood of that sentence doesn't seem to call for it. Humans, on the other hand, are more likely to forget that it's possible to annul pretty much anything else.
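
To make that concrete, here's a toy sketch of one such long-range pair statistic, pointwise mutual information over whole-sentence co-occurrence (Python; the three-sentence corpus and every number are invented for illustration, this isn't what any real system actually does):

    import math
    from collections import Counter
    from itertools import combinations

    # Tiny invented corpus; real systems train on millions of sentences.
    corpus = [
        "the court annulled the marriage under state law".split(),
        "the judge annulled the contract citing family law".split(),
        "the couple celebrated their marriage last spring".split(),
    ]

    unigrams = Counter()
    pairs = Counter()
    for sentence in corpus:
        unigrams.update(sentence)
        # Count every word pair in the sentence, however far apart;
        # this is the long-range signal a local n-gram model never sees.
        for a, b in combinations(sentence, 2):
            pairs[frozenset((a, b))] += 1

    total_words = sum(unigrams.values())
    total_pairs = sum(pairs.values())

    def pmi(a, b):
        # PMI > 0 means the pair co-occurs more often than chance predicts.
        p_ab = pairs[frozenset((a, b))] / total_pairs
        p_a, p_b = unigrams[a] / total_words, unigrams[b] / total_words
        return math.log(p_ab / (p_a * p_b)) if p_ab > 0 else float("-inf")

    print(pmi("annulled", "law"))     # clearly positive: they travel together
    print(pmi("annulled", "spring"))  # -inf here: never seen together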

I don't own a TV and plan on watching the Jeopardy match later online, so I'm just going to guess about Watson's performance. I think that humans abuse discovered patterns and structure in language and meaning to search through possible interpretations very quickly. Watson, on the other hand, uses far less structure and a room full of 200 cores to search through everything it knows much less efficiently. I feel like Watson's "strange" answers probably aren't nearly so strange once you realize it's simply being more fair to every possible answer than a human would be.
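
To put that guess in code form: a toy reranker where the same raw evidence scores get combined with either a flat prior (fair to everything, as I imagine Watson works) or a human-style prior that all but rules out candidates outside the expected category. Everything here, names and numbers alike, is invented for the sake of the sketch:

    # Toy sketch: how a flat prior vs. a structured prior reranks answers.
    def rerank(evidence, prior):
        # posterior score is proportional to evidence * prior; best first
        scores = {c: evidence[c] * prior.get(c, 0.0) for c in evidence}
        return sorted(scores, key=scores.get, reverse=True)

    # Raw textual evidence slightly favors an out-of-category candidate.
    evidence = {"in_category_answer": 0.28, "out_of_category_answer": 0.30}

    flat_prior = {c: 1.0 for c in evidence}     # every answer is fair game
    human_prior = {"in_category_answer": 0.90,  # humans prune hard on context
                   "out_of_category_answer": 0.05}

    print(rerank(evidence, flat_prior))   # the "strange" answer wins
    print(rerank(evidence, human_prior))  # the structured prior flips it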

What's scary is that this sort of thing, a willingness to consider out-of-context answers, sounds pretty similar to the kind of behavior we humans praise as creative!




I think that humans abuse discovered patterns and structure in language and meaning to search through possible interpretations very quickly.

Right, but does that structure really represent a "deeper" understanding or just vast and meticulous optimizations of statistical algorithms similar to Watson's? Or is there a difference?

We feel like we know how we think, but we can't actually explain it in enough detail to reproduce it. Humans have a bad history of rationalization and tunnel vision. And now we discover that all the "wrong" ways to think deeply are actually the right ways to make a working AI.

If the AI can fool us into believing that it "understands" then maybe we can fool ourselves in the same way.


I don't honestly feel like we know how we think at all. I do think that statistics is a pretty good bet for the "math of learning", in that it's a sensible way to track how information flows through a model. Furthermore, the combinatorial problems involved need to be tackled just the same by humans, so we can perhaps claim that we're studying phenomena similar to the workings of the brain.
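
As a tiny illustration of what I mean by tracking information flow, here is a single Bayes update where one observed word shifts belief over two topics (Python; the priors and likelihoods are numbers I made up):

    # Belief over topics before seeing any words.
    priors = {"law": 0.5, "weather": 0.5}
    # Assumed P(word "annulled" | topic), purely for illustration.
    likelihood = {"law": 0.08, "weather": 0.001}

    # Bayes update: posterior is proportional to prior * likelihood.
    evidence = sum(priors[t] * likelihood[t] for t in priors)
    posterior = {t: priors[t] * likelihood[t] / evidence for t in priors}
    print(posterior)  # nearly all the belief mass flows to "law"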

Of course, the implementations we build will always look vastly different from their counterparts in the brain, since the architectures are so extraordinarily different!



