Hacker News

From the descriptions I've read, Watson only buzzed in after it believed it had a solid answer. If it had been significantly less accurate than its competitors, it wouldn't have spent the entire match buzzing in ahead of them.
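The buzz-in behavior described above can be sketched as a simple confidence-threshold policy. This is purely illustrative: the function name, threshold value, and candidate scores are invented here, and Watson's actual decision logic was far more elaborate.

```python
# Hypothetical sketch of a confidence-threshold buzz policy.
# Watson's real system combined many scoring components; this only
# illustrates the idea of buzzing when the top answer clears a bar.

def should_buzz(candidates, threshold=0.5):
    """Buzz only if the top-ranked candidate clears the confidence threshold.

    candidates: list of (answer, confidence) pairs, like the top-3 overlay
    shown on the broadcast. Returns (buzz?, best_answer).
    """
    if not candidates:
        return False, None
    best_answer, best_conf = max(candidates, key=lambda c: c[1])
    return best_conf >= threshold, best_answer

# Example: three candidate answers with made-up confidence scores.
candidates = [("Toronto", 0.14), ("Chicago", 0.62), ("New York", 0.10)]
buzz, answer = should_buzz(candidates)
print(buzz, answer)  # True Chicago
```

Under this kind of policy, staying quiet on a clue means low confidence, not necessarily ignorance of the answer, which is why raw buzz counts understate accuracy.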

I'm afraid I don't understand the impulse to trivialize the fact that we now have a computer that can answer general-knowledge natural-language queries quickly and about as accurately as a clever person. That's a Big Deal.



For every single clue, there is an overlay showing Watson's top three answers and whether Watson was confident enough to buzz in. On day 2, I counted only three times when Watson had a confident response but was beaten to the buzzer.

None of this really minimizes IBM's accomplishment, but it absolutely means this specific presentation (Jeopardy!) lacks weight for those of us who understand what the game dynamics of Jeopardy! are. This is nowhere near as impressive a "man vs. machine" victory as was Deep Blue vs. Kasparov.


Even considering that, I don't know exactly what makes it less impressive than Deep Blue. Watson correctly knew 84% of the answers in the second round. Looking at past games, that's very much competitive with humans. To me, this is much more impressive than Deep Blue, even accounting for the fast reaction times.



