
So is there any information on how they actually implemented Watson? My understanding is that it's a Bayesian machine-learning system, but I still don't know how it parses answers, or really does its magic.

Also, if there is anyone who thinks Silicon Valley has the smartest people around, this type of stuff should change your mind. Facebook is short trousers compared to this, and it's just a tech demo.




Some cool stuff here: http://www-943.ibm.com/innovation/us/watson/

The real challenge behind Watson is the natural-language parsing. Instead of abstracting information away from its sources (like a graph), the sources seem to have been left intact as sentences in Watson's memory. Watson reads through this information in a way similar to how it interprets a question, and it tries to create links and possible answers based on connections between sentences from many sources (which suggests why pun clues are difficult for Watson). I can't speak to the mathematical implementation of the answer choices, but this is, at a high level, how Watson finds answers. Those videos cover the cool stuff behind the algorithmic challenges of Watson.
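
As a toy illustration of that idea (nothing like the real DeepQA pipeline; the function names and the mini-corpus are made up), in Python: keep the source sentences verbatim and rank candidate answers by how much the clue's terms overlap with the sentence each candidate came from.

    import re
    from collections import defaultdict

    STOPWORDS = {"the", "a", "an", "of", "in", "on", "is", "was", "its",
                 "this", "that", "to", "for", "and"}

    def terms(text):
        """Lowercase word tokens with stopwords removed."""
        return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS}

    def candidates(clue, corpus):
        """corpus: (candidate_answer, source_sentence) pairs kept intact.
        Returns candidates ranked by term overlap with the clue."""
        clue_terms = terms(clue)
        scores = defaultdict(float)
        for answer, sentence in corpus:
            overlap = clue_terms & terms(sentence)
            if overlap:
                scores[answer] += len(overlap) / len(clue_terms)
        return sorted(scores.items(), key=lambda kv: -kv[1])

    corpus = [
        ("Toronto", "Toronto's Pearson airport is named for a prime minister."),
        ("Chicago", "Chicago's O'Hare airport is named for a World War II hero."),
    ]
    print(candidates("Its largest airport is named for a World War II hero", corpus))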


You aren't joking. I took a few minutes and wrote up a Bayesian engine in Mathematica. I've got a pretty good start on that already, and as the IBM material notes, it's embarrassingly parallel. It seems to me the entire problem is parsing. If you can parse well and feed a well-formed input to your data layer (and you've fed it enough data), you're golden.
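
Roughly the kind of thing I mean, sketched in Python instead of Mathematica (illustrative only; the class name and toy data are made up). Scoring each candidate answer is independent of the others, which is where the embarrassingly parallel part comes in.

    import math
    from collections import Counter

    class NaiveBayesQA:
        """Toy naive Bayes scorer: P(answer | clue) ~ P(answer) * prod P(term | answer)."""

        def __init__(self):
            self.term_counts = {}          # answer -> Counter of terms seen with it
            self.answer_counts = Counter()

        def train(self, answer, terms):
            self.term_counts.setdefault(answer, Counter()).update(terms)
            self.answer_counts[answer] += 1

        def score(self, answer, clue_terms):
            """log P(answer) + sum of log P(term | answer), with add-one smoothing."""
            counts = self.term_counts[answer]
            total = sum(counts.values())
            vocab = len(counts) + 1
            s = math.log(self.answer_counts[answer] / sum(self.answer_counts.values()))
            for t in clue_terms:
                s += math.log((counts[t] + 1) / (total + vocab))
            return s

        def ask(self, clue_terms):
            # Each candidate is scored independently, so this loop parallelizes trivially.
            return max(self.term_counts, key=lambda a: self.score(a, clue_terms))

    qa = NaiveBayesQA()
    qa.train("Jupiter", ["largest", "planet", "gas", "giant"])
    qa.train("Mercury", ["smallest", "planet", "closest", "sun"])
    print(qa.ask(["largest", "gas", "planet"]))   # Jupiter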

So who wants to build a real Q/A site based on this? Call it hal-18000.


> So who wants to build a real Q/A site based on this? Call it hal-18000.

You'd have to teach it to deal with thick accents like this one: http://www.youtube.com/watch?v=5FFRoYhTJQQ . Honestly, I don't know if that's possible, no matter how much training you put into the machine.


Natural Language Processing != Voice Recognition.


> Natural Language Processing != Voice Recognition

This is what I don't get: why should "language processing" be tied to written text? Part of the answer I know, because text is easier for computers to parse, but other than that it doesn't make sense.


Speech recognition is speech recognition. Different problem entirely.


There's a high-level overview in this paper from AI Magazine, Fall 2010: http://www.stanford.edu/class/cs124/AIMagzine-DeepQA.pdf


> Also, if there is anyone who thinks Silicon Valley has the smartest people around, this type of stuff should change your mind.

Watson is an impressive achievement, but there are quite a few companies in Silicon Valley whose engineers could pull this off. It's more a matter of how much money management feels like throwing at it. It's great publicity for IBM, which has to put in a lot more effort than most Silicon Valley companies in order to look cool, but can afford it.


Entirely true. But the point I was trying to make is that the current batch of dotcoms isn't even in the same league as the big boys: IBM, HP, SGI(?), Google, parts of Oracle.

There's driven and then there's Smart.


Also, does anyone else really love the architecture at the Watson research center? That just looks like a place I'd want to work: lots of wood, stone, and glass. Love it. Wired had some details on the building, and the cafeteria is right out of Mad Men. So much better than "open plan" (we're too cheap to give code monkeys space).

Here are the pictures: http://www.wired.com/epicenter/2011/02/watson-jeopardy/?pid=...


As you approach it, it looks like an airport (Dulles, specifically). I work at the Hawthorne location, so I don't see the imposing architecture every day. But we do have a similar board of IBM Fellows, and I often pause at it on my way out at the end of the day. It is humbling and inspiring.



