Hacker News

> For example, in the sentence "The cat, which is black, sat on the mat," the words "cat" and "black" would get high scores when trying to understand the word "black" because they are closely related.

So what does that actually mean in terms of looking at new text? How does it know the relationships? Does it have to be bootstrapped on labeled data for a specific language up front?

Is that something done in the training process - providing example sentences and illustrating the connections between words - or is that earlier?
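The scoring the quoted sentence describes is usually computed as scaled dot-product attention. A minimal sketch, using hypothetical toy vectors (real models learn their embeddings and projection weights during training on raw text, with no labeled word relationships):

```python
import numpy as np

# Hypothetical 4-dim word vectors; in a real model these are learned
# parameters, not hand-assigned.
vecs = {
    "cat":   np.array([0.9, 0.1, 0.0, 0.3]),
    "black": np.array([0.8, 0.2, 0.1, 0.4]),
    "mat":   np.array([0.1, 0.9, 0.5, 0.0]),
}

def attention_scores(query_word, words):
    """Score each word against the query via a scaled dot product,
    then softmax so the weights sum to 1."""
    q = vecs[query_word]
    keys = np.stack([vecs[w] for w in words])
    logits = keys @ q / np.sqrt(q.size)          # similarity, scaled
    weights = np.exp(logits - logits.max())      # stable softmax
    return dict(zip(words, weights / weights.sum()))

scores = attention_scores("black", ["cat", "black", "mat"])
# "cat" and "black" have similar vectors here, so both receive more
# weight than "mat" when attending from "black".
```

Nothing here is language-specific or labeled: the high "cat"/"black" scores fall out of the learned vectors being similar, which training on plain text induces.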


