
There are many intelligent replies here.

5 years ago I wrote the recommendation system that Netflix uses (and has degraded since then). One major problem is that, in the past, certain senior Netflix managers were only interested in self-promotion (I would hate to extrapolate to the current ones, even if the extrapolation is reasonable). A/B tests are a perfect device for this. What counts as a better recommender? They were not interested in improving the product. It is easier to win by politics/lying/obfuscation/omission/plagiarism than to come up with better ideas. If the company goes down, they move on with a good resume, and the games they played (for instance USPTO fraud) stay hidden.

Scroll down for my comments here: https://www.reddit.com/r/MachineLearning/comments/6xiwr4/d_w...

Some companies are like this. For instance, the "Netflix prize team" at Verizon/Yahoo refuses to share recommendation data with other teams, leaving them nothing to do. They work in a bubble and will actively try to remove anyone who might be a competitor.

It's sad that Netflix decided to pursue this work (+ other even more brain-dead projects like rewriting the command-line parser and "switching to OO" - examples of Xavier's initiatives). They could have, 5 years ago, pushed the beta system that had 40% extra performance. (I'm currently at a factor of 3 better performance.)


