
My brush with AI snake oil:

I interviewed at a startup that seemed fishy. They offer a fully AI-powered customer service chat to banks as an off-the-shelf black box. I strongly suspect it's a pseudo-AI setup: LinkedIn shows they are light on developers but very heavy on "trainers", probably the people who actually handle the customers. Most are young graduates in unrelated fields who may believe their interactions will provide the data needed to build a real AI.

I doubt the AI will ever be built; it's just a glorified Mechanical Turk help desk. I guess the banks will keep it going as long as they see near-human-level output.




A very common kind of AI startup. They take pre-made AI tools (e.g. a Watson bot) and resell them to specific industries where they already have the "intent trees" built (the common questions and actions users want). The trainers are nothing more than analysts who spot an intent not yet listed in the tree and configure it there. The devs are probably API and frontend devs; there isn't much AI work going on.
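For anyone wondering what such an "intent tree" amounts to, here's a minimal sketch of the idea: a lookup from trigger phrases to canned responses that the trainers extend over time. All the names, intents, and structure here are illustrative assumptions, not any specific vendor's API.

    # Minimal intent-tree sketch (illustrative only, not a real vendor's format).
    # Trainers maintain this mapping; unmatched messages go to a human.
    INTENT_TREE = {
        "check_balance": {
            "triggers": ["balance", "how much money", "account balance"],
            "response": "Your current balance is {balance}.",
        },
        "block_card": {
            "triggers": ["lost my card", "stolen card", "block my card"],
            "response": "Your card ending in {last4} has been blocked.",
        },
    }

    def match_intent(message):
        """Return the first intent whose trigger phrase appears in the message."""
        text = message.lower()
        for intent, config in INTENT_TREE.items():
            if any(trigger in text for trigger in config["triggers"]):
                return intent
        # No match: escalate to a human "trainer", who may then add a new intent.
        return None

    print(match_intent("Hi, I lost my card yesterday"))           # -> "block_card"
    print(match_intent("Can you explain my mortgage statement?")) # -> None (human takes over)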


I don't think that is problematic in principle (unlike the social problems pointed out in the talk). A system that amplifies human workers by filling in for their most common tasks could, over time, use sophisticated tech drawing on the latest in NLP. The metric would be the ratio of service requests handled per day to the number of "trainers" (or whatever they're called), compared to the median for a purely human operation where every service request is handled by a customer-visible human (a rough calculation follows below).

In the Mechanical Turk analogy there is no such capability amplification happening.
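A back-of-the-envelope version of that metric, with all figures made up purely for illustration:

    # Amplification ratio sketch; every number below is invented for illustration.
    requests_per_day = 5000          # service requests the "AI" system handles daily
    trainers = 20                    # humans behind the curtain
    requests_per_human_agent = 50    # median daily load for a purely human help desk

    amplification = (requests_per_day / trainers) / requests_per_human_agent
    print(f"Amplification ratio: {amplification:.1f}x")  # 5.0x here

    # A ratio near 1 means it's a Mechanical Turk in disguise; well above 1 means
    # the tooling is genuinely multiplying what each human can handle.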


My experience with automated "help" desks is that I have to let the automatons fail one after another until I'm finally connected to a real human. Only then can I start to state my problem. All those automated substitutes really do is discourage customers from calling at all.


I have a feeling I know _exactly_ which company you're talking about...so it's either just that obvious, or there's more than one of these, or both!


It seems to be the latter. In fact, the trick (should I say fraud?) is so common that there were even several articles about it in the press over the past two or three years. Even the famous x.ai had (and I guess still has) humans doing the work.

https://www.bloomberg.com/news/articles/2016-04-18/the-human...


I was going to say the same thing!

A couple of weeks ago such a startup based in London contacted me on LinkedIn. The product really hyped AI, but it all seemed very dubious. My guess was that it was really a simple chatbot backed by a Mechanical Turk-style second line.


I guess the idea would be to get a few contracts, pull down some money from those and then go bust as the costs of the Mechanical Turk become evident?


> go bust as the costs of the Mechanical Turk become evident

I'm afraid you have misspelled "raise a humongous round from SoftBank". It's an easy typo to make, don't feel bad.


Hugh mongous what?



