Hacker News

In what world is it appropriate or even legal to decide on refunds via LLM?

Can you give an example that's not ripe for abuse? This really doesn't sell LLMs as anything useful except insulation from the consequences of bad decisions.



Don't think of the LLM as completely replacing the support agent here; rather, augmenting them. A lot of customer service is setting/finding context: customer name, account, order, item, etc. If an LLM chatbot can do all of that and then hand off to a human support agent, there are real cost savings to be had without reducing the quality of service.
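The "bot gathers context, human decides" pattern above can be sketched roughly like this. The chat/LLM part is stubbed out, and every name here (the field list, the handoff shape) is illustrative, not any real product's API:

```python
# Illustrative only: a bot that collects the context a human agent
# would otherwise spend time looking up, then hands off a pre-filled
# summary. Field names and handoff format are made up for the sketch.

REQUIRED_FIELDS = ["customer_name", "account_id", "order_id", "item"]

def gather_context(answers):
    """Return the next question to ask, or the completed context dict
    once every required field has been collected."""
    for field in REQUIRED_FIELDS:
        if field not in answers:
            return f"Please provide your {field.replace('_', ' ')}."
    return answers  # complete: ready to hand to a human agent

def handoff_to_agent(context):
    # In a real system this would open a ticket with the pre-filled
    # context, so the human starts with everything already looked up.
    return f"Ticket opened for order {context['order_id']}"
```

The decision itself (approve/deny the refund) never happens in this layer; the bot only does the lookup legwork.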


I'd love for others to think that way. I am a very vocal (in my own bubble) advocate for human-in-the-loop ML.


Have you requested a refund from Amazon lately? They have an automated system where, IIRC, a wizard asks you a few questions and then processes the request, presumably inspecting your customer history and so on. If the system thinks your request looks genuine and it's within whatever parameters they've set, it accepts instantly and refunds you, sometimes without even asking you to send the item back. If it's less sure, it passes the request on to a human agent to be dealt with as it would have been in the Before Times.
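That tiered flow can be sketched in a few lines. This is not Amazon's actual system; the risk heuristic and thresholds are invented for illustration. The key property is that the automated path can only approve, never deny, so any uncertain case lands with a human:

```python
# Hedged sketch of tiered refund triage: auto-approve clear-cut cases,
# escalate everything else to a human. All numbers are made up.

from dataclasses import dataclass

@dataclass
class RefundRequest:
    amount: float
    account_age_days: int
    prior_refunds: int

def risk_score(req: RefundRequest) -> float:
    """Toy heuristic; a real system might use purchase history,
    item category, or an ML model instead."""
    score = 0.0
    if req.amount > 100:
        score += 0.4
    if req.account_age_days < 30:
        score += 0.3
    if req.prior_refunds > 3:
        score += 0.3
    return score

def triage(req: RefundRequest, auto_approve_below: float = 0.3) -> str:
    """Auto-approve low-risk requests; hand the rest to a human.
    The automated path never denies, so rejection always gets review."""
    if risk_score(req) < auto_approve_below:
        return "auto_approved"
    return "escalate_to_human"

print(triage(RefundRequest(amount=20, account_age_days=400, prior_refunds=0)))
# → auto_approved
print(triage(RefundRequest(amount=250, account_age_days=10, prior_refunds=5)))
# → escalate_to_human
```

An LLM slotting into this flow would just be another scorer or question-asker in front of the same human fallback.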

I can see no reason why it would be illegal or inappropriate to use an LLM as part of the initial flow there. In fact I see no reason why it would be illegal for Amazon to simply flip a coin to decide whether to immediately accept your refund. (Appropriateness is another matter!)

I guess you're assuming the LLM would be the only point of contact with no recourse if it rejects you? Which strikes me as very pessimistic, unless you live in a very poorly regulated country.


"Imagine" is the operative word :-)



