Ask HN: What if ChatGPT popularity breaks your 2024 startup?
4 points by neilv on May 4, 2024 | 5 comments
Just before ChatGPT, a cofounder and I were refining the concept of a specialized social site/app.

Unfortunately, the explosive popularity of ChatGPT and similar LLMs among ordinary people, for tasks like cheating on homework, breaks part of the fundamental appeal and mechanism of our site.

(Note that, even before ChatGPT, there has always been a strong incentive for bad actors to "game" this application domain, so it's going to happen. Our approach was actually a barrier to those people.)

Is there any way we can prevent LLM-generated content from destroying the key to our startup?

Ideas:

* Honor code: change people's default assumptions about what's appropriate, and have them take pride in it.

* Have users police submissions, such as with flagging and/or voting, and hope they can detect enough LLM-generated content that HQ can direct sufficient negative feedback at it. (See the sketch after this list.)

* Do identity verification, to help suspensions and bans have teeth as a deterrent, and to discourage repeat offenders. Costly and invasive.

* Find a lawyer who really hates the jerks and wants to craft a legal way to go after them (including deep-pocketed competitor saboteurs operating through an intermediary). Like, IANAL, but maybe there's a way to offer a side product, for a billion dollars, that lets people do LLM posting/shilling/etc., and then doing it without paying is theft of service with a price tag. Or a better idea an actual smart lawyer could come up with.
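
To make the user-policing idea concrete, here's a minimal sketch of how flags could feed HQ review. Everything in it is hypothetical, not something we've built: the names Submission, record_flag, and needs_review, the trust weights, and the threshold are all illustrative. Flags are weighted by a per-user trust score, so a small dumb or dishonest minority can't dominate, and HQ only looks once the weighted total crosses a threshold:

    from dataclasses import dataclass, field

    @dataclass
    class Submission:
        id: str
        # flagger_id -> that flagger's trust weight at the time of the flag
        flags: dict[str, float] = field(default_factory=dict)

    def record_flag(sub: Submission, flagger_id: str, trust: float) -> None:
        """Record one 'looks AI-generated' flag, weighted by the flagger's
        [0, 1] reputation, so a lone user can't bury a submission."""
        sub.flags[flagger_id] = trust  # one flag per user; re-flagging updates the weight

    def needs_review(sub: Submission, threshold: float = 3.0) -> bool:
        """Queue a submission for HQ review once weighted flags cross a threshold."""
        return sum(sub.flags.values()) >= threshold

    # Example: two flags from low/medium-trust users aren't enough on their own.
    sub = Submission(id="post-123")
    record_flag(sub, "alice", trust=0.9)
    record_flag(sub, "bob", trust=0.2)
    print(needs_review(sub))  # False: weighted total is 1.1, below 3.0

Trust could then be nudged up or down based on whether HQ agrees with each of a user's flags.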

Better/additional ideas?



> can prevent LLM-generated content from destroying the key to our startup?

If LLM-generated content can "destroy" your startup, you need a new startup. Programmatically generated content and people trying to game the algorithm have been problems for literal decades, and Google/Facebook/Instagram/etc. are still going strong.


My initial thought was something like that, but I keep coming back to the question of whether there's a workaround/mitigation for the crappy behavior we're seeing from people with LLMs.


> Have users police submissions, such as with flagging and/or voting, and hope they can detect enough LLM-generated content that HQ can direct sufficient negative feedback at it.

This is a race to the bottom, as users will fight to flag each other's submissions as AI-generated. At best, you're going to have such a high false-positive rate that it'll be useless.


You're speaking of the users doing that dishonestly?

Or doing it honestly but being dumb? (Or maybe just a dumb minority is sufficient to ruin things?)


ChatGPT told users our startup was the solution to their question ("how to track the location of a phone number"). It wasn't true (and if it were, we'd be millionaires overnight). We dealt with support requests and unhappy users, and we had to add messages to the onboarding flow. It's frustrating, though lately the ChatGPT responses have gotten better: more nuanced, with more warnings. Maybe my giving feedback (clicking "bad response") after searching helped. It didn't break our startup; it was just annoying to deal with users who believed the hallucinations while not reading our documentation.
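
For anyone facing the same thing, the onboarding messages were roughly in this spirit (a minimal hypothetical sketch; the names, message copy, and matching logic here are invented for illustration, not our actual code):

    # If a user's stated reason for signing up matches a known ChatGPT
    # hallucination, correct the expectation up front instead of in support.
    UNSUPPORTED_EXPECTATIONS = {
        "track the location of a phone number": (
            "Heads up: we do not track phone locations, no matter what a "
            "chatbot told you. Here's what we actually do: ..."
        ),
    }

    def onboarding_notice(signup_reason: str) -> str | None:
        """Return a corrective message if the signup reason matches a known
        hallucinated use case; otherwise return None."""
        reason = signup_reason.lower()
        for phrase, notice in UNSUPPORTED_EXPECTATIONS.items():
            if phrase in reason:
                return notice
        return None

    print(onboarding_notice("I want to track the location of a phone number"))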



