
New account, banal observation, invitation to interact. It's a pattern I've seen recently on what are apparently LLM-powered engagement bots on Twitter.


It's a harmless comment that could even generate some interesting responses. I think the bar for accusing other commenters of being LLMs should be set higher than this.


How do you think it makes people who write ("generate" cheapens it) the interesting responses feel? Answering people's questions has two main rewards to the writer: it feels good to help someone else, and it boosts your ego to pontificate on something you know that others might not. Discovering that you're replying to a bot makes you feel like you've been fooled.

Anyway, I'm more aware than ever now that the people we interact with in text, online, are gradually being diluted by bots, and I don't want to participate in a community of bots.

Maybe the comment isn't generated by an LLM, and I didn't make any direct accusations. But it is a weird first comment. You expect the first comment from a fresh account to show evidence that the author was motivated to create the account because they had something to say; that takes non-zero activation energy. Meanwhile, there's plenty of incentive to try to sway what gets attention here on HN.


Maybe someone is testing their LLM to see how good a score it can get on HN.


>who write ("generate" cheapens it)

Eh? I'm saying that the comment could generate (i.e. provoke) interesting responses. I'm not saying that the responses themselves will be "generated" by their authors as opposed to being written.

>Maybe the comment isn't generated by an LLM

Indeed. I think we should definitely bear this possibility in mind!


Accounts like these can be used later on to boost topics.


I understand the potential nefarious motives for spamming the site with LLM comments. It's just that there isn't really any evidence in this case that the comment is generated by an LLM. I wouldn't be surprised if it was, but I also wouldn't be surprised if it wasn't.


There's a huge swarm of recent spam accounts with submissions and generated comments that follow the same naming pattern e.g. https://news.ycombinator.com/submitted?id=MetricsMaverick

It's not much in the way of evidence but could be another reason people find it sus.
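
If anyone wants to poke at this themselves, here's a minimal sketch against HN's public Firebase API (https://github.com/HackerNews/API). The "NounNoun" regex is just a crude, illustrative heuristic for the naming pattern mentioned above, and the account name is simply the one already cited; none of this is evidence on its own.

    # Minimal sketch: pull basic profile data for an HN account and check
    # whether the username matches a crude "NounNoun" pattern.
    # Uses the public HN Firebase API; the regex and the example account
    # name are purely illustrative.
    import re
    import requests

    HN_API = "https://hacker-news.firebaseio.com/v0"
    CAMEL_PAIR = re.compile(r"^[A-Z][a-z]+[A-Z][a-z]+$")  # e.g. "MetricsMaverick"

    def account_summary(username):
        """Return creation time, karma, and item count for an HN account, if it exists."""
        user = requests.get(f"{HN_API}/user/{username}.json", timeout=10).json()
        if user is None:  # API returns null for unknown accounts
            return None
        return {
            "id": user["id"],
            "created": user["created"],              # Unix timestamp of account creation
            "karma": user["karma"],
            "items": len(user.get("submitted", [])),
            "pattern_match": bool(CAMEL_PAIR.match(username)),
        }

    if __name__ == "__main__":
        print(account_summary("MetricsMaverick"))

A pattern match plus a cluster of similarly named accounts created around the same time is still only circumstantial, but it's the kind of thing that makes people suspicious.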


It doesn't sound human to me. Yet. See the other comments about portability in sibling threads.


I wonder if someone here is talking to themselves? Asking open questions with the trash account and answering them with the real account.


What are those engagement bots for? Are they farming followers so they can turn into disinformation machines later? Or is it only to drive engagement and cultivate ad revenue, whether for Twitter or for some “brand” adjacent to their topic?


The way hn karma works, you’re incentivized to accumulate a set of bots that meet the threshold for flagging and downvoting. Barring effective anti-bot/brigading measures (some of which I’m sure exist), such a team of bots could manipulate front page content. That’s a valuable capability for a variety of nefarious aims in this space.

A straightforward one would be subtly (or dramatically) tilting sentiment here against some mid-cap public tech company and profiting from a short position.

(Assuming you think investors care what gets upvoted on hn…)


Honestly, I hope LLM-powered bots end up being hugely beneficial for online discourse in the long term (even as they keep creating mayhem in the short term).

My hope is that long-term people will realize that most of the arguments and outrage they see online is manufactured to boost engagement and simply start ignoring it or seeking private invite-only forums to discuss things.

People mostly failed to realize this when humans controlled fake/bot/troll accounts, but with the proliferation of LLMs the realization is spreading much faster.

I know it sounds naive, but that's my hope - that the global village will once again become local instead.


New fear unlocked.


Your fear is reality: https://replyguy.com/



