
This is obviously AI-generated, if that matters.

And I have an AI workflow that generates much better posts than this.



I think it's just written by someone who reads a lot of LLM output - lots of lists with bolded prefixes. Maybe there was some AI assistance (or a lot), but I didn't get the impression that it was AI-generated as a whole.


"Hard truth" and "reality check" in the same post is dead giveaway.

I read and generate hundreds of posts every month. I have to read books on writing to keep myself sane and not sound like an AI.


Absolutely! And you're right to think that. Here's why...


Apologies! You're exactly right, here's how this pans out…


True, the graphs are also wonky - the curves don't match the supposed math.


Yeah, that was confusing to me.


I wonder why a person from Bombay, India might use AI to help with an English-language blog post…

Perhaps more interesting is whether their argument is valid and whether their math is correct.


The thing that sucks about it is that maybe his English is bad (not his native language), so he relies on LLM output for his posts. I'm inclined to cut people slack for this. But the rub is that it's indistinguishable from spam/slop generated for marketing/ads/whatever.

Or it's possible that he's one of those people who _really_ adopted LLMs into _all_ of their workflow, I guess, and he thinks the output is good enough as is, because it captured his general points?

LLMs have certainly damaged trust in general internet reading now, that's for sure.


I'm neither for nor against AI-generated posts. I was just making an observation and testing my AI classifier.


The graphs don't line up. I'm inclined to believe they were hallucinated by an LLM and the author either didn't check them or didn't care.

Judging by the other comments this is clearly low-effort AI slop.

> LLMs have certainly damaged trust in general internet reading now, that's for sure.

I hate that this is what we have to deal with now.


I don't know why you do. I found the article interesting and derived value from it. I don't care whether it was an LLM or a human that gave me the value. I don't see why it should matter.


It matters to me for so many reasons that I can't go over them all here. Maybe we have different priorities, and that's fine.

One reason why LLM generated text bothers me is because there's no conscious, coherent mind behind it. There's no communicative intent because language models are inherently incapable of it. When I read a blog post, I subconsciously create a mental model of the author, deduce what kind of common ground we might have and use this understanding to interpret the text. When I learn that an LLM generated a text I've read, that mental model shatters and I feel like I was lied to. It was just a machine pretending to be a human, and my time and attention could've been used to read something written by a living being.

I read blogs to learn about the thoughts of other humans. If I wanted to know what an LLM thought about the state of vibe coding, I could just ask one at any time.



