This article is so interesting, but I can’t shake the feeling it was written by AI. The writing style has that feel for me.
Maybe that shouldn’t bother me? Like, maybe the author would never have had time to write this otherwise, and I would never have learned about his experience.
But I can't help wishing he'd just written about it himself. Maybe that's unreasonable--I shouldn't expect people to do extra work for free. But if this happened to me, I would want to write about it myself...
I'm regularly asked by coworkers why I don't run my writing through AI tools to clean it up, and instead spend time iterating over it, re-reading, perhaps with a basic spell checker and maybe a grammar check.
That's because, from what I've seen to date, it'd take away my voice. And my voice -- the style in which I write -- is my value. It's the same as with art... Yes, AI tools can produce passable art, but it feels soulless and generic and bland. It lacks a voice.
It also slopifies your work in a way that's immediately obvious. I can tell with high confidence when someone at work runs their email through ChatGPT, and it makes me think less of them. Now I have to waste time reading through an overly verbose email with very little substance when they could have just sent the prompt and saved us all the time.
I manage an employee from another country who speaks English as a second language. The way they learned English gives them a distinct speaking style that I personally find convincing, precise, and engaging. I started noticing their writing losing that voice, so I asked if they were using an LLM, and they were. It was a tough conversation because, as a native English speaker, I have it easy, so I tried to frame my side of the conversation as purely a personal observation: I could see the change in tone and missed the old one. They've modified their use of LLMs to restore their previous style, but I still wonder if I was out of line socially for saying anything. English is tough, and as a manager I have a level of authority that is there even when I think it isn't. I don't know the point, except that I'm glad you're keeping your voice.
As a non-native English speaker living in AU, I can offer my opinion in case it's helpful.
Of course I can't speak to the person you mentioned but if you said what you did with respect and courtesy then they probably would've appreciated it. I know I would have. To me, there's no problem speaking about and approaching these issues and even laughing about cultural issues, as long as it's done with respect.
I once had a manager who told me that a certain client found the way I speak scary. When I asked why, it turned out that they weren't expecting the directness in my manner of speech. Which is strange to me, since we were discussing implementation and requirements, where directness and precision are critical; when they're not... well, that's how projects fail, in my opinion. On the other hand, there were times when speaking to salespeople left me dizzy from all the spin. Several sentences later, I still had no idea if they had actually answered the question. I guess that client was expecting more of the latter. Extra strange, since that would've made them spend more money than they had to.
Now running my own business, I have clients who thank me for my directness. Those are the ones who have had it with salespeople who think doing sales means agreeing to everything the client says, promising delivery of it all, and then just walking away, leaving the client with a bigger problem than the one they started with.
I often ask AI to give only grammar and spelling corrections, and then only as a change set I apply manually. In other words, the same functionality as every word processor since… y2k?
Why not just use one of those word processors, then? It seems like you'd expend less effort (unless there's an advantage to your approach that I'm missing), since the proofreading systems built into a word processor have a built-in queue UI with integrated accept/reject functionality that won't randomly tweak other parts of the paragraph behind your back.
Far better at catching some types of mistakes. Word only has so many hardcoded rules beyond basic grammar. LLMs operate on semantics and pick up on errors like "the sentence is grammatically correct, but uses an obviously wrong term, given the context".
That's not the kind of thing I'd trust to a language model: I'd expect it to persuade me to change something correct to something incorrect more often than it catches a genuine error. But ymmv, I suppose.
I have definitely seen Grammarly make suggestions that are actually wrong, but I think it's generally pretty ok, and it does seem to make fewer mistakes than I normally do.
Sometimes I use incorrect grammar on purpose for rhetorical purposes, but usually I want the obvious mistakes to be cleaned up. I don't listen to it for any of its stylistic changes.
I've had good results with doing similar. My spelling and grammar have always been a challenge and, even when I put the effort into checking something, I go blind to things like repeating words or phrases when I try to restructure sentences.
I sometimes also ask for justification of why I should change something which I hope, longer term, rubs off and helps me improve on my own.
I consider myself an above-average writer and a great editor. I'll just throw my random thoughts about something that happened at work at ChatGPT, ask it to keep digging deeper into my question, and give it my opinion of what I should do. I'll ask it to play "devil's advocate" and give the "steel man opinion", and then ask it to write a blog post [1].
I then edit it for tone, get rid of some of the obvious AI tells. Make some edits for voice, etc.
Then I throw it into another session of ChatGPT and ask whether it sounds "AI written". It will usually call out some things and give me "advice". I take the edits that sound like me.
Then I put the text through Grok and Gemini and ask them the same thing. I make more edits and keep going around until I am happy with it. By the time I'm done, it sounds like something I would write.
You can make AI-generated prose have a "voice" with careful prompting; I give it samples of my own writing.
Why don’t I just write it myself if I’m going through all that? It helps me get over writers block and helps me clarify my thoughts. My editing skills are better than my writing skills.
As I do it more and give it more writing samples, it becomes a faster process to go from bland AI to my "voice".
[1] my blog is really not for marketing. I don’t link to it anywhere and I don’t even have my name attached to it. It’s more like a public journal.
> By the time I'm done, it sounds like something I would write.
As a writer myself, this sounds incredibly depressing to me. The way I get to something sounding like something I would write is to write it, which in turn is what makes me a writer.
What you’re doing sounds very productive for producing a text but it’s not something you’ve actually written.
Maybe he just wants to summarize things. I write in Spanish. Of course I won't let AI write this very post, even in my bad English. But there are things in my Obsidian, written in Spanish by AI. They sound like nothing; sometimes you need something to sound that way: informative, aseptic. But it is good to hear from you anyway, when some people think, or pretend to think, that AI can write, let's say, fiction.
I am torn. As someone who is learning Spanish and should be at a strong A1 [1] by the end of the year, I would be horrified to think about posting something in a public forum based on my Spanish-speaking ability.
On the other hand, I've had enough conversations with Spanish speakers in Florida, like at my barbershop and at a local bar in a tourist area, who speak limited English, and I would much rather have real conversations between my broken Spanish and their broken English than listen to or read AI slop.
[1] according to this scale, I’m past A1 into A2.1 category now. But I still feel like I’m A1
I write to communicate with myself or other people. Just like I use AI to go from "I need to do $X" based on my ideas and designs to "I did $X". It's not about "art" or "passion". It's about a paycheck.
I don’t think it needs to be about art or passion. I just don’t think someone who relies entirely on AI generated text can accurately call themselves “a writer.”
I don’t call myself a writer. I call myself an employee who needs to exchange labor for money to support my addictions to food and shelter. I was writing and developing long before AI.
When I'm writing something for work where I know the end goal, I don't use it. When I'm streaming random thoughts without any coherent end goal, for my blog or for my internal notes as a retrospective on something that happened at work, I will use it.
Just to repeat myself: my blog isn't for marketing. I don't have any advertising on it, I don't post a link to it anywhere, and I have no idea if anyone besides me has ever read it, since I don't have any analytics. I don't have my name or contact information on it.
I don't buy that it can tell if something sounds AI. Multiple times I have given it direct AI slop writing and it could not tell it was AI written. As a matter of fact, it would insist it wasn't.
This flow sounds like what an intern did in PR reviews, and it made me want to throw something out a window. Please just use your own words. They are good words, and much better words than you may think.
I can’t share links from Gemini or Grok. But they both immediately flagged the first one as AI generated and the second most likely human.
I didn't actually do anything here except tell ChatGPT to rewrite it in the form of an article I found in an old PDF, "97 Things a software engineer should know" from 2010, then ask Grok if it sounded AI generated (it did), ask Grok to rewrite it to remove telltale signs (it still kept the em dashes), and then copy it back to ChatGPT.
Could I tell if the last one is AI? Absolutely. Throwing a few "damns" in there didn't convince me. And all the reworking you've done, while it makes it a little more passable, has made it arguably worse in quality. The point of the final article is so muddy. It has no central point and sprawls on and on about random nonsense.
With some human editing to make it sound less douchey, or with better prompting, do you think you could tell?
In other words, I did no human editing and didn't even play with the prompt.
For instance, I would have definitely reworded this “a solid meeting isn’t just about not screwing up the logistics. It’s a snapshot of how your team actually operate”
The "it isn't just $x. It is $y" construction is something that AI loves to do.
The larger point is that AI is really good at detecting its own slop. Gemini is really good at detecting first-pass AI slop from another LLM, and out of curiosity I put in a few other articles I knew were written before 2022 to see if it gave false positives.
I agree. I use Grammarly for finding outright mistakes (spelling and the like, or a misplaced comma or something), but I don't listen to any of the suggestions for writing.
I feel like when I try writing through Grammarly, it feels mechanical and really homogeneous. It's not "bad" exactly, but it sort of lacks anything interesting about it.
I dunno. I'm hardly some master writer, but I think I'm ok at writing things that are interesting to read, and I feel Grammarly takes that away.
The thing is, ask it something right away and it'll use its own voice. Give it lots of data from your own writing, through examples and extrapolations on your speech patterns, and it will impersonate your voice more. It's like how it can impersonate Trump: it has lots of examples to pull from. You? It doesn't know you. LLMs need large amounts of input to give really good output.
I said almost exactly that to a coworker a few hours ago. My writing is me, it’s who I am. But I know that is not true for everyone, and in particular non-native speakers.
I just detest that AI writing style, especially for business writing. It’s the kind of writing that leaves the reader less informed for the effort.
If I were asked a direct question, especially in a job interview, I would be truthful. That answer stops any sniping about using AI and lets me focus on my skills.
Ah, I misunderstood the parent comment as having that disclaimer on the CV itself.
I agree that if asked directly, it makes sense to talk about it candidly. Hopefully an employer would be happy about someone who understands their weak spots and knows how to correctly use the tools as an aid.
Asking about AI usage in a CV is pointless, in my opinion. You are always responsible for what's written there. If they don't like the writing style, then they don't.
Interviewers directly asking whatever bothers them is fine IMHO. The alternative is keeping a negative impression when there could have been an insightful exchange, and the candidate also gets to know what to expect from the company.
If you have access to Microsoft Word, I'd customize the grammar checker settings to flag more than what is enabled by default. They have a lot of helpful rules that many are oblivious to because it's all buried deep in the preferences. Then adopt the stance of taking the green lines under advisement but ignore them if your original words suit your preference. That will get you polished up without submitting to AI editorial mundanity.
Honestly, the issue is that most people are poor writers. Even "good" professional writing, like the NY Times science section, can be so convoluted. AI writing is predictable now, but generally better than most human writing. Yet it can be irritating at the same time.
hey, I was almost hacked by someone pretending to be a legit person working for a legit-looking company. They hid some stuff in the server-side code.. could you turn this into a 10k-word essay for my blog post with hooks and building suspense and stuff? Thank you!
Probably how it went.
Edit: I see the author in the comments, it’s unfortunately pretty much how it went. The worst part is that the original document he linked would have been a better read than this AI slopified version.
I’d personally like to see these posts banned / flagged out of existence (AI posts, not the parent post).
It's sort of the personal equivalent of tacky content marketing. Usually you'd never see an empty marketing post on the front page, even before AI, when a marketer wrote them. Now that the same sort of spammy language is accessible to everyone, that shouldn't be a reason for such posts to be better tolerated.
The problem is the same as in the academic world: you cannot be sure, and there will be false positives.
Rather, do we want to ban posts with a specific format? I don't know how that would end. So far, marketing hasn't been a problem because people notice those posts and don't interact with them, and then they don't make the front page.
I would agree, but the truth is that I've seen a few technical articles that benefited greatly from both organization and content that was clearly LLM-based. Yes, such articles feel dishonest and yucky to read, but the uncomfortable truth is that they aren't all stereotypical "slop."
No, you're right. Writing is very expressive; you can certainly get that feeling from observing how different people write, and stylometry gives objective evidence of this. If you mostly let AI write for you, you get a very specific style of writing that clearly is something the reinforcement learning is optimizing for. It's not that language models are incapable of writing anything else, but they're just tuned for writing milquetoast, neutral text full of annoying hooks and clichés. For something like fixing grammar errors or improving writing I see no reason to not consider AI aside from whatever ethical concerns one has, but it still needs to feel like your own writing. IMO you don't even really need to have great English or ridiculous linguistic skills to write good blog posts, so it's a bit sad to see people leaning so hard on AI. Writing takes time, I understand; I mean, my blog hardly has anything on it, but... It's worth the damn time.
P.S.: I'm sure many people are falsely accused of using AI writing because they really do write similarly to AI, either coincidentally or not. While I'm sure it's incredibly disheartening, I think in case of writing it's not even necessarily about the use of AI. The style of writing just doesn't feel very tasteful, the fact that it might've been mostly spat out by a computer without disclosure is just the icing on the cake. I hate to be too brutal, but these observations are really not meant to be a personal attack. Sometimes you just gotta be brutally honest. (And I'm speaking rather generally, as I don't actually feel like this article is that bad, though I can't lie and say it doesn't feel like it has some of those clichés.)
Your comment looks like it was AI generated. I can tell from some of the words and from seeing quite a few AI essays in my time.
But seriously, anyone can just drive by and cast aspersions that something's AI. Who knows how thoroughly they read the piece before lobbing an accusation into a thread? Some people just do a simple regexp match for specific punctuation, e.g. /—/ (which gives them 100% confidence this comment was written by AI without having to read it!). Others just look at length, and simply think anything long must be generated, because if they're too lazy to write that much, everyone else is as well.
Well, why don't you practice what you preach? There's no need to make drive-by allegations if there is information available to you. And there is: the author responded in this thread.
Regardless, you're reading a lot of things into my comment that aren't actually there, but even if they are, I certainly didn't mean them that way. My comment wasn't about comments where someone sat down and thought about it and took the time to give reasons for their beliefs, it was about comments like https://news.ycombinator.com/item?id=45596745 that do nothing for the discussion, so that receiving one like that can be dismissed without a second thought.
> Gee, what got your tensors all in a twist? I thought everyone was supposed to just brush off being called AI?
Well, first of all, I said that being accused of using AI assistance must be "incredibly disheartening". If you read my post and really came away with the opinion that I think being accused of using AI assistance is not a big deal, well, dunno what to say, I pretty much said the exact opposite.
But second of all, I wasn't expressing my offense at the joke you made, and despite what I just said, I basically don't personally care about being accused of using AI assistance to write. I already write weird: I use semicolons pretty frequently in long paragraphs, and sometimes I even use em dashes—though unlike what I've seen from ChatGPT output, I don't tend to add spaces around it. I think I write weird enough that nobody would seriously mistake my text for being AI-generated, especially because to be honest, it's not particularly good. I don't have insecurity about the humanity of the text I write; I've written an inordinate amount of comments on this site, many prior to GPT-2 existing, and they're all probably pretty stylistically consistent, so I think I'm somewhat grandfathered in.
What I was expressing was disappointment that you came around to scold people for making baseless accusations when, in my opinion, the accusations were in fact not baseless. You questioned how "thoroughly" they read the piece. Well, I mean, I read the entire piece, it wasn't that long, and I came away agreeing with the comment I ultimately replied to. I'm definitely more offended by the idea of being accused of having made a baseless accusation than the idea that my text was actually written by ChatGPT or Gemini or something.
Yes, I know. It's an old meme by Internet standards, but not one I would forget easily.
> My comment wasn't about comments where someone sat down and thought about it and took the time to give reasons for their beliefs, it was about comments like https://news.ycombinator.com/item?id=45596745 that do nothing for the discussion, so that receiving one like that can be dismissed without a second thought.
Look, when someone replies bluntly to me, I tend to reply bluntly back. I get that you added some memes and an xkcd reference, but I still took your comment to be rather blunt due to what it was insinuating about me and the person I was replying to. I'm not foaming at the mouth or anything, it's totally fine. (You know, "please dont put in the newspaper that i got mad.") With that having been said: you really have to acknowledge the fact that it's not fair to get mad at me for reading your comment in the context of the comment you actually replied to (mine) rather than the comment someone else made in a different part of the thread that you didn't. I know that replying higher up the comment stack is kind of important if you want your comment to actually be read by anyone on HN, but if that results in your comment being in the completely wrong place, you can't get too mad at people for being baffled by it.
If what you wanted to do was reply to a comment you thought was not constructive, then you should've picked one, or perhaps simply flagged it. I realize there's little to no satisfaction in flagging a comment, but if you really think it isn't productive, it's the best way to vote for that.
Thank you for the thoughtful reply! We're both just typing words into the ether for LLMs to pick up these days, but I wanted to acknowledge that your effort and kindness in your response did not go unnoticed by me (a human).
>but I can’t shake the feeling it was written by AI.
After I read this article, I thought this whole incident was fabricated and created as a way to go viral on tech sites. One immediate red flag: why would someone go to these lengths to hack a freelancer who's clearly not rich and doesn't have millions in his crypto wallet? And how did they know he used Windows? Many devs don't.
Ah, you might say, maybe he is just one of 100 victims. Maybe, but we'd have heard from them by now. There's no one else on X claiming to have been contacted by them.
Anyway, I'm highly skeptical of this whole incident. I could be wrong though :)
Yeah, people hate that. It instantly destroys the immersion and believability of any story. The moment I smell AI, every single shred of credibility is completely trashed. Why should I believe a single thing you say? How am I to know how much you altered the story? I understand you must be very busy, but straight up, the original sketch would have been better to post than the generic and sickly AI-ified mishmash.
Thanks for letting us know, but it's offensive to your readers. Please include a section at the beginning of the article to let us know. Otherwise you're hurting your own reputation.
Seriously, just do things yourself next time. You aren't going to improve unless you always ride with training wheels. Plus, it seems you saved no time with AI at all.
Next time maybe just post the base write up and the prompt?
What value does the llm transformation add, other than wasting every reader's time (while saving yours)?
People are often unconfident about their own writing. But if you can feed it to a LLM and have the LLM output something that looks coherent, your writing is good enough to publish.
Indeed, the LLM is not going to add (real) information. I'd say, publish both what you wrote and what the LLM spat out, if you think someone would prefer the latter.
If you publish the input instead, if other people feel it's not polished or whatever, they can run it through an LLM.
What OP did was destroy value instead of create it, you can always run it through another LLM with another prompt if you have the input, but you can't go backwards.
The first paragraph feels like a parody of one of those LinkedIn marketing professionals who receive a valuable insight from a toddler after their pet goldfish was run over by a car.
The writing style is very obvious, but so are the bullet points that restate the same thing in slightly different ways, as well as the weirdly worded "full server privileges" and "full nodejs privileges".
Like… yes, running a process is going to have whatever privileges your user has by default. But I've never once heard someone say "full server privileges" or "full nodejs privileges"… It's just random phrasing that is not necessarily wrong, but not really right either.
My issue with the article's repeated use of a Title + List of Things structure isn't that it's LLM output, it's that it's LLM output directly, with no common sense editing done afterwards to restore some intelligent rhythm to the writing.
Does anyone know if this David Dodda is even real?
He is a freelance full-stack dev who "dabbles", but his own profile on his blog leaves the tech stack entry empty?
Another blog post is about how he accidentally rewired his mind with movies?
Also, I get that I’m now primed because of the context, but nothing about that linkedin profile of that AI image of the woman would have made me apply for that position.
"Lately"? Has everyone actually seen that image of the woman standing in front of the house? I sure have not, and it's unlikely anyone has in a post-AI world. Sounds more like an AI appeal to inside knowledge to build rapport.
My assumption is that people absolutely did, and do, write like that all the time. Just not necessarily in places that you'd normally read. LLM drags up idioms from all over its training set and spews them back everywhere else, without contextual awareness. (That also means it averages across global cultures by default.)
But also, over the last three years people have been using AI to output their own slop, and that slop has made its way back into the training data for later iterations of the technology.
And then there's the recent revelation (https://www.anthropic.com/research/small-samples-poison , which I got from HN) that it might not actually take a whole lot of examples in the data for an LLM to latch onto some pattern hard.
I had the same feeling, but also the feeling that it was written for AI, as in marketing. That’s probably not the case, but it looks suspicious because this person only found this issue using AI and would’ve otherwise missed it, and then made a blog post saying so (which arguably makes one look incompetent, whether that’s justifiable or not, and makes AI look like the hero).
- The class of threat is interesting and worth taking seriously. I don't regret spending a few minutes thinking about it.
- The idea of specifically targeting people looking for Crypto jobs from sketchy companies for your crypto theft malware seems clever.
- The text is written by AI. The whole story is a bit weird, so it's plausible this is a made up story written by someone paid to market Cursor.
- The core claim, that using LLMs protects you from this class of threat, seems flat wrong. For one thing, in the story, the person had to specifically ask the LLM about this specific risk. For another, a well-done attack of this form would (1) be tested against popular LLMs, (2) perhaps work by tricking Cursor and similar tools into installing the malware without the user running anything themselves, or (3) hide the shellcode in an `npm` dependency, so that the attack isn't even in the code available to the LLM until it's been installed, the payload delivered, and presumably the tracks of the attack hidden.
> be tested against popular LLMs, perhaps work by tricking Cursor and similar tools into installing the malware, without the user running anything themselves
My sense is that the attack isn't nearly as sophisticated as it looks, and the attackers out there aren't really thinking about things on this level — yet.
> Hide the shellcode in an `npm` dependency
It would have to be hidden specifically in a post-install script or similar. Which presumably isn't any harder, but.
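For anyone unfamiliar with the mechanism: an npm post-install hook is just a `scripts` entry in `package.json` that npm runs automatically after the package is installed. A minimal sketch (the package name and script file here are made up for illustration):

```json
{
  "name": "innocuous-helper",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node setup.js"
  }
}
```

Because `postinstall` runs during `npm install`, `setup.js` executes before any of the package's code is imported, and before the dependency's source is sitting in the project where an LLM assistant would typically read it. Running `npm install --ignore-scripts` disables these lifecycle hooks.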
The philosophically interesting point is that kids growing up today will read an enormous amount of AI content, and likely formulate their own writing like AI. I wouldn't be surprised if in 20 years a lot of journalism feels like AI, even if it's written by a human
Your comment was so validating. I was getting such weird vibes and felt the article was so dumbly written, given that the content was actually good advice. Consequently, the author tarnished his reputation for me personally from the very beginning.
I think it only really has that feel if you use GPT. I mean, all AIs produce output that sounds kinda like it was written by an AI. But I think GPT is the most notorious on that front. It's like ten times worse.
So really the feeling I get when I run into "obviously AI" writing isn't even, "I wish they had written this manually", but "dang, they couldn't even be bothered to use Claude!"
(I think the actual solution is base text models, which exist before the problem of mode collapse... But that's kind of a separate conversation.)
I mean, they are different, but there is only a subset of like 3 big model providers. And we see hundreds of thousands+ of words of generated content from each, probably. It is easy to become very familiar with each output.
Claude vs GPT both sound like AI to me. While GPT is cheery Claude is more informative. But both of them have "artifacts" due to them trying to transform language from a limited initial prompt.
The important part for me is that the experience is legitimate, and secondarily that it's well written. The problem for me with LLM-written texts is that they're rarely very well written, and sometimes inauthentic.
If we had really good AI writing, I wouldn't mind poor authors using it to improve how they communicate. But today's crop of AIs are not very good writers.
I have been told I am "AI" because I was simply a bit too serious, enthusiastic and nerdy about some topic. It happens. I put more effort into such writings. Check my comment history and you will find that many comments from me are low-effort: including this one. :)
The sentence structure is too consistent across the whole piece, like they all have the same number of syllables, almost none start with a subject, and they are all very short. It is robotic in its consistency. Even if it’s not AI, it’s bad writing.
Yes, it's an informative and important article. I think the complaints here are absurd. Hopefully the people not reading it for silly reasons won't become the victims of similar social engineering.
Agree. People need to chill out. I thought there might be some discussion in the comments about the scam. Unfortunately, no. Complete waste of time to read and much more annoying than the original article.
Between that, the seemingly excessive skepticism that it happened, and "lol blockchain was your first mistake", I half-wondered if it wasn't a deliberate effort to dilute the alarm bells that we should all learn from here.
Then I had a different thought: perhaps it's a mental defense mechanism against the unease of realizing how plausible it would be for many of us to fall prey to this scam.
This isn't a groundbreaking article. We already knew about this kind of scam, so why trudge through an article written by an LLM doing its best LinkedIn grifter impression?
> This article is so incredibly interesting, but I can’t shake the feeling it was written by AI. The writing style has all the telltale signs.
The sadder realization is that after enough AI slop around, real people will start talking like AI. This will just become the new standard communication style.
ChatGPT is just an aggregate of how the terminally online talk when they have to act professional.
ChatGPT is hardcoded to not be rude (or German <-- this is a joke).
So when you say "people will start talking like AI": they are already doing that in professional settings. They are the training data.
As someone who writes with swear words and personality, I think this era is amazing for me. Before, I was seen as rude and unprofessional. Now, I feel like I have a leg up over all this AI slop.
> They are already doing that in professional settings. They are the training data.
It's a self-reinforcing cycle. AI sucks up and barfs back the same bland style, and eventually books, articles, and news will all look even more bland and sound more AI-like. That junk will then be sucked up by the next AI model and regurgitated into an even more bland, uniform format. If that's all the new generation hears and sees, that's how they'll perceive one should "talk" or "write".
> Authenticity is valued now. Swearing is in vogue.
Ha! That's a good point, I like that. Not that swearing is my style (unless I stub my toe), but I agree with the general authenticity point. Maybe until the interns at Google and OpenAI figure out how to make their LLMs sound more "hip" and "authentic".
Even now, I think many people are not literate enough to see that it’s bad, and in fact think it improves their writing (beyond just adding volume).
Maybe that’s a good thing? It’s given a whole group of people who otherwise couldn’t write a voice (that of a contract African data labeller). Personally I still think it’s slop, but maybe in fact it is a kind of communication revolution? Same way writing used to only be the province of the elite?
Except, the interface to ChatGPT is writing! People who can't write can't use ChatGPT: if you can use ChatGPT, then you can write. (You might lack confidence, but you can write.)
People who cannot write who try to use ChatGPT are not given a voice. They're given the illusion of having written something, but the reader isn't given an understanding of the ChatGPT-wielder's intent.
I read this comment first then attempted to read this article but whether it's this inception or it's genuinely AI-ish, I'm now struggling to read this article.
The funny thing is, for years I've had this SEO-farm bullshit content-farm filter, and the AI impact for me has been an increasing mistrust of anything written, by humans or not. I don't even care if this was AI written; if it's good, great! However, the... 'genuine-ness' of it, or lack thereof, is an issue. It doesn't connect with me anymore, and I don't feel/connect to any of it.
The era of the AI bubble economy has arrived, and now almost everyone is interacting with and using AI. Like you, I get the feeling this is an article organized with GPT. Perhaps the story really happened.