Hacker News | songodongo's comments

Productivity aside, what I notice most at work is that our offshore resources are submitting basically 100% AI-generated work. Beyond the code itself, ever since we rolled out Copilot, their English has improved immensely. I have to wonder what the point of keeping them on is if they're just doing sub-par prompting for all their work.

I have run into this. The solution is to put something like "Always use `git add -A` or `git commit -a`" in your AGENTS/CLAUDE.md
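For illustration, a minimal sketch of what such an instruction file might contain (the file name and exact wording are assumptions; adjust for your agent's conventions):

```markdown
# AGENTS.md

## Version control
- Always stage every change before committing: use `git add -A` or `git commit -a`.
- Never leave modified or untracked files behind at the end of a task.
```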

Small, targeted commits are more professional than sweeping `git add -A` commits, but even when specifying my requirements through whichever context management system of the week, I still have issues with it sometimes. It seems to be much worse on the new 4.6 model.

And you can easily prompt your way out of the typical LLM style. “Written in the style of Cormac McCarthy’s The Road”

No, that doesn't really work so well. A lot of the LLM style hallmarks are still present when you ask them to write in another style, so a good quantitative linguist can find them: https://hdsr.mitpress.mit.edu/pub/pyo0xs3k/release/2

That was with GPT-4, but my own work with other LLMs shows they have very distinctive styles even when you specifically prompt them with a chunk of human text to imitate. I think instruction-tuning on tasks like summarization predisposes them to certain grammatical structures, so their output is always more information-dense and formal than human writing.
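To make "more information-dense and formal" concrete, here's a toy sketch of the kind of stylometric features a quantitative linguist might start with (crude features of my own choosing, not the paper's actual method):

```python
import re

def style_features(text):
    """Crude stylometric fingerprint: mean sentence length and
    lexical density (content words / total words). LLM output tends
    to score higher on both than casual human writing."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    # Tiny illustrative function-word list; a real analysis uses hundreds.
    function_words = {"the", "a", "an", "of", "to", "in", "and", "or",
                      "is", "was", "it", "that", "for", "on", "with"}
    content = [w for w in words if w not in function_words]
    return {
        "mean_sentence_len": len(words) / max(len(sentences), 1),
        "lexical_density": len(content) / max(len(words), 1),
    }
```

Comparing these distributions across a known-human corpus and a suspect text is the basic move; the linked HDSR paper uses far richer features.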


This still doesn't remove all the slop. You need sampler or fine-tuning tricks for it. https://arxiv.org/abs/2510.15061
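As a toy illustration of the sampler-side flavor of this (the word list and penalty are made up, and the linked paper's method is more sophisticated than a flat logit bias):

```python
def penalize_slop(logits, vocab, slop_words=("delve", "tapestry", "testament"),
                  penalty=8.0):
    """Subtract a fixed penalty from the logit of any vocabulary token
    matching a known 'slop' word, before sampling the next token."""
    slop = {w.lower() for w in slop_words}
    return [
        logit - penalty if vocab[i].strip().lower() in slop else logit
        for i, logit in enumerate(logits)
    ]
```

Fine-tuning approaches instead try to remove the preference at the weights level rather than masking it at decode time.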

More bullshit from OpenAI.

I don’t understand how their various Copilot tools are so bad. Are they using a proprietary model instead of ChatGPT or Claude?

Does Copilot include system prompts at the extension level or the API level?

Copilot's prompts live in the extension; they become the system-prompt field sent in the API request to the LLM.
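Roughly, the shape of an OpenAI-style chat request the extension assembles (model name and message text are placeholders):

```python
def build_chat_request(system_prompt, user_message, model="gpt-4o"):
    """Assemble an OpenAI-style chat completion request body.
    The extension's prompt text ends up as the 'system' message;
    the user's input is a separate 'user' message."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }
```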

The underlying models have various guardrails and alignment constraints that you cannot trivially work around.

For Copilot, you can look at the code on GitHub. Interestingly, it is JSX-based, and they pass context-budget info around.

For Claude Code, and many others, you can find the extracted prompts online

At this point, they are all dynamically generated from fragments and contextual data (like what files or language you're working with)
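A minimal sketch of what "dynamically generated from fragments" under a context budget can look like (function names, the whitespace tokenizer, and the greedy strategy are all my assumptions, not any vendor's actual code):

```python
def assemble_prompt(fragments, budget, count_tokens=lambda s: len(s.split())):
    """Greedily concatenate prompt fragments (assumed sorted by priority),
    skipping any fragment that would push total tokens past the budget."""
    parts, used = [], 0
    for frag in fragments:
        cost = count_tokens(frag)
        if used + cost <= budget:
            parts.append(frag)
            used += cost
    return "\n".join(parts)
```

Real implementations use actual tokenizers and smarter truncation, but the budget-threading idea is the same.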


Riveting

I don’t use Windows, but what on earth is causing the PATH to get bloated every 30 minutes?

Nothing was doing it that often, but nothing WILL ever taint my PATH again.
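If you just want to clean up after whatever is appending duplicates, a sketch of a PATH dedupe (keeping first occurrence, case-insensitive on Windows via `normcase`):

```python
import os

def dedupe_path(path, sep=os.pathsep):
    """Remove duplicate entries from a PATH-style string,
    keeping the first occurrence of each directory."""
    seen, kept = set(), []
    for entry in path.split(sep):
        key = os.path.normcase(entry)
        if entry and key not in seen:
            seen.add(key)
            kept.append(entry)
    return sep.join(kept)
```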

> Humans are the bottleneck. Claude's output quality is proportional to what you give it.

> Shows the most basic AGENTS.md possible


Not that I necessarily disagree with any of it, but one word comes to mind as I read through it: “copium”
