
Tests can't tell you if the design of the code is fit for purpose, or about requirements you completely missed or punted on; or that a core new piece that's going to be built upon next is barely-coherent, poorly-performing slop that "works" but is going to need to be actually designed while being rewritten by the next person instead; or that you skipped trying to understand how the feature should work or thinking about the performance characteristics of the solution before you started and just let the LLM drive, so you never designed anything, arriving at something which "works" on your machine and passes the tests which were generated for it, but will hammer production under production loads. Neither will running it on your own machine or in Dev.

No amount of telling the LLM to "Dig up! Make no mistakes!" will help with non-designed slop code actively poisoning the context, but you have to admire the attempt when you see comments added while removing code, referring to the code that's being removed.

It's weird to see tickets now effectively go from "ready for PR" to 0% progress, but at least you're helping that person meet whatever the secret AI* usage quota is for their performance review this year.


> Tests can't tell you if the design of the code is fit for purpose, or about requirements you completely missed or punted on

This is what acceptance tests are for. Does it do the thing you wanted it to do? Design a test that makes it do the thing, and check the result matches what you expect. If it's not in the test, don't expect it to work anywhere else. Obviously this isn't easy, but that's why we either need a different design or different tests. Before, that would have been a tremendous amount of work, but now it's not.

(Making this work requires learning how to make it work right. This is a skill with brand-new techniques which 99.999% of people will need over a year to learn)
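
A minimal sketch of what that can look like, say with Playwright driving a made-up checkout flow (the route, selectors and amounts are all hypothetical):

    import {test, expect} from '@playwright/test'

    // Acceptance test: drive the feature the way a user would and check the
    // outcome you wanted, rather than asserting on implementation details
    test('applying a discount code reduces the order total', async ({page}) => {
      await page.goto('/checkout')
      await page.getByLabel('Discount code').fill('SAVE10')
      await page.getByRole('button', {name: 'Apply'}).click()
      await expect(page.getByTestId('order-total')).toHaveText('$90.00')
    })

If the feature can't be exercised and checked like this, that's the signal the design or the tests need to change.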

> or that a core new piece that's going to be built upon next is barely-coherent, poorly-performing slop that "works" but is going to need to be actually designed while being rewritten by the next person instead

This is the "human" part I mentioned being irrelevant now. AI does not care if the code is slop or maintainable. AI can just rewrite the entire thing in an hour. And if the tests pass, it doesn't matter either. Take the human out of the loop.

(Concerned about it "rewriting tests" to pass them? You need independent agents, quality gates, determinism, feedback loops, etc. New skills and methods designed to keep the AI on the rails, like a psychotic idiot savant that can build a spaceship if you can keep it from setting fire to it)

> or that you skipped trying to understand how the feature should work or thinking about the performance characteristics of the solution before you started and just let the LLM drive, so you never designed anything

This is not how AI driven coding works. You have to give the AI very specific design instructions. If you do it right, it will make what you want. Sadly, this means most programmers today will be irrelevant because they can't design their way out of a wet paper bag.

(You know how agile eschews planning and documentation, telling developers and product people to just build "whatever works right now" and keep rewriting it indefinitely as they meet blockers they never planned for? AI now encourages the planning and documentation.)


But here's the thing. The LLM house writing style isn't just annoying, it's become unreadable through repeated exposure. This really gets to the heart of why human minds are starting to slide off it.

Not trying to be rude but your very short reply is hard to understand. "Unreadable", "starting to slide off", I honestly don't know what you're saying here.

Pretty sure they are mocking LLM outputs by making their own comment look as if it came from an LLM. It's sarcasm.

I think it means stuff like this, but I can't tell when people are parodying stuff any more when it comes to LLMs:

> The squint test: Blur your eyes. Can you still perceive hierarchy? Is anything jumping out harshly? Craft whispers.

https://github.com/Dammyjay93/interface-design/blob/main/.cl...


I wonder if the old "my grandma used to make really detailed UI" prompts are going to come back as a way to move away from the default UI pattern "guardrails".

It means poetry’s back on the menu boys

"We'll build a big centralised store and take slices out of it" still feels like something you should eventually realise your app now needs rather than a starting point, even in libraries which do it without as much ceremony and indirection as Redux.

> Problems are only truly solved by the folks who dedicate themselves to understanding the problem, that is: the folks working on web standards and the other folks implementing them.

It kills me to think of how amazing Web Components could be if those folks had started standardising them _now_, instead of in "competition" with the userland component libraries of the time, punting on many of the essential challenges of developing UI in the DOM those libraries were still evolving solutions for, and introducing more problems entirely of their own making.


This is still the way - jQuery or not - for UI where you can't/don't want to use a component library. I use the same approach for my browser extensions, both for page scripts and options pages. Writing features so you update state then re-render also means you get things like automatically applying option changes live in page scripts, rather than having to reload the page, for free. Just receive the updated options and re-run everything.

Browser extension options pages are mostly a form mapped to what you have stored in the Storage API. Implementing them by handling the change event on a <form> wrapping all the options (no manual event listener boilerplate), then calling a render() function which applies classes to relevant elements (<body> classes are so good for conditionally showing/hiding things without manually touching the DOM), updates all form fields via named form.elements and re-generates any unique UI elements, makes it so un-painful to change things without worrying you're missing a manual DOM update somewhere.
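
A rough sketch of that shape, assuming the promise-based chrome.storage API and made-up option names:

    const optionsForm = document.querySelector('form')
    const defaultOptions = {theme: 'auto', showBadge: true}

    async function getOptions() {
      // chrome.storage.sync.get() fills in any missing keys from the defaults
      return chrome.storage.sync.get(defaultOptions)
    }

    function render(options) {
      // <body> classes drive conditional showing/hiding purely via CSS
      document.body.classList.toggle('show-badge', options.showBadge)
      document.body.classList.toggle('dark', options.theme == 'dark')
      // sync every field from state via named form.elements
      optionsForm.elements.theme.value = options.theme
      optionsForm.elements.showBadge.checked = options.showBadge
    }

    // one delegated change listener on the <form>, no per-field boilerplate
    optionsForm.addEventListener('change', async () => {
      await chrome.storage.sync.set({
        theme: optionsForm.elements.theme.value,
        showBadge: optionsForm.elements.showBadge.checked,
      })
      render(await getOptions())
    })

    getOptions().then(render)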

My options pages are Zen Garden-ing 5 different browser-specific UI themes from the same markup to match their host browsers, which is a brittle nightmare to maintain in an app which needs to change over time rather than static demo HTML. But once you've tamed the CSS, the state handling and re-rendering is so painless I'm sticking with it for a while yet, even though it would be easier long-term if I used Preact+htm for no-build option components which know what the active theme is and can generate specific UI for it.

My favourite dirty old-school knowledge is still the named global created for an element with an id - why bother selecting an element when it's right there (once you know you need to avoid global name collisions)? I use those suckers all the time for quick fun stuff and one-off tool pages.

    <h3 id="communityNoteHeading">
      Readers added context they thought people might want to know
    </h3>
    <div>
      <textarea id="communityNote" placeholder="Note" rows="5" style="width: 400px"></textarea>
    </div>
    <button id="communityNoteCopyButton" type="button">Copy</button>
    <script>
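      // elements with an id are exposed as same-named globals on window,
      // so no getElementById()/querySelector() needed for these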
      communityNoteCopyButton.addEventListener('click', () => {
        navigator.clipboard.writeText([
          communityNoteHeading.innerText,
          communityNote.value,
        ].join('\n\n'))
        communityNoteCopyButton.innerText = 'Copied'
        setTimeout(() => communityNoteCopyButton.innerText = 'Copy', 1000)
      })
    </script>

They can also put "Google" in the forever-necessary disclaimer:

Google AI can make mistakes


I add release notes to a draft Markdown file in the same commit as every change, under the appropriate Breaking / Changes etc. heading, so when I'm ready to release, that draft becomes the next release notes.
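
Something like this, as an illustrative sketch (headings and entries are made up):

    # Unreleased

    ## Breaking

    - Dropped support for the legacy options format

    ## Changes

    - Option changes now apply live without reloading the page

    ## Fixes

    - Copy button no longer gets stuck on "Copied"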

I've never seen an auto-generated set of release notes I liked; a list of PRs doesn't cut it.


I hear what you are saying - there is a risk that auto-generated release notes end up as PR-title soup. I put a lot of effort into my script to mitigate that.

If you are willing and interested enough to take a quick look, here is what my script generated for our 2025 changelog (no hand-curation yet, this is the raw output):

https://raw.githubusercontent.com/confident-ai/deepeval/refs...

I am curious: does this still seem too noisy in your opinion, or is it getting closer? And what would you want to see for breaking changes/migrations to make it actually useful?

I now have 2024 & 2025 generated; to fully hand-curate two years of history just wasn’t practical, so I’m trying to get the "80% draft" automatically and then curate over time.


That matches my experience with LLM-aided PRs - if you see a useEffect() with an obvious LLM line-comment above it, it's 95% going to be either unnecessary or buggy (e.g. too-broad dependencies which cause lots of unwanted recomputes).
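
An illustrative sketch of the kind of thing I mean (the component and props are made up):

    import {useEffect, useState} from 'react'

    function SearchResults({query, filters}) {
      const [results, setResults] = useState([])
      // Fetch results whenever the query or filters change
      useEffect(() => {
        let cancelled = false
        fetch('/search?q=' + encodeURIComponent(query))
          .then((res) => res.json())
          .then((data) => {
            if (!cancelled) setResults(data)
          })
        return () => {
          cancelled = true
        }
        // `filters` isn't used inside the effect, but if the parent creates it
        // fresh on every render, listing it here re-runs the fetch every render
      }, [query, filters])
      return <ul>{results.map((r) => <li key={r.id}>{r.title}</li>)}</ul>
    }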


Claude Code, make a Claude Code competitor. Make no mistakes.

