Whenever I need a quick data pipeline to convert a file into another format, run a batch transformation, or translate an interface description into another syntax, the kind of job that would normally require me to craft a grep, awk, tr, etc pipeline, I can usually just paste a sample of the data along with a plain-language description and get what I need. If it's not working well, I break the task into smaller steps.
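To give a concrete flavour of the throwaway transformations I mean, here's a minimal Python sketch (field names made up) of the kind of CSV-to-JSON conversion an LLM will happily one-shot:

```python
import csv
import io
import json

# Toy batch transformation: a CSV sample in, pretty-printed JSON out.
# The field names below are invented; the point is the shape of the task.
def csv_to_json(text: str) -> str:
    rows = list(csv.DictReader(io.StringIO(text)))
    return json.dumps(rows, indent=2)

sample = "name,port\nweb,8080\ndb,5432\n"
print(csv_to_json(sample))
```

Nothing here is hard, which is exactly why it's a good candidate to offload: describing it in English is faster than remembering the awk incantation.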
In my experience, it seems the people who have bad results have been trying to get the AI to do the reasoning. I feel like if I do the reasoning, I can offload menial tasks to the AI, and little annoying things that would take one or two hours start to take a few minutes.
The ones who know what they want to do, how it should be done, but can't really be arsed to read the man pages or API docs of all the tools required.
These people can craft a prompt (prompt engineering :P) for the LLM that gets good results pretty much directly.
LLMs are garbage in garbage out. Sometimes the statistical average is enough, sometimes you need to give it more details to use the available tools correctly.
Like the fact that `fd` has the `--exec` and `--exec-batch` parameters, so there's no need to pipe it into xargs.
Every kind of project is faster with AI, because it writes the code faster.
Then you have to QA it for ages to discover the bugs it wrote, but the initial perception of speed never leaves you.
I think I'm overall slower with AI, but I could be faster if I had it write simple functions that I could review one by one, and have the AI compose them the way I wanted. Unfortunately, I'm too lazy to be faster.
With web apps playwright-mcp[0] is essential IMO. It lets the AI Agent check its own work before claiming it's done.
With that it can see any errors in the console, click through the UI, and take screenshots to analyse how it looks, giving it an independent feedback loop.
Pretty much what somebody else said: AI takes over simple tasks, the "fluff" around the business logic, error handling, stuff like that, so I can focus on doing the harder stuff at the core.
> 90% of what the average (or median) coder does isn't in any way novel or innovative. It's just API Glue in one form or another.
I hear this from people extolling the virtue of AI a lot, but I have a very hard time believing it. I certainly wouldn't describe 90% of my coding work as boilerplate or API glue. If you're dealing with that volume of boilerplate/glue, isn't it incumbent upon you to try and find a way to remove that? Certainly sometimes it isn't feasible, but that seems like the exception encountered by people working on giant codebases with a very large number of contributors.
I don't think the work I do is innovative or even novel, but it is nuanced in a way I've seen Claude struggle with.
To be more exact, 90% of the _code_ I write is mostly just different types of API glue. Get data from this system, process it and put it in another system.
It's the connectors that are 90-95% AI chow, just set it to task with a few examples and it'll have a full CRUD interface for your data done while you get more snacks.
Then you can spend _more_ of your limited time on the 10% of code that matters.
That said, less than 50% of my actual time spent on the clock is spent writing code. That's the easiest part of the job. The rest is coordinating and planning and designing.
Because I need to have a controller that does CRUD operations.
There's a certain amount of code I need to write just to handle the basic boilerplate of receiving the data and returning a result from the endpoint before I can get to the meat of it.
IIRC there are no languages where I can just open an empty file and write "put: <business logic>" and it magically knows how to handle everything correctly.
That's why we have snippets and code generators for those. And even in the absence of that, I usually copy code from some other place and gut it out to make place for the new logic. This can be done in 30 seconds or less if the code is organized.
> IIRC there are no languages where I can just open an empty file and write "put: <business logic>" and it magically knows how to handle everything correctly.
Are you sure it's done correctly? Take something like timestamps or validations: it's easy to get those wrong.
Does the "<business logic>" part involve a bunch of irreducible boilerplate too or something? What's it like?
If it doesn't, then I feel like even in the JavaScript world of 2015 you could write `app.put("/mypath", business_logic)` and that would do the trick, and that was a very immature language ecosystem.
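To make the "register a handler, framework does the rest" shape concrete, here's a toy Python sketch (every name invented; this is not any real framework's API) showing how thin the glue can be relative to the business logic:

```python
# Toy router: the registration plumbing is the boilerplate being discussed;
# the decorated function is the only business logic. All names are invented.
routes: dict = {}

def put(path):
    """Register a handler for PUT <path> (a stand-in for app.put)."""
    def register(handler):
        routes[("PUT", path)] = handler
        return handler
    return register

@put("/mypath")
def business_logic(payload):
    # The only part that actually matters.
    return {"stored": payload}

def dispatch(method, path, payload):
    return routes[(method, path)](payload)

print(dispatch("PUT", "/mypath", {"id": 1}))
```

In a real framework the plumbing also covers parsing, validation, and error responses, which is where the irreducible per-endpoint boilerplate tends to live.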
I'm slowed down (but perhaps sped up overall due to lower rewrites/maintenance costs) on important bits because the space of possibilities/capabilities is expanded, and I'm choosing to make use of that for some load bearing pieces that need to be durable and high quality (along the metrics that I care about). It takes extra time to search that space properly rather than accept the first thing that compiles and passes tests. So arguably equal or even lower velocity, but definitely improved results compared to what I used to be capable of, and I'm making that trade-off consciously for certain bits. However that's the current state of affairs, who knows what it'll look like in 1-2 years.
Where I work there are roughly twice as many front-end developers as there's work for. They spend an insane amount of time in meetings, and they require approval from 2 different people for every simple CSS change.
Their job is to attend meetings and occasionally add a couple of items to the HTML, which has been mostly unchanged for the past 10 years, save for changing the CSS and updating the JS framework they use.
I’m building a moderately complex system with FastAPI + PG + Prefect executing stuff on Cloud Run, and so long as I invest in getting the architecture and specs right, it’s really a dream how much of the heavy lifting and grunt work I can leave to Claude Code. And thank god I don’t have to manage Alembic by myself.
I just had it do a "set up the company-styled auth, following a few wikis and a lot of trial and error until you get to the right thing."
In the olden days, I'd imagine getting that right would take about a week and a half, and it was something everyone hated about spinning up a new service.
With the LLM, I gave it a feedback loop: being able to do an initial sign-in, integration-test running steps with log reading on the client side, and a deploy-and-log-reading mechanism for the server side.
I was going to write an overseer-y script for another LLM to trigger the trial-and-error script, but I ended up just doing that myself. What I skipped was needing to run any one of the steps; instead I got nicely parsed errors, so I could go look for wikis on what parts of the auth process I was missing and feed those wiki links and such to the trial-and-error bot. I skipped all the log reading/parsing needed to get to the next actionable chunk, and instead I got to hang around in the sun for a bit while the LLM churned on test calls and edits.
I'm now on a cleanup step to turn the working code into nicely written code that I'd actually want committed, but getting to the working-code stage took very little of my own effort; only the problem solving and learning about how the auth works.
Greenfield development of small web apps. I’m familiar enough with everything that I can get something up and running on my own, but I don’t do it regularly so I need to read a lot of docs to be up to date. I can describe the basic design and requirements of an app and have something like Claude Code spit out a prototype in a couple of hours
I set up a model in DBT that has 100 columns. I need to generate a schema for it (old tools could do this) with appropriate tests and likely data types (old tools struggled with this). AI is really good at this sort of thing.
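As a sketch of the kind of drafting work being delegated here (this is not a real dbt tool, just an illustration with invented helper names), inferring likely types from sample rows and emitting a schema-shaped dict looks roughly like:

```python
# Hypothetical sketch: guess likely data types for a model's columns from
# sample rows and draft a dbt-style schema structure with default tests.
def infer_type(values):
    if all(isinstance(v, bool) for v in values):
        return "boolean"
    if all(isinstance(v, int) for v in values):
        return "integer"
    if all(isinstance(v, (int, float)) for v in values):
        return "numeric"
    return "varchar"

def draft_schema(model_name, rows):
    return {
        "name": model_name,
        "columns": [
            {"name": col,
             "data_type": infer_type([r[col] for r in rows]),
             "tests": ["not_null"]}
            for col in rows[0]
        ],
    }

sample = [{"id": 1, "amount": 9.5, "active": True},
          {"id": 2, "amount": 3.0, "active": False}]
print(draft_schema("orders", sample))
```

With 100 columns, the mechanical part scales linearly for a script but painfully for a human, which is why it's a good fit for this kind of generation.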
There's a local website that sells actual physical Blu-rays. Their webshite is a horror show of Javascript.
I had Claude Code build me a Playwright+Python-based scraper that goes through their movie section and stores the data locally in a SQLite database, plus a web UI for me to watchlist specific movies and add price ranges to be alerted when a price changes.
Took me maybe a total of 30 minutes of "active" time (4-5 hours real-time, I was doing other shit at the same time) to get it to a point where I can actually use it.
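The storage-and-alerting half of a tool like that is plain stdlib work; here's a rough sketch (table and column names invented, and the Playwright half, which would feed rows into `upsert_movie`, is omitted):

```python
import sqlite3

# Sketch of the local-storage side of such a scraper. A Playwright crawl
# (omitted here) would call upsert_movie() with each scraped title/price.
def init_db(conn):
    conn.execute("""CREATE TABLE IF NOT EXISTS movies (
        title TEXT PRIMARY KEY, price REAL,
        watch_low REAL, watch_high REAL)""")

def upsert_movie(conn, title, price):
    conn.execute(
        "INSERT INTO movies (title, price) VALUES (?, ?) "
        "ON CONFLICT(title) DO UPDATE SET price = excluded.price",
        (title, price))

def watchlist(conn, title, low, high):
    conn.execute(
        "UPDATE movies SET watch_low = ?, watch_high = ? WHERE title = ?",
        (low, high, title))

def alerts(conn):
    # Movies whose current price has fallen inside the watched range.
    return [row[0] for row in conn.execute(
        "SELECT title FROM movies "
        "WHERE price BETWEEN watch_low AND watch_high")]

conn = sqlite3.connect(":memory:")
init_db(conn)
upsert_movie(conn, "Stalker", 29.90)
watchlist(conn, "Stalker", 10.0, 20.0)
upsert_movie(conn, "Stalker", 14.95)  # price drop seen on a later run
print(alerts(conn))
```

None of this is clever, which is the point: it's exactly the kind of glue an agent can grind out while you do something else.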
Basically, small utilities for limited release (personal, team, company-internal) are what AI coding excels at.
Like grabbing results from a survey tool, adding them to a google sheet, summarising the data to another tab with formulas. Maybe calling an LLM for sentiment analysis on the free text fields.
Half a day max from zero to Good Enough. I didn't even have to open the API docs.
Is it perfect? Of course not. But the previous state was one person spending half a day for _each_ survey doing that manually. Now the automation runs in a minute or so, depending on whether Google Sheets API is having a day or not =)
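The summarisation step in a pipeline like that reduces to a simple tally; here's a sketch of just that piece (question names made up, and the Google Sheets read/write calls are omitted):

```python
from collections import Counter

# Sketch of the survey-summary step. Fetching responses and writing the
# summary tab via the Google Sheets API is omitted; only the aggregation
# logic is shown. Question names are invented.
def summarise(responses):
    """Tally answers per question across a list of survey responses."""
    tallies = {}
    for resp in responses:
        for question, answer in resp.items():
            tallies.setdefault(question, Counter())[answer] += 1
    return {q: dict(c) for q, c in tallies.items()}

responses = [
    {"satisfaction": "high", "recommend": "yes"},
    {"satisfaction": "low", "recommend": "yes"},
    {"satisfaction": "high", "recommend": "no"},
]
print(summarise(responses))
```

The LLM-based sentiment pass on free-text fields would bolt on after this, one API call per field.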