I think two groups of people are emerging: deep / fast / craft-and-decomposition-loving vs. black box / outcome-only.
I've seen people who couldn't work at average speed on small features suddenly reach above-average output through an LLM CLI, and I could sense the pride in them. Which is at odds with my experience of work: I love to dig down, know a lot, model and find abstractions on my own. There an LLM will 1) not understand how my brain works, 2) produce something workable but that requires me to stretch mentally, and most of the time I leave numb. In the last month I've seen many people expressing similar views.
ps: thanks everybody for the answers, interesting to read your pov
I get what you're saying, but it doesn't match my own experience. For me, prior to the agentic coding era, the problem was always that I had way more ideas for features, tools, or projects than I had the capacity to build, once I confronted the work of building everything by hand, along with the inevitable difficulties of procrastination and getting started.
I am a well-above-average engineer when it comes to completing work quickly and well, whether that's typing speed or comprehension speed, and still these tools have felt like giving me a jetpack for my mind. I can get things done in weeks that would have taken me months before, and that opens up space to consider new areas that I wouldn't have even bothered exploring before, because I would not have had the time to execute on them well.
The sibling comments (from remich and sanufar) match my experience.
1. I do love getting into the details of code, but I don't mind having an LLM handle boilerplate.
2. There isn't a binary between having an LLM generate all the code and writing it all myself.
3. I still do most of the design work because LLMs often make questionable design decisions.
4. Sometimes I simply want a program to solve a problem (outcome-focused) over a project to work on (craft-focused). Sometimes I need a small program in order to focus on the larger project, and being able to delegate that work has made it more enjoyable.
> I do love getting into the details of code, but I don't mind having an LLM handle boilerplate.
My usual thought is that boilerplate tells me, by existing, where the system is most flawed.
I do like the idea of having a tool that quickly patches the problem while also forcing me to think about its presence.
> There isn't a binary between having an LLM generate all the code and writing it all myself. I still do most of the design work because LLMs often make questionable design decisions.
One workflow that makes sense to me is to have the LLM commit on a branch; fix simple issues instead of trying to make it work (with all the worry of context poisoning); refactor on the same branch; merge; and then repeat for the next feature — starting more or less from scratch except for the agent config (CLAUDE.md etc.). Does that sound about right? Maybe you do something less formal?
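Concretely, the loop I'm imagining looks something like this in plain git commands (the branch name, commit messages, and file layout are all hypothetical placeholders, not anyone's actual setup):

```shell
# Hypothetical sketch of the branch-per-feature loop described above.
git switch -c feature-x        # the agent works on this branch
# ... agent generates code; commit its output as-is ...
git add -A && git commit -m "agent: first pass at feature-x"
# Fix small issues by hand instead of re-prompting (avoids context poisoning),
# then refactor on the same branch.
git commit -am "fix and refactor by hand"
# Merge, and start the next feature more or less from scratch,
# keeping only the agent config (CLAUDE.md etc.).
git switch main
git merge --no-ff feature-x -m "merge feature-x"
git branch -d feature-x
```

The `--no-ff` merge is just one option; it keeps each agent-plus-cleanup cycle visible as its own bubble in history, which makes it easier to revert a whole feature if the refactor turns out not to be enough.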
> Sometimes I simply want a program to solve a problem (outcome-focused) over a project to work on (craft-focused). Sometimes I need a small program in order to focus on the larger project, and being able to delegate that work has made it more enjoyable.
I think for me, the difference really comes down to how much ownership I want to take in regards to the project. If it’s something like a custom kernel that I’m building, the real fun is in reading through docs, learning about systems, and trying to craft the perfect abstractions; but if it’s wiring up a simple pipeline that sends me a text whenever my bus arrives, I’m happy to let an LLM crank that out for me.
I’ve realized that a lot of my coding falls on this personal-satisfaction-vs-utility matrix, and LLMs let me focus a lot more energy on high-satisfaction projects.
> deep / fast / craft-and-decomposition-loving vs black box / outcome-only
As a (self-reported) craft-and-decomposition lover, I wouldn't call the process "fast".
Certainly it's much faster than if I were trying to take the same approach without the same skills, and certainly I could slow it down with over-engineering. (And "deep" absolutely fits.) But the people I've known whom I'd characterize as strongly "outcome-only" were certainly capable of sustaining a pretty high delta-LoC per day.
That's kind of the point here. Once a dev reached a certain level, they often weren't doing much "relaxing code typing" anyways before the AI movement. I don't find it to be much different than being a tech lead, architect, or similar role.
As a former tech lead and now staff engineer, I definitely agree with this. I read a blog post a couple of months ago that theorized that the people that would adopt these technologies the best were people in the exact roles that you describe. I think because we were already used to having to rely on other people to execute on our plans and ideas because they were simply too big to accomplish by ourselves. Now that we have agents to do these things, it's not really all that different - although it is a different management style working around their limitations.
Exactly. I've been a tech lead, have led large, cross-org projects, been an engineering manager, and held similar roles. For years, when mentoring upcoming developers, what I always found to be the most challenging transition was the inflection point between "I deliver most of my value by coding" and "I deliver most of my value by empowering other people to deliver". I think that's what we're seeing here. People who have made this transition are already used to working this way. Both versions have their own quirks and challenges, but at a high level the abstraction holds.
LLMs are just a programming language/compiler/REPL, though, so there is nothing out of the ordinary for developers. What is different is the painfully slow ratio of compile time to coding time. You write code for a few minutes... and then wait. Then spend a few more minutes writing code... and then wait. That is where the exhaustion comes from.
At least in the olden days[1] you could write code for days before compiling, which reduced the pain. Long compilation times have always been awful, but they are less frustrating when you can defer them until the next blue moon. LLMs don't (yet) seem to be able to handle that. If you feed them more than small amounts of code at a time, they quickly go off the rails.
With that said, while you could write large amounts of code and defer compilation until the next blue moon, it is a skill to be able to do that. Even in C++, juniors seem to like to write a few lines of code and then compile the results to make sure they are on the right track. I expect that is the group of people who feel most at home with LLMs. Spending a few minutes writing code and then waiting on compilation isn't abnormal for them.
But presumably the tooling will improve with time.
Well-designed ones do, at least. LLMs, in their infancy, still bring a lot of undefined behaviour, which is why you end up stuck in the code-for-a-few-minutes -> compile -> wait -> repeat cycle. But that is not a desirable property and won't remain acceptable as the technology matures.
It is quite possible the tools will never improve beyond where they sit today, sure, but then usage will naturally drift away from that fatiguing use (not all use, obviously). The constant compile/wait cycle is exhausting exactly because it is not productive.
Businesses are currently willing to accept that lack of productivity as an investment into figuring out how to tame the tools. There is a lot of hope that all the problems can be solved if we keep trying to solve them. And, in fairness, we have gotten a lot closer than we were just a year or so ago towards that end, so the optimism currently remains strong. However, that cannot go on forever. At some point the investment has to prove itself, else the plug will be pulled.
And yes, it may ultimately be a dead end. Absolutely. It wouldn't be the first failure in software development.
Ya know, I have to admit feeling something like this. Normally, the amount of stuff I put together in a work day offers a sense of completion or even a bit of a dopamine bump because of a "job well done". With this recent work I've been doing, it's instead felt like I've been spending a multiplier more energy communicating intent instead of doing the work myself; that communication seems to be making me more tired than the work itself. Similar?
I forget where I saw this (a Medium post, somewhere) but someone summed this up as "I didn't sign up for this just to be a tech priest for the machine god".
Someone commented yesterday that managers and other higher-ups are "already ok with non-deterministic outputs", because that's what engineers give them.
As a manager/tech-lead, I've kind of been a tech priest for some time.
Which is why it's so funny to hear seasoned engineers lament the probabilistic nature of AI systems, and how you have to be hand-setting code to really think about the problem domain.
They seem to all be ICs that forget that there are abstraction layers above them where all of that happens (and more).
You’re possibly not entering into the flow state anymore.
Flow is effortless, and it is rejuvenating.
I believe:
While communication can be satisfying, it’s not as rejuvenating as resting in our own Being and simply allowing the action to unfold without mental contraction.
Flow states.
When the right level of challenge and capability align, you become intimate with the problem. The boundaries of me and the problem dissolve, and creativity springs forth. Emerging satisfied. Nourished.
This is why I think LLMs will make us all a LOT smarter. Writing raw code meant we stopped thinking hard in between, but now it's just the most intense thought processes, 100% of the time, all day long.
It seems pretty obvious that the opposite is true. I know I’ve experienced some serious skill atrophy that I’m now having to actively resist. There’s a lot lost by no longer having to interact with the raw materials of your craft.
Thinking is a skill that is reinforced by reading, designing and writing code. When you outsource your thinking to an LLM your ability to think doesn’t magically improve…it degrades.
Yes, it's taxing and mentally draining; reading code and connecting the dots is always harder than writing it.
And if I let the AI too loose, as when trying to vibe-code an entirely new program, I end up in a situation where in one day I have a good prototype, and then I can easily spend five times as long sorting out the many issues and refactoring so it can scale to the next features.
So far what I've been doing is, I look for the parts that seem like they'd be rewarding to code and I do them myself with no input from the machine whatsoever. It's hard to really understand a codebase without spending time with the code, and when you're using a model, I think there's a risk of things changing more quickly than you can internalize them. Also, I worry I'll get too comfortable bossing chatbots around & I'll become reluctant to get my hands dirty and produce code directly. People talk about ruining their attention spans by spending all their time on TikTok until they can no longer read novels; I think it'd be a real mistake to let that happen to my professional skill set.
Nah, I don’t miss at all typing all the tests, CLIs, and APIs I’ve created hundreds of times before. I dunno if it’s because I do ML stuff, but it’s almost all “think a lot about something, do some math, and then type thousands of lines of the same stuff around the interesting work.”
> When I should be engaging with the real world and people in my life.
While I appreciate the sentiment, I don't think you need to couple "offline interaction" with this criticism. As a neurodivergent person in more than one way I appreciate being able to interact with people that face similar challenges to me and understand me. The problem is that social media is increasingly designed to not facilitate that, but content distribution.
Yeah. Let's not fetishize the "real world". Offline space is often boring, and most people suck. There's a reason why we prefer to be looking at screens. Having said that, I think it makes sense to be more cautious about screen time and interact more with the offline space. Not because offline space is better, but because our brains are fried and they perceive online space to be better than it really is; we're literal addicts. I'm trying to teach myself that it's okay to be bored.
I guess when enough people get addicted to something, it's no longer considered to be an addiction.
When I am people watching, sometimes I am reminded of the Star Trek: TNG episode where everyone gets addicted to a game. Non-players grow concerned until they all succumb.
I don't get the hangup on having multiple accounts. The point of AP is that you can read any kind of post, not create any kind of post.
There is a truth to consumption over communication, but that is in the destruction of third spaces and the increasing difficulty in making friends. The fediverse is a force against that in my view, even if it has some of the creature comforts of regular social media.
It seems insane that nobody at the other end runs something as simple as MAT or imagick (twice) over it to take the text layers out before uploading though. I hope this is at least partially intentional.
Yeah. This is a reaction to providers like blacksmith, or self-hosted solutions like the k8s operator, being better at operating their very bad runner than they are, at cheaper prices, with better performance, more storage, and warm caches. The price cut is good; the anticompetitive bit where they charge you to use computers they don't provide isn't. My guess is that either we're all gonna move to act, or one of the SaaS startups sues.
I always get the impression that whenever military/police have the option to turn off ADS-B, they do. Not just in the US or by US forces. Not just on sensitive flights. I don't think the toggle ever gets used.
Not really. I live next to an airport with both a civilian and military presence (and an alternate for a NATO airbase). The number of military/police flights that I can only see on MLAT is pretty worrying. I don't think BPol has ever turned their "stealth" switch off.
That "AI is just like your coworker / friend / companion" view is intentionally created by people who need the bubble part of AI to go as far as possible.
I only know about this TED Talk because it was in a mandatory training course for my work, but this is the kind of thing being said: https://www.youtube.com/watch?v=KKNCiRWd_j0 (he's trying really hard with the Steve Jobs look)
Indeed. I only recognize the name from some pretty bad armchair psychology takes and hobby sociology on trans people a few years back. Nothing compared to what happens on X these days of course.