I think that LLMs are only going to make people with real tech/programming skills much more in demand, as younger programmers skip straight into prompt engineering and never develop themselves technically beyond the bare minimum needed to glue things together.
The gap between people with deep, hands-on experience who understand how a computer works and prompt engineers is going to become insanely wide.
Somebody needs to write that operating system the LLM runs on. Or your bank's backend system that securely stores your money. Or the mission critical systems powering this airplane you're flying next week... to pretend like this will all be handled by LLMs is so insanely out of touch with reality.
I think we who are already in tech have this gleeful fantasy that new tools impair newcomers in a way that will somehow serve us, the incumbents.
But in reality pretty much anyone who enters software starts off cutting corners just to build things instead of working their way up from nand gates. And then they backfill their knowledge over time.
My first serious foray into software wasn't even Ruby. It was Ruby on Rails. I built some popular services without knowing how anything worked. There was always a gem (lib) for it. And Rails especially insulated the workings of anything.
An S3 avatar upload system was `gem install carrierwave` and then `mount_uploader :avatar, AvatarUploader`. It added an avatar <input type="file"> control to the User form.
But it's not satisfying to stay at that level of ignorance very long, especially once you've built a few things. You keep learning new things, and you keep wanting to build different things.
Why wouldn't this be the case for people using LLMs, like it was for everyone else?
It's like presuming that StackOverflow will keep you as a question-asker your whole life when nobody here would relate to that. You get better, you learn more, and you become the question-answerer. And one day you sheepishly look at your question history in amazement at how far you've come.
> Why wouldn't this be the case for people using LLMs, like it was for everyone else?
I feel like it's a bit different this time because LLMs aren't just an abstraction.
To make an analogy: Ruby on Rails serves a role similar to highways: it's a quick path to get where you're going, but once you learn the major highways in a metro area you can very easily break out and explore and learn the surface streets.
LLMs are a GPS, not a highway. They tell you what to do and where to go, and if you follow them blindly you will not learn the layout of the city, you'll just learn how to use the GPS. I find myself unable to navigate a city by myself until I consciously force myself off of Google Maps, and I don't find that having used GPS directions gives me a leg up in understanding the city—I'm starting from scratch no matter how many GPS-assisted trips I've taken.
I think the analogy helps both in that the weaknesses in LLM coding are similar and also that it's not the end of the world. I don't need to know how to navigate most cities by memory, so most of the time Google Maps is exactly what I need. But I need to recognize that leaning on it too much for cities that I really do benefit from knowing by heart is a problem, and intentionally force myself to do it the old-fashioned way in those cases.
I think a critical weakness is also that LLMs are trained on the code people write ... and our code doesn't annotate what was written by a human and what was suggested by a tool. In your analogy, this would be like your sat nav suggesting that you turn right where other people have turned right ... because they were directed to turn there by their sat nav.
In fact, I'm pretty sure this already happens and the results are exactly what you'd expect. Some of the "alternate routes" Google Maps has suggested for me in the past are almost certainly due to other people making unscheduled detours for gas or whatever, and the algorithm thinks "oh this random loop on a side street is popular, let's suggest it". And then anyone silly enough to follow the suggestion just adds more signal to the noise.
Google Maps has some strange feedback loops. I frequently drive across the Bay Bridge to Delaware beaches. There are 2 or 3 roughly equal routes with everyone going to the same destination. Google will find a "shorter" route every 5 minutes. Naturally, Maps is smart enough to detect traffic, but not smart enough to equally distribute users to prevent it. It creates a traffic jam on route A, then tells all the users to use route B which causes a jam there, and so on.
It hadn't even occurred to me that there are places where enough people are using Google Maps while driving to cause significant impact on traffic patterns. Being car-free (and smartphone-free) really gives a different perspective.
Not OP: I have a smartphone for my own personal use but don't use it for work at all. If my employer wants me to use specific phone apps they can provide one for me like they do a laptop.
yeah so you got a smartphone, the dude was saying he doesn't have a smartphone.
Having no smartphone makes it a real pain in the ass to do most things nowadays; a job is one of the biggest, but far from the only one. Even to log in to my bank account on my computer I need a phone.
Cell phones exist which are not smartphones, and everyone who uses phones for 2FA is happy to send 2FA codes to a "dumb" phone. They only have your phone number, after all.
Long before any use of LLMs, OsmAnd would direct you, if you were driving past Palo Alto, to take a congested offramp to the onramp that faced it across the street. There is no earthly reason to do that; just staying on the freeway is faster and safer.
So it's not obvious to me that patently crazy directions must come from watching people's behavior. Something else is going on.
In Australia the routes seem to be overly influenced by truck drivers, at least out of the cities. Maps will recommend you take some odd town bypass when just going down Main Street is easier.
I imagine what you saw is some other frequent road users making choices that get ranked higher.
> if you were driving past Palo Alto, to take a congested offramp to the onramp that faced it across the street
If you're talking about that left turn into Alma with the long wait instead of going into the Stanford roundabout and then the overpass, it still does that.
I've seen this type of thing with OsmAnd too. My hypothesis is that someone messed up when drawing the map, and made the offramp an extension of the highway. But I haven't actually verified this.
I'm not talking about use of traffic data. In the abstract, assuming you are the only person in the world who owns a car, that route would be a very bad recommendation. Safety concerns would be lower, but still, there's no reason you'd ever do that.
Safety concerns would probably actually be higher, since the most dangerous places on roads are areas where traffic crosses and conflicts (the road you cross to get from the offramp to the onramp).
An example I notice frequently on interstate highways that go through large cities is Google Maps suggesting you get off at an exit, take the access road past an exit or two, then get back on at the next ramp. It does this especially often during rush hour traffic. Denver is where I've noticed it the most, but it's not limited to that area by any means.
That already happens. Maps directs you to odd, nonsensical detours frequently enough now that you get better results by overriding the machine. It's going the way of web search.
The problem now is that the LLM GPS will lead you to the wrong place once a day on average, and then you still need to either open the map and study where you are and figure out the route, or refine the destination address and pray it will bring you to the correct place. Such a great analogy!
Strangely this reminds me of exactly how you would navigate in parts of India before the Internet became ubiquitous.
The steps were roughly: Ask a passerby how to get where you want to go. They will usually confidently describe the steps, even if they didn't speak your language. Cheerfully thank them and proceed to follow the directions. After a block or two, ask a new passerby. Follow their directions for a while and repeat. Never follow the instructions fully. This triangulation served to naturally filter out faulty guidance and hucksters.
Never thought that would one day remind me of programming.
Indeed. My experience in India is that people are friendly and helpful and try to help you in a very convincing way, even when they don't know the answer. Not so far off the LLM user experience.
What is meant by “your GPS” here? With Google Maps and Apple Maps it consistently picks the closest one (I'm within minutes of both but much closer to one), which seems reasonable.
Maybe not ideal, as either of these apps will sometimes bring up a disambiguation prompt for a supermarket chain or similar, but I'm not witnessing randomness.
To be clear, above I was talking about LLMs. Randomness in real GPS usage is something I have never encountered in the 15 or so years I've been using Google Maps. 99 percent of the time it brings (or brought) me exactly where I want to be, even around road works or traffic jams. It seems some people have totally different experiences, which is odd.
Perhaps they have improved their heuristic for this one, though perhaps it was actually Uber/Lyft that randomly picks one when given as a destination...
I'm the kind of guy who decently likes maps, and I pay attention to where I'm going and also to the map before, during, and after using a GPS (Google maps). I do benefit from Google maps in learning my way around a place. It depends on how you use it. So if people use LLMs to code without trying to learn from them and just copy and paste, then yeah, they're not going to learn the skills themselves. But if they are paying attention to the answers they are getting from the LLMs, adjusting things themselves, etc., then they should be able to learn from that as well as they can from online code snippets, modulo the (however occasional) bad examples from the LLM.
> I do benefit from Google maps in learning my way around a place.
Tangent: I once got into a discussion with a friend who was surprised I had the map (on a car dashboard display) locked to North-is-up instead of relative to the car's direction of travel.
I agreed that it's less-convenient for relative turn decisions, but rationalized that setting as making it easier to learn the route's correspondence to the map, and where it passed relative to other landmarks beyond visual sight. (The issue of knowing whether the upcoming turn was left-or-right was addressed by the audio guidance portion.)
It's neat to hear that I'm not the only one who does this. It makes a night-and-day difference for me.
When the map is locked north, I'm always aware of my location within the larger area, even when driving somewhere completely new.
Without it, I could never develop any associations between what I'm seeing outside the windshield and a geospatial location unless I was already familiar with the area.
One small change to using a GPS radically impacts how much you know about the area -- do you use the default, always-forward view, or do you use the slightly-less-usable always-north setting? If you use the latter, you will find that you learn far more about the city layout while still benefitting from the GPS functionality itself.
I think LLMs are similar. Sure, you can vibe code and blindly accept what the LLM gives you. Or, you can engage with it as if pair programming.
The code you can inspect is analogous to directions on a map. Some have noted in this thread that for them directions on a map actually do help them learn the territory. I have found that they absolutely do not help me.
That's not for lack of curiosity, it seems to be something about the way that I'm wired that making decisions about where to navigate helps me to learn in a way that following someone else's decisions does not.
You have to study the map to learn from it. Zoom in and out on surroundings, look up unfamiliar landmarks, et cetera. If you just follow the GPS or copy paste the code no, you won’t learn.
The problem is that the coders taking this approach are predominantly ones who lack the relevant skill - ones who are taking that approach because they lack that skill.
Difference here being that you actually learned the information about Ruby on Rails, whereas the modern programmer doesn't learn anything. They are but a clipboard-like vessel that passes information from an LLM onto a text editor, rarely ever actually reading and understanding the code. And if something doesn't work, they don't debug the code, they debug the LLM for not getting it right. The actual knowledge here never gets stored in the brain, making any future learning or evolving impossible.
I've had to work with developers that are over dependent on LLM's, one didn't even know how to undo code, they had to ask an LLM to undo. Almost as if the person is a zombie or something. It's scary to witness. And as soon as you ask them to explain their rationale for the solution they came up with - dead silence. They can't, because they never actually _thought_.
> I've had to work with developers that are over dependent on LLM's, one didn't even know how to undo code, they had to ask an LLM to undo.
Some also get into a loop where they ask the LLM to rewrite what they have, and the result ends up changing in a subtle undetected way or loses comments.
Difference here being that you actually learned the information about computers, whereas the modern programmer doesn't learn anything. They are but a typist-like vessel that passes information from an architect onto a text editor, rarely ever actually reading and understanding the compiled instructions. And if something doesn't work, they don't debug the machine code, they complain about the compiler for not getting it right. The actual knowledge here never gets stored in the brain, making any future learning or evolving impossible.
I've had to work with developers that are over dependent on high-level languages. One didn't even know how to trace execution in machine code; they had to ask a debugger. Almost as if the person is a zombie or something. It's scary to witness. And as soon as you ask them to explain their memory segmentation strategy - dead silence. They can't, because they never actually _thought_.
No, it really isn't at all comparable like that (and other discussion in the thread makes it clear why). Users of high-level languages clearly still do write code in those languages, that comes out of their own thought rather than e.g. the GoF patterns book. They don't just complain about compilers; they actually do debug the high-level code, based on the compiler's error messages (or, more commonly, runtime results). When people get their code from LLMs, however, you can see very often that they have no idea how to proceed when the code is wrong.
Debugging is a skill anyone can learn, which applies broadly. But some people just don't. People who want correct code to be written for them are fundamentally asking something different than people who want writing correct code to be easier.
Abstractions on top of abstractions on top of turtles...
It'll be interesting to see what kinds of new tools come out of this AI boom. I think we're still figuring out what the new abstraction tier is going to be, but I don't think the tools to really work at that tier have been written yet.
I think you're right; I can see it in the accelerating growth curve of my good Junior devs; I see grandOP's vision in my bad Junior devs. Optimistically, I think this gives more jr devs more runway to advance deeper into more sophisticated tech stacks. I think we're gonna need more SW devs, not fewer, as these tools get better: things that were previously impossible will be possible.
> I think we're gonna need more SW devs, not fewer
Code is a liability. What we really care about is the outcome, not the code. These AI tools are great at generating code, but are they good at maintaining the generated code? Not from what I've seen.
So there's a good chance we'll see people using tools to generate a ton of instant legacy code (because nobody in house has ever understood it) which, if it hits production, will require skilled people to figure out how to support it.
We will see all of it: lots of poor code, lots of neutral code (LLMs cranking out reasonably well written boilerplate), and even some improved code (by devs who use LLMs to ferret out inefficiencies and bugs in their existing, human-written codebase).
This is no different from what we see with any tool or language: the results are highly dependent on the experience and skills of the operator.
You've missed my core point if you think these aren't different. Before AI there was always someone who understood the code/system.
In a world where people are having machines build the entire system, there is potentially no human who has ever understood it. Now, we are talking about a yet unseen future; I have yet to see a real-world system that did not have a human driving the design. But maintaining a system that nobody has ever understood could be ultra-hardmode.
Humans will always have a hand in the design because they need to explain the real-world constraints to the AI. Sure, the code it produces may be complex, but if the AI is really as smart as you're claiming it will eventually be, then it will also have the ability to explain how the code works in plain English (or your human language of choice). Even today, LLMs are remarkably good at summarizing what code does.
Philosophical question: how is LLM-produced code that nobody has ever understood any different from human-written legacy code that nobody alive today understands?
> Philosophical question: how is LLM-produced code that nobody has ever understood any different from human-written legacy code that nobody alive today understands?
- There is zero option of paying an obscene amount of money to find the person and make the problem 'go away'
- There is a non-zero possibility that the code is not understandable by any developer you can afford. By this I mean that the system exhibits the desired behavior, but is written in such a way that only someone like Mike Pall* can understand it.
Please don't do this, pick more boring tech stacks https://news.ycombinator.com/item?id=43012862 instead. "Sophisticated" tech stacks are a huge waste, so please save the sophisticated stuff for the 0.1% of the time where you actually need it.
The dictionary definition of 'sophisticated' is "changed in a deceptive or misleading way; not genuine or pure; unrefined, adulterated, impure." Pretty much the polar opposite of "boring" in a technology context.
An edge case in startups is something that provides a competitive advantage. When you run a startup, you have to do something different from the way everyone else does, or you’ll get the same results everyone else does. My theory is that some part of a startup’s operations should be cutting edge. HR processes, programming stack, sales cycle, something.
That's great advice when you're building a simple CRUD app - use the paved roads for the 10^9th instance.
It's terrible advice when you're building something that will cause that boring tech to fall over. Or when you've reached the limits of that boring tech and are still growing. Or when the sophisticated tech lowers CPU usage by 1% and saves your company millions of dollars. Or when that sophisticated tech saves your engineers hours and your company tens of millions. Or just: when the boring tech doesn't actually do the things you need it to do.
"Boring" tech stacks tend to be highly scalable in their own right - certainly more so than the average of trendy newfangled tech. So what's a lot more likely is that the trendy newfangled tech will fail to meet your needs and you'll be moving to some even newer and trendier tech, at surprisingly high cost. The point of picking the "boring" choice is that it keeps you off that treadmill.
I'm not disagreeing with anything you said here - reread my comment.
Sometimes you want to use the sophisticated shiny new tech because you actually need it. Here's a recent example from a real situation:
The Linux kernel (a boring tech these days) has a great networking stack. It's choking on packets that need to be forwarded, and you've already tuned all the queues and the CPU affinities and timers and polling. Do you -
a) buy more servers and network gear to spread your packets across more machines? (boring and expensive and introduces new ongoing costs of maintenance, datacenter costs, etc).
b) Write a kernel module to process your packets more efficiently? (a boring, well-known solution; introduces engineering costs to build and maintain, as well as downtime because the new shiny module is buggy)
c) Port your whole stack to a different OS (risky, but choosing a different boring stack should suffice... if you're certain it can handle the load without kernel code changes/modules).
d) Write a whole userspace networking system (trendy and popular - your engineers are excited about this, expensive in eng time, risks lots of bugs that are already solved by the kernel just fine, have to re-invent a lot of stuff that exists elsewhere)
e) Use ebpf to fast path your packets around the kernel processing that you don't need? (trendy and popular - your engineers are excited about this, inexpensive relative to the other choices, introduces some new bugs and stability issues til the kinks are worked out)
We sinned and went with (e). That newfangled tech met our needs quite well - we still had to buy more gear, but far less than projected before we went with (e). We're actually starting to reach the limits of eBPF for some of our packet operations too, so we've started looking at (d), which has come down in cost and risk as we understand our product and needs better.
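For anyone who hasn't seen what option (e) looks like in practice, the rough shape is below. To be clear, this is a toy sketch and not our production code; the protocol handling and the port number are purely illustrative, and it assumes the usual libbpf headers.

```c
// Toy XDP sketch: short-circuit one class of traffic before the kernel's
// normal path. Everything here is illustrative, not a real policy.
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int xdp_fastpath(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;                   /* malformed: let the kernel decide */
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;                   /* only IPv4 in this sketch */

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;
    if (ip->protocol != IPPROTO_UDP)
        return XDP_PASS;

    struct udphdr *udp = (void *)(ip + 1); /* assumes no IP options */
    if ((void *)(udp + 1) > data_end)
        return XDP_PASS;

    /* Hypothetical policy: drop UDP to port 9999 before the stack spends
       any cycles on it; everything else takes the normal kernel path. */
    if (udp->dest == bpf_htons(9999))
        return XDP_DROP;

    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```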
I'm glad we didn't go the boring path - our budget wasn't eaten up with trying to make all that work and we could afford to build features our customers buy instead.
We also use postgres to store a bunch of user data. I'm glad we went the boring path there - it just works and we don't have to think about it, and that lack of attention has afforded us the chance to work on features customers buy instead.
The point isn't "don't choose boring". It's: blindly choosing boring instead of evaluating your actual needs and options from a knowledgeable place is unwise.
None of these seem all that 'trendy' to me. The real trendy approach would be something like leaping directly to a hybrid userspace-kernelspace solution using something like https://github.com/CloudNativeDataPlane/cndp and/or the https://www.kernel.org/doc/html/latest/networking/af_xdp.htm... addressing that the former is built on. Very interesting stuff, don't get me wrong there - but hardly something that can be said to have 'stood the test of time' like most boring tech has. (And I would include things like eBPF in that by now.)
I have similar examples from other projects of using io_uring and AF_XDP with similar outcomes. In 2020, when the eBPF decision was made, it was pretty new and trendy still too... in a few cases each of these choices required us to wait to deploy until some feature we chose to depend on landed in a mainline kernel. Things move a bit slower that far down the stack, so new doesn't mean "the JS framework of the week", but it's still the trendy unproven thing vs the well-known path.
The point is still: evaluate the options for real - using the new thing because it's new and exciting is equally as foolish as using the boring thing because it's well-proven... if those are your main criteria.
I agree with this stance. Junior developers are going to learn faster than previous generations, and I'm happy for it.
I know that is confronting for a lot of people, but I think it is better to accept it, and spend time thinking about what your experience is worth. (A lot!)
> Junior developers are going to learn faster than previous generations, and I'm happy for it.
How? Students now are handing in LLM-generated homework left and right. They are not nurturing the resolve to learn.
We are training a cohort of young people who will give up without trying hard, and end up learning nothing.
> "But it's not satisfying to stay at that level of ignorance very long"
It's not about satisfaction: it's literally dangerous and can bankrupt your employer, cause immense harm to your customers and people at home, and make you unhirable as an engineer.
Let's take your example of "an S3 avatar upload system", which you consider finished after writing 2 lines of code and a couple of packages installed. What makes sure this can't be abused by an attacker to DDOS your system, leading to massive bills from AWS? What happens after an attacker abuses this system and takes control of your machines? What makes sure those avatars are "safe-for-work" and legal to host in your S3 bucket?
People using LLMs and feeling all confident about it are the equivalent of hobby carpenters after watching a DIY video on YouTube and building a garden shed over the weekend. You're telling me they're now qualified to go build buildings and bridges?
> "It's like presuming that StackOverflow will keep you as a question-asker your whole life when nobody here would relate to that."
I meet people like this during job interviews all the time when I'm hiring for a position. I can't tell you how many people with 10+ years of industry experience I've met recently who can't explain how to read data from a local file, from the machine's file system.
At present, LLMs are basically Stack Overflow with infinite answers on demand... of Stack Overflow quality and relevance. Prompting is the new Googling. It's a critical base skill, but it's not sufficient.
The models I've tried aren't that great at algorithm design. They're abysmal at generating highly specific, correct code (e.g. kernel drivers, consensus protocols, locking constructs.) They're good plumbers. A lot of programming is plumbing, so I'm happy to have the help, but they have trouble doing actual computer science.
And most relevantly, they currently don't scale to large codebases. They're not autonomous enough to pull a work item off the queue, make changes across a 100kloc codebase, debug and iterate, and submit a PR. But they can help a lot with each individual part of that workflow when focused, so we end up in the perverse situation where junior devs act as the machine's secretary, while the model does most of the actual programming.
So we end up de-skilling the junior devs, but the models still can't replace the principal devs and researchers, so where are the principal devs going to come from?
>The models I've tried aren't that great at algorithm design. They're abysmal at generating highly specific, correct code (e.g. kernel drivers, consensus protocols, locking constructs.) They're good plumbers. A lot of programming is plumbing, so I'm happy to have the help, but they have trouble doing actual computer science.
I tend towards tool development, so this suggests a fringe benefit of LLMs to me: if my users are asking LLMs to help with a specific part of my API, I know that's the part that sucks and needs to be redesigned.
>Why wouldn't this be the case for people using LLMs, like it was for everyone else?
Because of the mode of interaction.
When you dive into a framework that provides a ton of scaffolding, and "backfill your knowledge over time" (guilty! using Nikola as a SSG has been my entry point to relearn modern CSS, for example), you're forced to proceed by creating your own loop of experimentation and research.
When you interact with an LLM, and use forums to figure out problems the LLM didn't successfully explain to you (about its own output), you're in chat mode the whole time. Even if people are willing to teach you to fish, they won't voluntarily start the lesson, because you haven't shown any interest in it. And the fish are all over the place - for now - so why would you want to learn?
>It's like presuming that StackOverflow will keep you as a question-asker your whole life when nobody here would relate to that.
Of course nobody on HN would relate to that first-hand. But as someone with extensive experience curating Stack Overflow, I can assure you I have seen it second-hand many times.
> But in reality pretty much anyone who enters software starts off cutting corners just to build things instead of working their way up from nand gates.
The article is right in a zoomed-in view (fundamental skills will be rare and essential), but in the big picture the critique in the comment is better (folks rarely start on nand gates). Programmers of the future will have less need to know code syntax the same way current programmers don't have to fuss with hardware-specific machine code.
The people who still do hardware-specific code - are they currently in demand? The marketplace is smaller, so results will vary and will probably, as the article suggests, be less satisfactory for the participant with the time-critical need or demand.
First of all, I think a shake-up of the industry was long overdue.
It started with Twitter, which proved it can be done.
AI just made it easier psychologically, because it's much easier to explore and modify existing code and not freak out "omg, omg, omg, we've lost this guy and only he understands the code and we're so lost without him".
AI just removes the incentives to hoard talent.
I also think of Excel/spreadsheets and how it did in fact change the accounting industry forever.
Every claim the author makes about software developers could have been made about accounting after the advent of electronic spreadsheets.
I don't even want to get started on the huge waste and politics in the industry. I'm on the 3rd rewrite of a simple task that removes metrics in Grafana, which saves the team maybe $50 monthly.
If the team was cut in half, I'm sure we'd simply not do half the bullshit "improvements" we do.
Great points. I look at my own journey from offshore application support contractor to full-time engineer and everything I learned along the way. On that journey I've also seen folks who held good/senior engineering roles just stagnate or move into management roles.
Industry is now large enough to have all sort of people. Growing, stagnating, moving out, moving in, laid off, retiring early, or just plain retiring etc.
This is a great point. I remember my first computer programming class was Java. The teacher said “just type public static void main(String[] args) at the top”. I asked why and he said it didn’t matter for now, just memorize that part. That was great advice. At that point it was more important to get a feel for how computers behave and how programs are structured on a high level. I just kept typing that cryptic line mindlessly on top of all my programs so that I could get to the other stuff. Only many months later did I look into the mysterious line and understand what all the keywords meant.
It’s funny now that I haven’t programmed Java for more than a decade and the “public static void main” incantation is still burned into my memory.
I agree, and I also share your experience (guess I was a bit earlier with PHP).
I think what's left out though is that this is the experience of those who are really interested and for whom "it's not satisfying" to stay there.
As tech has turned into a money-maker, people aren't doing it for the satisfaction, they are doing it for the money. That appears to cause more corner cutting and less learning what's underneath instead of just doing the quickest fix that SO/LLM/whatever gives you.
I'm not so sure. I think a junior dev on my team might be being held back by AI; he's good at using it, but he was really struggling to do something very basic. In my view he just needs to learn that syntax and play around with it in a throwaway console app. But I think AI is a crutch that may distract him from doing that. Then again, it is utterly fantastic at explaining small bits of code, so it could be an excellent teacher too.
Who the hell, in today's market, is going to hire an engineer with a tenuous grasp on foundational technological systems, with the hope that one day they will backfill?!
Yeah, my recollection of the past couple decades is many companies felt like: "Someone else will surely train up the junior developers, we'll just hire them away after they know what they're doing." This often went with an oddly-bewildered: "Wow, why is it so hard to find good candidates?"
I don't see how that trend would change much just because junior developers can use LLMs as a crutch. (Well, except when it helps them cheat at an interview that wasn't predictive of what the job really needed.)
> And then they backfill their knowledge over time.
If only. There are too many devs who've learnt to write JS or Python and simply won't change. I've seen one case where someone ported an existing 20k-line C++ app to a browser app in the most unsuitable way with emscripten, when 1,100 lines of TypeScript did a much better job.
> But it's not satisfying to stay at that level of ignorance very long
That's the difference. This is how you feel because you like programming to some extent. Having worked closely with them, I can tell you there are many people going into bootcamps who flat out dislike programming and just heard it pays well. Some of them get jobs, but they don't want to learn anything. They just want to do the minimum that doesn't get them fired. They are not curious even about tasks they are supposed to do.
I don't think this is inherently wrong, as I don't feel like gatekeeping the profession if their bosses feel they add value. But this is a classic case of losing the junior -> expert pipeline. We could easily find ourselves in a spot in 30 years where AI coding is rampant but there are no experts left who actually know what it does.
There have been people entering the profession for (purported) money and not love of the craft for at least as long as the 20 years I've been in it. So long as there are also people who still genuinely enjoy it and take pride in doing the job well, then the junior->expert pipeline isn't lost.
I buy that LLMs may shift the proportion of those two camps. But doubt it will really eliminate those who genuinely love building things with code.
I'm not sure it takes more than a shift, though. There aren't 0 people in training to be a doctor, but we have a shortage for sure and it causes huge problems.
> I think we who are already in tech have this gleeful fantasy that new tools impair newcomers in a way that will somehow serve us, the incumbents.
Well put. There’s a similar phenomenon in industrial maintenance. The “grey tsunami.” Skilled electricians, pipefitters, and technicians of all stripes are aging out of the workforce. They’re not being replaced, and instead of fixing the pipeline, many factories are going out of business, and many others are opting to replace equipment wholesale rather than attempt repairs. Everybody loses, even the equipment vendors, who in the long run have fewer customers left to sell to.
I very much relate to the sentiment of starting out with simple tools and then backfilling knowledge gaps as I went. For me it was Excel -> Access shared DB forms -> Django web application framework -> etc. From spreadsheets, to database design, to web application development, to scaling HTTP services, and on and on it goes.
Or assuming software needs to be of a certain quality.
Software engineers 15 years ago would have thought it crazy to ship a full browser with every desktop app. That’s now routine. Wasteful? Sure. But it works. The need for low level knowledge dramatically decreased.
Isn’t this kind of thing the story of tech though?
Languages like Python and Java come around, and old-school C engineers grouse that the kids these days don’t really understand how things work, because they’re not managing memory.
Modern web-dev comes around and now the old Java hands are annoyed that these new kids are just slamming NPM packages together and polyfills everywhere and no one understands Real Software Design.
I actually sort of agree with the old C hands to some extent. I think people don’t understand how a lot of things actually work. And it also doesn’t really seem to matter 95% of the time.
I don't think the value of senior developers is so much in knowing how more things work, but rather that they've learnt (over many projects of increasing complexity) how to design and build larger more complex systems, and this knowledge mostly isn't documented for LLMs to learn from. An LLM can do the LLM thing and copy designs it has seen, but this is cargo-cult behavior - copy the surface form of something without understanding why it was built that way, and when a different design would have been better for a myriad of reasons.
This is really an issue for all jobs, not just software development, where there is a large planning and reasoning component. Most of the artifacts available to train an LLM on are the end result of reasoning, not the reasoning process themselves (the day by day, hour by hour, diary of the thought process of someone exercising their journeyman skills). As far as software is concerned, even the end result of reasoning is going to have very limited availability when it comes to large projects since there are relatively few large projects that are open source (things like Linux, gcc, etc). Most large software projects are commercial and proprietary.
This is really one of the major weaknesses of LLM-as-AGI, or LLM-as-human-worker-replacement - their lack of ability to learn on the job and pick up a skill for themselves as opposed to needing to have been pre-trained on it (with the corresponding need for training data). In-context learning is ephemeral and anyways no substitute for weight updates where new knowledge and capabilities have been integrated with existing knowledge into a consistent whole.
Just because there are these abstractions layers that happened in the past does not mean that it will continue to happen that way. For example, many no-code tools promised just that, but they never caught on.
I believe that there's an "optimal" level of abstraction, which, for the web, seems to be something like the modern web stack of HTML, JavaScript, and some server-side language like Python, Ruby, Java, or JavaScript.
Now, there might be tools that make a developer's life easier, like a nice IDE, debugging tools, linters, autocomplete and also LLMs to a certain degree (which, for me, still is a fancy autocomplete), but they are not abstraction layers in that sense.
I love that you brought no-code tools into this, because I think it's interesting that they never quite worked.
My guess is: on one side, things like Squarespace and Wix get super, super good at building sites that don't feel like Squarespace and Wix (I'm not sure I'd want to be a pure "website dev" right now, although I think Squarespace squashed a lot of that long ago), and on the other side, very, very nice tooling for "real engineers" (whatever that means).
I'm pretty handy with tech. I mean, the last time I built anything real was the 90s, but I know how most things work pretty well. I sat down to ship an app last weekend; with no sleep and Monday rolling around, GCP was giving me errors, and I hadn't realized that one of the files the LLMs wrote looked like code but was all placeholder.
I think this is basically what the Anthropic report says: automation issues happen via displacement, and displacement is typically fine, except the displacement this time is happening very rapidly (I read in a different report that what would traditionally be ~80 years of displacement is expected to happen in ~10 years with AI).
Excel is a "no-code" system and people seem to like it. Of course, sometimes it tampers with your data in horrifying ways because something you entered (or imported into the system from elsewhere) just happened to look kinda like a date, even though it was intended to be something completely different. So there's that.
I've worked in finance for 20 years and this is the complete opposite of my experience. Excel is ubiquitous and drives all sorts of business processes in various departments. I've seen people I would consider Excel gurus, in that they are able to use Excel much more productively than normal users, but I've almost never seen anyone use VBA.
Huge numbers of accountants and lawyers use Excel heavily knowing only the built-in formula language. They will have a few "gurus" sprinkled around who can write macros, but these are used sparingly because the macros are a black box and make it harder to audit the financial models.
The no-code aspect of Excel is that most functions are already implemented for the user, who doesn't have to know anything about software development to create what they need and doesn't need a software developer to do it for them.
Right, but "no-code" implies something: programming without code. Excel is not that in any fashion. It's either programming with code or an ordinary spreadsheet application without code. You'd really have to stretch your definitions to consider it "no-code" in a way that wouldn't apply to pretty much any office application.
> Modern web-dev comes around and now the old Java hands are annoyed that these new kids are just slamming NPM packages together and polyfills everywhere and no one understands Real Software Design.
The real issue here is that a lot of the modern tech stacks are crap, but won for other reasons, e.g. JavaScript is a terrible language but became popular because it was the only one available in browsers. Then you got a lot of people who knew JavaScript so they started putting it in places outside the browser because they didn't want to learn another language.
You get a similar story with Python. It's essentially a scripting language and poorly suited to large projects, but sometimes large projects start out as small ones, or people (especially e.g. mathematicians in machine learning) choose a language for their initial small projects and then lean on it again because it's what they know even when the project size exceeds what the language is suitable for.
To slay these beasts we need to get languages that are actually good in general but also good at the things that cause languages to become popular, e.g. to get something better than JavaScript to be able to run in browsers, and to make languages with good support for large projects to be easier to use for novices and small ones, so people don't keep starting out in a place they don't want to end up.
My son is a CS major right now, and since I've been programming my whole adult life, I've been keeping an eye on his curriculum. They do still teach CS majors from the "ground up" - he took system architecture, assembly language and operating systems classes. While I kind of get the sense that most of them memorize enough to pass the tests and get their degree, I have to believe that they end up retaining some of it.
I think this is still true of a solid CS curriculum.
But it’s also true that your son will probably end up working with boot camp grads who didn’t have that education. Your son will have a deeper understanding of the world he’s operating in, but what I’m saying is that from what I’ve seen it largely hasn’t mattered all that much. The bootcampers seem to do just fine for the most part.
And also these old C hands don't seem to get paid (significantly) more than a regular web-dev who doesn't care about hardware, memory, performance etc. Go figure.
The real hardcore experts should be writing libraries anyway, to fully take advantage of their expertise in a tiny niche and to amortize the cost of studying their subproblem across many projects. It has never been easier to get people to call your C library, right? As long as somebody can write the Python interface…
Numpy has delivered so many FLOPs for BLAS libraries to work on.
Does anyone really care if you call their optimized library from C or Python? It seems like a sophomoric concern.
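For illustration, a minimal sketch of calling an optimized BLAS directly from C through the CBLAS interface (assuming an implementation like OpenBLAS is installed; NumPy ultimately dispatches into the same kind of routine, so the wrapper language is mostly a convenience):

```c
/* Multiply two 2x2 row-major matrices with cblas_dgemm:
   C = alpha * A * B + beta * C */
#include <stdio.h>
#include <cblas.h>

int main(void) {
    double A[4] = {1, 2,
                   3, 4};
    double B[4] = {5, 6,
                   7, 8};
    double C[4] = {0, 0,
                   0, 0};

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,      /* M, N, K       */
                1.0, A, 2,    /* alpha, A, lda */
                B, 2,         /* B, ldb        */
                0.0, C, 2);   /* beta, C, ldc  */

    printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);  /* 19 22 / 43 50 */
    return 0;
}
```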
I think the problem is that with the over-reliance on LLMs, that expertise of writing the foundational libraries that even other languages rely on, is going away. That is exactly the problem.
Yeah, every programmer should write at least a CPU emulator in their language of choice; it's such an undervalued exercise that will teach you so much about how stuff really works.
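A minimal sketch of what such an exercise can look like, using a made-up four-instruction accumulator machine (not any real ISA), just to make the fetch/decode/execute loop concrete:

```c
/* Toy accumulator machine: count down from 3 to 0 and halt. */
#include <stdio.h>
#include <stdint.h>

enum { OP_LOAD, OP_ADD, OP_JNZ, OP_HALT };   /* made-up opcodes */

int main(void) {
    uint8_t prog[][2] = {
        {OP_LOAD, 3},     /* acc = 3             */
        {OP_ADD,  5},     /* acc += mem[5] (-1)  */
        {OP_JNZ,  1},     /* if acc != 0 goto 1  */
        {OP_HALT, 0},
    };
    int8_t mem[8] = {0, 0, 0, 0, 0, -1, 0, 0};

    int acc = 0, pc = 0, running = 1;
    while (running) {
        uint8_t op = prog[pc][0], arg = prog[pc][1];
        pc++;                                  /* fetch, then advance     */
        switch (op) {                          /* decode + execute        */
            case OP_LOAD: acc = arg;         break;
            case OP_ADD:  acc += mem[arg];   break;
            case OP_JNZ:  if (acc) pc = arg; break;
            case OP_HALT: running = 0;       break;
        }
        printf("pc=%d acc=%d\n", pc, acc);     /* trace each step         */
    }
    return 0;
}
```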
You can go to the next step. I studied computer engineering not computer science in college. We designed our own CPU and then implemented it in an FPGA.
You can go further and design it out of discrete logic gates. Then write it in Verilog. Compare the differences and which made you think more about optimizations.
Older people are always going to complain about younger people not learning something that they did. When I graduated in 1997 and started working I remember some topics that were not taught but the older engineers were shocked I didn't know it from college.
We keep creating new knowledge. It is impossible to fit everything into a 4 year curriculum without deemphasizing some other topic.
I learned Motorola 68000 assembly language in college. I talked to a recent computer science graduate and he had never seen assembly before. I also showed him how I write static HTML in vi the same way I did in 1994 for my simple web site and he laughed. He showed me the back end to their web site and how it interacts with all their databases to generate all the HTML dynamically.
The universe underneath the pie is mostly made up of invariant laws that must be followed.
The OS, libraries, web browser, runtime, and JavaScript framework underneath your website are absolutely riddled with bugs, and knowing how to identify and fix them makes you a better engineer. Many junior developers get hung up on the assumption that the function they're calling is perfect, and are incapable of investigating whether that's the truth.
This is true of many of the shoulders-of-giants we're standing on, including the stack beneath python, rust, whatever...
When I was a kid I "wrote" (mostly copied from a programming magazine) a 4-bit CPU emulator on my TI-99/4a. Simple as it was, it was the light bulb coming on for me about how CPUs actually worked. I could then understand the assembly language books that had been impenetrable to me before. In college when I first started using "C", pointers made intuitive sense. It's a very valuable exercise.
I wonder about this too - and also wonder what the difference of order is between the historical shifts you mention and the one we're seeing now (or will see soon).
Is it 10 times the "abstracting away complexity and understanding"? 100, 1000, [...]?
This seems important.
There must be some threshold beyond which (assuming most new developers are learning using these tools) fundamental ability to understand how the machine works and thus ability to "dive in and figure things out" when something goes wrong is pretty much completely lost.
> There must be some threshold beyond which (assuming most new developers are learning using these tools) fundamental ability to understand how the machine works and thus ability to "dive in and figure things out" when something goes wrong is pretty much completely lost.
For me this happened when working on some Spring Boot codebase thrown together by people who obviously had no idea what they were doing (which maybe is the point of Spring Boot; it seems to encourage slopping a bunch of annotations together in the hope that it will do something useful). I used to be able to fix things when they went wrong, but this thing is just so mysterious and broken in such ridiculous ways that I can never seem to get to the bottom of it.
> Languages like Python and Java come around, and old-school C engineers grouse that the kids these days don’t really understand how things work
Everything has a place, you most likely wouldn't write an HPC database in Python, and you wouldn't write a simple CRUD recipe app in C.
I think the same thing applies to using LLMS, you don't use the code it generates to control a power plant or fly an airplane. You use it for building the simple CRUD recipe app where the stakes are essentially zero.
But not really. Looking around my shop, I’m probably the only one around who used to write a lot of C code. No one is coming to ask me about obscure memory bugs. I’m certainly not getting paid better than my peers.
The knowledge I have is personally gratifying to me because I like having a deeper understanding of things. But I have to tell you I thought knowing more would give me a deeper advantage than it has in actual practice.
You're providing value every time you kill a bad idea "because things don't actually work that way" or shave a loop, you're just not tracking the value and neither is your boss.
To your employer, hiring people who know things (i.e. you) has given them a deeper advantage in actual practice.
I would argue that your advantage right now is that YOU are the one they can't replace with LLMs, because your work requires exact, fine-grained knowledge of pointers and everything else, and needs that exact expertise. You might be getting the same pay as your peers, but you also carry additional job stability.
Is that because the languages being used at your shop have largely abstracted away memory bug issues? If you were to get a job writing embedded systems, or compilers, or OSes, wouldn't your knowledge be more highly valued and sought after (assuming you were one of the more senior devs)?
If you have genuine systems programming knowledge, usually the answer is to innovate on a particular toolchain or ship your own product (I understand you may not like business stuff though.)
Lately, I've been asking ChatGPT for answers to problems that I've gotten stuck on. I have yet to receive a correct answer from it that actually increases my productivity.
I've been able to get code working in libraries that I'm wholly unfamiliar with pretty rapidly by asking the LLM what to do.
As an example, this weekend I got a new mechanical keyboard. I like to use caps+hjkl as arrows and don't want to remap in software because I'll connect to multiple computers. Turns out there's a whole open source system for this called QMK that requires one to write C to configure the keyboard.
It's been over a decade since I touched a Makefile and I never really knew C anyway but I was able get the keyboard configured and also have some custom RGB lighting on it pretty easily by going back and forth with the LLM.
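Not actual QMK code (the real thing is a board-specific `keymaps[][][]` array plus a `LAYOUT` macro), but conceptually a caps+hjkl setup boils down to a layer table: pick a layer based on whether Caps is held, then look up the key. A standalone sketch with made-up names:

```c
/* Conceptual layer lookup, not QMK itself: on the NAV layer (Caps held),
   hjkl resolve to arrow keys; on the base layer they stay themselves. */
#include <stdio.h>

enum layer { BASE, NAV, NUM_LAYERS };
enum key   { KEY_H, KEY_J, KEY_K, KEY_L, NUM_KEYS };

static const char *keymap[NUM_LAYERS][NUM_KEYS] = {
    [BASE] = { "h",    "j",    "k",  "l"     },
    [NAV]  = { "Left", "Down", "Up", "Right" },
};

static const char *resolve(int caps_held, enum key k) {
    return keymap[caps_held ? NAV : BASE][k];   /* pick layer, then key */
}

int main(void) {
    printf("tap j:    %s\n", resolve(0, KEY_J));  /* -> j     */
    printf("caps + j: %s\n", resolve(1, KEY_J));  /* -> Down  */
    printf("caps + l: %s\n", resolve(1, KEY_L));  /* -> Right */
    return 0;
}
```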
It is just very random. LLMs help me write a synthesizer using an odd synth technique in an obscure musical programming language with no problem, and help me fix my broken Linux system no problem, but then can't do anything right with the Python library Pyro.
I think that is why people have such different experiences. It all depends, somewhat randomly, on how what you want to solve lines up with what the models are good at.
At least for the type of coding I do, if someone gave me the choice between continuing to work in a modern high-level language (such as C#) without LLM assistance, or switching to a low-level language like C with LLM assistance, I know which one I would choose.
Likewise, under no circumstances would I trade C for LLM-aided assembly programming. That sounds hellish. Of course it could (probably will?) be the case that this may change at some point. Innovations in higher-level languages aren't returning productivity improvements at anywhere close to the same rate as LLMs are, and in any case LLMs probably benefit from improvements to higher-level languages as well.
Programming is an interface to the machine.
The AI, even as we know it now (LLMs, agents, RAG), will absorb all of that.
It has many flaws but is still much better than most programmers.
All future programmers will be using it.
As for the programmers who don't want to use it:
I think there will be literally billions of lines of unbelievably bad code generated by these generation-1-through-100 AIs and junior programmers that will need to be corrected and fixed.
There's no need for tens of millions of OS kernel devs; most of us are writing business-logic CRUD apps.
Also, it's not entirely clear to me why LLMs should get extremely good in web app development but not OS development, as far as I can see it's the amount and quality of training data that counts.
I think arguably there's still a quantity issue, but I'm no expert on LLMs. Plus I hear the windows source code is a bit of a nightmare. But for every windows there's a TempleOS I suppose.
It is far more likely that everything collapses - not just IT, but everything - than that we make it to the point you mention.
LLMs replace entry-level people who invested in education. They would have the beginning knowledge, but there's no way for them to become better, because the opportunities are non-existent; those positions have been replaced. It's a sequential pipeline failure of talent development. In the meantime you have the mid- and senior-level people who cannot pass their knowledge on; they age out, and die.
What happens when you hit a criticality point where production, which is dependent on these systems, can no longer continue?
The knowledge implicit in production is lost, the economic incentives have been poisoned. The distribution systems are destroyed.
How do you bootstrap recovery of something that effectively took several centuries to build in the first place, not in centuries but in weeks or months?
If this isn't sufficient to explain the core of the issue, check out the Atari/Nintendo crash, which isn't nearly as large as this but shows the dangers of destroying your distributor networks.
If you pay attention to the details, you'll see Atari's crash was fueled by debt financing, and in the process they destroyed their distributor networks with catastrophic losses. After that crash, Nintendo couldn't get shelf space; no distributor would risk the loss without a guarantee. They couldn't advertise their product as video games. They had to trojan horse the perception of what they were selling, and guarantee it. There is a documentary on Amazon which covers this, Playing with Power. Check it out.
One of my first bosses was a big Perl guy. I checked on what he was doing 15 years later and he was one of 3 people at Windstream handling backbone packet management rules.
You just don’t run into many people comfortable with that technology anymore. It’s one of the big reasons I go out of my way to recruit talks on “old” languages to be included at the Carolina Code Conference every year.
Most developers couldn't write an operating system to save their life. Most could not write more than a simple SQL query. They sling code in some opinionated dev stack that abstracts the database and don't think too hard about the low-level details.
I agree. It's the current generation's version of what happened with the advent of Javascript frameworks about 15 years ago, when suddenly web devs stopped learning how computers actually work. There will always be high demand for software engineers who actually know what they're doing, can debug complex code bases, and can make appropriate decisions about how to apply technology to business problems.
That said, AI agents are absolutely going to put a bunch of lower end devs out of work in the near term. I wouldn't want to be entering the job market in the next couple of years....
> There will always be high demand for software engineers who actually know what they're doing
Unfortunately they won’t be found, due to horrible tech interviews focused on “culture” (*-isms), leetcode under the gun, or resumes thrown in the trash at first sight for lack of a full degree. AMHIK.
> I wouldn't want to be entering the job market in the next couple of years....
I bet there's a software dev employment boom about 5 years away once it becomes obvious competent people are needed to unwind and rework all the llm generated code.
Except juniors are not going to be the competent people you're looking for to unwind those systems. Personally, no matter how it plays out, I feel like the entry-level market in this field is going to take a hit. It will become much more difficult and competitive.
The "prompt" engineering is also going to create a ton of cargocult tips and tricks- endless shell scripts, that do nothing but look spectacular, with one or two important commands at the end. Fractal classes, that nobody knows why they exist. Endless boilerplate.
And the AI will be trained on this, and thus cluelessness reinforced and baked in. Omnissiah, hear our prayers in the terminal, for we are but -h less man (bashes keyboard with a ritual wrench).
> I think that LLMs are only going to make people with real tech/programming skills much more in demand, as younger programmers skip straight into prompt engineering and never develop themselves technically beyond the bare minimum needed to glue things together.
I hired a junior developer for a couple months and was incredibly impressed with what he was able to accomplish with a paid ChatGPT subscription on a greenfield project for me. He'd definitely struggle with a mature code base, but you have to start somewhere!
Real programming of course won't go away. But in the public eye it has lost its mysticism, as seemingly anyone can code now. Of course that ain't true, and no one has managed to create anything of substance by prompting alone.
How do we define real programming?
I'm working on Python and JS codebases in my startup, so very high-level stuff. However, reasoning well about everything that goes on in our code is no small feat for an LLM (or a human). If it's able to take our requirements, understand the business logic, and just start refactoring and creating new features on a codebase that is quite big, well, yeah, that sounds like AGI to me. In that case I don't see why it won't be able to hack on kernels.
The fact that you don't see why is the issue. Both Python and JS are very permissive, and their runtimes do a lot of the safety work for you. More often than not, you're just dealing with logic bugs and malformed domain data. A kernel codebase like Linux is one where many motivated individuals are trying every trick to get the computer to do something, and you're usually dealing with leaner abstractions because general safety logic isn't performant enough. It's a bit like the difference between a children's playground and a construction site.
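To illustrate the playground half of that analogy with a toy example of my own (not from any real codebase):

```python
# In Python, most slips surface as clean, catchable exceptions: the runtime
# does the bounds-checking and type-checking for you at every step.
def read_record(records: list[dict], index: int) -> str:
    try:
        return records[index]["name"]
    except IndexError:
        return "<no such record>"      # out-of-bounds access is a tidy exception
    except (KeyError, TypeError):
        return "<malformed record>"    # bad domain data is also a tidy exception


if __name__ == "__main__":
    records = [{"name": "alice"}, {"id": 2}]   # second record is malformed
    print(read_record(records, 0))   # alice
    print(read_record(records, 1))   # <malformed record>
    print(read_record(records, 5))   # <no such record>
    # The construction-site part: in kernel C the equivalent slip is not an
    # exception you can catch, it's silent memory corruption or a crash.
```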
> More often than not, you're just dealing with logic bugs
Definitely. More often than not you're dealing with logic bugs. So the thing solving them will sometimes have to be able to reason quite well across large code bases (not every bug of course, but quite often), to the point that I don't really see how it's different from general intelligence if it can do that well. And if it gets to the point that it's AGI-ish, I don't see why it can't do kernel work (or at the very least dramatically reduce the number of jobs in that space as well). Perhaps you can automate the 50% of the job where we're not really thinking at all as programmers, but the other 50% (or less, or more, debatable) involves planning, reasoning, debugging, thinking. Even if all you do is Python and JS.
> So the thing solving them will sometimes have to be able to reason quite well across large code bases
The codebase only describes what the software can do currently, never the why. And you can't reason without both. The why is the primary vector of change, and it may completely redefine the what. Even the what has many possible interpretations. The code is only one specific how. Going from the why, to the what, to a specific how is the core tenet of programming. Then you add concerns like performance, reliability, maintainability, security...
Once you have a mature codebase, outside of refactoring and new features, you mostly have to edit a few lines for each task. Finding the lines to work on requires careful investigation, and you need to test carefully afterwards to ensure that no other operations have been affected. We already have good deterministic tools to help with that.
I agree with this. An AI that can fully handle web dev is clearly AGI. Maybe the first AGI can't fully handle OS kernel development, just as many humans can't. But if/once AGI is achieved it seems highly unlikely to me that it will stay at the "can do web dev but can't do OS kernel dev" level for very long.
> Somebody needs to write that operating system the LLM runs on. Or your bank's backend system that securely stores your money. Or the mission critical systems powering this airplane you're flying next week... to pretend like this will all be handled by LLMs is so insanely out of touch with reality.
When they do this, I really want to know they did this. Like an organic food label. Right now AI is this buzzword that companies self-label with for marketing, but when that changes, I still want to see who's using AI to handle my data.
The enshittification will come for the software engineers themselves eventually, because so many businesses out there only have their shareholders in mind and not their customers, and if a broken product or a promise of a product is enough to boost the stock then why bother investing in the talent to build it properly?
Look at Google and Facebook - absolute shithouse services now that completely fail to meet the basic functionality they had ~20 years ago. Google still rakes in billions rendering ads in the style of a search engine and Facebook the same for rendering ads in the format of a social news feed. Why even bother with engineering anything except total slop?
> as younger programmers skip straight into prompt engineering and never develop themselves technically beyond the bare minimum needed to glue things together.
I'm not worried about that at all. Many young people are passionate, eager to learn and build things. They won't suddenly become dumb and lazy because they have this extra tool available to them. I think it's the opposite: they'll be better than their seniors because they'll have AI helping them improve and learn faster.
Have you _seen_ the tragedy that is occurring in primary and secondary education right now, with students using LLMs for the bulk of their coursework? Humans, and most forms of life in general, are lazy. They take the lowest-energy route to a solution that works, whether that solution is food, shelter, or the answer to a homework question or problem at work. To some degree, this is good: an experienced <animal/student/engineer> has well-defined mental pathways towards getting what they need in as little time/energy as possible. I myself have dozens of things that I don't remember offhand, but I do remember the particular Google query that will get me to what I need (chmod args being the one that comes to mind). This leaves mental resources available for more important or difficult-to-acquire knowledge, like the subtle nuances of a complex system or cat pictures.
The problem is a lack of balance, and in some instances skipping critical reasoning entirely. Why go through the effort of working your way through a problem when you would rather be doing <literally anything else> with your time? Iterate on this to the extreme, with what feels like a magic bullet that can solve anything, and your skills *will* atrophy.
Of course there are exceptions to this trend. Star pupils exist in any generation who will go out of their way to discover answers to questions they have, re-derive understanding of things just for the sake of it, and apply their passions towards solving problems they decide are worth solving. The issue is the _average_ person, given an _average_ (e.g. if in America, under-funded) education, with an _average_ mentor, will likely choose the path of least resistance.
Would technical depth change the fundamental supply and demand, though? If we view AI as a powerful automation tool, it's possible that overall demand will be lowered so much that demand for deep technical expertise goes down as well. Take the EE industry, for instance: the technical expertise required to get things done is vast and deep, yet demand has not been nearly as good as in the software industry.
I think I've seen the comparison with respect to training data, but it's interesting to think of the presence of LLMs as a sort of barrier to developing skills, akin to pre-WW2 low-background steel (which, fun fact, isn't actually that relevant anymore, since background radiation levels have dropped significantly since the partial end of nuclear testing).
This is so on point IMO. I feel like there is no better time than now to learn more low-level languages, since the hype will in the end resolve around insanely technical people carrying all the major weight.
You say this like newcomers can't use the LLM to understand these topics more deeply, in addition to using it for glue. This mindset is a fallacy; newcomers are as adept and passionate as any other generation. They have better tools and can compete just the same.
> younger programmers skip straight into prompt engineering and never develop themselves technically beyond the bare minimum needed to glue things together
This was true before LLMs, though. A lot of people just glue JavaScript libraries together.
I'm aware of a designer, no coding skills, who is going to turn his startup idea into an MVP using LLMs. If that ever got serious, they would need an actual engineer to maintain and improve things.
On a recent AllIn podcast[1], there was a fascinating discussion between Aaron Levie and Chamath Palihapitiya about how LLMs will (or will not) supplant software developers, which industries, total addressable markets (TAMs), and current obstacles preventing tech CEOs from firing all the developers right now. It seemed pretty obvious to me that Chamath was looking forward to breaking his dependence on software developers, and predicts AI will lead to a 90% reduction in the market for software-as-a-service (and the related jobs).
Regardless of point of view, it was an eye opening discussion to hear a business leader discussing this so frankly, but I guess not so surprising since most of his income these days is from VC investments.
yup. the good news is this should make interviewing easier; bad news is there'll be fewer qualified candidates.
the other thing, though, is that you and I know that LLMs can't write or debug operating systems, but the people who pay us and see LLMs writing prose and songs? hmm
When I see people trying to define which programmers will enjoy continued success as AI continues to improve, I often see No True Scotsman arguments used.
I wish more would try to describe what the differentiating skills and circumstances are instead of just saying that real programmers should still be in demand.
>I think that LLMs are only going to make people with real tech/programming skills much more in demand, as younger programmers skip straight into prompt engineering and never develop themselves technically beyond the bare minimum needed to glue things together.
My experience with Stack Overflow, the Python forums, etc. etc. suggests that we've been there for a year or so already.
On the one hand, it's revolutionary that it works at all (and I have to admit it works better than "at all").
But when it doesn't work, a significant fraction of those users will try to get experienced humans to fix the problem for them, for free - while also deluding themselves that they're "learning programming" through this exercise.
> Or the mission critical systems powering this airplane you're flying next week... to pretend like this will all be handled by LLMs is so insanely out of touch with reality.
Found the guy who's never worked for a large publicly-traded company :) Do you know what's out of touch with reality? Thinking that $BIG_CORP execs who are compensated based on the last three months of stock performance will do anything other than take shortcuts and cut corners given the chance.
> Or the mission critical systems powering this airplane you're flying next week... to pretend like this will all be handled by LLMs is so insanely out of touch with reality.
Airplane manufacturers have proved themselves more than willing to sacrifice safety for profits. What makes you think they would stop short of using LLMs?