Let me ask: in what way have you been proven wrong?
Have you simply seen it build a pretty CRUD application that you feel would take you many hours to do on your own? If that's the case, then you really shouldn't be too worried. Being able to build a CRUD application is not what you're getting paid for.
If you've seen something that involves genuinely emulating the essence of a software engineer (i.e., knowing what users want and need), then I'd ask you to show me that example.
Knowing what users want and need is more the essence of a product manager, not a software engineer.
Software engineering is solving problems given a set of requirements, and determining the value, need, and natural constraints of those requirements in a given system. Understanding users is a task that interfaces with software engineering, but it sits on the "find any way to get this done" axis of value rather than the "here is how we will get it done" one.
I'd say what OP is referencing is that LLMs are increasingly adept at writing software that fulfills a set of requirements, with the prompter acting as a product manager. This devalues software engineers in that many somewhat difficult technical tasks, once the sole domain of SWEs, are now commodified via agentic coding tools.
That's a dangerous distinction in the AI era. If you reduce your work to solving problems given a set of requirements, you put yourself in direct competition with agents. LLMs are perfect for taking a clear spec and outputting code. A "pure" engineer who refuses to understand the product and the user risks becoming just middleware between the PM and the AI. In the future, the lines between PM and Tech Lead will blur, and the engineers who survive will be those who can not only "do as told" but propose "how to do it better for the business".
> Software engineering is solving problems given a set of requirements, and determining the value, need and natural constraints of those requirements in a given system
That’s the description of a mid-level code monkey according to every tech company with leveling guidelines, and one that was easily outsourced and commoditized before the age of AI.
And most of the 3 million developers working in the US aren’t working for a FAANG and will never make over $200K inflation adjusted. If you look at the comp of most “senior developers” outside of FAANG and equivalent, you’ll see that it has been stagnant and hasn’t kept up with inflation for a decade.
I have personally given the thumbs down to two developers who came from a FAANG when it was clear that they were just code monkeys who had to have everything handed to them.
Have you looked at how hard it is for mid-level code monkeys, even from a FAANG, to get a job these days? Just being able to invert a binary tree isn’t enough anymore.
FWIW, I did a 3.5-year stint at AWS Professional Services until late 2023 (full time, with the same 4-year comp structure as software devs get). But I made about 20% less, and it was remote the whole time I was there. And I’m very well aware of what software developers make.
I still work full time at a consulting company (cloud + app dev). And no, FAANG doesn’t pay enough of a difference over what I make now to give up remote work in state-tax-free, relatively low-cost-of-living central Florida at 50 years old with grown (step)kids.
Great, then we can use AI to solve the problems given a set of requirements, and spend more time thinking about what the requirements are by understanding the users.
PM and software development will converge more and more as AI gets better.
The best PMs will be the ones who can understand customers and create low-fidelity prototypes, or even "good enough" vibe-coded solutions, for customers
The best engineers will be the ones who use their fleet of subagents to work on the "correct" requirements, by understanding their customers
At the end of the day, we are using software to solve people's problems. Those who understand that, and have skills around diving in and navigating people's problems will come out ahead
This is a perfect example of an element of Q&A forums that is being lost. Another thing that I don't think we'll see as much of anymore is interaction from developers that have extensive internal knowledge on products.
An example I can think of was when Eric Lippert, a developer on the C# compiler at the time, responded to a question about a "gotcha" in the language: https://stackoverflow.com/a/8899347/10470363
Developer interaction like that is going to be completely lost.
Yuck. I don't know if it's just me, but something feels completely off about the GH issue tracker. I don't know if it's the spacing, the formatting, or what, but each time it feels like it's actively trying to shoo me away.
It's whatever the visual language equivalent of "low signal" is.
Still, GH issues are better than some random Discord server. The fact that forums got replaced by Discord for "support" is a net loss for humanity, as Discord is not searchable (to my knowledge). So instead of a forum where someone asks a question and you get n answers, you have to visit the Discord, talk to the Discord people, join a wave channel first, hope the people are there, hope the person who knows is online, and so on.
Yeah, I suspect that a lot of the decline represented in the OP's graph (starting around early 2020) is actually Discord, and that LLMs weren't much of a factor until ChatGPT (running GPT-3.5) launched in late 2022.
LLMs have definitely accelerated Stack Overflow's demise though. No question about that. Also makes me wonder if Discord has a licensing deal with any of the large LLM players. If they don't, then I can't imagine that will last for long. It will eventually just become too lucrative for them to say no, if it hasn't already.
Discord isn’t just used for tech support forums and discussions. There are loads of completely private communities on there. Discord opening up API access for LLM vendors to train on people’s private conversations is a gross violation of privacy. That would not go down well.
I think most of the relevant data that provides the best answers lives in GitHub. Sometimes in code, sometimes in issues or discussions. Many libs have their docs there as well. But the information is scattered and not easy to find, and often you need multiple sources to come up with a solution to a problem.
I agree that there will be some degradation here, but I also think that the developers inclined to do this kind of outreach will still find ways to do it.
I believe the community has seen the benefit of forums like SO and we won’t let the idea go stale. I also believe the current state of SO is not sustainable with the old guard flagging any question and response you post there. The idea can/should/might be re-invented in an LLM context and we’re one good interface away from getting there. That’s at least my hope.
I used to look at all TensorFlow questions when I was on the TensorFlow team (https://stackoverflow.com/tags/tensorflow/info). Unclear where people go to interact with their users now... Reddit? But the tone on Reddit is kind of negative/complainy
> you see already today that self-driving produces already much less accidents (90% or so I read days ago?)
This is like saying skydiving is safer than driving because you're looking exclusively at absolute accident counts and ignoring the total number of events. There are way more cars than there are skydivers. The same applies to autonomous vehicles: there are way more manually driven vehicles than autonomous ones, so of course there are going to be significantly more accidents from manually driven vehicles. What matters is the rate, accidents per mile driven, not the raw count.
As someone in this exact position, I swapped to CachyOS this week and have been extremely impressed. I've yet to come across a game that didn't function extremely well (I have not tested BF6 yet, which I suspect may fail due to its anti-cheat).
Really, that's the problem: anti-cheat. Sure, at this point most games work on Linux. The problem is, most people don't play most games. Most people play a handful of games, and where the players go, the cheaters follow. In response, the game studios deploy more and more aggressive anti-cheat measures, ultimately breaking things for the tiny minority of people who would've otherwise been able to play the game on Linux/Proton.
Take a look at https://areweanticheatyet.com to see how many of the biggest games on the planet don't support Linux or Proton.
Windows virtual machines are much slower than bare hardware, especially for things GP mentioned like games. I have also found Linux lags behind in many areas that matter to me: functionality, performance (even compared to Windows 11), and general ease of use. For lots of people, Linux or Chrome OS is sufficient, and that's great, but it's not enough for everybody.
In response to your initial question, I believe everything must be criticized, especially things we like. Internal criticism, such as criticism of Windows, is just as important as external competitors, such as Linux.
I'm confused by your statement. I'm a serious gamer and I've been on Linux for years. It's there, and it's been there for a while.
I feel you on liking things about Windows though. I'm a Windows guy by nature. I genuinely like the OS, and if Microsoft wasn't being so absurdly user-hostile I would switch back in a heartbeat.
I use NVIDIA hardware, which objectively has superior maximum performance compared to AMD graphics cards. I use HDR, high-pixel-density monitors as well. I like laptops with decent battery life and decent touchpads.
Windows simply offers a cleaner, better put-together experience when it comes to these edge cases. I have many tiny nitpicks about how Linux behaves, and every time I go back to my Windows Enterprise install it is a breath of fresh air that my 170% scaling and HDR just work. No finagling with a million different environment variables or CLI options. If a program hasn't opted into resolution-independent scaling, then I just disable it, and somehow the vector elements are still scaled correctly, leaving only the raster elements blurry. Nowadays laptop touchpads feel like Macs', which is high praise and a sea change from where Windows touchpads were about a decade ago.
If you strip away all the AI nonsense, Windows is a genuinely decent platform for getting anything done. Seriously, MS Office blows everything else out of the water. I still go back to Word, Excel, and PowerPoint when I want to be productive. The Adobe suite, pro audio tools, DaVinci Resolve, etc., they just... work. If you haven't programmed in Visual Studio or used WinDbg, then you have not used a serious, high-end debugger. GDB and perf are not even in the same league.
As a Windows power user, I want to go back to the Windows 2000 GUI shell, but with all the modernity of Windows 11's kernel and user-space libraries and drivers. I wish Enterprise was the default release, not the annoying Home versions. And I really, really wish Windows was open-sourced. Not just the kernel, but the user mode as well, because the user mode is where a lot of the juice is, and is what makes Windows Windows.
I have a computer running Fedora with a 5080 GPU. The newer games all work fine (e.g. Cyberpunk 2077, BG3). The really old games mostly work fine (e.g. System Shock 2, the original Deus Ex). The in-between games... are really hit-or-miss (e.g. I can't get Witcher 1/2 or Deus Ex: Human Revolution to run).
I have played through Deus Ex HR (and MD) on Bazzite, albeit on an AMD GPU. You should generally be able to do it with Proton (GE recommended), via either Steam (recommended) or Heroic (if you bought them from GOG).
Witcher 1/2 at least also worked OOB via Steam.
For some context/user comments, see Deus Ex HR[0] and System Shock 2[1] on ProtonDB.
Not OP, but as I'm sure you already know, there is a small but significant minority of games that don't play nice on Linux. Generally these are triple-A games that either have very competitive MP scenes and/or are thinly disguised casinos (e.g. Madden).
There's also more friction with gaming on Linux the moment you step off the beaten path (i.e. Steam). Yes, yes, Lutris etc, but you're still going to run into things that refuse to play ball from time to time. You can generally solve these, but it's friction you don't get on Windows, and that you might not be in the mood for when you want to play a game.
I've been a gamer on Linux for years too. I'd say it's ~80% there. (95% if you don't play competitive triple-As and stick to Steam.) It drops dramatically though if you want to play oddball 90s and 00s games, or use modding tools, etc.
Personally, I've been toying with the idea of putting Windows on my gaming PC again, after many years. It's not my daily driver, so I'm not too fussed what runs the actual games. My time is limited and valuable to me, and I do not want to spend it nailing down cryptic Proton incantations (admittedly rare, but not yet rare enough). I love tinkering, but that's not tinkering, that's a chore.
Something I just realized about a lot of promotional material for AI: it's almost always "planning a trip" or some other not-so-frequent task. In terms of my day-to-day non-technical usage, I haven't seen a super compelling demo for any AI tool past the standard chat interface yet.
My question for you is: how do you not learn a language by having an LLM aid in writing it for you?
What I've found now is LLMs allow me to use/learn new languages, frameworks, and other technologies while being significantly more productive than before. On the flip side, as others have mentioned, I shoot myself in the foot more often.
Basically, I output more, I see more pitfalls upfront, and I get proficient sooner. This can be the case for you too, if you take an active development approach rather than a passive one where you just blindly accept whatever a model outputs.
The biggest thing to be aware of honestly is the difference between .NET Framework and .NET Core.
Essentially, .NET Core has dropped the "Core" moniker. It's just .NET now (I tend to call it .NET proper when differentiating it from Framework). This change began with .NET 5.0.
If you're starting from scratch, you will want to build everything using .NET now. Do not use .NET Framework unless absolutely necessary. Probably the most common reason you'd be stuck on .NET Framework is that your project relies on some driver or library that isn't making the transition to .NET proper. If you're in this situation, you should try to move as much as you can to .NET proper, and then target .NET Standard, an API specification Microsoft developed for this exact situation, to share code between .NET Framework and .NET proper.
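As an illustration, a shared library can multi-target so the same code compiles for both worlds (a sketch; the project and target versions are just examples, and `netstandard2.0` is the highest .NET Standard version that .NET Framework supports):

```xml
<!-- Shared.csproj: builds once per target; the netstandard2.0 output is
     consumable from .NET Framework 4.6.1+ projects, the net8.0 output
     from modern .NET projects. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>netstandard2.0;net8.0</TargetFrameworks>
  </PropertyGroup>
</Project>
```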
Otherwise, in terms of specific "modern" coding practices? I'd say pay attention to modern OOP approaches in general. Dependency injection is used everywhere now. Also, be careful using reflection. People get a lot done with it really fast, but if it starts being used for more than just attribute inspection and you're doing some crazy shit with it, that's usually a sign of a bigger architectural issue.
Lastly, review LINQ if it's been a while for you. Specifically, understand deferred execution, as a lot of people shoot themselves in the foot by tacking on a `.ToList()` and enumerating over a massive collection they shouldn't be enumerating over.
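A minimal sketch of that pitfall (the data source here is a hypothetical generator standing in for something large, like a DB-backed sequence):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class DeferredDemo
{
    // Hypothetical stand-in for a large data source.
    static IEnumerable<int> Numbers()
    {
        for (int i = 0; i < 1_000_000; i++)
            yield return i;
    }

    static void Main()
    {
        // Deferred: nothing executes here. The filter runs lazily,
        // element by element, only when the sequence is consumed.
        IEnumerable<int> evens = Numbers().Where(n => n % 2 == 0);

        // Fine: Take(3) stops the pipeline after three matches.
        Console.WriteLine(string.Join(", ", evens.Take(3))); // 0, 2, 4

        // Footgun: ToList() materializes all 500,000 matches in memory,
        // even if you only ever needed the first few.
        List<int> allEvens = evens.ToList();
        Console.WriteLine(allEvens.Count); // 500000
    }
}
```

The rule of thumb: keep queries as `IEnumerable<T>` while composing, and only materialize with `.ToList()` when you actually need the whole result (or need to enumerate it more than once).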
C# has changed a lot, but more subtly than the way a lot of other languages have progressed. Mostly just new features that you can survive very easily without. C# 14 brings some interesting changes, but it's still pretty new, so see the release notes if you're interested.
The company I've been at for a while only uses .NET Framework, for all current projects and anything new. It's frustrating, and was especially so when I first joined. I could barely find documentation or tutorials for this tech stack.
The lead did rewrite the codebase to .NET Core, but had trouble publishing changes to the server (his current method of updating views, for example, is to copy some files from his local version and overwrite the files on the server). I guess he couldn't do this with .NET Core, so he just reverted back to .NET Framework. To me, it seemed... odd. Surely there's a way to publish changes in an easy manner?
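For what it's worth, modern .NET has first-class publish tooling; the usual flow is something like this (project and output paths are hypothetical):

```shell
# Build a release-mode deployment of the app into a folder, then
# copy/sync that folder to the server (or wire this into a publish
# profile, Web Deploy, or a CI pipeline instead of copying by hand).
dotnet publish MyApp.csproj -c Release -o ./publish
```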
I really hope he is trolling, but there are genuinely people who are becoming embodiments of AI model output. There's a professor at my university who is notorious for responding to emails with ChatGPT (without reading its output first) and for generating assignments with ChatGPT (also without reading its output first).