Like many other commenters (of a certain age?), I too have this unsatisfied feeling about a particular kind of modern software development. The kind where you never really dig down and design anything, you just plumb a bunch of stuff together with best practices you find on Stack Overflow.
Many commenters are attributing this problem to the modern high-level tools we now have access to. But I don't think this is the crux of the issue. You can face the same issue (you're plumbing things, not designing a system) whether you are working with low level or high level components.
Heck, you could be working on a hardware circuit, but if the only thing you had to do was make sure the right wires, resistors, capacitors, etc. were in place between the chips, you're still just doing plumbing work.
To me, one of the most satisfying things about programming is when you can build something great by starting with a concept for your lower-level primitives, your tools, and then work up through the higher levels of design, ultimately having the pieces you designed fit together to form something useful to the world.
This building-things-to-build-things idea is even satisfying in other areas. Just gluing a bunch of wood together to make a piece of furniture is fine, but building your own jigs and tools to be able to do the kind of cuts that enable the end design you envision is way more satisfying, and opens up the design space considerably.
If I had to lament anything (and perhaps this is what's most in alignment with the post) it's that most of the high-level primitives you touch these days tend to be sprawling, buggy, unfocused, and generally not of high quality or performance. It's possible for high-level primitives to avoid these pitfalls (e.g. SQLite, the canonical example), but that tends to be the exception.
I think there is still plenty of interesting and satisfying software engineering work to be done when starting with high-level libraries and tools. You just need to think about how to use their properties and guarantees (along with maybe some stuff you build yourself!) to enable the design of something more than just the (naively-plumbed) sum of the parts.
Challenge yourself to use the stdlib only, whether for a while or for good.
It sounds unrealistic and I'm going to get flamed, but hear me out. It works. Most of my development these days is in reasonably complete languages like Go, Rust, Zig, and various scripting languages, so your mileage may vary if you're writing in something like Rune, Hare or Carbon that is still taking shape.
If you think I'm crazy but have a lingering skepticism, challenge yourself to spend one day, one week, or one month using the stdlib only. If that is unrealistic in your setting, a compromise could be to only use libraries that you created yourself. You won't come out of it as an enlightened samurai monk with a celebrity HN presence, but you will gain an immense sense of scrutiny that didn't exist before, and you'll bring it to every library you consider using after that.
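To make that concrete, here's roughly what the ceiling looks like in Go, one of the languages mentioned above: a JSON endpoint with logging and graceful shutdown, zero third-party imports. A minimal sketch, with the route and port as placeholders:

    package main

    import (
        "context"
        "encoding/json"
        "log"
        "net/http"
        "os"
        "os/signal"
        "time"
    )

    func main() {
        mux := http.NewServeMux()
        // Routing, request parsing, and the server loop all come
        // from net/http; no framework needed.
        mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
        })

        srv := &http.Server{Addr: ":8080", Handler: mux}

        // Graceful shutdown on Ctrl-C, again stdlib-only.
        go func() {
            sig := make(chan os.Signal, 1)
            signal.Notify(sig, os.Interrupt)
            <-sig
            ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
            defer cancel()
            srv.Shutdown(ctx)
        }()

        log.Println("listening on :8080")
        if err := srv.ListenAndServe(); err != http.ErrServerClosed {
            log.Fatal(err)
        }
    }

That's a production-capable HTTP server with no dependency list to audit, and it's a surprisingly large fraction of what a typical service reaches for a framework to get.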
For those who think this is just 100% bull, consider how we survived before Google, YouTube, and Stack Exchange existed.
I'm not gonna flame you, but I will note that, as someone who gets paid to use my judgement to decide on the optimal trade-off between quality, time spent on the project, and its future maintainability... I feel like all three will suffer quite a bit with this self-imposed "handicap".
This is the crux of the issue IMO: feature output velocity. With the enforcement of sprint-scale development scope, you really don't have time to iterate on a wide-reaching and supportive base layer of software infrastructure, so you reach for tools that will get you what you need within the timeframe demanded by whoever hired you.
As someone who's been fortunate enough to work on and lead these kinds of projects (and watch coworkers work on them), I've come to a near-opposite conclusion, which is that sprints reveal how bad the ROI on this work tends to be.
The way many of these projects go is someone very smart works with subject matter experts to map out the problem space. The smart person (or people) then go and begin building this set of primitives and integrating it into the product, adjusting as they go and accruing some warts along the way.
After 6-12 months we have this beautiful tool that improves developer velocity and makes new features easy to code. Then disaster strikes: it turns out the map of the problem space was wrong! A bunch of things the team believed to be invariants aren't! Suddenly business needs are forcing developers to tear down walls in their beautiful abstraction castle until all that's left is a tangled maze no other developer has a hope of understanding.
Now the project is a millstone around the dev team's neck rather than the velocity boost they'd hoped for.
The way these projects more often succeed is some senior engineer pastes together an abstraction layer and sends it out into the world. It gets heavily abused for years until finally the team says "We know this sucks, and there's a lot of business value if we make it nice, let's invest a bunch of time in retrofitting this" and fights like hell to make the business case. IMO this tends to lead to better projects and value (though unfortunately many companies make the fight harder than it should be).
Fiction and non-fiction writers have been struggling with these issues long before software was a concept. Their solution: "writing is re-writing".
I didn't mean to imply a dichotomy. I don't think an intense planning phase up front solves the problem. But feature-sprints tend to crowd out refactor-sprints because of the demands from above for new features. Downtime working on backend stuff is not perceived by the customer as anything benefiting them. But here, the backend is a codebase no one enjoys working with and the customer is the C-suite expecting X features this month because X or X-1 features were pushed last month.
Ultimately, "It depends" and no answer will satisfy all cases.
My general advice is that software engineering has a lot of well-worn patterns for common problems; stick to those as much as possible. Their great advantage is that any experienced software engineer will recognize them and onboard quickly, allowing you to focus on those parts of the problem specific to your project/company.
In most cases, whatever common pattern you shoehorn your problem into will suffice for the purposes of the business. It will be ugly, and have warts, but will be generally maintainable and not often touched. Again, if this turns out to be wrong after it's been battle tested and shown value, you can begin to migrate away to something new.
There are exceptions to the above, and many companies don't actually go through the motions of following well-worn patterns even when they think they do. E.g. many companies with public APIs make a common set of mistakes we've known how to solve for over a decade, mistakes that are easy to avoid if you deal with them up front.
I am thinking very hard about the CAP theorem while working on a billing system for a cloud API right now, and it is an absolute joy. No, it won't deploy in version 1, 2, or 3, but it might in version 4, and if it does, it will be glorious.
You can find cool technical problems anywhere as long as you are willing to take the path less traveled.
> You can find cool technical problems anywhere as long as you are willing to take the path less traveled.
After doing that a few times, I'm no longer sure if the reward of tackling cool problems to create more robust, better, faster components is worth the stress of missing deadlines.
Looking back on how much of that better work's value actually materializes and how much is, per YAGNI, usually wasted, I just had a thought: perhaps the right way is to take the easy/dumb way and focus all available time/effort on optimizing it for performance, instead of for abstraction and extensibility. Because in my experience, nobody ever extends the code the way you envisioned. If they do it at all, they do it by first refactoring it to suit their own idea. And nobody ever goes back to fix performance. Therefore, making things abstract and extensible is mostly wasted work, but making things fast pays back for as long as the code is in use.
I am doing this for a particular purpose, though, that no billing system I have seen has. I hate metered API billing, and I don't want to use it. In particular, I don't want a customer running up a $10,000 bill and calling me for a refund. It probably will cost me O($1,000) to give them a refund, between the processing fees and the lost compute, and I will be out that money. Most companies that would ask for a refund also won't pay the bill when it arrives (which is, I think, why AWS is so liberal with refunds).
Instead, I want to do credit-based billing: you buy credits, and when you get to 0 credits, you are cut off (with an auto-refill option for the "metered billing experience," but with strict spending limits). This is, in my opinion, a much better UX than metered billing. From a distributed systems perspective, it's isomorphic to "metered billing with a hard spending cap," which may be the ultimate version.
The problem with credit-based billing (and why nobody does it) is that if you have a service in multiple datacenters, you have to consistently update a database to make sure that you don't drop below 0 credits, and that is very slow. However, by fiddling with the CAP theorem the way CockroachDB/Spanner do, I think we can do credit-based billing and make it feel like the AP system that metered billing is, and behave like a CP system only when we absolutely need to.
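The comment doesn't spell out the mechanism, but one common shape for "AP almost all the time, CP only when it matters" is a per-region credit lease: each datacenter pre-reserves a slice of the balance it can burn down locally, and only touches the strongly consistent store when its lease runs dry. A rough Go sketch of that idea; ConsistentStore, ReserveLease, and the refill sizing are illustrative inventions, not the commenter's actual design:

    package billing

    import (
        "errors"
        "sync"
    )

    // ConsistentStore stands in for a strongly consistent database
    // (CockroachDB/Spanner-style). Hypothetical interface.
    type ConsistentStore interface {
        // ReserveLease atomically moves up to n credits from the global
        // balance into this region's local lease and returns what it got.
        ReserveLease(account string, n int64) (int64, error)
    }

    // Meter spends credits from a local, uncoordinated lease (the fast,
    // AP-feeling path) and only goes through the consistent store when
    // the lease is exhausted (the CP path).
    type Meter struct {
        mu     sync.Mutex
        store  ConsistentStore
        lease  map[string]int64 // credits this region may spend freely
        refill int64            // extra headroom to request per round trip
    }

    var ErrInsufficientCredits = errors.New("insufficient credits")

    func NewMeter(store ConsistentStore, refill int64) *Meter {
        return &Meter{store: store, lease: make(map[string]int64), refill: refill}
    }

    func (m *Meter) Charge(account string, cost int64) error {
        m.mu.Lock()
        defer m.mu.Unlock()

        // Fast path: spend from the local lease, no cross-DC round trip.
        if m.lease[account] >= cost {
            m.lease[account] -= cost
            return nil
        }

        // Slow path: near zero, go consistent. This is the only point
        // that pays for a strongly consistent write.
        got, err := m.store.ReserveLease(account, cost+m.refill)
        if err != nil {
            return err
        }
        m.lease[account] += got
        if m.lease[account] < cost {
            return ErrInsufficientCredits // hard cutoff at zero
        }
        m.lease[account] -= cost
        return nil
    }

The catch is that credits leased to one region are invisible to the others, so the hard zero is only as exact as your lease reclamation; that's where the consistent store and the precise clocks mentioned below would earn their keep.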
Also, this is basically all enabled by the fact that AWS has a precise time service.
In theory, version 1 of the API will be only in one region, version 2 will have active/passive redundancy, and around version 3 or 4 I want to switch to active/active in several DCs to give low latency and high reliability.
> I hate metered API billing, and I don't want to use it.
> Instead, I want to do credit-based billing: you buy credits, and when you get to 0 credits, you are cut off (with an auto-refill option for the "metered billing experience," but with strict spending limits). This is, in my opinion, a much better UX than metered billing.
How do your customers feel about that? Have you researched if your customers are comfortable spending their money upfront to buy credits they might only use much later (if ever)? For small amounts of money that might not be a big deal, but if we're talking about thousands of dollars that might look different.
Going one step further, challenge yourself not to use raw language constructs, but dive deep into assembly.
Need a conditional? Time to jump around.
I wrote a web server that scales to over 1 million requests per second, and I found out it's much more maintainable, scalable and environmentally friendly for our company.
That sounds like it was a lot of fun but I'm wondering what assembly gave you that was not available in e.g. C? My understanding is that most compilers can out-optimise the average developer, so are you an above-average developer (well I guess you are) or did assembly enable you to do something that was difficult in a higher level language? Or was it more about the challenge (which I'm totally on board with by the way)?
> My understanding is that most compilers can out-optimise the average developer
Do you happen to know where (what source) you got that from? I'm genuinely curious, as to my knowledge, compilers are generally still easily fooled by things that prevent them from vectorizing code or factoring out conditional jumps.
To give an example here, see Mike Acton's talk below (from about 43:10 onwards), in which he describes the compiler failures.
No specific source I could point to really, just that I've been hanging around message boards like this for a long time. I didn't mean to say it's impossible to beat a compiler or they don't have any blind spots, but thanks for the link - sounds interesting :-)
Challenge yourself to not even use the stdlib once in a while. There are some interesting insights to glean about how much room for improvement we have even at the very bottom.
https://youtu.be/BrBb0mqoIAc
I wrote entire systems with Turbo Pascal, and then Delphi, out of the box. Many others did the same with Visual Basic 6, or the Microsoft Office 2000 suite with VBA, before the .NET infection took hold and Microsoft lost its mind.
A side effect of this is that almost all job advertisements are disgusting to look at. They are all about this kind of mindless glue-code programming, but wrapped in marketing speak to make it look like "you get to use awesome bleeding-edge technologies" when in reality it is "you have to figure out how to configure 10 different things to work together to sort of kind of produce the intended behavior".
In the last 3 years I don't think I've seen even one job description on any popular job board advertising that you will do some actually interesting programming. The only ones I've seen have been on Twitter, and from companies doing things in areas I have no experience in (e.g. game engine programming).
I suspect that this is why leetcode tests are so prevalent.
They basically test for distance from school, and not much else, as the algorithms aren’t really reflective of real-world work, which, as the article states, is really fairly simple “glue,” binding together prefab sections.
If someone is good at, and energized by, writing “from scratch,” and "learning the whole system," then they are actually not what you want. You want people that are good at rote, can learn fairly shallow APIs quickly, and are incurious as to why things work.
I have exactly the same problem... I got sucked into "the cloud" 4-5 years ago at my current employer. Now I desperately want to get another job, something with preferably no or minimal cloud involved. The trouble is that the jobs that sound interesting don't reflect my expertise...
Now should I try to start from 0 with a junior salary? Does not make sense with a family.
I don't really have an idea yet... but I urgently need to change something, because my current work is killing everything I ever felt for software development.
Or, just don't work in webapps. Get into embedded programming. Or join a games studio.
I have a friend who is writing code to run on a sort of exoskeleton meant to benefit disabled people and help them walk. He has never in his life "deployed to the cloud" and wouldn't have the foggiest idea of how to do it.
You know that all GPS transmitters are in space, right? GPS is a unidirectional technology where GPS receivers don't (and can't) talk back to GPS satellites in any way.
If the caretaker wants to get the coordinates back, then you have to go through a server at some point. I think GP was thinking along the lines of something like Find My iPhone, where the GPS coordinates are sent to the cloud. You will need a mobile baseband radio alongside the GPS receiver.
Yeah, you are right of course. I was thinking of a GPS receiver with a mobile connection to send data to a central server, and it turned into ‘GPS transmitter’ :)
That sounds nice, but how do you get into embedded if your experience is 5 years, 10 years or more in, say, distributed systems/cloud/web programming/etc.?
Easy - just apply, and mention that you have experience, but also that you have a reference from this site. This site has the greatest minds in terms of embedded experience, and any company worth working for will instantly know to give your application a closer look.
Yes. Whenever I work on a "serverless" app, I spend more time messing around with IaC tools like Terraform than I do writing actual application code. It's sad.
>Heck, you could be working on a hardware circuit, but if the only thing you had to do was make sure the right wires, resistors, capacitors, etc. were in place between the chips, you're still just doing plumbing work.
A lot of modern hardware design feels like that: take a microcontroller and some peripheral chips, connect them together, and copy the datasheets for whatever support passives they need.
I came to this same conclusion last week when I started writing my own WebGPU renderer. I went into it with no knowledge of graphics and without using libraries. Having to create my own generic abstractions for pipelines, passes, and buffers has been a massive creative and educational experience.
I haven't felt this satisfaction from programming in years from my day job.