I've been working in the tech industry in the US for about 5 years. For as long as I can remember I've been coding. From middle school to high school, given any problem, like Sudoku or keeping up with daily chores, my solution was programming! Programming was my home base. Then I studied it in uni, thought I was kinda good at it, and loved it.
But when I started working in the industry, I realized that it's absolutely exhausting. Hype after hype, fad after fad, modern after modern, refactor after refactor. I have a workflow, I know how to build apps. Then one day the director of Ops comes and completely and utterly changes the workflow. Ok fine, I'm young, will learn this. A month passes, it is now Terraform. Ok fine, I'm young, will learn this. Now we're serverless. Ok fine, will learn. Now everything is containers. Ok. Now everything is microservices. K. Now it turns out lambdas aren't good, so everything is ECS. OK, will rewrite everything...
Look, I'm not even complaining. But it feels like I'm stuck in a Franz Kafka novel. We just keep changing and changing the same things again and again because that's the new way to do it. Big distraction. Destroys your workflow. And all the util scripts you wrote over the last 6 months are now useless.
I don't even know how I would do it. Maybe I would do this the same way if I had any power. But that doesn't change the fact that it's a bit ridiculous. Fun but tiring. Entertaining but exhausting. Cute but frustrating.
To give another perspective, at my first job deploying meant sshing into production and executing a make script. It worked 98% of the time, and you'd be having fun the other 2%. It sure wasn't the state of the art, but it wasn't something people would yell at you for.
We needed Windows machines because of corporate policy and remoted into a Solaris box to dev on, as a vague replica of production.
Fast forward to today, and we can run a production replica on our local machines at home, the CPU architectures aren't even the same, and code will be deployed through the CI automatically after the tests all pass.
Sure not all changes are meaningful, but in the grand scheme of things the field is moving forward in spades. I certainly wouldn't want to go back a few decades.
A decade ago you could deploy an internal PHP app to a box and forget about it for a few years, then fix a minor bug by sshing in, viming a file, and hacking it a few times.
These days, if you leave the app untouched for six months that 10 minute fix becomes 2 hours of fixing a broken docker build. :)
In the meantime, your PHP app had 40 major security flaws, no meaningful monitoring, a DB that wasn't backed up and major data consistency problems. Also, when your box's hdd crashed, you lost all those minor changes you had vim'ed over the years. But for the rest, yeah, all is fine.
> In the meantime, your PHP app had 40 major security flaws, no meaningful monitoring,
How does Kubernetes, microservices and front end frameworks fix that?
They don't. In fact, as someone who works at the more modern end of these things, I'd suspect that "no meaningful monitoring" probably correlates quite well with cloud era microservices thingies.
But we can't blame the tools, that space is just younger and by nature contains more immature stuff.
I don't think the argument here is for keeping things out of version control and monitoring. It's that you can actually leave stuff alone for several years, come back and find it working just as before, when the layers underneath don't constantly change. That can be liberating.
* k8s helps because you keep your application online during rolling updates. You keep your changes relatively small and can test your changes in isolation.
* microservices help security because for each service you write, you create a minimal surface area (as opposed to monolithic applications, where every method has access to every dependency).
* microservices also help monitoring, because failures can be contained and one application does (should) not impact another. This means that your focus of monitoring is on application performance, and less on the underlying system architecture.
* monitoring can be a cross-cutting concern. For example, you can monitor all HTTP requests by instrumenting your ingress layer. Similarly, you can support a unified authorisation scheme by bringing this to the ingress layer. Even more, using microservices, you can independently promote services to production (sort of like feature toggles) or do green/blue promotion.
* security of your microservices is improved, because you can independently upgrade your dependencies per service. Many older monoliths are 'stuck' in a certain dependency configuration and cannot be upgraded without a 'big bang' of expensive dependency resolution and testing.
I could give many more examples, but I hope this captures the spirit a little.
As a "not a cloud guy" who only recently got heavily exposed to it.. I found the state of the art / CloudOps attitudes towards monitoring astonishingly Stone Age.
Yeah, observability is great, but no one I've seen in cloud manages to launch with it. So in place of an SRE org with a full observability stack, they launch with effectively nothing.
Strong attitudes of "we don't measure hardware metrics like cpu/memory/disk, we measure customer facing stuff like throughput/error rates etc". Sounds good, but what do you think happens to those if you run out of disk bro? Pretty sure the running out of disk happens first. Imagine if you could like.. catch it?
CPU/Memory: scale horizontally when needed. Monitor cost.
Disk: essentially limitless. If disk of VM runs out, node will crash, new node will start. Service should keep running on other nodes meanwhile. If restarts happen too often, throughput/error rate will suffer or cost will rise.
Modern mentality is "let it crash" [0]. Software has bugs. Design so it can crash and scale depending on need.
Letting an Erlang process crash means letting go of a process that holds a small collection of resources, maybe a single TCP connection and several kilobytes of local state. It does not necessarily scale beyond that. Pretty much by definition, if you've got something that can run out of disk, when it runs out of disk and you nuke it, you're taking out a lot more than a single connection and a few kilobytes of state.
And "let it crash" is hardly "My service is invincible!" I'm sure any serious Erlang deployment has seen what I've seen as well, which is a sufficiently buggy process that crashes often enough that the system never gets into a stable state of handling requests. "Let it crash" is hardly license to write crap code and let the supervisor pick up the pieces. It's a great backstop, it's a shitty foundation.
I built a distributed system once on two principles: 1. let it crash, 2. make everything deterministic. Obviously, this resulted in crashes being either invisible and transient (good) or an infinite crash loop (bad).
I haven't used Erlang, but my impression is that it's probably the same experience there?
The way it builds on immutability means it naturally leans in that direction, but the fact that it's so often used for networking undoes that, because network communication is by its nature not deterministic in the sense you mean.
In my case, IIRC, it was something to the effect of: a lot of our old clients out in the field connected with a field that crashed the handler. Enough of these were connecting that the supervisor was flipping out and restarting things (even working things) because it thought there was a systemic issue. (I mean, it was right, though I had it configured to be a bit too aggressive in its response.) The fact that I could let things crash didn't rescue the fact that my system, even if I fixed that config, would strain its resources constantly doing TLS negotiations and then immediately crashing out what were supposed to be long-term connections.
Obviously, the problem was in the test suite, we were able to roll back, address the problem, ultimately this was a blip not a catastrophe. I just cite it as a case where "let it crash" didn't save me, because the thing that was crashing was just too broken. You still have to write good code in Erlang. It may survive not having "great" code but it's not a license to not care.
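To make the "configured a bit too aggressive" part concrete, here's a minimal hedged Elixir sketch (module names and numbers are made up, not from the incident above) of the restart-intensity knobs an OTP supervisor exposes:

```elixir
defmodule MyApp.HandlerSupervisor do
  use Supervisor

  def start_link(_arg), do: Supervisor.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok) do
    children = [
      # Hypothetical connection-handler worker; a crash here is restarted in isolation.
      {MyApp.ConnectionHandler, []}
    ]

    # If more than max_restarts crashes happen within max_seconds, the supervisor
    # gives up, terminates its remaining children and exits itself, escalating the
    # failure to its own parent -- which is exactly the "flipping out" behaviour
    # you tune with these two numbers.
    Supervisor.init(children, strategy: :one_for_one, max_restarts: 3, max_seconds: 5)
  end
end
```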
Using Taleb's nomenclature, let it crash is not anti-fragile at run time. Erlang does not get progressively better at holding your code together the longer it crashes. It is only resilient and robust. Which is ahead of a lot of environments, but that's all.
Many software development processes considered as a whole are anti-fragile... mine certainly is, which is a great deal of why I love automated testing so much (I often phrase it as "it gives me monotonic forward progress" but "it gives me antifragility" is a reasonably close spin on it too)... but that's not unique to Erlang nor does Erlang have anything special to help with it as everything Erlang has for robustness is focused at run time. You can have anti-fragile development processes with any language. (The fact that I successfully left Erlang for another language entirely is also testament to that. I had to replace Erlang's robustness but I didn't have to replace its anti-fragility, since it didn't have it particularly.)
Anti-fragility is just a fancy name for 'ability to learn'. The Erlang error handling philosophy enables learning by keeping things simple and transparent. It's easy to see that some component keeps failing; it doesn't bring your whole app down and you can look into it and improve it. Adding tonnes of third-party machinery may be robust or even resilient, but if it makes things more opaque or demands bigger teams of deeper specialists, it precludes learning. Thus it is not 'anti-fragile'. You can keep your ability to learn healthy without Erlang, and you can use Erlang without learning much over time.
This is an adequate philosophy for like.. a CRUD app, some freemium SaaS, social media, etc. Stuff with millions of users and billions of sessions, etc.
However there are industries applying these lessons in HPC / data analytics / things that touch money live .. operating on scales of users in the 10s to maybe 100s. So stuff where downtime is far more costly both in dollars and reputation.
I'm also intrigued by the constant cloud refrain of "stuff crashes all the time so just expect it to" coming from a background where I have apps that run without crash for 6 months at a time, or essentially until the next release.
I'm all for scaling, recovery, etc.. I just fail to understand why it is desirable for this to be an OR rather than an AND.
What if stuff was highly recoverable and scalable but also.. we just didn't run out of disk needlessly?
> I'm also intrigued by the constant cloud refrain of "stuff crashes all the time so just expect it to" coming from a background where I have apps that run without crash for 6 months at a time, or essentially until the next release.
IMHO, those aren't mutually exclusive. Your app code should be robust enough to run 6+ months at a time, and the "stuff crashes all the time so just expect it to" attitude should be reserved for stuff outside your control, like hardware failures.
Right, which is why I think brushing aside actually monitoring basic hardware stats that are leading indicators of error rates / API issues / etc makes no sense.
How is that better than a simple monitor/alert for low disk space? That low disk space is likely caused by an application storing too much cumulative data in log files or temporary caches etc., and is often easy enough to fix. And many applications out there simply don't need the level of scalability and extra robustness where you can still expect decent levels of service in the immediate aftermath of having one node go down. Certainly from my experience it's less work (and cost) to put measures in place to minimise the chances of a fatal crash than it is to ensure the whole environment functions smoothly even if parts of it do crash regularly. I'd also note we can be grateful that the developers of OSes, web servers, VMs and database servers don't subscribe to "let it crash"!
It looks like you misunderstood the article. "Let it crash" in the BEAM VM world pertains to a single green thread / fiber (confusingly called "process" in Erlang).
It pertains to e.g. a single database connection, a single HTTP request, etc. If something crashes there your APM reports it and Erlang's unique runtime continues unfazed. It's a property of the BEAM VM that no other runtime possesses.
"Let it crash" is in fact "break your app's runtime logic to many small pieces each of which is independent and an error in any single one does not impact the others".
Scaling an Erlang node is very rarely the solution unless you literally run out of system resources.
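If it helps to see it, here's a tiny hedged Elixir sketch (Elixir runs on the BEAM; the module names are made up) of what "many small pieces, each independent" looks like in practice: one process per unit of work, supervised so a crash in one doesn't touch the others:

```elixir
defmodule Demo.Worker do
  use GenServer

  def start_link(arg), do: GenServer.start_link(__MODULE__, arg)

  @impl true
  def init(_arg), do: {:ok, []}

  @impl true
  def handle_cast({:work, input}, state) do
    # A non-numeric input raises here and crashes only this process (a few KB
    # of state); the supervisor restarts it while every other worker, and the
    # rest of the VM, keeps running.
    {:noreply, [input * 2 | state]}
  end
end

defmodule Demo.Supervisor do
  use Supervisor

  def start_link(_), do: Supervisor.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok) do
    # :one_for_one restarts only the child that crashed, in isolation.
    Supervisor.init([{Demo.Worker, []}], strategy: :one_for_one)
  end
end
```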
I understood the article just fine. The "Let It Crash" philosophy is scale invariant.
Please read the last three paragraphs in [0]: "a well-designed application is a hierarchy of supervisor and worker processes" and "It handles what makes sense, and allows itself to terminate with an error ("crash") in other cases."
I've personally designed and co-implemented mission critical real-time logistics systems which dealt with tens of thousands of events per second, with hundreds of different microservices deployed on a cluster of 14 heavy nodes. Highly complex logic. At first we were baby-sitting the system, until it became resilient by itself. Stuff crashed all the time. Functional logic was still intact. Then we had true silence for months on our pager alerts.
I call it anti-fragile and Taleb is right. You can't make a system resilient if you don't allow it to fail.
Is that so different to how Java+Tomcat or .NET+IIS work?
A crash handling one request generally can't/doesn't affect the ability to handle other requests. Unfortunately it does often mean you have limited control over how the end-user perceives that one failed request.
It is only the same when you observe the visible results and your APM, and nowhere else. The stacks you mention -- and many others -- engage much more system resources per request compared to the BEAM VM. I have personally achieved 5000 req/s on a 160 EUR refurbished laptop with a Celeron J CPU (a pretty anemic one, you know) in my local network, by bombarding an Elixir/Phoenix web app (Elixir runs on top of Erlang, if you did not know), and that's without even trying to use a cache.
RE: error handling, Elixir apps I coded and maintained never had a problem. Everything was 100% transparent which is another huge plus.
In general CGI and PHP had the right idea from the start but the actual implementations were (maybe still are? no idea) subpar.
Erlang's runtime is of course nowhere near what you will get with Rust and very careful programming with the tokio async runtime, but it's the best out there in the land of the dynamic languages. I've tried 10+ languages which is of course not technically exhaustive but I was finally satisfied with making super parallel _and_ resilient workloads when I tried Elixir.
For a lot of tasks I can just code in the Elixir REPL and crank out a solution in an hour, including copy-pasting from the REPL and into an actual program with tests. Same task in Rust took me a week, just saying (though I am still not a senior in Rust; that's absolutely a factor as well) and about 3/4 of a day in Golang.
The only other small-friction no-BS language and ecosystem that came kinda close for me is Golang. I like that one a lot but you have to be very careful when you do true parallelism or you have to reach for a good number of libraries to avoid several sadly very common footguns.
And I would add -
All the GOOD cloud people I know are GOOD linux people. It is a prerequisite.
However, there are lots of cloud people who don't know linux.
Therein lies the challenge.
Too many of these cloud devs forget that whether it's "server less" or not, there's a .. server, somewhere.
All the layers of abstraction work when they work, and when they don't .. best of luck figuring out what's going on in a timely fashion. Hope you aren't running any money on top of it.
If you forget about a PHP container for a few years it will /also/ have 40 new vulnerabilities. Actually, containers are worse because OS updates of core shared libraries do nothing. You have to rebuild every damn container.
Setting up monitoring for your docker containers is also a whole thing. :)
I think you’re taking my example a little too literally. My point is not that docker/k8s/whatever is bad; just that the ‘new’ adds features at the cost of simplicity.
I can say with great certainty: Almost no one rebuilds their damn container anywhere near as often as the gray-beard in the basement updates the Debian packages on the server that runs the container.
I had this debate with a client; we did monthly security updates, unless something horrible happened. The client was rather upset that we didn't patch more frequently, like weekly or daily. My argument was that it doesn't really matter if the Linux kernel or bash is patched, when the only thing running is a container with a beta version of Tomcat that hasn't had security updates applied in three years.
Even worse are the people who just pull things from Docker Hub, with no plan as to how and when they'll pull newer versions. But fine, let's just keep running KeyCloak from 2017, and that old Postgresql image which the developer never configured to do backups, I'm sure it's fine.
> I can say with great certainty: Almost no one rebuilds their damn container anywhere near as often as the gray-beard in the basement updates the Debian packages on the server that runs the container.
Most probably the gray-beard simply enabled "unattended-upgrades".
You can do something similar with a container (track the security fixes for the packages used, force rebuild and deploy when needed), but it is a bit of work and I don't know of any ready solution.
I think what people are missing is that sure, code "rots", at the very least because of security patches. But since this all happens to everything simultaneously, the more distinct layers and support tools you have in your stack, the more often you have to deal with something breaking in a nontrivial way.
In short: the more moving parts you have, the less time you have between major malfunctions.
(Manufacturing and hardware world understands it well, which is a big part in why they like integrating things so much.)
So e.g. over in the backend-land where I live, it used to be that I had to occasionally update the compiler or one of the few third-party dependencies that I used. Today, I have many more libraries (to the point there's something to update for security reasons roughly once a month, on average), and on top of that, I have CI/CD introducing its own mess, Conan updates which occasionally get messed up, or make some existing recipes incompatible, CMake updates which are done unexpectedly and break stuff, now also Docker is adding more of its own problems, etc. So I have to deal with some kind of tooling breakage every other week now.
And always, always, when I think it's all finally OK and I can get on with my actual job, some forgotten or hidden component craps itself out of the blue. Like that time our git precommit hooks broke for me, because someone changed them in a way that doesn't work with my setup. And then me wasting a day on trying to fix it, eventually giving up and degrading my setup to unblock myself. Or another day where, for no apparent reason, some automation that made automated commits to some git repos started losing Change-ID headers in commit messages, making Gerrit very sad, leading to several people wasting a total of several person-days trying to fix it. Etc.
There's always something breaking, the frequency of such breakages seems to be increasing, and it's a major source of frustration for me in this job. Which is why I too am often thinking back to "good old days", and am increasingly in favor of keeping the amount of dependencies - both libraries and tooling - to a minimum.
Right, and it's easy to think about all the wins when things go right - it's harder to think about the increased frequency and cost of failure due to the increase in complexity and number of independently moving parts.
The biggest thing with containers is, these 40 vulnerabilities won't matter as much if they're about erasing your directories or killing your machine. It will get rebooted in a pristine state and an attacker would need to stick there killing it at every reboot to have lasting effect.
Which is also the point of monitoring a container, which is fairly reliable nowadays, compared to managing your own health check service to ping your box to check if it's alive.
All in all, I think k8s is way too complicated for what it does for most people, but administrating servers was also a complex task to begin with, and the things sys admins were dealing with looked nightmarish to me. Heck, there was a time companies would have their own SMTP in house...
Layering more abstractions on top of what was already complicated doesn’t fix the complications below, it just hides them. You then have to hope that there’s not a bug somewhere in between those layers. Additionally, each new layer has a cost in performance. More layers means more compute, and eventually mean more money. If you’re Netflix, it’s worth it, if you’re not… what admins dealt with before wasn’t so bad. Now that that job has largely been killed and renamed 20 times, it’s far worse. Admins don’t just have to deal with a server, they have to deal with a server and 60 containers which are just little baby servers.
> won't matter as much if they're about erasing your directories or killing your machine
Those sort of attacks have tapered off though, right? Unless you're engaged in a specific feud, or have caught the ire of a social justice warrior, these days we're mostly looking at data exfiltration as the primary goal. That being said, there are a lot of ongoing feuds and active social justice warriors.
If we're talking about common, automated attacks, I think the game plan for the past few years has been to probe for systems vulnerable to RCE, install a crypto miner, join the server to a botnet and see how long it takes its owner to notice.
Now read those flaws, compare them to the worst in 2004 and tell me they're the same. They're not the same, because the internet was insecure as fuck in 2004, and these days security researchers' motivations to receive bounties result in considerably more situational (and significantly less severe) security issues.
Shiny new app has no monitoring, 40 security flaws, and an ultra-modern cloud database that promotes data consistency issues. But all the devs you can hire act like they might know something about this mess. They really don't.
As an admin of PHP apps at that time I can sincerely say screw you, you're lying. PHP apps breaking because new version of PHP shat on compatibility was pretty common, and weirdly enough got more common in newer versions.
You also had to keep unholy combo of php extensions (why that garbage is in app server and not app) and server config because the special little retard that designed the app decided he must do something different.
And forget about ever leaving apache, if it doesn't use some apache module to fix PHP/developer ineptitude it will at least have half of the request routing logic in .htaccess file.
Haha yeah I can say, with the same sincerity, that I get occasional calls about fixing PHP apps that I built 2007-2013. They've been running (without upgrades) on some forgotten server in a multinational corporation.
I tell them that they should let their compliance department know that the version of PHP isn't even supported, and I list some of the thousand security fixes that they aren't using.
I usually end up VPN/FTP:ing into a forgotten era to add the fixes and I definitely don't want to go back.
I had this happen about a year and a half ago with an application I wrote in 2009. Finally got them to allow me to rewrite everything in C#, allowing me to keep up to date on security vulnerabilities and performance improvements, without having to do much but run through a series of tests when a NuGet package or .NET version upgrade is released. NuGet with current .NET is a far different beast now, than when you used .NET Framework and plenty of XML assembly binding.
>then fix a minor bug by sshing in, viming a file, and hacking it a few times.
Yeah, screw version control! Who needs any record of the change you made? If you have to move the code or redeploy it somewhere because that server goes down or something else changes, _real_ 10x rockstar ninjas remember every change they've made for the last several years.
Not to mention the paper trail needed for any meaningfully regulated industry. The thought of someone just ssh'ing into an application with access to PII or HIPAA regulated data and being able to tweak the production code is nightmare fuel.
>Fast forward to today, and we can run a production replica on our local machines at home, the CPU architectures aren't even the same, and code will be deployed through the CI automatically after the tests all pass.
And somehow we squandered most of the CPU performance increases in the process, and the super-duper architecture/CI/delivery pipeline fails almost the same percentage of the time, but needs 10x the people to keep it running...
CI and delivery failing 2% of the time just means you haven't deployed and you just retry. At least you're not poking things in prod.
I'm with you on the CPU gains going down the drain and 10x more people working on it, but it's also a sign that we're taking IT operations more seriously and there's enough money to cover these kinds of inefficiencies.
I am in the exact opposite, but equally frustrating, boat.
In the last 5 years, I have worked at 2 FAANG companies and a large hedge fund, and everywhere they had an extremely outdated stack, an undocumented in-house stack, or an outdated undocumented in-house stack. Already-deprecated dependencies were pinned years ago and management won't let you modernize anything since there's "no business value in it" and we need to ship a feature instead, and if you still insist on refactoring and modernizing and something goes wrong, now you are personally responsible for the outage.
The effort you spent successfully upgrading decades old stack won't count in your promo packet.
There is just no incentive to upgrade anything beyond security fixes.
Rewrite to chase the latest fad is frustrating but using technologies which don't even have online documentation anymore is equally if not more frustrating.
I was stuck maintaining a solution using an abandonware internal transport layer built on technology that died in 2004 and an abandoned ORM on an EAV schema database over SOAP backend in 2022 with much the same "no business value in it" argument. "We're trying to sell the company so profit is of the most importance" transitioned into "we bought you because you make money". So instead we increased our maintenance exposure by adding more necessary features to the wobbly pile (or 'fixing' broken ones) as opposed to resolving the mistakes made over twenty years ago.
Of course the dickhead architect would use articles like this OP as justification for why all of this wasn't a problem.
The way that I have avoided burnout for all these years is working on my own projects, and building them exactly the way I want to. This gives me the strength to grind through the shitty work I am paid to do.
Maintaining old stuff is way more sensible than rewriting everything every few years. Companies tend to measure costs of maintenance vs rewrite.
Edit: there's always a tradeoff but I wrote "every few years".
Cobol is an odd example. Many banks have been going back to Cobol because they found it less expensive than the alternatives, and there is an influx of new graduates learning the language right now.
Sensible for who? Companies may try to estimate costs of maintenance vs rewrite, but individuals need to make the final call and for them the benefits usually go the other way.
Maintaining some existing project is going to get you nothing if you do it right and will bring you problems if it breaks despite your best efforts. Rewriting an existing system in a new tech stack counts as exciting innovative work for promotion purposes, or at least prepares your CV for your next job hop. With salary bumps between levels being what they are in software, I'm not surprised people choose the more lucrative option.
Up to a point; there's always a tradeoff and a turnover point. At the moment the old Cobol mainframes are one of those: the tradeoff is that there's a lot of institutional knowledge and essential processes in that code, but on the other hand it's harder to find competent staff that can - and want to - work with it.
news flash: You are working in exactly the same place.. it is just 5 years later :-) All that outdated technology you mention, was probably introduced as the newest-cant-live-without at some earlier time.
On the opposite end of the spectrum, if your stack is too old (and niche enough) then you won't get to know it very well once every bit of documentation was hosted on an expired domain name, and the only bits of online discussion you can find about it are Internet Archive snapshots of a forum long since dead.
And you also don't get to integrate it very well if everything around it isn't maintained, and is only compatible with an old version of a library (with said documentation not existing anymore).
Also, while resume-driven development is not a good thing either, it's certainly not a good investment to pigeonhole your career into working with something you hate working with.
But remember to use the right tool for the right job - older isn't always better. E.g. there is a set of people who use stored procedures for everything, and it does make sense when you are doing data-heavy processing on the database side. But using that for everything, even when the volume is absurdly low, is a bad idea - using a language like Java would be a million times more flexible, composable, testable (unit tests, anyone?) and allow making changes with greater confidence.
Long enough for the rough edges to be softened, to show that people are willing to maintain it, for detractors to have pointed out legitimate issues, for the tech to become stable.
I’d prefer 10 years personally, but I value stability more than most. I dislike revisiting problems I’ve already solved.
It's a weird tradeoff between innovation vs reliability, investment vs cost, but also staff retention & hiring. I think a lot of companies follow fads as a way to attract talent. It's not in their best interest to follow the fads or spend time rebuilding the same thing over and over, but otherwise they'd run out of competent staff and be left with low-energy / "safe" developers.
In both of your cases, I think there is this massive disconnect in decision making. Software development is a constant game of economic decision making. Which investment returns the greatest value. Agile worked because it spread risk and allowed for reaction to new opportunities.
CICD/devops worked because it drastically reduced the risk & expense of defects, the cost of development, time to market, etc.
But the chasing of new technology fads vs lagging behind with old outdated technologies? I am seeing people make decisions without thinking about the decision.
What economic benefit does the new tech bring? What's the economic trade-off of implementing it vs other projects? How long till a return on investment? What's the cost if we do nothing?
It isn't even that hard to do, but people just keep deciding to spend time and resources with little to no thought. It infuriates me.
You'll consistently notice obviously idiotic decisions rooted at decision makers who aren't around to own up to the mistake.
The other end has decision makers who never change: they waited too long, were too risk averse, stagnated, and the cost of change just keeps rising.
The ones who end up surviving this churn have both kinds of decision makers in equal power; mistakes are made, but things keep moving, sometimes in leaps, sometimes in backward steps.
Congratulations. It sounds as if you are working for a company that knows what it is doing. They are clearly focusing on customer value instead of chasing the latest shiny, buggy, gone-tomorrow "new" tech, giving you an opportunity to focus on delivering value to customers in a stable environment, and making each delivery higher quality and less buggy than the previous.
I work in such an environment and routinely deliver million+ line applications with ZERO bugs in production to very large customers around the world. Highly satisfying.
Not my observation sadly. We've got some excellent 10+ years old tech -- Golang, Rust, Elixir -- yet a lot of companies out there hold on to C#, PHP and Java with a death grip.
And I've worked with all of these. The newer stuff is better in every way except penetration (which is held back by bias and misguided evaluations of the word "safe").
I would, when it comes to parallelism. Java being stuck at OS threads (I know about Loom but it's nowhere near the BEAM VM or goroutines) is not doing it any favors.
PHP at least attempted some isolation between requests (emphasis on attempted) even though it didn't do it very well.
All that being said, people don't like seeing their favorite tech called out and the kneejerk down arrow presses are expected. Programmers are a tribal bunch.
Personally I've found my productivity multiplied with the above-mentioned more modern tech: Elixir, Rust and Golang. But I am also aware many companies would never risk it, and that there is no shortage of people who are OK coasting on old stacks if it pays the bills. Not judging.
Because they don't see technical debt as a bad thing. Go back to the original pitch for technical debt by Ward Cunningham. He was working for a finance company and used the debt metaphor to advocate in favor of "borrowing against the future" when implementing new code. The idea was that you could use the latest stuff without reinventing everything. Like mortgage payments, the reinvention could happen over time, in increments, until it's "paid off". The problem is that nowadays, especially in finance, you're dealing with people who are used to over leveraging everything. To them, the idea is to pile on debt until it becomes unmanageable: and then sell. Or go into Chapter 11.
> Look, I'm not even complaining. But it feels like I'm stuck in a Franz Kafka novel. We just keep changing and changing the same things again and again because that's the new way to do it. Big distraction. Destroys your workflow. And all the util scripts you wrote over the last 6 months are now useless.
Here's how I stay sane -- I focus on the changes. For the technologies that you noted, here are some good questions that come to mind to set them in my brain:
- What's the difference between CGI scripts and Serverless?
- What's the difference between Terraform and Ansible?
- What's the difference between a Bash script and a Makefile? Is one more general than the other? Does one have a considerably better interface than the other?
All the solutions look the same, but the distinctions between them (even at a relatively surface level) are what help to actually tell things apart and stop the feeling of endless churn.
I think of it this way -- unlike other fields, because the fruits of our research are immediately usable and almost zero-cost, we're given a front seat to the frontier of experimentation. It's a water hose, and you have to determine which streams are worth turning on/off, or bottling for later.
I would phrase almost the same idea from the opposite angle: focus on the fundamentals that never change, and view the trends in terms of how they relate to those fundamentals.
But also, those are some pretty odd comparisons. For sure Ansible and Terraform aren't directly comparable. If anything they're complementary. Terraform provisions machines (and also infra, etc), and Ansible configures provisioned machines.
> I would phrase almost the same idea from the opposite angle: focus on the fundamentals that never change, and view the trends in terms of how they relate to those fundamentals.
Definitely -- I'd say that these questions force the understanding of the fundamentals, because that's what you get down to ("fundamental"/irreconcilable differences).
The problem is that fundamentals can be similar, and sometimes the difference is elsewhere (UX/DX/perf for example).
> But also, those are some pretty odd comparisons.
So they are odd comparisons, in some sense -- but the point was to reflect the kind of questions someone who didn't know the answer would ask. Ansible and Terraform (and Salt/Puppet/Chef) are often presented in similar context and they're often confusing precisely because they are so close but different.
The other example (CGI vs Serverless) is a situation that's somewhat more targeted towards the age discussion -- did we churn over the years for churning's sake?
> For sure Ansible and Terraform aren't directly comparable. If anything they're complementary. Terraform provisions machines (and also infra, etc), and Ansible configures provisioned machines.
What you've said is the usual rebuttal (and it's mostly right!), but did you know that Ansible did/does provisioning?[0] The lines aren't drawn as neatly as they seem to be.
You can build a Terraform-like experience out of Ansible, if you wanted to, and it's important to know that and why you choose a tool like Terraform instead of traveling deep into Ansible land.
All that said, if you prefer, replace "Ansible" with "Pulumi" (Full Disclosure: I'm firmly on the Pulumi side in the Terraform vs Pulumi debate).
> What you've said is the usual rebuttal (and it's mostly right!), but did you know that Ansible did/does provisioning?[0] The lines aren't drawn as neatly as they seem to be.
In the same way that a toaster can be used as a hammer if you hit nails hard enough with it from the "correct" side. Ansible barely does its main purpose, configuration management, but abusing it for provisioning is just on a different level. Even overlooking the extremely narrow feature set, just `state: absent` should be enough to convince anyone Ansible just isn't made for provisioning. Add in the bolted-on templating of a configuration language that uses spaces for logic, plus the fun of Python dependencies of which you need a ton to do any provisioning, and you're just in for a massive world of pain for literally no reason.
Disclaimer: I work at HashiCorp, but not on or with Terraform, and have had a disdain for using YAML for anything other than basic configuration and Ansible for anything other than simple, basic, low complexity and low scale Configuration Management since I inherited an Ansible project to manage VMware configuration in a past job.
> Do you really not see any difference between Ansible and a shell script?
Surely they do, but how much those differences matter depends on where those tools sit in relation to what you're trying to achieve.
For example, I could write essays about the differences between bash and PowerShell, and about differences in productivity their various facets create, in both abstract and concrete terms. However, when my task is not about shell scripting per se, and the script plays only a minor role in the solution ("oh, but we can make the button launch a script that launches X"), and my main concern about that stage is to convince the team to use Bash and/or PWSH instead of making Python a dependency in the project - then bash and PWSH are really the same thing to me - any difference in productivity for that use case is dominated by one's familiarity with the tool, and none of it matters anyway if my co-worker succeeds slotting in Python for that use case.
Similarly, there are many differences between bare shell scripts and Ansible, and there are differences between Ansible and Puppet and Chef too. But they're also close to each other, so if your use case doesn't hinge on those differences, you can be excused for wondering why can't we just keep using a Makefile for this.
Context: I wrote deploy and configurations in Shell in 1999, for Employer A. I started writing Ansible in 2013, for Employer F.
In the middle I wrote puppet and chef. All doing basically the same stuff. And yes, obviously there's worlds of differences. Using Puppet killed my productivity, for example.
But my point was that those tools are more comparable than 'terraform' vs 'ansible'. I stand by that comment.
thanks for responding -- I do agree that ansible, puppet, and salt are the closer comparisons (they are of the same lineage -- I mentioned them in passing and kind of clumped them together).
See my response here[0] as well, but to summarize:
- There is more overlap than it seems on the surface (not implying that you are taking a surface view)
- What I wanted to get across was the way someone might ask if they were new/looked at it all as churn from the outside. The average dev who thinks "devops churns too fast" is not necessarily going to know the difference between ansible and terraform to begin with, never mind knowing that ansible/salt/puppet/chef are a different approach/lineage compared to terraform.
> And yes, obviously there's worlds of differences. Using Puppet killed my productivity, for example.
I never used Puppet -- it was love at first sight with Ansible for me, felt like the perfect amount of abstraction/structure even though some of the patterns were long in the tooth.
My career is not as long as yours, but I still use ansible to this day (with pulumi).
I mean, yeah? Ansible has roles, collections and a great inventory system. There are a lot of extra steps if you want to use a shell script instead, to e.g. install Docker on multiple hosts running different OS versions. I don't really like Ansible, but I think it's quite good for provisioning. It's also super easy to write filters and plugins for extra functionality.
I can see that Ansible with its do-it-yourself-AST-in-YAML is more tedious to write and even debug :-P No wonder they have shorthand versions of their module calls, too.
I'm still using it, though every piece of software lately, fad or not, is something I tend to endure or survive or cope with, rather than use or enjoy using. Little papercuts everywhere.
Hmmm, I feel like those tools naturally railroad you into a certain split of responsibilities.
Sure - you could use tf’s local/remote-exec to do the stuff you’d normally do with ansible but man that would be annoying and slow going when you know you can reach for ansible.
Vice versa, you could have an ansible playbook to call the cloud provider’s api and cook up some kind of approval workflow / state management but man that would be annoying and slow going when you know you can reach for terraform.
Do they integrate that well? I don't have much experience with either, and until this moment I understood them to be substitutes - but still, I thought that:
> you could have an ansible playbook to call the cloud provider’s api and cook up some kind of approval workflow / state management
Was generally what people did before Terraform, and they'd be reluctant to add another complex tool, significantly overlapping with the current complex tool, just to simplify a part that already works OK.
Conversely, if Terraform is really as amazing as one would believe reading HN, then some amount of "local/remote-exec" pain seems justified if it lets you avoid bringing in Ansible with all its complexity?
All these tools cost not just storage and processing, but also brain space. A Makefile in a cron, running some shell scripts, may be more brittle than whatever cookbooks or playbooks or ${what's the name for Terraform's flavor of makefiles}, but it also might be easier to fix when something breaks.
A great example of the narcissism of small differences [1]. Locally in the space of management tools, terraform and ansible are very different beasts, in that they can hardly even manage the same tasks. Globally, if you plot all software in the world on some big graph, they are right next to each other and indistinguishable on a plot that has "video games" and "databases" and "routing software" and "spacecraft code", etc.
> Terraform provisions machines (and also infra, etc), and Ansible configures provisioned machines
Ansible can provision virtual machines as well, and from my experience it is better suited for it too, since it doesn't save state outside the system. So there's lesser chance for it to get out of sync.
It's mostly culture. People these days learn GKE or Kubernetes first, and their tutorials mention Terraform so that's what they'll use. It used to be managed VMs and that culture was around Puppet, Ansible or Salt.
> Ansible can provision virtual machines as well, and from my experience it is better suited for it too, since it doesn't save state outside the system. So there's lesser chance for it to get out of sync.
State serves a purpose, it's not just there for fun. Ansible will lose track of your VM if you rename it (and can barely handle you renaming it with itself), and won't delete it if you just delete it from the configuration file.
> State serves a purpose, it's not just there for fun
The system has state. The question is how we want to manage it. The design in this case chooses to optimize to lessen the impact of errors. All real world systems are prone to errors in the long run.
If real world parables are excused: If the map (serialized state) and terrain (system state) differs, is it a good idea to insist on following the map?
> Ansible will lose track of your VM if you rename it
Sure, but so will Terraform. Or worse.
> won't delete it if you just delete it from the configuration
Sure it can. There are no limitations there. The only questions are whether you want it to, and how you want it to.
Ansible is a superset of Terraform. It's mostly a matter of culture which tool people choose, and which you think is the easiest to use.
> If the map (serialized state) and terrain (system state) differs, is it a good idea to insist on following the map?
If you're doing landscaping and the map is your requirements, yes. And that's what infrastructure is comparable to (hence Terraform's name btw)
> > Ansible will lose track of your VM if you rename it
> Sure, but so will Terraform. Or worse.
No it won't, because Terraform stores the underlying platform ID in its state. It will just know there was a rename, and will inform you it plans on rolling it back to what you have requested in the configuration. It won't delete it. Ansible will want to recreate it, with all the conflicts this could bring.
> Ansible is a superset of Terraform. It's mostly a matter of culture which tool people choose, and which you think is the easiest to use.
It's a different tool for a different job (at which it isn't great, due to using a markup language which really doesn't lend itself to templating yet gets heavily templated, and a dynamic language that suffers from version and dependency hell).
I have this impulse to constantly experiment with everything too. And I think it's healthy for your brain to always be learning new ways and testing your preconceived notions of how you do things. A lot of times it leads to incremental advances, either in the next iteration of my existing code or the way I approach my next project.
But a lot of it, I can admit, boils down to sheep shaving. When you start figuring out how you'd implement codebase 2009 in stack 2023, and then actually doing it, that's golf. It's all good, it's educational, but it's downright immoral to sell that to a client and drag them through it.
> But a lot of it, I can admit, boils down to sheep shaving.
AKA Yak shaving[0], which I am... unashamedly fond of doing
>> There are yaks to shave all over the stack, and I go where the yaks are.
(quote is from my github readme)
> When you start figuring out how you'd implement codebase 2009 in stack 2023, and then actually doing it, that's golf. It's all good, it's educational, but it's downright immoral to sell that to a client and drag them through it.
I definitely agree that selling this to a client when you've told them you'll do the minimum to update the codebase is immoral, but if there is value in updating, then you should show them that value.
For example -- moving a client to AWS might not make sense, but if they're maintaining on-prem infra and paying tons of money for an application that would actually cost less as a single ECS service (or App Runner, these days)... Then maybe you should upgrade their stack a little bit.
In the end it's your job as a consultant or an employee (if you're high level enough) to introduce change that will make the company better, if you see an opportunity.
Tech stacks crumble underneath their own weight, and if they're bad enough, become business risks. If it's humming along, easy to maintain, and isn't riddled with security holes, then great!
My idea of a major overhaul for a client is upgrading from Mysql 5.7 to 8 with no downtime and then slowly, over a year, finding queries that could be sped up with lateral subqueries or window functions. That's about the pace, and it's effective at keeping resource use in check. The wilder things I experiment with on the side are once in awhile actually useful for a new app or service...
I basically agree with you, if the cost of building the code will be easily outweighed by the savings to the client and I also think the platform will hold them for the next 10 years, I'll pitch it [e.g. I've moved many early clients from my servers over to AWS services when it made sense, and still maintain them]. But it's easy to see too how if I were trying to constantly drum up more work, it would be seductive to pitch them on new hyped up technologies they don't need, just to remain relevant and keep the checks flowing. I think that's the animating force behind all of this platform churn, not excited developers who just want to sheep or yak-shave (thanks. I always forget which ruminant I'm shaving ;)
I believe there's a spiraling pattern, where the same ideas reappear every N years but with slight internal or ecosystem changes. I'm not sure if we're running in circles or if there's real, slow progress.
I think it's real slow progress, and sometimes even, quite fast.
It's the same ideas, but I will assure you that most of the large things that people get crotchety about are legitimate moves forward.
People were complaining about "docker" (i.e. cgroups + namespaces + some CLI UX) not too long ago and now you'd be dumb to try and deploy most applications without at least having that there.
How many people are wrangling their own cgroups and namespaces (not counting systemd, which is another thing people fought, tooth and nail)?
Should we all be doing 1 user per workload like the good 'ol days?
Or maybe 1 machine/VM per workload like the 'ol 'ol days?
I think it is mostly just adapting. To new hardware, new demands, new platforms.
There is maybe some progress on an academic level, but I don't think we progress fast on the fundamental level, on how to write better code. There are slight improvements here and there, but it is mostly just different. And more complex. Or rather, new tools are needed because everything gets more complicated (some say bloated).
That being said, the new thing that I am learning is general purpose GPU compute. That is quite new and useful (but hard). But it is an improvement.
Yes, that is indeed part of it. Everything changes, the ideas have to swim on top of that moving soil. And at times some integration occurs where a lot of ideas suddenly merge into a new cleaner brick.
I've watched this happen to so many coders in the land where people write code for companies that sell code. Personal advice: Write code for any other kind of company. They're much more resistant to fads. They don't care how you accomplish the new feature. They just want to get it done. And they certainly don't want to change their stack, ever, although in a new project you'll have more flexibility to pick the right tools than you would in a software company that's obsessed with the newest thing.
Having worked at both, the latter type of company also has its downsides. Primarily: a) the inverse is also true...you'll be working on some old technology that only your industry utilizes and isn't well optimized for modern use cases or is extremely niche; despite there being plentiful use cases to make the switch, it'll never be "in the budget", b) your pay rate is generally going to be quite a bit lower than industry standards at app/tech companies and c) you won't get the same treatment as at an engineering-focused company.
The latter two are very much variable by company, but those are the general trends. They'll also vary in importance to different people. The former item, however, can be just as burn-out/tedium prone as tech company stack whiplash.
Maintaining old systems and not getting the chance to exercise your hottest take on how they should be rebuilt is often tedious as hell. But I find a lot of comfort in it. I usually find that the future road maps that stick in my head for a year or more are the ones worth a 10-year overhaul, and I like to be able to tell clients that when enough of their stack is reaching end of life, we should do this major rewrite but I already am fairly certain they won't have to do it again for another 10 years. e.g. I have yet to deploy any production code for a client running on Nodejs, but I've written enough dependable systems in it on my own free time in the last 5 years that I think I might now finally advocate for it under certain circumstances, knowing the strengths and weaknesses.
It does help - as you point out - to be in the position where the non-tech company will almost always take your advice if you tell them they need to change platforms. But the most effective way of getting yourself in that position is to prove that you exercise that option only when it truly is in their best interests.
This is completely anecdotal, but I was far more overworked as an engineer at a non-engineering company than at any engineering company I've worked at. The entire reason I went back to that work environment was because I was tired of "wearing multiple hats" (e.g. doing three people's work) for 2/3 the pay of wearing one of those hats at some random adtech company.
I agree that it's dull but the fact that I still know ColdFusion 20 years later gets me a lot of work. No, it's not Silicon Valley money but it is Fairfax County money, which ain't bad.
> you'll be working on some old technology that only your industry utilizes and isn't well optimized for modern use cases or is extremely niche
But it's easier to get a company to upgrade from old cruft to a commonly-used technology, of your choice and which you push, over trying to get the company to drop a new fad they've bought into ("You know, we don't really need all of those Docker images, and VMs, and Amazon Cloud services - let's just write an executable which does the thing and can be configured.")
I work for a company which sells code and we are a counter example. We make gradual changes in the development stack and processes, every change needs a reason, and even though some changes are necessary, they will not be implemented soon.
Some possible factors are that the company is not that big and we have a feature list to deliver. We are not VC-funded, so we cannot afford to burn money. Dev churn is low, so we don't need to be attractive to many people all at once.
TBH, I don't like it all the time, but it works and our clients are happy with the results.
Hah. That sounds like my dream job. A software company that's not controlled by VC, not burning money, actually shipping a product, and (presumably?) turning a profit... that's an actual unicorn these days...
This is all fine but... the rest of the world is not frozen. Let's say you are happily maintaining a stable, boring ERP app for your... cigarette lighter plant.
Out of the blue, one of your main suppliers declares they will accept orders only through a REST interface starting next quarter. And that you can/must use it to track your orders, instead of sending an email to one of their employees.
One week later, you are required to provide invoices in PDF instead of just printing on paper.
And your accounting system will switch to Oracle Finance in October...
Your ERP was developed on Progress ABL, and nobody really cared to port the old code to a web-based architecture because it was just for "internal use" and Putty was fine.
Well, I can't tell you how many dozens of times I've had to learn some stupid company's API and integrate it into software just because a client decided to do business with them. That comes with the territory, and you set up whatever you need to make it work. When it happens, it's a sprint, and you migrate everything you need to. But that's very different than migrating it because you're bored or want to sell the client a new shiny thing.
Of course, but I was just trying to explain that sometimes you are forced to innovate due to external interfaces/dependencies.
And maybe these were introduced in other companies because their IT depts. are run by "fashion of the month", but whatever the reason, you now have a problem. And if your software and your skillset have remained frozen in amber since the early 90s... this will be painful and probably require at least one false start before getting it to work reliably.
I have noted a tendency in people who always work on "very stable" technologies to go into crisis mode as soon as they have to (for example) produce a file in XML or JSON format after 2 decades of CSVs... this unfortunately almost routinely ends in stuff which is very brittle ("XSD? You do not understand, we need to produce an XML... I dunno what an XSD is or has to do with this requirement - except that they both start with an X, maybe - so please do not make things unnecessarily complicated and just go back to coding, we were supposed to start testing one week ago.")
I've worked for a small company that does an old boring ERP. In C. The oldest files in the library date to the late '80s. The whole thing runs on a dime, is super fast, and a few developers could manage the needs of tens of clients.
Implementing orders in REST was probably done in a couple of weeks of work. PDFs have been there forever.
>>in the land where people write code for companies that sell code.
That is not always true, it also depends on who they are selling code to.
I know a few orgs that buy code from others, and pay $$$$ to ensure it never changes... or rather only changes in measured, approved ways, not drastic changes that would upend their business workflows.
So I would amend your statement to where people write code for companies that sell code to startups or technology companies
if you are selling code to a 100 year old manufacturing company, or some other legacy space (healthcare, education, etc) then most likely they want stability not the new shiny
Another way is to work at one of the many places where XML (and related technologies like XSLT) make perfect sense both then and now. Working with transforming documents from one representation to another is a work that will not disappear anytime soon.
Have you ever considered a different portion of the tech sector? I work in embedded systems and this couldn’t be further from the truth. Things rarely change (because it took someone smart banging their head against a wall for a week to make it work) and if they do, they’re usually addressing real needs in the systems programming community. I see each change less as a fad and more as an additional tool in the tool belt.
Take for example Rust, which is gaining traction in parts of systems programming. It addresses the direct need to stop overflowing buffers, create more strong typing, write less code to accomplish the same thing, and others. Accordingly, a lot of systems programmers are optimistic and excited about it. But everyone in the field knows C, and acknowledges C isn’t going anywhere. There are tasks for which C is better suited right now, and will be for a very long time. There’s no push to rewrite all programs in Rust. Sure, some are pushing for it, but it’s not a “this year is Rust, next year is <new programming language>” dynamic. It’s a “this is where we want to go, but it’s going to take us many years to get there” type shift.
Stuff like this is why the notion that AI will eat all the software jobs in a few years is so ridiculous to me: who will maintain all the infra? Maybe AI will very eventually, but there's still a whole lot that needs to happen for the AI magic to magic, and it's not like VPs are going to be able to write implementable specs in enough detail for an AI SWE to implement them, wire them into the right branch / tag / whatever in the release system, communicate with stakeholders to plan a deployment, do the release, monitor for bugs / regressions, patch as necessary and port those into the dev branch, etc etc.
There's a whole lot more to modern SWE roles than just staring at an IDE with the job "implement a function that given X returns the square of X" type stuff that current AI can implement well.
Completely anecdotally, but the most productive I've ever been was at a company in 2016-2019 that ran on a monolithic .NET Framework app with no kube/container setup at all.
Sure, it felt like absolute chaos some days, but I've never felt more productive and in a flow as I did then, mainly because we weren't chasing the latest and greatest thing but focusing on what was working for us and perfecting it.
The sad thing is that in many cases working on such a system is a career killer. Nobody wants to hire you. Some years ago a colleague and I realized that our company wouldn't hire us, judging from the job descriptions, because we worked on systems that had been working for years but weren't shiny. Since then I always add something new to every project. Not because it makes sense but because it's good for the resume. Other groups are even worse. Three devs writing 13 microservices on Kubernetes for maybe a thousand users ever.
It's not a career killer, it's a specialization. Focusing on Node.js JS/TS frontend+backend stacks will equally disqualify you from .NET/Spring shops (against an equally qualified applicant, at least). The same goes for making the switch to gamedev or systems dev after many years of webdev (you can do it, but you'll probably be dropping back down to a Jr or mid-level IC position), going from COBOL/Erlang/Rust to another stack, doing Salesforce (or other CRM/specialty software) development, etc.
It's going to happen to anyone after 10-15 years in the industry.
You are right that this is specialization but it still is a career killer in the case that you specialize on a completely obscure technology which will not help in your future career.
I think this fear is not entirely wrong, but is significantly overblown. There's a significant gap between what job descriptions ask for and what companies will actually hire. A resume full of dead technology isn't going to help you ladder your way up to bigger and higher-paying roles, but it's also not going to make you unemployable.
A resume full of less-shiny but still relevant technologies, otoh, has a lot of potential. You have fewer places to look for jobs, but you're a lot more exciting to the places which do want your skills.
When I look for a new position, I try and make sure it’ll allow me to learn a new fundamental skill rather than the latest new framework (though often there’s overlap:)
Over 30 years that’s allowed me to be employable across a lot of different tech stacks.
After 20+ years of maintaining what amount to monoliths and services, I wonder whether I'd be hirable if I had to find a job at some big tech company. But I think the problem-solving ability you need to manage server clusters you built yourself translates into the ability to handle whatever their cloud junk looks like. Hopefully I'm never in that position, though, because I would absolutely hate a job like that.
I can only talk from my limited experience. The company I mentioned was my first workplace outside of university, and since then more than doubled my salary. So I don't think of it as a career killer, but instead a stable environment to get yourself skilled up in what it's _actually_ like in a dev role.
Around the time Angular came out with v2 and everyone basically realized that they either had to completely rewrite their Angular apps or spend the same effort to migrate to React (which was the new hotness at the time) I really started contemplating whether it's possible to stay sane while keeping up with the pace of change in both back-end and front-end paradigms. I ended up switching from full-stack to back-end dev. I don't know if it's just me, but now it seems like things have started to calm down and we're just iterating on existing problem sets and frameworks rather than throwing everything out and starting over. Does anybody feel the same? I actually think this is the best time to dive back into full stack dev.
I started in web dev almost 20 years ago (did some Visual C++ before that).
First job was everything XML. We had our open source CMS that stored all documents as XML and used XSLT to transform it into the HTML we needed. Was pretty cool, but there's always something that didn't fit that paradigm. Some coworkers went a bit overboard and used XSLT to generate the XSLT to create the HTML. But it all works as long as you cache aggressively.
Second was Ruby on Rails with jQuery. An eye-opener for me. Very cool to work with. Third job was similar, but using Groovy instead of Ruby. After that, I ended up in SPA-land: first Angular, then Vue, now React. Backends in either Java or Node.js.
I'm still doing that, only now everything needs to be in Docker and Kubernetes.
I still don't know why I should ever need microservices. Or GraphQL. We actually considered GraphQL for a previous project, and decided not to because it looked way too complicated. It sounds like it might fit with using a graph DB (I'm a big fan of neo4j), but it doesn't. At my current project, they made that decision before I joined, and they've since realised that they don't need GraphQL after all, but it's in there now, so they just work around it.
Some of the hypes worked very well for me (XML, jQuery, Ajax, SPA), others seem to work well for others but not me (docker, kubernetes), and some I really don't see the point in (microservices, GraphQL), but I assume there are use cases where they shine.
But don't blindly adopt everything. First check if it actually fills a need you have. And how it will fill that need.
Microservices are more an organisational thing than a recommended pattern: when a project grows and wants to hire more staff to increase its bandwidth, microservices are a great way to support that. The additional benefit is that they work great in a CI/CD setup, so you can release seamlessly.
GraphQL from what I can tell supports web developers rewriting their APIs when they need to instead of waiting on back-end teams to support them.
Most of my projects have just a single team in charge of a single code base, so there seems to be no need for microservices at all, and yet some people keep bringing up that we should really look into microservices.
My impression is that GraphQL is great if you're building an api that's going to be used by lots of different teams with different needs and the api developer doesn't know what they are. Again, with a single team doing everything, there's no need for it at all. But we're using it anyway.
Also, they keep talking about a Single SPA Application, and I can't tell if that's just a redundant way to say SPA, or if it's really something new, or if they're just messing with me.
> My impression is that GraphQL is great if you're building an api that's going to be used by lots of different teams with different needs and the api developer doesn't know what they are.
This has been my experience as well. It provides flexibility to the consumers of the API to structure things how they want and fetch only what they need. E.g. if you add a new field, there is no impact on existing users since they aren't specifying it in their current requests anyway. To a certain extent, that also helps with things being somewhat future proof, because even if you knew what the users want today, you can't predict what they will want tomorrow.
Having done a bunch of REST and GraphQL APIs, the only thing I would say for GraphQL is to avoid re-inventing the wheel and use a stable third-party library to do as much heavy lifting as possible so that you can focus on the logic side of things.
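To make that concrete, here is a minimal sketch of the "add a field without breaking existing consumers" point, assuming the Python graphene library; the Citizen type, its fields and the sample data are invented for illustration.

import graphene

class Citizen(graphene.ObjectType):
    name = graphene.String()
    address = graphene.String()
    # newly added field: existing queries that don't select it are unaffected
    phone = graphene.String()

class Query(graphene.ObjectType):
    citizen = graphene.Field(Citizen)

    def resolve_citizen(root, info):
        return Citizen(name="Jane Doe", address="Main St 1", phone="555-0100")

schema = graphene.Schema(query=Query)

# an "old" client query keeps working after the schema gained a field...
print(schema.execute("{ citizen { name address } }").data)
# ...and a newer client opts into the extra data simply by asking for it
print(schema.execute("{ citizen { name address phone } }").data)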
Somewhere out there in the known universe there are COBOL developers that have avoided CSS, SOAP, Oracle J2EE, reactivesvelteinangularvue's, javascript servers, shitty facebook ideas, nine thousand line YML files, micro bullshit (microsoft, microservers), "containers" (and Solaris zones before that), AND there's nobody pining to replace them with LLM models.
Did you ever try the Microsoft Visual Studio XSLT debugger from DotNet? The first time I tried it from C# was mind-blowing. Suddenly, it became possible to manage complex XSLT scripts!
I'm not young, yet I don't find any of this all that bad.
Maybe I'm an outlier, but I see refactoring and learning new tools as just keeping the blade sharp. Migrating to new frameworks, tools, etc. is just superficially iterative. I've yet to see some kind of massive paradigm shift blow me away and stress me out.
I wanted to share my ideas, but it just turned into rambling:
The problem is not with massive paradigm shifts... actually we are yet to see something like that. The constant busywork on an industrial scale is the problem, the constant reinvention of the wheel in a way that is somewhat incompatible yet still the same thing. Most change is only for the sake of change, for NIH syndrome. When a solution for a problem gets half-decent, and almost as good as it was in the Windows 2000 times, it gets forked/dumped and a new, half as good fad appears...
There is no problem with learning new tools, but learning a new UI framework every 2 years just to be constantly half as capable as Delphi was (okay, with rounded corners and animations, I give you that) can be really tiresome.
Cloud vendors changing their line-up constantly so that your certifications expire, and they can remove the really good value for the buck services and push the shiny new globally replicated and 10x more expensive new stuff (which delivers value for many, but the barebones stuff also did for many others).
Just my $0.02; I've probably seen too many cases of "let's rewrite in $current-thing" and "don't use $industry-standard-battle-tested-thing, it's hard to learn, let's write one from scratch!" (The last time, a team at a job decided that configuring HAProxy (my suggestion to get stuff done cheaply and quickly) was too hard, and that it was better to write a high-performance reverse proxy because it would be our own thing! I guess they were more interested in their resumes than in the company's interests. Since then that company has gone bankrupt... who would've thought.)
I’ve been in that camp the last 20 years, and even yearned for a paradigm shift or two. But I have to say, the coming shift from formal languages, type systems and somewhat predictable failure modes to… “prompt engineering” has me a bit worried
Replacing weapons and keeping blades sharp is not the same thing. Now, the metaphor kind of breaks down because new weapons are sharp, but abilities with a new technology are typically not, i.e. you use new tech as somewhat of a blunt instrument until you learn the finer points of it - and until it develops and matures enough itself.
So, if you want to use "sharp blades" - don't switch technologies, but rather keep with the updates to those you already know.
I'm constantly sad there isn't any good writing on this topic.
Does anyone have any suggestions on anything to help me keep up with the basics of new fads?
I would love a monthly (doesn't have to be exactly monthly, but the point is occasional) magazine, the equivalent of those old magazines with basic tutorials, which would give overviews and comparisons of all the various modern techs floating around.
I do occasionally find a new person's blog to subscribe to, but I've never found a good general source.
>>when I started working in the industry, I realized that it's absolutely exhausting.
It's been this way forever. For every level of the stack. When I started the physical hardware hadn't been settled (coax, ethernet, token ring etc.)
Fortunately when I started we only got paid when we shipped. So we were motivated to ship, not be distracted by rewrites. We were also lucky that as a boot-strapped company we had no managers calling the shots. We made decisions and stuck to them, and it's paid off over the long run.
Of course the product has changed immensely over 25 years - but the core code laid down in 1996 is still integral. We've spent the time on the product, not worrying about the sexiness of the tech stack.
Which is not to say we've ignored the tech stack, it's evolved a lot, but the stack serves the product, not the other way around.
On the other hand we don't have to raise investments, we don't have to attract endless streams of new programmers, we don't have to juice a stock price, so we don't have to appear modern or hip. We can just keep doing what we do, making money, focusing on the value to customers, not being (easily) distracted by the next shiny toy.
And yeah, after a couple decades you learn to see through the hype a bit and distinguish things that are good for you, not just the flavor of the week.
Well, some things did bring improvements, or new ways of thinking about existing problems, and some things really moved the state of the art. Taking your example of token ring -> coax (with terminators, eek!) -> ethernet is arguably a good progression (and now also wifi).
But there's also lots of churn in software and people solving the same problems in slightly different ways.
However ... this is also collateral damage from another improvement, imho - that is the ceding of control of the tech landscape from big companies (IBM, Oracle, Microsoft, Intel, Sun, Borland) to smaller open source clusters of people and VC funding. There was a time when nothing happened without some big tech company behind it, and everybody waited for (particularly MS) to bless things and provide library support, IDEs and tools.
Those days are over, but the result is a bazaar of nosql servers, messaging servers, languages and front-end JS frameworks.
> Hype after hype, fad after fad, modern after modern, refactor after refactor
And you're supposed to already know how to do it and give an accurate ~~estimate~~ prediction for how long it's going to take to do (that matches the amount of time they've already decided you have to do it). You're a _professional_ and professionals don't need time to actually learn how to do something new, they just know.
It's because a lot of engineers are learning to become better plumbers, not better engineers.
Trying new technologies means you're mostly becoming better at using someone else's APIs - this is the path to eventual burn out as the churn continues.
There is a better path - see through the hype. Ask those around you: what are the downsides to this approach? And you'll often get blank stares... Why? Because they don't know either - and if they don't know the downsides, they don't really know. They are following the hype curve.
Focus on the fundamental engineering principles and asking better questions - take the bottom-up approach and the reward is you'll find teams that aren't taken by the hype curve so easily.
PS. There are a lot of good technologies that come out, but staying behind the hype curve a little helps you make better judgements over time.
I checked out Terraform and the Lambda stuff in 2016 and then decided to continue with ECS, because as an entrepreneur I find it wonderful and time- and money-saving, you know.
Easy to deploy, and you can also utilise cheap spot instances to the fullest.
Wow, this resonates with me! I stopped doing full-stack development a while ago, and have since January been contributing to open source projects as a contractor (my next gig will be for ISRG, the makers of Let's Encrypt, :D). By working on open source libraries I get to tackle interesting problems without being bogged down too much by accidental complexity. It feels like the ideal solution to me (though it is difficult to replicate, as it takes quite some luck).
I can imagine this is frustrating and have experienced people wanting to push the "latest trend". My usual way to approach this is to ask them to explain to me the _why_. Why is this new thing better? What specifically is it better at? How? What trade-offs does it make?
More often than not, they aren't able to properly articulate the why and the trade-offs involved. Sometimes they do fully understand the trade-offs and that leads to good discussions.
I have a partial solution to this that I'm pretty happy with.
First, I dove deeper into CS fundamentals. This has made new fads a lot easier to deal with, because usually they're recycling old CS ideas. React hooks is a good example. I wrote class components for many years; adapting to hooks took me all of a day because algebraic effects were something I understood well.
Second - I am firm about which tech I do and do not want to learn the specifics of. I understand the high level of kubernetes but will not be learning specifics.
This is a benefit (non-commercial) FOSS has over software authoring in commercial corporations. Developers get to call the shots, and usually, don't bend to such passing fads.
> I don't even know how I would do it.
Depends on the software you're writing of course, but the core would probably be some libraries with C (and maybe other language) bindings, some command-line executables, and anything else - above that.
After a decade I think I’ve concluded for myself that this is, for the most part, a self-inflicted problem. There’s really no need to keep refactoring or feel like others need to stop inventing new ways of doing things.
I think the computer industry is in its infancy and we haven't really found the right solution to many problems, for everyone to adopt the same thing requires it to be a very good option. BSD sockets is used by everyone - it's that good.
I've been in the industry for over 30 years and I absolutely get where you're coming from. It is an ever changing field and it can be exhausting. I hear this repeatedly from many coworkers (and workers from the same field).
All I can give is some advice from my lengthy experience.
Most of the changes happen on the "edges". Those edges are primarily threefold:
First, the user interface: everything that deals with input parsing and formatting. This also includes protocol marshalling (HTTP, JSON, XML, ...).
Second, data persistence. This involves the choice of database or any other persistence, including serialisation of your data. Any I/O really.
Third, anything to do with deployment. This includes environment isolation, configuration management, process automation and so on.
These edges have nothing to do with your core application logic.
You can reduce the impact of a switch to a new stack by isolating your core logic and shielding it from the rest of your stack.
This is tricky and I still catch myself all the time doing it wrong. Because doing it wrong is tempting and easy.
For example, consider a connection to the db: you will need a connection string. You can fetch that from an environment variable somewhere deep in your core code. But by reaching out to the system like that, you make that core piece of code dependent on the deployment strategy. Instead, you can move that part as close as possible to the entry point and then pass it down as function/class arguments. You can group similar operations together, close to the entry point. That way, if the deployment strategy or the stack changes, your core code does not need to change.
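A minimal sketch of that in Python (DATABASE_URL and open_connection are made-up names): the entry point is the only place that touches the environment, and the core only ever sees plain arguments.

import os

# core logic: takes the connection string as an argument and never reaches
# out to os.environ or any other deployment detail
def open_connection(dsn: str):
    print(f"connecting to {dsn}")  # stand-in for a real DB driver call

def main():
    # the entry point knows where config comes from; swap the env var for a
    # config file or a secrets manager without touching the core
    dsn = os.environ.get("DATABASE_URL", "postgres://localhost/dev")
    open_connection(dsn)

if __name__ == "__main__":
    main()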
Another example is the case where your core code is dependent on a library from your stack. For example, when working with python and flask, it's easy to import "request" or "g" from the framework. But every piece of code relying on that will be tightly coupled to the framework and subject to change whenever the framework changes (either same framework with breaking changes, or framework switch). The solution is the same. Move those elements as close as possible to the entry points. In this example of flask, keep this as close as possible to your routes. But also keep your routes clean of core logic. Interact with the HTTP layer only and pass the values as arguments to your core.
Finally, a similar situation, but much more sneaky and hard to spot is reliance on data types from your stack. Spotting cases where you rely too much on library modules can be done by investigating imports. But reliance on data types is harder to spot. Sticking with the flask example: the http headers are encapsulated by an object of the flask stack (werkzeug in this case). You may be careful and extract the headers in the route to follow my earlier example. But if you then pass the headers unmodified down into your core logic you will again be coupled to the stack. Instead you should extract only the values you care about and hand only those down to break the coupling.
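Sticking with the Flask example, a rough sketch of that pattern (the endpoint and field names are invented): the route acts as a thin adapter, and only plain values cross into the core - never request, g, or werkzeug's header objects.

from flask import Flask, request, jsonify

app = Flask(__name__)

# core logic is framework-free: plain values in, plain values out
def create_order(customer_id: str, items: list, locale: str) -> dict:
    return {"customer": customer_id, "items": items, "locale": locale}

@app.route("/orders", methods=["POST"])
def orders():
    payload = request.get_json()
    # unwrap at the edge: extract only the values the core needs
    result = create_order(
        customer_id=payload["customer_id"],
        items=payload["items"],
        locale=request.headers.get("Accept-Language", "en"),
    )
    return jsonify(result)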
Finding the right balance of where to cut the coupling is challenging. The more you cut, the better you will be shielded from changes, but the less you can benefit from pre-existing implementations from the stack. Where to cut is ultimately a design decision.
When done well, it will be far less frustrating to jump on the bandwagon of the latest tech. Not completely painless but far less stressful. Not least because you will have confidence that you didn't accidentally break core logic.
Doesn’t this strike anyone as overly cynical and just… incurious? Yes, hype and trends are obnoxious, there are individuals and organizations that reflexively seek sexy tech and apply it wrong, but isn’t this also part of how we find things that work and things that don’t?
It’s easy to run through decades of tech trends and present them as the only things that dominated the industry, just like it’s easy to rattle off one-hit-wonders or movies that flopped and claim that the arts are dead. But I remember Apache fading in favor of nginx. MySQL being shouted down by Postgres. Frameworks like Rails and Django becoming popular over the LAMP stack (with the aforementioned A and M…). Docker over Vagrant for dev environments. TypeScript, unidirectional data flow, the decline of OOP. I can go on and on and these are just the ones that spring to mind immediately!
I’m still very much a “choose boring tech” guy. I also sometimes feel frustrated by how fast technology changes and how the winners of hearts and minds are often the result of marketing effort, not good technology. But unless you’re a technology blogger, you probably don’t need to be bothered by this. New tech emerges for a reason. Some of it will be overhyped, some of it will be unfairly ignored, some things will win and some will lose. I feel fortunate to live in a time when there’s so much enthusiasm and creativity for new ways to solve old problems.
I don't know about incurious, maybe "extreme". A lot of the tools, techniques, etc. mentioned in the article aren't appropriate for ... well, most websites probably. But they do have their roles to play in large, complicated systems.
I worked a decade in public sector digitalisation in Denmark, where for some reason they still use a lot of SOAP and thus XML. I’m not so against XML in theory, but I hate it in practice.
You'd have these completely over-engineered solutions where you'd basically need to call a separate microservice for every "field" of anything. So if you wanted a name, an SSN, and an address for a citizen you'd need to make several calls. We ended up buying an API for the APIs, but that is beside the point I'm trying to make. The point is that the architecture was all very deliberately designed, but the XML... well, you'd have things like:
<ssn ssn="123"/><id>123</id> and worse. And it's like, why would they spend all that time designing micro-service and data architectures and then output really inconsistent and often even standards-breaking XML? But I know why, because out of the maybe 300 different systems that output XML that I've worked with over the years, none of them did it "right".
If the future is XML then we better step up our game!
I think "XML" is way too broad to meaningfully discuss anything.
XML the data format is pretty great in its basic form. Not space-efficient, but unless you're dealing with bulk data, the inefficiencies don't matter unless you're Google-scale or can't spare a millisecond. The format itself doesn't absolve you from the design work of your model; your ssn example is typical of the "CEO said we have to use XML now" rush jobs at the peak of the SOA hype cycle.
XML data validation through XSD is a dream. The insistence of the JSON-people to sabotage any kind of data validation is absurd to me.
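For what it's worth, validating a document against an XSD is only a few lines in Python with lxml (the file names here are placeholders):

from lxml import etree

# load the XSD once, then validate any number of documents against it
schema = etree.XMLSchema(etree.parse("order.xsd"))
doc = etree.parse("order.xml")

if schema.validate(doc):
    print("document is valid")
else:
    for error in schema.error_log:
        print(f"line {error.line}: {error.message}")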
XML Webservices are easy and consuming them from a WSDL is easy.
SOA on an enterprise level was a stupid idea creating a giant expensive communication middleware with the ESB without delivering on the promises.
Every other thing in the XML space is brainfuck-level overcomplicated bullshit dreamt up by bored enterprise architects. WS-* is a cacophony of overcomplicated solutions to problems that only exist because enterprise architects dreamt of having a single data model that's inter-operable between companies.
JSON schema aren't part of the JSON definition, and JSON itself suffers from lacking good handling of types such as dates.
JSON itself succeeds as a way to represent JavaScript objects but it is far less useful than XML for representing custom types, which are easily enforced through XML Schema.
JSON Schema adds some of this, but it isn't a standard (yet).
Having written a bunch of XML schemas and a bunch of JSON schemas at the same time while integrating multiple third party APIs, I can unreservedly say that JSON schema is awful.
It just about works, but it's horrible to write and horrible to maintain. What you can actually do with the validation is surprisingly limited in some forms and the fact that you have to write JSON to write the schema shows just how non-user-friendly JSON is.
A friend told me that she writes JSON schemas in YAML and then uses a build step to downgrade that to JSON for the actual validator. I wish I'd thought of that.
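Presumably something along these lines; a rough sketch using PyYAML with a made-up schema, where the YAML source allows comments and the build step emits plain JSON for the actual validator.

import json
import yaml  # PyYAML

schema_yaml = """
# a person record; ssn is a string so leading zeros survive
type: object
required: [name, ssn]
properties:
  name: {type: string}
  ssn:  {type: string, pattern: "^[0-9]{10}$"}
"""

with open("schema.json", "w") as out:
    json.dump(yaml.safe_load(schema_yaml), out, indent=2)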
> XML Webservices are easy and consuming them from a WSDL is easy.
Maybe in theory, but I've never had a WSDL that was actually complete. In the end, it generated stubs of types and clients for me and I still had to do a lot of nonsense. The concept of a WSDL would be nice but in practice I've seen _so_ many people do them poorly and create a bad experience.
> Maybe in theory, but I've never had a WSDL that was actually complete. In the end, it generated stubs of types and clients for me and I still had to do a lot of nonsense. The concept of a WSDL would be nice but in practice I've seen _so_ many people do them poorly and create a bad experience.
In a previous life, I shipped many, many, many APIs that consumed and/or delivered XML data. We'd ship WSDLs if asked, but it was far easier for customers, and from a support perspective for us, to use the WSDL to generate API clients ourselves and then add some of the necessary supporting bits like authentication. We'd ship these to customers and it made life easier for everyone, despite the overhead.
That assumes that the XSD is accurate. I'm currently working with a vendor that has probably the worst XML-based API I've ever seen, and the XSDs they supply often don't provide 100% coverage and have bad complex type declarations, etc.
I also have experience with the digital services of the Danish public sector. Trafikstyrelsen's SOAP service validates its own responses against an xsd. The data in the motor vehicle registry does not always conform to the xsd. In which case it returns a SOAP fault with the original response embedded as an escaped string in the details. So to actually get the response we need to parse it out of the SOAP fault...
Another piece of insanity is the e-TL (elektronisk tinglysning) SOAP service. Getting a non-Java client generated is a PITA because of the use of the Java Catalog feature. Even after hacking around it by setting up a local webserver and liberal abuse of the hosts file, the generated .NET WCF client needed manual fixes in a lot of places.
After doing that, your calls still fail. It turns out the service wants the client to tell it which xsd to validate the request against (?!?). Also, did I mention that the service expects every request to be signed, even when only reading information - it doesn't do anything to validate the signature though.
I agree that there is nothing inherently wrong with XML; the problem is with the way it is used. I have worked with great XML schemas, and terrible ones too. Also, the XML APIs in programming languages are a bit dated and oftentimes a bit cumbersome to use. Yet having a standard API that supports not only DOM-style but also streaming parsing is a great thing, should you ever have to process XML files several gigabytes in size at some point in your career (as I had to).
> I agree that there is nothing inherently wrong with XML, the problem is with the way it is used.
Right, but the spec allows for so many different ways of using it, that you end up with lots of variations in how it's done.
I've mulled over these choices myself when trying to come up with an XSD spec, and I wish I could slap the idiot who wrote the last XSD spec around with a large trout (that idiot being past me).
While you can do a lot of weird things with JSON as well, the syntax is more restrictive and naturally leads to fewer variations.
> If the future is XML then we’d better step up our game!
I don't think you understood the meaning of the article.
It's talking about how XML was, in the end, a red herring and not the best solution for everything, as it was touted to be by the industry.
But the issues you describe isn't XML's fault, it's poor oversight and lack of consistency from developers. Does anyone have an architect or API designer hat at your company? They should be responsible for designing the XML structure, setting rules on where values go, etc.
This is an issue that you will have with every kind of data transfer / representation tech, be it XML, JSON, protobuf, etc. Someone needs to have ownership, someone needs to set a standard, and everyone working on it should stick to that.
> You’d have these completely over engineered solutions where you’d basically need to call a separate micro services for every “field” of anything. So if you wanted a name, a ssn, and, an address for a citizen you’d need to make several calls.
With the official CPR API you can get all of that information in a single call, and I think it returns JSON. I'm not sure when that API was released, but I think it was quite a while ago. There is one for companies as well (the CVR API, but technically it's just an Elasticsearch query iirc).
Not in Denmark only. In some projects across the EU I only saw XML/SOAP used for the so called API access. I really do not understand the XML fetish still going on in 2020+.
I (and many others) am using XML via the PreTeXt[1] project. Because of the strict schema, the processors are able to transpile from XML source to LaTeX/PDF, HTML, EPUB, and more.
The syntax is obnoxious, but I love the results!
At the tax authority we could query that directly from a database.
But yes, the public sector suffers from countless different systems that have accumulated over the years. It happens because everything has to go out to a public bid, and throughout the times it has been considered anti-competitive to build things in-house.
More than that, the vendor winning the bid will happily use the opportunity to latch itself to the flow of taxpayer money. That's a big reason you end up with XML hell - the systems are designed to make future interop impossible without going through the vendor.
In fact, wasn't Denmark a case study of that? I remember there was someone on HN a while ago, IIRC working in public service of some country in Europe, who posted stories about their constant fighting with a vendor who intentionally makes it very hard to have any kind of integration and data exchange between various administrative branches.
Some thoughts on why software is so hype-prone and will likely remain so, if not accelerate:
* It's intrinsically easy to come up with new approaches. Thinking and writing software is a mental process; it is not limited by physical constraints and messy manufacturing.
* The scope of use contexts in society exploded. You only needed the formula-translation language when you had five whitecoats in a research lab punching instructions on a single computer. Now you have multi-billions of devices.
* The process is self-feeding. With the internet came the development of online interaction tools, techies use them more than any other segment and this creates a large coherent mass of lemmings and network effects.
* The role of bigtech (oligopolies solving their own problems to maintain / expand their dominance and abnormal profitability) which creates a collective osmosis to imitate whatever they are doing.
* There is also intrinsic redundancy and near equivalence of solutions. You only have one way to roll a concrete something on a surface: it must be round. But one can have an infinity of ways to split a workload between a client and a server machine (let alone more servers).
Most of those factors will continue for the foreseeable future so there will be no respite. E.g., Now we are firmly into AI hysteria and this again already shakes things up as it deprecates/de-emphasizes patterns that are not part of the bandwagon and puts emphasis on obscure niches that are deemed critical.
> * The process is self-feeding. With the internet came the development of online interaction tools, techies use them more than any other segment and this creates a large coherent mass of lemmings and network effects
I feel it's mostly this; there are a few people actually creating these 'new' technologies (most are just the same thing with little added benefit; however, some people are better talkers/PR/marketers than others and so win a lot of souls), and then they get many followers with louder voices than brains who also start to shout about how everything before this was crap. A lot of effort in software is destroyed for not much reason but resume, ego and opinions. I like software that runs without issues for 10-20 years; it keeps me sane. As with all this tech, not many companies need it but their internal 'gurus' push it along anyway, wasting money and time and speeding up the rot.
Also: software (at least the kind of software that results from hype-driven development) is much less dangerous, and there's a lot less that can go wrong if it fails. If you're working on a new kind of process for making some food ingredient you can easily do it wrong and poison someone, so people are naturally more cautious about trying new things. It's the same with software for e.g. nuclear reactors or plane controls, which is why we don't see them being written with microservices and Typescript.
I think you left out the main factor which is that programmers are people and people subconsciously copy what other people are doing. We are herd animals. Also most people are not really good at judging novel things and are mainly influenced (subconsciously) by surface appearance and popularity.
I fully agree with that. I just focused on the more exaggerated ways this behavior shows up because programmers are the first tribe to really become fully online and interacting.
For software people it is sometimes easy to forget that in most other professional domains the exchange of real information, detailed opinions etc. is still much more siloed and inaccessible. It does not happen out in the open. This has a dampening effect, as copycat behavior spreads much more slowly.
It's all about consideration for the audience. Those most lured by hype tend to be those least qualified for, and least concerned with, doing the work to validate such hype. That means you have a ready audience of low-hanging fruit ready to consume shiny new things if the selling point is easiness.
When improving things, leaders (actual leaders who know how to lead) invest in process. Individual contributors looking to improve things the easy way invest in product hype. The difference is striking.
It's easy to just mark XML off as one of the many fads in tech given how many there are.
But I think it's much more worthwhile to dive into the particulars of why it failed. While they're all fads, some of them share commonalities that, in hindsight, I think we can say were major catalysts for their downfall.
With XML, it failed because something like JSON was much simpler. Time and time again I see people saying that json+comments would be the ideal format. Time and time again we see that people prefer simple solutions that they can jump into easily. Maybe certain problems don't require solutions with those traits, but anything involving sharing data among a diverse group of consumers does.
The XML ecosystem, with things like XSLT, XQuery, etc., was elegant, but also overengineered and clunky. I would even say that markup itself, with closing tags, was more annoying than just closing brackets.
Software is still so young... I think it's good that we tried and learned something from it in the short term. Maybe one day we'll get a proper standard with all the right data types that becomes the de facto solution people learn even if it's more complicated.
I still can't believe that we (as a profession) kinda traded away schemas, namespaces, comments, sanity and well defined dates basically because some kids were too cool to write out closing tags. My early 20s teammate now closes blocks with
} //for
in his JS. That's even worse than XML!
It's incredible to look back at the arguments against XML and XHTML and realise just how poorly thought out they were.
Now, we've wasted the past few decades re-implementing everything that XML provided, but worse, and will spend the next few decades doing that all over again.
99% of the arguments in this industry are straw men or false equivalencies backed by nothing but assertions and gut feel. There is little engineering in the field.
Personally, I just miss dates. Even if I don't really like not having comments, it's probably for the best; it forces software to have proper documentation.
That's all fancy fluff. It's just optional benefit, we don't need all that in most cases. So if we add the flaws of XML into the evaluation, it naturally loses in most use cases.
JSON has schemas. Namespaces are simple: just add something like "namespace": "mynamespace" if you really need it. Comments are not really needed for machine-readable formats, and many JSON parsers allow comments if you need them for configs, so that's hardly an issue. Well-defined dates are strings in ISO 8601 format. It's not standardized with JSON, but it's not an issue I encounter in practice.
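A small sketch of those string conventions with the Python jsonschema library (the payload fields are invented); note that "format" assertions such as date-time are only checked when a FormatChecker is passed, and depending on the jsonschema version may also need an extra format plugin installed.

import jsonschema

# "namespace" and the ISO 8601 date are just string conventions layered on JSON
doc = {"namespace": "billing", "created": "2021-03-04T12:30:00Z", "amount": 10}

schema = {
    "type": "object",
    "required": ["namespace", "created", "amount"],
    "properties": {
        "namespace": {"type": "string"},
        # only enforced when a FormatChecker is supplied; otherwise annotation-only
        "created": {"type": "string", "format": "date-time"},
        "amount": {"type": "number"},
    },
}

jsonschema.validate(doc, schema, format_checker=jsonschema.FormatChecker())
print("valid")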
JSON itself doesn't have support for schemas or comments as far as I'm aware. Both are supported by individual tools building on top of JSON rather than those features being part of the JSON spec.
Whether that distinction matters to someone is up to them, but that difference is what many comments in this thread are pointing out with regard to us having spent a decade rebuilding many features of XML on top of JSON.
XML does not have support for schemas either. XML Schema is a separate spec (and actually not the only one; there were a few competing ones).
I'd use JSON Schema over XML Schema any day. And I've read and written many XML schemas and still do sometimes. JSON Schema was actually built by sane people. If that required a decade, so be it.
> But I think it's much more worthwhile to dive into the particulars of why it failed
If you’re going to deep dive, define your terms: what did it fail at?
At being the future? Kind of a straw man, even if some people did argue that.
As a document interchange format? Between RSS — including podcasts! — Microsoft Office, and LibreOffice it seems to have done reasonably well.
As a data interchange format? Undoubtedly JSON is now preferred for this, but keep in mind that for several crucial years, as the “X” in “AJAX,” it paved the way for modern web apps via Google Maps, Gmail, and before that Outlook for the Web (literally the original AJAX app).
I think it's not really interesting to examine how and where JSON displaced XML, given this is not hard to grasp and well covered. Much more interesting to extract lessons that also apply to JSON and predict what might displace it, and where and how. For example, it's interesting that JSON is simpler than XML. Might it have discarded something valuable? The "X" is for extensibility - if we could extend JSON, which we can't, how would we? First-class date-times might be nice, mmm? Or the ability to add arbitrary other self-describing data types? UUIDs?
To stay. Yes, it's still used. But which new projects are still using XML? Most usage now consists of remnants from the golden age. The others are old companies who built their culture with XML and continue to use it, but that's it. XML is not growing anymore, and is slowly dying. It's quite like RSS: not dead yet, but not prospering either.
> This was the good-old-days when critical features were crammed in just days before a release
Fascinating article, but really? I would have thought "cramming in" critical features days before release was more common these days, esp. if you're releasing/deploying to production on a daily (or even multiple-times-a-day) basis.
> esp. if you're releasing/deploying to production on a daily (or even multiple-times-a-day) basis.
What does releasing regularly have to do with cramming stuff in? We release when stuff is ready. That could be multiple times a day or once a week.
I think of people with set release dates as the most likely candidates of "cramming stuff in" because they fear they will miss the release and it might be a month (or worse) before the next release
Hence the quotes around "cramming" - it's now standard practice for the commits for a new feature to be merged into the mainline days before building a release. I can't say I ever remember doing that 20 years ago - there was always a code freeze period reserved for critical bug fixes.
To offer a different perspective: I find XML simpler in that it tends to correspond more closely to most arbitrary data models. From ADTs to C structures, there are product types and sum types to encode, along with more or less arbitrary basic types, which have text representation. With XML you get a straightforward way to encode all those, while in JSON there is no single agreed upon method to encode sum types, but you get something like string-indexed arrays (or hash tables) instead, which are normally implemented on top, using simpler types (and are not restricted to strings). And a few arbitrary built-in types, but you have to use strings for others anyway. And no built-in extensibility, so ad hoc hacks are used when it is needed. It would be particularly awkward to use for documents, too.
I preferred JSON initially myself, since it seemed a little less verbose, I did not care about extensibility, did not consider its usage for documents, validation, did not care about using it for different data models, and it just seemed simple (FSVO) and straightforward. But then more of XML made sense. It is not perfect for everything, either, but the decisions behind it seem more justified to me than those behind JSON (though JSON still fits JS, at least).
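To illustrate the sum-type point with a toy example (the Shape type and both conventions shown are only illustrative): JSON forces you to pick one of several ad hoc encodings, while in XML the element name itself acts as the tag.

# Shape = Circle(radius) | Rect(width, height)

# JSON convention 1: a "type tag" field
circle_tagged = {"type": "circle", "radius": 2.0}
rect_tagged = {"type": "rect", "width": 3.0, "height": 4.0}

# JSON convention 2: a single wrapper key naming the variant
circle_wrapped = {"circle": {"radius": 2.0}}

# In XML the element name distinguishes the variant:
#   <circle radius="2.0"/>
#   <rect width="3.0" height="4.0"/>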
I disagree. XML isn't that complex. I think it failed because:
* It's overly verbose. All those closing tags, ugh.
* The data model is not actually what people want most of the time. People are transferring objects, not documents. The fact that XML doesn't have a proper way to represent maps, and you have the redundancy of attributes, inline text, etc... There's a huge mismatch between the data model of XML and the data model that people usually want.
* SAX is obviously a shit way to do parsing. There were DOM parsers and pull parsers but for some reason SAX was stupidly common despite being stupidly awkward.
> With XML, it failed because something like JSON was much simpler.
I remember when XML was pretty new and I used this new-fangled technology called XML-RPC. XML-RPC was amazing and I was using it to connect desktop applications to web applications. If you go look it up, you'll notice that it bears a striking resemblance to JSON.
But what technology actually took off for RPC in XML? SOAP. And SOAP is a nightmare of complexity and hardly works right the first time between heterogeneous systems.
It's funny how much people want to add complexity to the JSON ecosystems with the same over-engineering that killed XML in the same space. Luckily the design of JSON is such that it resists that kind of complexity and also because XML exists it takes a bit of that load.
XML can be simpler. In C# you could generate an XSD from a good XML example, then fix it up and generate typed, nested C# code. You now have a builder and validator for the stuff you send across the wire.
You can sort of do the same with JSON, to be fair, in theory, and there are probably tools, but it wouldn't be as tight, and for the "XSD" part I'm not sure there is a single spec to go with it.
We use XSDs to generate JSON Schemas. The schema converter also generates code to convert JSON files to XML, so we can use our existing XML infra while we migrate.
As you say both can be used to validate input and to generate typed code.
However I find the XSD design tools better, so I expect we'll be doing the XSD first approach even after all our integrations are JSON native unless something radical changes there.
IMO XML failed because of the mismatch between programming language structures and the storage format. JSON is perfect because it maps 1:1 to arrays and objects. XML does not. There were whole ORM projects just to map XML to data structures.
That's a fundamental issue and replacing XML was not that hard.
My prediction is that SQL will fail too. There are infinite attempts to dethrone it. And the reason is the same: tabular data does not map well to our data structures.
Dethroning SQL is much harder, though. It'll take decades.
We introduced fundamental roadblocks into IT, and the river of time will either break those roadblocks or smooth them into shapes hardly recognizable (SQL + JSON, for example).
> My prediction is that SQL will fail too. There are infinite attempts to dethrone it. And the reason is the same: tabular data does not map well to our data structures.
I'm deeply skeptical.
The relational data model is fundamentally about correctly representing relationships between data. Whereas data structures are about efficiently using computer hardware to perform computation on that data.
I don't see either going away, because they're driven by fundamental needs.
SQL is a pretty decent language. Sure, it could be improved and made more intuitive and maybe even simpler. Relational data maps directly to objects if you have halfway decent tooling. SQL is almost 50 years old now - it'll be here for a long time because it is actually a useful paradigm that has not been outdone by other paradigms (so far).
> My prediction is that SQL will fail too...And the reason is the same: tabular data does not map well to our data structures.
The whole point of SQL is that relations and relational data are not data structures. They are higher level concepts and data structures are implementation details. The reason it has survived 4+ decades is because it isn't intrinsically tied to those implementation details and instead can be reimplemented over and over and over.
> There are infinite attempts to dethrone it.
Viewed another way, there are infinite attempts to reinvent it, all of which keep failing, because they aren't as good.
To dethrone SQL, you won't cut it with just average techbro hype. You need the top award of SIGMOD renamed in honor of your paradigm's inventor levels of hype.
> With XML, it failed because something like JSON was much simpler.
No. XML was introduced as a simplified SGML subset/profile to become the base for new markup vocabularies on the web; both SVG and MathML were specified using XML (and later integrated into HTML 5). The intent was also to replace HTML by XHTML.
The idea to introduce service payloads as custom, non-UI XML and then transform those payloads into XHTML was induced by XML, but distinct from it.
However, W3C went crazy with SOAP and XML (and RDF), attempting to establish entirely new and unproven paradigms such as XForms with XHTML2 rather than merely simplifying syntax, which was bound to fail, and laid ground for browser vendors to evolve HTML outside W3C.
Then XML heads somehow felt insulted, refused to change course or learn new things. Most even didn't realize XML is just an SGML subset, and that everything that was possible using XML is by definition also possible using SGML (plus handling HTML and markdown and a couple other things making SGML more complex compared to XML). To this day, we're hearing XML heads dogmatically advertising their overly strict and verbose red-headed stepchild of a markup language, and wondering why nobody wanted to bow to XML. The article is about this kind of people who want to use their tool under all circumstances, project requirements be damned.
Stuff like GRPC / GQL feels cool, but JSON still feels "closer to the wire" and easier to work with using normal text-based tools / curl / etc. The internet felt like magic growing up once I learned about SMTP and HTTP using telnet and played with quite a few serial terminal devices. Now with many APIs behind OAuth or worse, it's getting much less trivial to hack around with in simple ways.
> Stuff like GRPC / GQL feels cool, but JSON still feels "closer to the wire"
This is like comparing a Mustang engine to a racetrack. They're not even equivalent technologies.
GRPC and GraphQL would equate to SOAP, REST or some other interface protocol. JSON equates to protobuf, XML or another transport format. Of those three options, I have no idea how JSON would be considered the "closest to the wire". Protobuf is definitely the lowest level, while XML and JSON are about equivalent in abstraction.
And specifically XHTML failed for two reasons: (0) Internet Explorer did not support it, and (1) there was no good migration path out of tooling that generated HTML by unrigorous string concatenation.
So instead WHATWG created HTML5, which bastardized well-designed (if clunky to use) XML features like namespaces, paving the way for vulnerabilities like CVE-2020-26870.
XML is very much misunderstood. The hype that surrounded it was for a good reason, because XML is somewhat unique as a concept. There was nothing like it and still isn’t. It is not a data format or something like that. It is a notation tool. Normally you invent some syntax and parse it to get what is called “abstract syntax tree” (AST). With XML you work directly with an AST. Parsing from text is convenient because you can get a rather concise and elegant result. XML is normally way more verbose, although not that much, if well done. Yet the expressiveness is exactly the same.
Notation is what you need when you manually compose some data for machine processing. XML as a data interchange format is a misuse. Yet XML as a data description format is what it is very good at. The difference is that data interchange goes from one machine to another, but data description is what goes from a human to a machine. Data input, in other words. Complex data input. Language-like data input.
This is why XML is widely used, for example, in user interface frameworks where you need to describe very elaborate data structures. Markup is another obvious example; here you also have complex data structures tied to a piece of text. Yet markup is just a special case.
So if we are to add something to this article’s sentiment it could be an observation that we are also prone to misunderstanding things and jumping too quickly to conclusions.
> XML is somewhat unique as a concept. There was nothing like it and still isn’t.
That's just incorrect. XML is a proper SGML subset, nothing more. Why do intelligent people like you come here to lecture about markup languages but don't even bother to read the XML specification which clearly states (as in chapter 1, sentence 1):
> The Extensible Markup Language (XML) is a subset of SGML that is completely described in this document. Its goal is to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML.
Indeed it is. But being simplified and separated from SGML it somehow revealed a clearer idea of what a notation is. SGML is a markup language and as far as I remember it (not too well), it never was disassociated from the text; maybe it was possible, but not widespread. XML without text content is perfectly usable and is even more convenient.
What you're saying is that you were first introduced to these concepts when XML came up. Totally understandable given the hype, but I can only suggest to look deeper, where things become interesting. For example, have you ever wondered about the reason for XML's obvious, excessive redundancy requiring matching end-element tags to be specified in full, when "</>" (as is possible with SGML) is sufficient given that XML doesn't have overlapping markup (as in SGML CONCUR)?
As to use of markup for non-text, I have to disagree. Markup is precisely for rich text. For representing discrete data there are much simpler and more compact alternatives not involving attributes-vs-elements decisions and hierarchical addressing of nodes which don't make sense where there isn't a concept of "rendering" a document to a user.
I precisely advocate using non-text XML as well. It is not to represent data per se, but to represent it in a human-friendly way. To represent data for a computer the best way would be to serialize in some reasonable manner a subset of a database:
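A minimal sketch of what such a serialized database subset might look like, assuming plain tables of records is all that's meant here (the table and column names are invented for illustration):

db_subset = {
    "color": [
        {"id": 1, "name": "green"},
        {"id": 2, "name": "yellow"},
    ],
    "leaf": [
        {"id": 1, "color_id": 1},
        {"id": 2, "color_id": 2},
    ],
}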
That is all. Truly, a very good form, suitable for anything, not biased in any way. But it would not be easy for a human to author such a subset without quickly getting lost in tables and records. Yet somehow humans can write gigantic volumes of code. So it is not that it is difficult for us to type or something. What makes writing code easy compared to describing a set of tables?
Writing code follows some structure, the grammar. And the grammar ends up in an AST, which thus exists there all the time and is the only thing that gets to the computer; text is only the medium. Let's render the AST as XML and see what remains if we drop all text. I see the following: 1) element type and attributes, 2) names and references, 3) an element encloses other elements, 4) elements come one after another. The first two are present in the database form as well, but the last two are not. I think composition and ordering are precisely the tools that let us implicitly embed some information and make it possible to author large amounts of structured data.
(Textless XML by definition has no elements-vs-attributes problem. All goes into attributes except things that have to go into elements. Usually it is not hard to decide: they can repeat, or depend on order, or appear elsewhere; very similar to database normalization.)
You could say the same about JSON. It is arguably closer to how an AST is represented in software, as it does not possess the two-dimensional notation that XML does.
The unique thing about XML is that you can have both children and attributes. My guess is that this is to model OOP-based systems: attributes are for the constructor of a certain class, while the children represent dependency injections.
This is IMHO the weak point of XML: it gives too many levers. When we don't know how to assign meaning to the levers, we arrive at garbage like the example from another comment: <ssn ssn="123"/><id>123</id>
"My guess it that this is to model OOP-based systems:"
No, SGML, which became XML, is a separate, independent track from OOP. They both set themselves up in concrete before they really encountered each other, and the contact was a mess. This is part of why the DOM, especially the first couple of iterations, is so messy (reading DOM 1 is almost hilarious in hindsight, if you know what you're reading for [1])... it doesn't help that the DOM also smashed into yet another tech line, the dynamically-typed scripting language, face first. The hasty three-way committee-arranged shotgun marriage in the late 1990s between these techs produced a fairly dysfunctional family.
[1]: https://www.w3.org/TR/REC-DOM-Level-1/level-one-core.html Notice this API, primarily used by Javascript, is specified in Java, complete with specifications of checked exceptions. Total clusterfuck. And a prime example of just how hard Java was inorganically jammed down the programming community's throats (which I say without regard to your current opinion of the language; it certainly did grow up, but the initial push was completely inorganic); this standard is from 1998, with Java 1.0 released in 1996. Java 1.2 (or 2.0, depending on how you look at it) was released in 1998, with such notable features as... the first Collections support in the library. This was not a language mature enough to be writing specifications in yet, even ignoring that you shouldn't be writing specifications like this in a specific language anyhow, especially since it was obvious and known that it was going to be cross-language.
I don't agree that the horrible XML example is the result of that.
There's a simple semantic difference between attributes and elements in XML:
* there's only one instance per attribute
* attributes are atomic (i.e. have no children)
* attributes are order-independent
So whenever your data is atomic, the order in which it appears doesn't matter, and you only want one instance (per element), you'd use an attribute. The reasoning behind this is to enforce basic semantic rules without having to resort to complex schemas like XSD or RELAX NG.
It has nothing to do with OOP: it's just a very basic tool for enforcing basic constraints. As with every tool, it can be misused or ignored. The XML example is the result of incompetence and/or a lack of coherence in data modelling and processing (judging from the microservices mentioned), not a weakness of XML as such, IMO.
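A minimal sketch of that point, assuming nothing beyond Python's standard library (the element and attribute names are made up): mere well-formedness already enforces the one-instance-per-attribute rule, while child elements stay free to repeat and to be ordered.

import xml.etree.ElementTree as ET

# One instance per attribute: a duplicate attribute is rejected by any
# conforming parser, no XSD or RELAX NG required.
try:
    ET.fromstring('<person ssn="123" ssn="456"/>')
except ET.ParseError as err:
    print("rejected:", err)          # duplicate attribute

# Child elements carry no such constraint: repetition and ordering are allowed.
doc = ET.fromstring('<person><phone>111</phone><phone>222</phone></person>')
print([p.text for p in doc.findall("phone")])   # ['111', '222']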
> My guess is that this is to model OOP-based systems: Attributes are for the constructor of a certain class while the children represent dependency injections.
Can't say I've ever thought of it that way - attributes just seemed like a simpler syntax for the common case of basic properties that were sensibly represented as strings (i.e. single, literal values). I don't think it would have made much fundamental difference if they were never part of the spec and you had to use sub-elements to define such properties, except perhaps for the fact that an attribute of a given name can only be declared once (which is an interesting difference between JSON and XML - XML lacks any syntax for declaring arrays, so you must be able to declare multiple sub-elements with the same name).
as can be seen in the comments to my comment, there are quite a number of ideas on the ontology of the XML format. when a single screen's worth of text on my phone can hold at least 3 strong convictions on how to use a format, then it is doomed to fail.
Just as an anecdote: about two years ago I had a discussion about marshalling data structures into XML and indeed, two people managed to come up with three different schemes [0]. Of course that spells doom for something that's supposed to be used as a data-exchange format.
Children and attributes are different. Attributes are like fields to a record. Parent/child is a relation between records.
There are clear criteria for choosing attribute vs text representation. Text is for humans; all the rest is for the computer. If we see something like this:
<ssn>123</ssn>
this means the text is precious and we cannot alter it, only attach some records ('ssn') to some character ranges. And in most cases these records need more fields, so we add attributes:
... <date date="2001-01-01">the first day of that year</date> ...
This is the usual case in markup: we need the computer to do something with the text and use the records as a guide.
But the notational case is different. In the notational case we still need the computer to do something, but not with a particular piece of text. (This is actually the general case, while text-handling is a specific one.) In this case we can command it directly:
<foo>
<bar id="a" />
<baz ref="a" />
</foo>
There is no text in these records, but we still use notational tools: 1) node type, 2) composition, 3) ordering, 4) naming/referencing. In this case we put everything into attributes.
We can have a notational piece with markup parts or a markup part with notational parts, but each has a clear purpose. There is also a third specific case: we want to switch to another notation and in this case we write it as text inside an XML element.
SVG is a good example. All data go into attributes, text inside elements appears only when 1) it is a part of the drawing, or 2) we are switching to another notation:
<!-- notational -->
<svg ...>
<!-- switching to CSS, still notational -->
<style>
...
</style>
<!-- markup -->
<text>... <tspan class="...">...</tspan> ...</text>
<!-- notational again -->
<rect ... />
</svg>
If you need the things that XML has, no, JSON is not simpler. It is more complex. Every attempt to embed XML/HTML into JSON has resulted in something worse than XML/HTML... and they are all different, too, which is bad.
The main problem with XML is that most people don't need what it has. The main problem with XML in the late 1990s and early 2000s is that it was jammed into many places that did not need what it had, and left a bad taste in developers' mouths as a result. It's actually a good solution for its niche, and that niche is large enough that it isn't going anywhere, but it is also still only a niche. In that niche you're crazy to try to jam JSON in; out of that niche you're crazy to use XML. The mythos that XML is useless persists because the latter category is a much larger one.
> So if we are to add something to this article’s sentiment it could be an observation that we are also prone to misunderstanding things and jumping too quickly to conclusions.
I'd add another observation: most of the cynical views expressed in the article are simply the result of "hammer syndrome": hand a person a hammer and everything starts looking like a nail. Overuse of tools and trying to apply them to problems they weren't intended to solve, is a big issue.
Ignoring lessons from the past is another. I love how the author makes it sound as if NodeJS and trying to use the same ecosystem for backend and frontend was something new. "Write Once, Run Everywhere!" was a slogan that predates NodeJS by over a decade :)
Yes, one of XML's killer features is that it can model rich text documents as much as ASTs. There's a reason why HTML never ever became JSON-ML, or why LibreOffice uses XML rather than JSON to save files.
In a way, XML can be seen as a generalization of simpler formats such as markdown (for text) and JSON (for structured data). Yes, I'm oversimplifying it.
mostly agree. not only a notation tool though - you can see some familiarity to Scheme, in that data is also an actionable description, and to literate programming, in that this actionable description is also a human-readable description. together with its tooling XML still has its very own space. XSLT feels quite elegant once you get the hang of it.
In my opinion, one of the worst things was going from servers to serverless for web apps. Vercel (formerly Zeit) made a complete switch from servers to serverless for hosting and Next.js. Everyone jumped in without realizing just how much more complicated serverless architectures are compared to servers.
1. Having your APIs as lambdas now means you can't simply connect to Postgres/MySQL without setting up a dedicated server that serves as a connection pool. So you're now "serverless" but you have to add a server as a connection pool. /facepalm
2. Putting your APIs on the edge means absolutely nothing if your Postgres/MySQL/Redis/3rd party services are still in one location.
3. Serverless edge databases started popping up but they were significantly more expensive (because you have to replicate data everywhere) and more complicated with limited upside.
4. Cold starts on lambda APIs meant that your web app/web site often load slower than a centralized server.
5. You can't reliably share APIs between your web app and mobile app anymore. Previously, a single server could easily be made to serve both your web app and mobile app.
6. Serverless is often more expensive and unpredictable when it comes to billing. Previously, if you paid $15 for a server, you're going to be billed for $15.
7. Serverless is significantly more complicated when you're just trying to build an MVP. There's no need to try to scale like you're Google when you're just trying to create a proof of concept.
8. You're far more likely to be vendor-locked doing serverless.
Serverless should not have been the default option for webapps. Serverless should have been the exotic option for companies that had a special scaling/edge need. Instead, servers should still be the default. The problem is that Next.js is so popular and so intertwined with Vercel that serverless is the default now.
If Vercel/Next.js were honest, they would tell everyone to use serverless only if they're making a static website. Everything else should start out as a server.
Of course, but this is just an instance of TFA's key observation:
> Geeks think they're rational beings, while they're completely influenced by buzz, marketing, and their emotions.
To which I want to add: 1. that geeks young and old also try to pad their resume and e.g. use React under all circumstances, whether it's warranted or not (which seems at least rational from a personal career development PoV); 2. geeks think the web is about them, completely and utterly failing to understand that its entire point is easy self-publishing.
As to Next.js specifically, it obviously doesn't make sense to tie serverless to React. It also doesn't make sense, like at all, to use React for smallish, trivial internal web UIs when the main app is written in another backend language and your team has no JS coding experience; that just invites security nightmares and endless updates, and breaks agility and job rotation for no reason.
As to XML, there's a minor factual misunderstanding here in that DTDs, like XML itself, aren't a genuine invention, but rather a simplification and proper subset of SGML intended for the web, where it has failed (even though it sees plenty of use outside).
But that shouldn't take away from the main point: that you don't go around advertising your fscking format without checking requirements. In this case, that means using XML (or SGML and HTML) for something that isn't document data, or conversely, using e.g. JSON for text documents. It just makes you look like a one-trick pony.
All true (I rented a bare metal server myself for my pet projects), but it's still quite tempting.
- scale to zero, or scale to one (no cold starts)
- not having to manage disk space
- not having to manage quite a bit of security (ssh login, fail2ban, opened ports, unprivileged user)
- in some cases, out of the box - or easier - deploy pipeline
Would you feel more or less comfortable taking vacations while running a server or a serverless platform?
Would you feel more or less comfortable undergoing a security assessment while using a server or a serverless platform?
I quite agree with what you say, but I can't deny there are lots of non-negligible advantages.
> ssh login, fail2ban, opened ports, unprivileged user
All of these things are extremely trivial to configure.
> Would you feel more or less confortable taking vacations while having a server or a server less platform?
Depends on the SLA of my provider. Most VPS providers have 3+9s these days, then the reliability concern becomes my code, same with serverless.
> Would you feel more or less confortable undergoing a security assessment while using a server or a server less platform?
It is a common misconception that serverless platforms are inherently more secure. In reality, they just have different security concerns. In my eyes, it takes a practitioner of equivalent skill to properly harden a fleet of servers vs. correctly handle the eccentricities of IAM et al. in a large cloud app.
Right, I've come to believe that the software development world is led by the nose by snake oil salesmen. Most software developers don't have a range of experience outside of a CS degree, so they easily fall into "this is the new best practice" traps, especially if they look flashy and professional (examples: https://12factor.net, https://machalliance.org)
FWIW, after using serverless technologies in both AWS and GCP for years, I think the cloud vendors have largely fixed most of the issues you're complaining about. Few examples:
1. Amazon provides things like RDS Proxy, which is essentially a hosted version of pgbouncer, and Aurora, essentially a "serverless postgres" - no need to manage your own server for connection pooling.
2. Cloud vendors have largely fixed the cold start times through a variety of means like min scaling of 1, warming requests, and functions that can process many requests simultaneously.
3. I don't get your point about not being able to share an API between web and mobile. I do this all the time.
4. When it comes to vendor lock-in, GCP for example offers Cloud Run, which will essentially just spin up any Docker container you give to it.
Managing your own servers can have a huge cost, and this can be an especially large cost for a small team. For example, does your business have any SOC 2 or other compliance requirements? Being on serverless as opposed to being on, say, your own EC2 instances, means a huge amount of work (e.g. ensuring servers are always patched, ensuring they are configured securely, ensuring you have permissions configured correctly, etc.) can be offloaded to the cloud provider, who is frankly much more likely to do it correctly than a small startup team.
If anything, I think GCP's CloudRun is awesome and the real future of serverless. In my mind it has most of the benefits of something like kubernetes with about 1/50th of the complexity.
The problem is the business agreements that they are putting in place, like with Sitecore, where Vercel/Next.js is now the blessed way to do CMS frontends, with .NET being left behind.
So it is either Vercel/Next.js with the out-of-the-box development experience, or DIY integration with other stacks that are also "supported".
And they aren't the only ones following such hype cycle.
Just want to comment on the first two points as a vercel fan
(1) pooling becomes the solution very early. You get massive speed benefits by keeping a connection open. Hitting connection limits is too easy without it. I feel like a push towards pooling is fine because it has great benefits for scale, speed and reliability. Services like Supabase have connection pooled postgres by default even in their free tier.
(2) Next.js server-side data fetching is an amazing feature that deserves its place in the future of web development (the React team agrees, since they've added server components). Grabbing data before reaching the client is amazing. If you have a centralized DB, all the more reason to grab as much as you can in one trip.
Other than that I can agree. I think the advice should generally be to not ever recommend a framework too early or without including its use case.
1. If you have a server, it's trivial to set up a connection pool. For example, most Postgres clients like Knex or Prisma set up a connection pool for you on your server. There's no need to set up an external connection pool. It's good that Supabase does it for you, but most managed Postgres databases don't.
In fact, it's trivial to set up many things on a server that you simply can't on a serverless function. For example, a simple cron job, a simple persistent queue, or websockets. You have to use 3rd party services for these simple tasks that you could have done on your server.
2. That has nothing to do with serverless or servers.
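Knex and Prisma are Node tools, but the same point holds in any long-running server process. As a hedged sketch in Python, SQLAlchemy keeps an in-process pool with a couple of keyword arguments (the connection URL below is a placeholder):

from sqlalchemy import create_engine, text

# In-process connection pool: no external pooler, no extra server.
engine = create_engine(
    "postgresql+psycopg2://user:password@localhost/appdb",  # placeholder URL
    pool_size=5,        # connections kept open and reused
    max_overflow=10,    # temporary extra connections under load
)

with engine.connect() as conn:
    print(conn.execute(text("select 1")).scalar())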
Your points do nothing to back up your claim that "serverless architectures are much more complicated compared to servers"
1. You still have to do pooling with servers
2. Not a serverless problem
3. Not a serverless problem
4. This is a tradeoff, not complexity
5. Why not? What does serverless have to do with this?
6. It appears you're confusing serverless and FaaS. Regardless, FaaS has incredibly predictable pricing. Previously, if you paid $15 for a server, you were going to be billed for however many servers you had running on average throughout the month.
7. It's not. Your points here are trying to back it up, but have so far not done so.
8. Why would that be? And how does that factor into complexity?
Your claim is flawed for two reasons:
1. Serverless != FaaS. FaaS is serverless, serverless is not FaaS.
2. You've not taken auto-scaling into consideration on the server side.
1. Pooling is trivial to set up with servers. In fact, pretty much all clients like Knex and Prisma do connection pooling by default without an external dedicated server.
2, 3. The promise is that serverless APIs can be deployed on the edge for improved latency. The edge does not matter if your data is centralized. So you're adding complexity without the benefit of speed.
4. One of the most important aspects of any web app is load speed. Sure, you can say that serverless is better for scaling but who cares if your web pages are randomly adding 3-5 seconds to the loading time.
5. Try pointing your mobile app to Vercel's serverless APIs that are built inside Next.js. Makes no sense.
6. The most common way people use serverless for webapps is through Vercel's FaaS or some other similar FaaS. That's what I'm focusing on. Not confused. No, FaaS does not have more predictable pricing than servers.
7. It is. I just outlined why it is. Even if you want a simple cron job or a simple persistent queue, you can't do it with serverless functions.
8. You're building your entire stack on a FaaS like Vercel with Next.js. It's not as easy to switch hosts as with Docker. I don't know if you were doing development during the Zeit V1 days, but one of the main criticisms of Vercel and newer versions of Next.js is how hard they now make it to switch hosts. Being able to switch hosts easily is the default in a Docker setup.
1. Connection pooling can be set up with dedicated servers, but it doesn't mean that all clients like Knex and Prisma do it by default without an external server. Some clients may handle connection pooling internally, while others require external setup.
2, 3. While serverless APIs deployed on the edge can improve latency, it is not accurate to say that the edge doesn't matter if your data is centralized. The edge can still provide benefits such as reduced network hops and improved response times.
4. Serverless architectures can offer advantages for scaling, but it's important to optimize load speed regardless of the architecture used. Serverless doesn't inherently add 3-5 seconds to loading times.
5. It can make sense to use Vercel's serverless APIs built inside Next.js for mobile apps, depending on the specific requirements and architecture of the application.
6. FaaS pricing can vary based on usage and provider, but it doesn't necessarily have more predictable pricing than using dedicated servers. Both options have their own pricing models and considerations.
7. While serverless functions are useful for many scenarios, there are certain tasks like cron jobs or persistent queues that may not be easily achievable with serverless functions alone. However, there are workarounds and alternative solutions available.
8. Building an entire stack on a FaaS like Vercel with Next.js may have implications for switching hosts compared to a Docker setup. Docker provides portability and flexibility, making it easier to switch hosts, whereas FaaS platforms may have specific dependencies or configurations that require adjustment when migrating.
Yea, you can. It's just not as simple. In addition, lambdas have time limits so many longer running cron jobs won't work. How about web sockets? Simple queue? Simple cache?
There are many things that you can quickly and easily do with a server that you'd have to reach for 3rd party services to do with serverless. That adds complexity and cost.
This is why I advocate for servers when you're trying to build a product for the first time and serverless if you have a special need.
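To make the "quick and easy on a long-running server" point concrete, here is a rough sketch using only Python's standard library: an in-process scheduler, work queue, and cache, none of which map cleanly onto a short-lived function. The job itself is a placeholder.

import functools, queue, threading

jobs = queue.Queue()                 # simple in-memory work queue

@functools.lru_cache(maxsize=1024)   # simple in-process cache
def expensive_lookup(key: str) -> str:
    return key.upper()               # placeholder for real work

def nightly_report():                # simple "cron": reschedules itself daily
    jobs.put("report")
    threading.Timer(24 * 3600, nightly_report).start()

def worker():
    while True:
        job = jobs.get()
        print("running", job, expensive_lookup("some key"))
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()
nightly_report()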
That's true. On queues, websockets, etc, it might be worth checking out Cloudflare workers durable objects[0]. I've never had to use them, but I wish I had the chance, as they look like a really interesting building block.
Thanks to chatGPT there's an infinite supply of Grug wisdom on this specific topic:
Grug see tribe use XML for talk between cave wall. But XML talk too loud. XML say <message>Hello</message>. Why not just say Hello? JSON talk quiet. JSON just say "message": "Hello". Grug like quiet talk.
Grug also see XML not consistent. Sometimes XML use attribute, sometimes use element. Make Grug confused. JSON always use key-value. Grug like consistency.
Grug think XML like big, heavy rock. Hard to carry, hard to use. JSON like small, sharp tool. Easy to carry, easy to use. Grug choose JSON.
It's good in the shallow style, but that's not exactly why Grug would prefer JSON. I think mostly it's because fewer things can go wrong, and parsers are widely available and always work. Grug would also admit that JSON lacks features (like timestamps), but the workarounds (a number as a Unix timestamp) work just fine.
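The workaround mentioned above, spelled out in a few lines of Python (the values are arbitrary):

import json, datetime

# Encode a datetime as Unix epoch seconds, the usual JSON workaround.
when = datetime.datetime(2023, 6, 1, tzinfo=datetime.timezone.utc)
wire = json.dumps({"message": "Hello", "at": int(when.timestamp())})
print(wire)   # {"message": "Hello", "at": 1685577600}

# Decode it back on the other side.
back = json.loads(wire)
print(datetime.datetime.fromtimestamp(back["at"], tz=datetime.timezone.utc))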
> But above all, I learned that geeks think they are rational beings, while they are completely influenced by buzz, marketing, and their emotions. Even more so than the average person, because they believe they are less susceptible to it than normies, so they have a blind spot.
This is so spot on, so much so that 9 hours after publication, it looks like I'm the first to pick up on it and start a thread. The author wrote the essay to illustrate that point; what I see is strong denial from the commenters.
I absolutely love this rant. It's why, as application architects, we get "it depends" drilled into every decision we make.
But I’d also add one of my own observations.
I believe U.S. based companies tend to look for scapegoats when building software, so they lean into packages and products.
Outside the U.S., you see much more raw architecture and principle-based design, believing that if you craft a solution based on the actual needs of the business, your outcomes will be significantly better.
In my previous company we had - back in the day - a newly hired junior developer that fully subscribed to that particular XML hype.
We had our uses for XML - we liked it especially for API endpoints and import/export files we exchanged with third party companies. Reason being, you could create a Schema Definition, send it to the other company, and tell them not to bother you, until their data validated against that XSD. We used XML for stuff like that extensively.
But that one junior was like "You are using databases? How silly of you! Just use an XML file and x-path. There's nothing a database can do, that XML can't! You really should switch over your production frontend to xml immediately!"
We gave him a little side project and allowed him to write that using xml instead of a database, which he did. When his results turned out to be pretty much unusable, and did not convince us to switch everything to xml, he left the company to find a more "modern" workplace...
To this day I'm not sure if he actually truly believed that xml/xpath would soon replace all databases - or if he just didn't want to learn SQL (and hide the fact that he was pretty bad at it).
"At this stage, so much time and money were obliterated the cloud felt like a savior: they will do all that for you, for a fee."
I think "the cloud" and its cohort of friends did a lot to increase the mindshare of the gratuitously complicated ways to do things the article reviews.
For example, there is scale-shaming: if you don't want to invest in allowing your todo list or your ERP for small businesses to scale to infinite users, you are 1) nobody on a business plane, not even ambitious; 2) ignorant, lazy and behind the times on a technical plane.
Even the AJAX technology was initially sold as "Asynchronous JavaScript and XML". Today, it's most common to send AJAX requests from frontends, and most of them have nothing to do with XML data, but funny how that acronym has stuck around to this day!
The original XMLHttpRequest introduced in Microsoft's IE was intended to transport XML messages from a backend to a frontend, since JSON didn't exist at the time. Realistically, originally it was just used to directly inject blobs of html into the page.
Ajax just referred to utilizing that feature along with JavaScript's async functionality to create dynamic webpages. It wasn't really a technology, and I've very rarely heard the term post-2010 (same as how "DHTML" died around 2005).
I had the equivalent of XMLHttpRequest LONG before XMLHttpRequest existed, back in the IE4 days. I did it by creating a tiny iframe with the target URL and then retrieving its contents. Never mind that I could also read users' hard drives, but I had AJAX long before AJAX was launched.
Unfortunately there was a period of time when Microsoft patched the security issues with iframes and that was no longer possible, but it was fun for a while to have a taste of the future.
A product I developed in the early 2000s used that technique for a dynamic tree control. That product wasn't retired until 3-4 years ago. It still worked as intended in modern browsers, but you could, in theory, fire up Netscape 4.0 in 2019 and that dynamic tree would still work.
iframes didn't become common across browsers much earlier than XMLHttpRequest from what I recall - maybe a year or so? But I certainly remember playing similar tricks to get smoother page updates, in some cases even using actual FRAME elements (are there any pages at all these days still using those? They appear to be deprecated anyway.)
XML underpins of a lot of standards that are ubiquitous in their areas, eg, RSS, XMPP, GPX/TCX. Tons of government APIs still use XML. In the EU, in the financial sector at least, pretty much all regulatory reporting uses XML. And when the regulators decide to make some of that data available to the public, it typically uses XML as well. Companies in many countries are required to tag their financial statements using XBRL, based on XML.
XML has not failed. It has just achieved the status, coveted amongst serious technologies, of being boring.
> The cloud is often just as complicated as running things yourself, and it's usually ridiculously more expensive.
This was obvious to anyone willing to do the maths from the beginning. One big reason for the success of the cloud that no one talks about is that developers often have a disdainful view of the IT department and the operations people. Pushing DevOps let them go around those people and get the instant gratification of all the computing resources they want, when they want them, without going through the IT department and the operations people.
Ironically, some of those developers are starting to get sick of managing the operations and want to just go back to focusing on development. Thus, the pendulum swings back.
I really love this review of the past 20 years. It gets to the heart of the hype/FOMO cycle that's driven so many of these stacks to short-lived superstardom, and then quickly into irreversible technical debt. I have never really understood this. My favorite comment I ever got on a coding board when I was debating the pros and cons of building my next project in the trendy platform of the day was
I wouldn't call XML entirely a hype or FOMO, it has become part of what we call the modern web even today.
1. The very languages that websites are written in today (HTML5/XHTML) are nothing but XML dialects.
2. React's JSX language, which many sites use today, is also an XML dialect.
3. RSS and ATOM standards used to publish your blog or website feeds are XML standards.
4. JSON replaced XML only as an efficient transport medium, it's not a substitute for XML entirely. XML nodes have attributes which is an additional layer in describing your data. As your data structure gets more and more complex, XML becomes a better way of defining it.
By contrast, many JS frameworks which came after XML like backbone, knockout, angular.js (1.x), etc. are eating dust today and will probably never see the light of day again.
RSS/Atom feels like a case of the QWERTY keyboard. https://www.jsonfeed.org/ would be a better choice, but only a few readers implement it, and a lot of frameworks have RSS/Atom by default with a click.
As you explain in (4), XML makes sense for complex data, but feeds are basically in their final version.
I really find it interesting that some people consider working with xml such a pain that they develop a whole new standard with the sole raison d'être being "It's not xml"
I wasn't speaking specifically about XML. If anything, XML is a poor metaphor for the 2010s hype cycle, as it was a much broader early vision to unify storage and web services than what existed at the time, and garnered a lot of industry backing as a potential standard. The fact that we ended up with JSON as a standard really seems like an historical anomaly. But who cares, I used to deal with SOAP, and AMF (which passed actual types, sometimes)... I'm just happy there's a standard, even if it sort of sucks. XML was bound to be a bad standard because it was too eXtensible, but those were the early days of the web.
On the other hand, JSX is a fucking abomination on its face. All it does is flip the ugly paradigm of writing inline javascript calls inside your HTML, so now you write your HTML inside JS. It will die in a few years, and so will React just like Angular, because tightly coupling logic to display is nifty but it always exponentially increases technical debt.
I wouldn't call JSX an XML dialect. It's JS with the ability to use HTML tags (or things similar to HTML tags, given the class → className rename) as rvalues. The JSX HTML tags aren't valid XML, e.g. because of braces (<a href={url}>). Even if you don't use any attributes, an XML parser won't do anything useful to a JSX document.
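A quick check of that claim, using only Python's standard library:

import xml.etree.ElementTree as ET

# JSX-style unquoted attribute values are not well-formed XML.
try:
    ET.fromstring('<a href={url}>link</a>')
except ET.ParseError as err:
    print("not well-formed:", err)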
While there are much better parsers by now XML has some major problems (in no specific order):
- it's quite complicated with a lot of niche issues many parsers don't handle nicely
- it has serious issues when it comes to semantic vs. non-semantic white space; this happens not to matter for some applications but matters a lot for others (a small sketch follows this comment)
- it sits in an uncanny valley between being good for human writing and good for machine consumption; for most use-cases there are objectively better formats
- people ran into endless problems with it. Some were its fault, many more were not but still got associated with it. This gave it a negative image outlasting the hype cycles.
- it has to compete with JSON, against which it loses by default in many use-cases, whether or not it's the better choice
- compared to some alternatives there are often way too many ways to encode the same thing; this is also an issue e.g. for JSON, but way, way worse for XML
- non-US-ASCII string encoding... it's old enough that there's a bunch of legacy messiness; often you can ignore it, but not always
Now, there are use-cases where XML was and is used successfully and I don't see that changing, but a resurgence of XML in other areas would IMHO be a failure of the IT industry to learn from past mistakes. For data serialization (even if human readable) you want native support for maps and fully encoded strings (e.g. JSON); for configuration you usually want something simpler (sadly YAML fails this due to some subtle issues). For huge datasets you probably want something more compact than XML. Still, there are some good use-cases for XML: data that is mostly machine-processed but not exclusively, neither tiny nor huge in volume, needing a lot of annotations, where changing schemas of "cold stored" files must be coordinated between different companies, and which mainly lives in files rather than between live, communicating servers. I.e. UI encodings without a custom language, some but not all cases of scientific data, dumps of complicated configurations normally never written by humans but sometimes inspected by them.
But please never again tightly couple it with security schemas; especially the way strings are handled in XML makes it a terrible choice for such use cases. And it's also not a great choice for any live communication between programs.
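On the whitespace point above, a small sketch (standard-library Python) of how easily insignificant indentation becomes data:

import xml.etree.ElementTree as ET

compact = ET.fromstring("<doc><item>a</item></doc>")
pretty = ET.fromstring("<doc>\n  <item>a</item>\n</doc>")

print(repr(compact.text))   # None  -> no text node before <item>
print(repr(pretty.text))    # '\n  ' -> the pretty-printing is now content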
For 39. replace "launch vehicle" with "technology".
In any case, after over a decade of "web development" (as it was still called back when I started out), my position is that to an extent we need this sort of circus, because it's the only way for businesses to invest in moving this field forward.
Take for example the titular XML: largely hot garbage in applications where we learned to use JSON now, pretty damn solid as a text representation of file formats, 5024 pages of OOXML's specification notwithstanding. The author points that out as well.
If you want less of that go work in a financial institution, where you'll find unironically rock-solid Java 8(or perhaps even 11 nowadays) and associated frameworks.
> So we watched beginners put their data with no schema, no consistency, and broken validation in a big bag of blobs. The projects fail in mass.
source? although I was also a mongo hater at the time it was being hyped, I've yet to see anything concrete that would show this technology choice made a company more likely to fail.
over time, I've actually come to believe that tech choice is one of the least important decisions in terms of impact on a company's market success. which makes me not care so much about fads either way.
I worked on two different projects that were going fine until someone decided we needed Mongo. The bugs piled up and the projects were dying when I left
Every non-startup wastes incredible amounts on fads and other inefficiencies while still lasting perfectly fine. It's just business as usual. I could probably save my current largest clients many millions/year on dev if the CTO (& his tech leads) wasn't such a fad guy; he basically pops on Twitter every morning and starts sending over 'new stuff the team should look at'. There are 1000s of applications in that company and there is an incredible rot-fighting effort because of the use of the latest and the greatest frameworks. But millions/year is chump-change for this company so it's not happening any time soon. He does deliver, so the perception is that he gets things done; that entire teams are busy just updating 1 year old code dependencies for 0 business benefit is taken for granted.
If it's chump change, then it isn't really that much waste. Majority of the funds are going to proven successful methods, and the chump change goes on future bets on what might take off.
I don't see anything wrong with it. This is basically how evolution works as well. Take 99% of a proven thing, randomize 1% and then either keep the change or discard it later as not useful.
A lot of things which are now proven successful ideas were once indistinguishable from fads.
I absolutely get the sentiment about things breaking every 2 months and how hype dominated where cool heads should have prevailed… Yet these tools allowed me to build great things over the last ten years. Only two projects I started professionally were never finished, and one was only because my son was born and I decided to be a stay at home parent. I don’t have this problem of unspeakable dead projects.
For all of the warts, this crazy ecosystem has been a productivity power house for my employers. React native is absolutely awful to build things with, but it worked. Unified code from server to web client to mobile client actually worked, and it worked well. That’s crazy.
So on one hand I get it. On the other, I feel like we’ve forgotten how much better things have gotten. Building for the web is leaps and bounds ahead of where it was, but it took a lot to get here. Yeah there are still issues and no XML wasn’t the future, but I think it was worth it.
I’m probably lucky too though, because I accidentally picked technologies with relatively good staying power. I never did the Angular 2 rewrite, for example. I was deep in React and redux and stayed there for years. I can see this being a lot worse if you didn’t get to transfer as many skills over the years.
If you want a constructive take on the problem this rant is aimed at (I'm rant happy too, so don't take that description pejoratively), read "No silver bullet" by Fred Brooks, written in 1986. The problem hasn't really changed over time. I think the past four decades have proven his point.
This is the article that highlighted the idea of accidental complexity and essential complexity. Then he points out that the hard problem is making incremental improvements to how we handle essential complexity and that there is no silver bullet. Nothing will ever give you an order of magnitude improvement.
So the corollary here is that if you want an order of magnitude improvement, you will need multiple incremental improvements. You need to make seven 10% improvements to double your efficiency and you need 24 to obtain 10x. An excellent student solves problems correctly 95% of the time, or makes a mistake 1 time in 20. So that excellent student who becomes an engineer may make a mistake 1 time in 20. Not everyone will be so accurate, many of the fads which have promise may be dead ends as well: it might not be possible to increment on them.
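The arithmetic behind those counts, as a quick check: $1.1^{7} \approx 1.95$ (roughly a doubling) and $1.1^{24} \approx 9.85$ (roughly an order of magnitude).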
The brutal truth is that the "pro-dev" is aggregating all these buzzwords in the application you are destined to take over. It's not him who will maintain his academic, overly enthusiastic code base. Once he has completed his task he will brush some documentation off his shoulder and condescendingly assume that everyone smart and capable shall understand the produce of his loins.
Heed the buzzwords. Simplify your solution and beware developers incapable of cooperating with the entire team. Predictability and long-term consistency equal professionalism.
I recently had a project where I was munging some vendor XML, so I get to have an opinion (Python + lxml).
XML is not that bad... until you add namespaces. Then it gets ugly real quick.
But I think the fundamental problem with XML is that it is trying to be two things: a markup language and a structured data/object format. It started life as a markup language (text with interposed tags), but the lure of a well-defined, human-readable, flexible format proved too much, and people started using it for object storage (mostly tags, never mixing tags and text).
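The namespace pain mentioned above, in a nutshell (standard-library Python; lxml behaves much the same way, and the URIs are invented): element names turn into "{uri}local", or you carry a prefix map around forever.

import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<order xmlns="urn:example:orders" xmlns:p="urn:example:parts">'
    '<p:part>widget</p:part></order>'
)
print(doc.tag)                                    # {urn:example:orders}order
print(doc.find("{urn:example:parts}part").text)   # widget

ns = {"p": "urn:example:parts"}                   # or keep a prefix map around
print(doc.find("p:part", ns).text)                # widget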
There's a few ways to interpret the "hype-train" of mainstream development
1 - it's gonna solve all my problems
2 - everyone else is talking about it so I should learn it
3 - incremental ad-hoc solution to a real problem someone encountered, may or may not apply to me
Many juniors are guilty of 1 & 2; few come to realize that most of the time these lie in camp 3. There's a real use case for this, but you have to train your mind not to see it as a silver bullet or get caught up in the hype train.
The difference between academia & industry is that academia tends to develop general solutions to problems few if any people encounter. The best industrial programmers will take bits of academia & try to make our ad-hoc solutions slightly more general while staying practical.
This is endless, and depending on your perspective it's fascinating OR exhausting.
To me it's a bit of both, but with distance, and the realization that I do not need to care about virtually any new tech, I usually lean to the fascinating side.
As a language designer it's humbling to see the mileage people get out of many half-baked solutions in the JS ecosystem, and a sign that generality isn't the whole picture.
"The only constant in life is change." Indeed, the rate of technological change seems ever-accelerating, leading us through cycles of adaptation and learning.
While this constant shift may sometimes be seen as tedious or even chaotic, it's also a reflection of our growth as an industry. Each new tool or practice arises from the shortcomings of the previous ones. Serverless wasn't born in a vacuum, it was born out of the limitations of servers. Microservices weren't born out of nothing, they came from the monolithic application era's shortcomings.
The key lies in discerning between meaningful progress and trend-induced churn. Not every new tool or practice is necessarily an improvement, and even those that are may not be the right fit for every organization or use case.
On a personal level, how can one keep up? Find your anchors in the fundamentals. Concepts like abstraction, encapsulation, concurrency, state management, and others are applicable across a multitude of tech stacks and paradigms.
On the specific point of XML being used for everything it was not meant for:
It was a document format, and the extensible part meant you could have an island of SVG in your XHTML and it was all still well-formed XML.
But some people ran away with the idea and extended it to uses that should never have been done in XML, such as streaming.
XML has been around for long enough that a lot of people have developed a tacit understanding of it, it's not reasonable in all cases (I once went to war with an 8000 line build.xml file - fuck everything about that), but for simple cases it's an easy to parse hierarchical format that most people who need to can get a handle on.
The change I suggested is backwards compatible - probably a relatively small change to a single method in parsing.
You, on the other hand, took the pre-existing syntax and contemptuously threw it out the window - and completely ignored the advantage of widespread tacit understanding.
#include <vector>   // needed for std::vector

enum Color { green, yellow };
struct Leaf { Color color; };

// C++20 designated initializers
std::vector<Leaf> tree = {
    { .color = green },
    { .color = yellow },
};
which has the benefit of requiring a massive compiler suite, a specific language version, but it compiles to 0 bytes because it's not used (how great!!)
Simplification from SGML was a core design objective for XML, and this kind of fairly harmless syntax options was mostly exterminated as a cognitive and implementation cost.
Decent indenting generally helps there (and siblings are often of the same type anyway, in which case it doesn't improve indistinguishability).
Beyond that, syntax highlighting and demarcation of the head and tail of the currently selected tree can improve things when you're manually editing a document.
This hype cycle was at the very start of my career. “Java and XML! Portable code and portable data!” was the big slogan.
I took a course on XML, DTD, XSD, XLST, JAXB, etc. It was everywhere. There were tools specifically to design XSD schemas, so that you could generate classes for them using JAXB and validate the data.
SOAP and XML Web Services were everywhere, using WSDL which was a glorified XSD. The WSDL would allow you to generate a client library and validate the data prior to making an API call. It was pretty useful and I still prefer it to REST, despite its warts.
Such a distraction though. All of the obsession with perfected static types and schemas got in the way of any productive work.
When I discovered how much more I could accomplish with a dynamic language and a SQL database I never looked back.
It (or maybe a new version of it) is the future. But people got bored and justifiably frustrated by how clunky it is, because it can be and do everything. As often happens, people started abandoning it just as it was getting really interesting - W3C XLink, the hypertext/data document of our dreams, and we're not talking about some neato mostly-great thing someone came up with in a popular language, but a set of precise standards with implementations in every language. But in its era, the current "get 'er done / YAGNI" startup culture was really kicking in, as people realized it was a hassle and less immediately rewarding to think of the web as a standards-based platform beyond using the browser as a client.
"In many ways, the current American presidency and XML have much in common. Both have clear lineages back to very intelligent people. Both demonstrate what happens when you give retards the tools of the intelligent."
The list is just too long now: OOP, COM, CORBA, UML, Scrum, Microservices, JSON RPC, ReactJS, writing shit in JavaScript when you don't have to, LangChain, all of it. It's just fucking stupid and wrong and frankly the software business isn't for everyone.
It doesn't make you a bad person that you're not serious about this shit.
It wouldn't shock me to find that a ton of the ideas in XML do become the future, all told. We railed against it being overly complicated, but then it seems we have slowly walked back down that path as the years go on.
All development frameworks and new tech are about reducing the amount of labor required to produce outcomes. This can mean reducing the amount of up-front labor required (running JS on the frontend and backend means that junior devs only need to know one language), or reducing the amount of refactoring required later on (more structured SPA frameworks like Angular turn everything you want to do into a matter of boilerplate CRUD). In both cases, it's about having a uniform pattern for anything you want to do so that onboarding new people to the project is easier. Want to get some blog posts for this view? Inject the IBlogService. Want to get a list of all authors? Inject the IAuthorService. Without having read the docs, you can start typing "I{ThingIWant}Service" and your IDE will pop up whatever services are available.
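A rough sketch of that "inject the I{Thing}Service" pattern in plain Python (the names are invented, and a framework would normally do the wiring):

from typing import Protocol

class BlogService(Protocol):
    def recent_posts(self, limit: int) -> list[str]: ...

class BlogController:
    def __init__(self, blogs: BlogService) -> None:   # dependency injected here
        self._blogs = blogs

    def index(self) -> list[str]:
        return self._blogs.recent_posts(limit=10)

class DbBlogService:
    def recent_posts(self, limit: int) -> list[str]:
        return ["hello world"][:limit]                 # stand-in for a real query

controller = BlogController(DbBlogService())
print(controller.index())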
I see a lot of old school developers complaining about the abstraction of frameworks and how it was simpler to code things in CGI, PHP, C, etc. when you were closer to the machine, or at least were managing the state of the program manually and not through some framework which uses reflection and @Decorators/Annotations everywhere to declare things. But such systems mean that whenever you bring some new junior dev on, they have to spend 6 months learning how your system works, being totally useless to the company in the meantime (especially since college doesn't seem to teach useful job skills anymore).
As it is now, even though there's a million different fad frameworks out there, they're mostly all about the same. I can easily transition between nestjs, .net mvc, or spring boot because they all imitate spring. Frontend frameworks are usually about the same too, and sometimes the frontend is shockingly similar to the backend (like Angular).
I'd much rather work with the banality of angular/react CRUD/boilerplate than have to write another line of unframeworked PHP, or manually write another Apache2 routing config file ever again. That's not to mention operations stuff: CI/CD, despite all of its attempts to boil all possible program states and logic down into infinitely complex declarative yaml files and containerizing everything, is better than the alternative: SSH'ing into a server, manually dealing with system library dependencies, and copying files into /var/www
Seems like there is a blending here of features and technologies. Your users will probably not know or care whether you choose YAML, XML, or JSON, but they'll notice gamification features.
> geeks think they are rational beings, while they are completely influenced by buzz, marketing, and their emotions.
Also geeks are influenced by whatever tool is most convenient. I’ve used XML knowing pretty damn well it was annoyingly verbose. But hey it came with all the tooling: a library to parse it efficiently, auto formatting in my editor (with folding and auto-correcting), all the docs for graduates who didn’t know it. And there was nothing else!
The same is true of : Electron, Pandas, etc.
Sometimes the “best” tool is just the “only” tool…
True, but unavoidable for an industry as young as ours. We'll explore every nook and cranny for possible solutions before settling on slowly evolving, proven tools. And that will take generations. More than 10 IMHO, so by the time the complaints in the article are obsolete, our bones will also be dust.
We're in the gold rush phase of IT and will endure everything that comes with it: hype cycles, snake-oil, depressions, shady and downright criminal activity, lots of stress, etc.
Genuinely curious, not being sarcastic or cute here. With hype everywhere, and job ads filled with buzzwords and acronyms, how does one choose what to learn? Something that is sane, that is joyful to work with, that will be around for at least a few years, but also puts food on the table?
I suppose SQL is going to be around for a while? Any other recommendations?
If you're a software developer, cultivate deep expertise in one or two popular languages that are likely to stick around. Everything else - the hot new web framework or whatever - is downstream from the actual language, and is easy enough to pick up.
If you're in IT operations I guess it's harder to keep up, because everything is different.
Wondering why web apps that genuinely need to be React-based (or similar) because of complex interactivity aren't being built as cross-platform desktop apps using something like GTK. Performance would be much better, there would be a rich set of ready-to-use GUI components, etc.
I think the Python dictionary format can represent structured data well. I.e., when I need to represent some data, I use the Python dictionary format, then YAML, then JSON. Possibly never XML, as I feel the tags are simply redundant, verbose, and prone to going missing when we construct them by hand.
It's interesting to consider the hype around XML and MongoDB. But it's also crucial to remember that each tool has its pros and cons and we need to choose wisely based on our project's needs. Hype isn't always a reliable indicator of utility ...
it is easy to paint XML as anachronistic, and as a crude serialization format it certainly is. but as with many analogies, that only goes so far. there is still the excellent tooling (XSLT, DTD, Schematron..) and the fact that there are elaborate data modeling use cases still using it, because there is nothing that comes close to that combination of expressivity and tooling. take for example the TEI: https://tei-c.org - you could do it with OWL or UML of course, but would you, really? in some ways we haven't really explored the possibilities of data modeling. there are things that json+some schema will never be able to do.
I started coding in C++ back in the 90s. The secret is, if you wait long enough, things become popular again. Thanks to webassembly, I can code fast web apps in C++. And of course webassembly is just a hybrid of a java like runtime + ActiveX.
Great, if cynical article. If you have a company that chases fads, you need to work for a better company. Find one where the older developers aren't 30-something. Usually, gray hair brings a certain immunity to fads.
Fantastic article. I was on the isomorphic JavaScript hype train for a couple of years with Meteor.js and that really burned me. Now I can't stand JavaScript and I like Elixir and Python.
Doing any kind of development apart from web, is bliss.
It doesn’t even matter what other genre of software dev it is. Embedded, mobile apps, desktop, PLC, HPC, … just so long as you stay out of the whole front end/backend/cloud/browser nightmare, these cycles are much less noticeable.
I'll give just one obvious one: XML is far more resilient against syntax errors. Miss a closing tag in XML and a clever parser might even be able to infer it. With JSON, you know you're missing a brace at the end, but you have no idea where it should go.
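A small illustration of that error-locality difference, using Python's standard library:

import json
import xml.etree.ElementTree as ET

try:
    ET.fromstring("<a><b>text</a>")        # forgot </b>
except ET.ParseError as err:
    print("XML:", err)                      # mismatched tag, pointing near the culprit

try:
    json.loads('{"a": {"b": "text"}')      # forgot a closing brace
except json.JSONDecodeError as err:
    print("JSON:", err)                     # complaint only at the very end of the input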
What a cracking article. I nodded so much my neck hurts.
I'm 50 and, although by HN standards not a proper nerd (I'm a... "web generalist"? - more project manager who can dabble in code than anything, eh), I've seen all this stuff come and go for 30+ years in the web industry.
The thing I've always done is stuck to the core trio of technologies: html, CSS and a smattering of JS when you need it. Then it's PHP for serverside.
For a very large proportion of cases, this is perfect. It's solid, known, reliable and built to last.
Granted, I build websites and not web apps. But for me this highlights the oft-quoted truism that you use the right set of tools for the job. There's no reason for me to go all SPA on my builds because that's not what I need. Ditto with React or Kubernetes or Docker. It's just adding complexity in my scenario that isn't needed.
Then there's another side, which I have in spades same as every other nerd: curiosity. So yeh, I've fiddled with htmx and Docker and microservices, but only because I'm interested in finding out more.
The crucial point comes when these two parts collide. And then really the only way is to ask - boringly - "is this the right tool for the job?", when I take in all the factors: sustainability, ease of deployment, documentation, etc.
So yeh, beware the bling of new tech blindness. Most of the time it's the boring old stuff that is best suited for the job. There's a reason after all why these technologies are still around...
Yeah, I've been in this game for 40+ years. Fads come and go. When I started, 4GLs were going to take over everything. Ten years later it was SDDs and SDSs, then UML, and it goes on to what's mentioned in the article.
Perhaps it is the triumph of hope over experience; perhaps those new in the field prefer to use the new tools rather than understand how the old ones work.
I think the quality of software has deteriorated, but then complexity has also gone up. One never writes anything from scratch these days, there's a tower of libraries underneath now.
This is pretty low-grade reactionaryism. Usually without saying so directly, the effort seems to be to lambast, to shame & berate. But it does spend a good three paragraphs catastrophizing & melting down over JavaScript & left-pad.
> We were told to stay on top of the most modern ecosystem, and by that I mean dealing with compatibility being broken every two months.
Some people just want to be ornery & cantankerous.
The article isn't wrong in saying there's too much hype, marketing, and rationalizing that we know the real answers. Fools with certainty are everywhere. But it can still be wrong & have a jerk attitude. That we still have to figure things out & can be wrong or over-adopt is not some horrible flaw. It doesn't mean we can skewer all these ideas & crap on them like this cavalcade of schlock.
So many of these various ideas do keep recurring, keep having real use. XML is reborn as JSON-LD, which has some great uses. Isomorphic JS is alive & well in the new server-side React craze. gRPC and Kubernetes and dozens of these other techs provide a normative baseline that lets many teams work together where before we had anarchy & disorder.
There are some good messages & ideas here. We do need to iterate & refine & assess value. But this is also a conservative paean: a back-to-the-land, just-simple-code ideology that is so appealing, that lets folks look down their noses & reject assessing technology fairly, that doesn't allow for valid complexity. Rakishly anti-elitist, thinking itself a smarter source of truth only because it can smash other works so effectively. Be more liberal & open-minded, not so fixed: accept that most of these things have some really good morsels or kernels of value.
Modern software is a couple of decades old. Of course we are still refining & learning. Most of the things here have provided real value & informed where we go next; they happened for real reasons. Some will fade, but many of the ideas will recur, worked in better & more creatively, or worked together into more cohesive wholes. This is a great, amazing process happening live, in a young industry. Absolutely, beware of hype & bandwagons & marketing, of anyone who would tell you this is the way. For we don't know, and we have a long road ahead with many unexpected turns to figure out where computing's truest values lie. Stay young, stay hungry, and stay a bit foolish.