Developers don't write tests if writing tests is hard. Simple as that. If writing tests is hard because you never invested in setting up good test infrastructure with helpful utilities, you fucked up. If writing tests is hard because your architecture is a clusterfuck of mixed responsibilities, you fucked up.
This is where good tech leadership matters. Leadership must push back on product to make room to build out test infrastructure. Otherwise you'll see individual engineers who do the right thing get punished for it because they aren't delivering tickets.
Word. I work for a reasonably large online retailer that fancies itself a "tech company" and you'd be amazed at the pushback I kept getting for insisting on a proper test infrastructure for their mobile apps - dependency injection for mock API endpoints with captured data, fuzzing to capture UI weaknesses and the like. But no, they need to go fast because... well, because their product/market fit isn't that stable and they're using "data driven" decision making to justify chasing one rabbit after another to try and juice the stock price.
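To be concrete about what I was asking for, something along these lines (a rough Python sketch with made-up names, not their actual stack): inject the API client so tests can swap in an endpoint that just replays captured responses.

    # Rough sketch, hypothetical names: the app code depends on an interface,
    # so tests can inject a client that replays captured production responses.
    import json
    from typing import Protocol


    class ApiClient(Protocol):
        def get_json(self, path: str) -> dict: ...


    class CapturedDataClient:
        """Replays responses previously captured from the real API."""

        def __init__(self, fixture_path: str) -> None:
            with open(fixture_path) as f:
                self._responses = json.load(f)  # path -> captured JSON body

        def get_json(self, path: str) -> dict:
            return self._responses[path]


    class ProductCatalog:
        # The client is injected: prod wires in a real HTTP client, tests wire
        # in CapturedDataClient -- no network, deterministic data.
        def __init__(self, client: ApiClient) -> None:
            self._client = client

        def product_names(self) -> list[str]:
            return [p["name"] for p in self._client.get_json("/products")["items"]]


    def test_product_names():
        catalog = ProductCatalog(CapturedDataClient("fixtures/products.json"))
        assert "Blue Widget" in catalog.product_names()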
It's also really hard if you don't do it right from day 1. You'll spend substantial developer time putting things in place just to reach minimal coverage, which means the positive results might not materialize until much later. Worse, the positive result might just be slowing down the rot, and much of the org will never credit you for things not having gotten worse. Even worse, if you get replaced by a yes-man who stops enforcing testing and cleanup, there will be a temporary "boost". It's all totally broken, and it's honestly burning me out on this industry.
Some things cannot be tested. I work in distributed systems a lot and you can unit/integration test simple functionality, but there is nothing which can test how your system will behave in prod outside of just trying it out.
I find having really good metrics and a tight development cycle allows for quickly iterating on distributed systems problems. Obviously the best situation is to have all of the above: unit tests, integration tests, and a tight development cycle in prod.
If I had to pick one because I am time constrained, I would choose testing in prod with good metrics which is maybe what the article is getting at.
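If it helps, the "good metrics" half doesn't have to be fancy. A sketch of the shape I mean, with a placeholder emit function standing in for whatever metrics client you actually use (statsd, Prometheus, whatever):

    # Wrap the risky code path so every call reports latency and errors, then
    # iterate against dashboards in prod. `emit` is a stand-in, not a real API.
    import functools
    import logging
    import time

    log = logging.getLogger("metrics")


    def emit(name, value):
        # Placeholder: a real system would push this to a metrics backend.
        log.info("%s=%s", name, value)


    def instrumented(name):
        def wrap(fn):
            @functools.wraps(fn)
            def inner(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    emit(f"{name}.errors", 1)
                    raise
                finally:
                    emit(f"{name}.latency_ms", (time.perf_counter() - start) * 1000)
            return inner
        return wrap


    @instrumented("replication.push")
    def push_update(payload):
        ...  # the part you can only really observe in prod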
You can get quite close to reality with really good integration tests. There are all sorts of real-life scenarios I've hit, like flaky networks, email appearance in various clients, etc., that can be integration tested with a bit of creativity, but most people wouldn't even think to do it.
The investment in this stuff can be quite high...unless you've got premade tooling for all of this that you can just drop in.
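For the flaky-network case at least, the tooling can be pretty light. A hypothetical sketch of the idea (the test fixtures are invented, assuming a pytest-style setup): wrap the real transport in one that injects failures, then assert the client retries cleanly.

    # Hypothetical sketch: a transport wrapper that randomly injects failures,
    # seeded so the test run is reproducible.
    import random


    class FlakyTransport:
        """Delegates to a real transport but fails a fraction of calls."""

        def __init__(self, real_transport, failure_rate=0.3, seed=42):
            self._real = real_transport
            self._rng = random.Random(seed)
            self._failure_rate = failure_rate

        def send(self, request):
            if self._rng.random() < self._failure_rate:
                raise ConnectionError("injected network failure")
            return self._real.send(request)


    def test_sync_survives_flaky_network(real_transport, sync_client_factory):
        # real_transport / sync_client_factory are assumed project fixtures.
        client = sync_client_factory(transport=FlakyTransport(real_transport))
        result = client.sync_all()  # should retry around the injected failures
        assert result.completed and not result.lost_records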
Yes, unless you are doing a clean-room implementation of Paxos or the Raft protocol it probably isn’t worth the effort to create harnesses to simulate packet loss, thundering herds, split brain, out-of-order responses, etc. Even then, if you are writing some distributed synchronization primitives you might be better off with formal proofs than some sort of test harness.
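If you do decide it's worth it, the core of such a harness is usually just a simulated network that the nodes talk through, so a test can deterministically drop, duplicate, or reorder messages. A bare-bones sketch (invented names, nothing Raft-specific):

    # Bare-bones simulated network for consensus-style tests: seeded RNG so
    # the "chaos" is reproducible, per-node queues so delivery order is under
    # the test's control.
    import random
    from collections import deque


    class SimulatedNetwork:
        def __init__(self, seed=0, drop_rate=0.1, reorder_rate=0.2):
            self._rng = random.Random(seed)
            self._queues = {}  # node_id -> deque of pending messages
            self.drop_rate = drop_rate
            self.reorder_rate = reorder_rate

        def register(self, node_id):
            self._queues[node_id] = deque()

        def send(self, dst, msg):
            if self._rng.random() < self.drop_rate:
                return  # simulated packet loss
            queue = self._queues[dst]
            if queue and self._rng.random() < self.reorder_rate:
                queue.appendleft(msg)  # out-of-order delivery
            else:
                queue.append(msg)

        def deliver_one(self, node_id):
            queue = self._queues[node_id]
            return queue.popleft() if queue else None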
To give an example: you can unit test your OBD response parser all you like; at some point you have to actually get in the car and see what gibberish the OBD adapter spews at your app, what timeouts it needs, how reliable it is, etc.
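The unit-testable part looks roughly like this (engine RPM, mode 01 PID 0C, where RPM = (256 * A + B) / 4). What no unit test covers is the adapter echoing prompts, splitting lines oddly, or timing out halfway through a reply.

    # Parsing a well-formed OBD-II RPM response is easy to unit test.
    def parse_rpm(response: str) -> float:
        # A well-behaved adapter replies something like "41 0C 1A F8".
        parts = response.strip().split()
        if parts[:2] != ["41", "0C"]:
            raise ValueError(f"unexpected response: {response!r}")
        a, b = int(parts[2], 16), int(parts[3], 16)
        return (256 * a + b) / 4


    def test_parse_rpm():
        assert parse_rpm("41 0C 1A F8") == 1726.0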
Writing tests isn't really hard, even when you have to mock everything each time because you can't duplicate production. It's just long and boring.
If I have to ballpark an estimate on a tool/project, I say upfront something like: it'll take me 6 days plus a day for very limited tests, so 7. Twice as much with almost complete coverage. But each time we have to improve/refactor later, a bit of that time will be recovered.
Basically I'm asking how important the new project will be, without asking that directly (because 99% of the time the response is "very").
I'm realizing I just said I was managing my managers. Should I ask for a senior DevOps position now? :)
I was unclear, I apologize. The "quick testing" isn't really testing IMHO (basically no unit tests, just small integration tests with data I could steal from the prod DB). It takes around 20% of the time (or 1/6 in the example from my last project). Getting to 80% coverage, which includes mocking and unit tests, doubles the dev time.
I am always extremely clear that my estimates are just that, estimates, and that complete testing doubles dev time. Most of the time I'm told "we don't care right now", but on some projects management accepts the longer dev time in exchange for more stability and fewer bugs (we build internal tools).
This is absolutely it. I've worked on multiple projects simultaneously where I wrote great tests on one project and few to no tests on the other. It wasn't the developer that changed, it was that one codebase was designed with testing in mind, while the other was not and was therefore extremely cumbersome to write tests for. That also meant that in the second codebase whole UI flows had no tests, so if I made one tweak to the UI flow I wasn't going to spend a week figuring out how to test it and everything around it.
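A toy example of the difference (illustrative names only, not either actual project): in the testable codebase the collaborators came in from outside, so a whole flow could be exercised with in-memory fakes; in the other one everything reached for real services internally, so there was no seam to test through.

    # Every collaborator is injected, so the flow can be driven with fakes.
    class FakePayments:
        def __init__(self):
            self.charged = []

        def charge(self, order):
            self.charged.append(order)


    class FakeMailer:
        def __init__(self):
            self.receipts = []

        def send_receipt(self, order_id):
            self.receipts.append(order_id)


    def submit_order(order_id, orders, payments, mailer):
        """The 'flow': load the order, charge it, confirm to the user."""
        order = orders[order_id]
        payments.charge(order)
        mailer.send_receipt(order_id)


    def test_submit_order_charges_and_sends_receipt():
        payments, mailer = FakePayments(), FakeMailer()
        submit_order("o-1", {"o-1": {"total": 42}}, payments, mailer)
        assert payments.charged == [{"total": 42}]
        assert mailer.receipts == ["o-1"]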
The worst thing is how often developers almost fatalistically accept that testing sucks. I don't blame them though because improving the test infrastructure has little short-term business value or so they say.
You could spend your entire life improving test infrastructure. There's clearly a cut off point where the investment stops making sense but it's hard to know when.
The investment calculation is quite complex and many of the variables require guesses. A lot of returns on automation work are not positive.
The irony is that it absolutely does have business value. Unfortunately, it's easier to quantify the problems you've had on prod than the problems you've prevented on prod, so people tend to measure against the former.