Yes, SKIP LOCKED is great. In practice you nearly always want a LIMIT as well, which the article did not mention. Be careful if your selection spans multiple tables: only the relations you explicitly lock are protected (see SELECT … FOR UPDATE OF t1, t2). ORDER BY matters too, because it controls fairness and retry behaviour. Also watch ANALYZE: autoanalyze only kicks in once enough rows have changed relative to the table size, so on large or append heavy tables with lots of old rows it can lag behind, leading to poor plans and bad SKIP LOCKED performance. Finally, think about deletion and lifecycle: deleting rows on success, scheduled cleanup (consider pg_cron), or partitioning old data all help keep the queue efficient.
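To make that concrete, here is a minimal sketch of a worker claiming a batch of jobs, assuming a hypothetical jobs table (id, payload, status, created_at) and the node-postgres client; illustrative only, not from the article:

    const { Pool } = require('pg');
    const pool = new Pool();

    async function claimJobs(batchSize = 10) {
      const client = await pool.connect();
      try {
        await client.query('BEGIN');
        // LIMIT bounds the batch, ORDER BY keeps claiming fair across workers,
        // and FOR UPDATE OF jobs makes explicit which relation is locked.
        const { rows } = await client.query(
          `SELECT id, payload
             FROM jobs
            WHERE status = 'pending'
            ORDER BY created_at
            LIMIT $1
            FOR UPDATE OF jobs SKIP LOCKED`,
          [batchSize]
        );
        // ... process rows, then delete them or mark them done before committing ...
        await client.query('COMMIT');
        return rows;
      } catch (err) {
        await client.query('ROLLBACK');
        throw err;
      } finally {
        client.release();
      }
    }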
I genuinely like the framing of advisory versus authoritative AI, and I agree with the core observation that authority, when it is genuinely granted, is what unlocks step change improvements rather than marginal efficiency gains. In the environments where it is appropriate, allowing systems to act rather than merely suggest can dramatically accelerate development and reshape workflows in ways that advisory tools never will. In that sense, you are right: authority is the AI bottleneck.
My concern with your article is that, without clearer caveats, you imply that authority is the right answer everywhere. As you rightly note, AI systems make mistakes and they make them frequently. In many real world contexts, those mistakes are not cleanly reversible. You cannot roll back a data leak. You cannot always recover fully from data loss. You cannot always undo millions of pounds of lost or refunded revenue caused by subtle failures or downtime. You cannot always roll back the consequences of an exploited security vulnerability. And you certainly cannot reliably undo reputational damage once trust has been lost.
Even in cases where you can mostly recover from a failure, you cannot recover the organisational and human disruption it causes. A recent UK example is the case where thousands of drivers were wrongly fined for speeding due to a system error that persisted from 2021. Given the scale, some will have lost their licences, some may have lost their jobs, and many will have experienced long term impacts such as higher insurance premiums. Even if fines are refunded or records corrected later, the downstream consequences cannot simply be undone. While the failure in this example was caused by human error, the fact that some mistakes are unrecoverable is just as true for AI.
Part of the current polarisation in opinions about AI comes from a lack of explicit context. People talk past each other because they are optimising for different objectives in different environments, but argue as if they are discussing the same problem. An approach that is transformative in a low risk internal system can be reckless in a public, regulated, or security sensitive one.
Where I strongly agree with you is that authoritative AI can be extremely powerful in the right domains. Proofs of concept are an obvious example, where speed of learning matters more than correctness and the blast radius is intentionally small. Many internal or back office applications fall into the same category. However, for many public facing, safety critical, or highly regulated systems, authority is not simply a cultural or organisational choice. It is a hard constraint shaped by risk, liability, regulation, and irreversibility. In those contexts, using AI in a strictly advisory capacity may be a bottleneck, but it is also a deliberate and necessary control measure, at least for now.
I've tried this approach twice with mixed success. In both cases I wanted to stub out the persistence tier of a Node.js application when test driving the API. I verified the real and fake implementations by running the same tests against them. The first application was quite small, and the process worked well, although it did feel somewhat onerous to implement the fake. However, the second application was more complex, calling for some PostgreSQL specific features which were difficult to implement in the fake. In the end I abandoned the approach for the second application, settling on slower, duplication heavy API tests which used the real persistence tier. I'd love to hear how others solve this problem without mocks, which I agree tend towards brittle, tightly coupled tests with a poorer signal to noise ratio.
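Roughly, running the same tests against both implementations looked like a shared contract suite (illustrative names, a Jest-style runner assumed; not my actual code):

    // One shared suite, parameterised by a factory for the implementation under test.
    // UserRepository here is illustrative, not the app's real interface.
    function userRepositoryContract(name, makeRepo) {
      describe(`UserRepository contract: ${name}`, () => {
        let repo;
        beforeEach(async () => { repo = await makeRepo(); });

        it('finds a saved user by id', async () => {
          const saved = await repo.save({ name: 'Ada' });
          const found = await repo.findById(saved.id);
          expect(found.name).toBe('Ada');
        });

        it('returns null for an unknown id', async () => {
          expect(await repo.findById('missing')).toBeNull();
        });
      });
    }

    // Both the fake and the real PostgreSQL-backed implementation run the same suite.
    userRepositoryContract('in-memory fake', async () => new InMemoryUserRepository());
    userRepositoryContract('postgres', async () => new PostgresUserRepository(testPool));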
The best solution doesn't need any test double at all. For example, if we have calls to our persistence layer mixed in with calculations, the latter shouldn't be tested with fakes/mocks; instead we should refactor the code so the calculations don't depend on the persistence layer.
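Concretely, that refactoring looks something like this (the discount example and all names are made up purely for illustration):

    function applyDiscount(total) {
      // Pure calculation: trivially testable with plain values, no fake needed.
      return total > 100 ? total * 0.9 : total;
    }

    async function applyDiscountForOrder(ordersRepo, orderId) {
      // Thin shell: load, delegate to the pure function, save. There is so
      // little logic left here that mocking it buys almost nothing.
      const order = await ordersRepo.find(orderId);
      await ordersRepo.update(orderId, { total: applyDiscount(order.total) });
    }

    // The interesting behaviour is now covered without any test double:
    // expect(applyDiscount(150)).toBe(135);
    // expect(applyDiscount(80)).toBe(80);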
If the behaviour of some code depends inextricably on some external system, then our tests should use that system. This avoids unnecessary code/work/duplication, allows tests to exercise more paths through our codebase, exposes problems in our understanding of that system, etc.
If calling an external system in our tests is dangerous, unacceptably slow, costly (if it's some Web service), etc. then we should make a fake implementation to test against. If our code only uses a small part of some large, complicated dependency, we might want to define an interface for the subset that we need, refactor our code to use that, and only have our fake implement that part.
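For example, if our code only needs to send a simple message via some large mail SDK, the narrow interface and its fake might look like this (all names here are hypothetical):

    // The narrow surface our code actually uses: "send one message".
    class SdkMailer {
      constructor(sdkClient) { this.sdkClient = sdkClient; } // wraps the large SDK
      async send(to, subject, body) {
        // Delegate to whatever the real SDK's send call is (deliver is a placeholder).
        return this.sdkClient.deliver({ to, subject, body });
      }
    }

    // The fake implements only that surface and records what was sent,
    // which is usually all the tests need to observe.
    class FakeMailer {
      constructor() { this.sent = []; }
      async send(to, subject, body) {
        this.sent.push({ to, subject, body });
      }
    }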
If a fake implementation isn't practical, or would require so much overriding per test as to be useless on its own, then I might consider using mocks, although I would first seriously consider whether that code could be refactored to avoid such coupling. Even then, I would never make assertions about irrelevant details, like whether a particular method was called or what order things are run in.
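When I do reach for mocks, the assertions stay on the observable outcome. A Jest-style sketch with made-up names:

    test('quotes include tax', async () => {
      // Canned data via a mock, because standing up a real rates service here
      // would be disproportionate.
      const rates = { currentRateFor: jest.fn().mockResolvedValue(0.2) };
      const quote = await priceQuote(rates, { net: 100 }); // priceQuote is illustrative
      // Assert on the observable result...
      expect(quote.gross).toBe(120);
      // ...not on how many times currentRateFor was called, or in what order.
    });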
Also dyslexic and also used to mirror write perfectly, but without knowing I was doing it. This was in the late 70s, before dyslexia was widely understood, and my teachers had no idea what to do.
Me too (~1976). They knew it was something other than stupidity or laziness, but had no clue what was going on. However, most teachers treated me like I was a retard -- yes, I am using that word on purpose, as it was the word used for me.
What a very strange article. TES were using Docker in production in October 2014. It was certainly possible to run Docker containers in the background (docker run -d) and to view them (docker ps) as early as October 2013, when we first evaluated it at the Daily Mail.
Thanks for your comment - we agree that the deployment of micro-services is more complicated, although we disagree that it has to be a nightmare. Not wanting to jump the gun on my next blog post too much, the deployment solution for Campaign Manager was:
1. A co-ordination project, capable of setting up the development environment, starting services, running tests, building artefacts and deploying them. This sounds grand, but it was really a set of scripts (JavaScript) that relied on a consistent naming convention.
2. A file defining which services were deployed to which host per environment
3. A shared list of endpoints for service to service communication, datastores and the ESB (2 and 3 are sketched below).
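Purely as an illustration (service names, hosts and ports are made up, not the actual Campaign Manager config), those two files amount to something along these lines:

    // deployment-map.js -- illustrative only
    module.exports = {
      production: {
        // which services run on which host (point 2)
        hosts: {
          'app-host-1': ['reporting-service', 'billing-service'],
          'app-host-2': ['campaign-service'],
        },
        // shared endpoints for services, datastores and the ESB (point 3)
        endpoints: {
          'campaign-service': 'http://app-host-2:3001',
          postgres: 'postgres://db-host:5432/campaigns',
          esb: 'amqp://esb-host:5672',
        },
      },
    };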
The build scripts used the CI build number to version the artefacts and created an SMF manifest. The deploy scripts (also written in JavaScript) iterated over all hosts and performed the following steps (roughly sketched in code after the list):
1. Put the host into maintenance mode, causing the load balancer to remove it from the pool
2. Upload the new / updated service artefacts (tar.gz)
3. Stop services not supposed to be running on the host
4. Install the new / updated service artefacts (i.e. unzip them)
5. Import the SMF manifest (similar to an init.d script)
6. Take host out of maintenance mode
7. Delete old service versions, retaining the last 5
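As a rough sketch (the helper names are invented, not from the real scripts):

    async function deploy(environment, buildNumber) {
      const hosts = hostsFor(environment); // from the per-environment mapping file
      for (const host of hosts) {
        await enterMaintenanceMode(host);          // 1. load balancer drops the host
        await uploadArtefacts(host, buildNumber);  // 2. tar.gz per service
        await stopRetiredServices(host);           // 3. stop what shouldn't run here
        await installArtefacts(host, buildNumber); // 4. unzip into place
        await importSmfManifests(host);            // 5. register with SMF
        await exitMaintenanceMode(host);           // 6. load balancer re-adds the host
        await pruneOldVersions(host, 5);           // 7. keep the last 5 versions
      }
    }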
This is not vastly different to what I would have done if deploying a single application to multiple hosts. We didn't have any need for orchestration, as the significant service-to-service communication was via asynchronous messaging. We've improved on this in other projects, e.g. deploying Docker containers instead of tar.gz files, using AWS tags instead of a file to define where services are deployed, and even adding a service dependency graph so that when one service becomes unavailable the dependent services can take appropriate action.
Will write all this up in more detail and post to HN.