I like the fact that an experiment was done. I don't like the conclusion that batching is always bad. Yes, for this arbitrary subset of tasks, batching was slower. But if we had included walking to the nearest mailbox in the flow, batching would have won. Or if we had included printing the flyers at the beginning, batching would have won again. The steps done in the video are what I would consider one step in a larger batch process: Step 1: print X flyers. Step 2: stuff X flyers. Step 3: mail X flyers. The take-home shouldn't be that batching is always bad, but that we have to experiment to find the best mix of batching and one-piece flow (OPF).
The battle over batching and batch size has been going on for decades in the world of lean manufacturing. An example quote:
"American manufacturing managers traditionally considered setup costs as a necessary evil and made little or no effort to reduce them."
"The lean/JIT philosophy suggests that a firm should eliminate any reliance upon the [Economic Order Quantity] formula and seek the ideal production quantity of one. Of course, a lot size of one is not always feasible, but it is a goal used to focus attention on the concept of rapid adjustments and flexibility. Naturally, a reduction in inventory levels means an increase in setups or orders, so the responsibility rests with production to make every effort to reduce setup time and setup costs."
The key Lean insight here is to lower the setup cost of a task to as close to zero as possible. An example of that might be working to lower the cost of making a build from your source code. If you can get that cost close to zero, then you are much more likely to see regular code check-ins.
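As a hypothetical sketch of what "near-zero setup cost" looks like in practice: a single entry point that runs the whole build, so producing a build is one command instead of a ritual. The specific steps below are made-up placeholders, not anyone's actual pipeline.

```python
#!/usr/bin/env python3
"""One-command build: drive the setup cost of making a build toward zero.
The steps below are placeholders; substitute your project's real ones."""
import subprocess
import sys

STEPS = [
    ["pytest", "-q"],             # run the test suite
    ["python", "-m", "build"],    # build a distributable package
]

for cmd in STEPS:
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"build step failed: {' '.join(cmd)}")

print("build OK")
```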
This lean principle is tremendously useful in pushing you towards more agile development. If you can shrink the batch size of your feature releases (even down to one, i.e. continuous deployment), that can have huge benefits.
As far as I understand, there is an optimum batch size that depends on the particular process. Lean manufacturing looks for smaller batch sizes, but the optimum may not be one for a particular process; the toy sketch below shows how setup cost moves it.
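Here's that point as a toy EOQ-style cost model with made-up numbers: sweep batch sizes and watch where the minimum lands as setup cost shrinks.

```python
# Toy EOQ-style model (all numbers are made up for illustration):
# cost(b) = setups per period * setup_cost + average inventory * holding_cost
def total_cost(batch_size, demand, setup_cost, holding_cost):
    setups = demand / batch_size
    avg_inventory = batch_size / 2
    return setups * setup_cost + avg_inventory * holding_cost

def best_batch(demand, setup_cost, holding_cost):
    # brute-force sweep; fine at this scale
    return min(range(1, demand + 1),
               key=lambda b: total_cost(b, demand, setup_cost, holding_cost))

for setup_cost in (100.0, 10.0, 1.0, 0.01):
    b = best_batch(demand=1000, setup_cost=setup_cost, holding_cost=2.0)
    print(f"setup cost {setup_cost:>6}: optimal batch size = {b}")
```

With these numbers, a $100 setup puts the optimum around 316 units; at $0.01 it drops to 3. Lower the setup cost far enough and a batch size of one stops being a slogan and becomes the cost-minimizing answer.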
I think "short release cycles give you wonderful things" is very on-point for startups. Switching to them has been a boon to BCC and to many of my clients who could lose BCC in the petty cash drawer.
Consider a software company which does yearly releases -- not that uncommon. Averaged over time, there is six months of developer labor which is structurally incapable of helping out customers. That's crazily wasteful. The code is written. It works (well, you know). But, for process reasons, it just can't be put into production right now. It's like having six-month timelines on receivables for work already completed and accepted: egads, that's a problem.
That also means that, on average, every new feature has a six-month air gap between when it is completed and when Real Customer Feedback starts coming in about it. By then, the developer has already forgotten what they did and the design rationale for it.
Six months is also plenty of time for development to go totally off the rails, into the well-trod territory of producing working code which does not provide customer or business value. I've sunk months into features which didn't go anywhere. I have clients that have man-years invested into core features of new releases which were never even seen by plural percent of the user base.
By comparison, if you're releasing bi-weekly, you generally have one week between completing a feature and when you start getting feedback. You have one week for things to get off the rails prior to customer behavior measurably saying "This? Yeah, couldn't care less." You might actually remember design rationales for things you did when called to improve upon them.
The utility of this pretty much keeps increasing the lower you get your iteration times.
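To put rough numbers on that: assume features reach "code complete" uniformly throughout a release cycle, so the average wait before shipping is half a cycle.

```python
# Average delay between "code complete" and real customer feedback,
# assuming features finish uniformly across the release cycle.
CYCLE_DAYS = {"yearly": 365, "quarterly": 91, "bi-weekly": 14, "daily": 1}

for name, days in CYCLE_DAYS.items():
    print(f"{name:>10}: average feedback delay ~ {days / 2:.1f} days")
```

Yearly releases give the ~six-month gap described above; bi-weekly releases give the one-week figure.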
Fog Creek used to do yearly releases, largely as an artifact of selling software which had to be installed on 3rd party servers. Like almost everybody else, they have largely transitioned to hosted SaaS. This means they move quite a bit faster now. That has been pretty much unalloyed Good News, both for the engineers and for the business as a whole.
You can plan 100 features, implement 100 features, launch 100 features, get feedback on 100 features
OR
You can plan, implement, and launch 1 feature, get feedback on it, then move on to the next feature, and repeat 100 times.
Obviously, the feedback in the 2nd case is much more immediately useful; the point of the post was that even in a relatively simple case where you wouldn't expect nearly as much value from the feedback (stuffing envelopes), the small-batch approach still wins.
Again, I think that's too strained. In stuffing the envelopes, you can learn everything you need to know about the next 99 iterations from the first one.
In implementing features, the lessons will be different almost every time.
Timescale is also completely different. You can do 1 envelope at a time because there's no waiting involved.
It takes time to evaluate the result of implementing a feature. Not because it's so much work, but because you don't control the factor that really matters: The customer.
You have obviously never stuffed 100 envelopes if you think you learn everything you need to learn from the first one. If you attempt to be even mildly efficient about it, your process improves continuously. "If I spin this pile 90 degrees I can start the folding easier." There are things you don't realize until you've done 5 envelopes, 10 envelopes, 100 envelopes.