> Describe to me how an IT system can produce results (e.g. tickets closed, if you wish) at a rate higher than the processing rate of the slowest component.
It can't, but the slowest component can already be perfectly optimized and thus not be a bottleneck. In that case you would fail to find the real bottleneck, since you are only looking for the slowest component. So I have shown that your statement above was false: there are cases where the optimal strategy is not just to look at the slowest component.
If you have some other definition for bottleneck we can continue, but this "the slowest component" is not a good definition.
No, what you've done is you've failed to find a way to improve the system's behavior further. If you have the slowest component and you can't make it faster, then congrats: you cannot make the system faster.
You cannot build cars faster than you can mine metal, nor faster than you can put stickers on the windows on their way out of the factory. You are done optimizing.
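In throughput terms, the factory claim amounts to: a serial pipeline's steady-state rate is capped by its slowest stage. A minimal sketch, with made-up stage names and rates:

```python
def pipeline_rate(stage_rates_per_hour):
    """Steady-state throughput of a serial pipeline is the minimum of its stages' rates."""
    return min(stage_rates_per_hour)

# Hypothetical stages: mining, assembly, window stickers (units per hour).
print(pipeline_rate([3.0, 1.0, 50.0]))  # 1.0 -- assembly is the bottleneck
```

Making the sticker step faster changes nothing here; only raising the minimum does.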
That is only true for parallel executions, not serial ones. If a process requires executing many components serially (which happens a ton), then it isn't enough to just look at the slowest component.
Anyway, thanks, now we know that you only considered throughput in a factory-like setting, where it is true. But your rule there isn't true for software systems in general: optimizing latency and serial performance is extremely common in software.
Edit: Example:
Machine A takes 1 hour -> B takes 2 hours: the system takes 3 hours.
Machine A takes 0.5 hours -> B takes 2 hours: the system takes 2.5 hours, so it is faster even though we optimized the faster component.
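The arithmetic above is just the latency view of a serial pipeline: end-to-end time is the sum of the stages, so speeding up any stage helps. A minimal sketch (stage names are illustrative):

```python
def total_latency(stage_hours):
    """End-to-end latency of one job through a serial pipeline: the sum of its stages."""
    return sum(stage_hours)

before = total_latency([1.0, 2.0])  # A = 1h, B = 2h
after = total_latency([0.5, 2.0])   # A sped up to 0.5h, B unchanged
print(before, after)  # 3.0 2.5
```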
The example I gave was a serial process. And in fact, every parallel process is just a group of serial processes. Factories are obviously linear but in reality every single process is linear through time, including the chaotic, complex ones you see in IT or software orgs. (Unless you have a time machine, in which case ignore me)
Fastest/slowest doesn't mean "takes the longest in clock time," it means "has the lowest throughput."
In your example, if B is only able to produce something every 2 hours, then no, speeding up A will not increase the throughput. You will see a larger backlog of jobs from A waiting for B to become available. Ultimately only 0.5 units per hour will be produced by this process.
If B is able to produce more than something every 2 hours, e.g. it can produce multiple things in parallel, then yes, speeding up A will increase throughput. But that is only because B wasn't the bottleneck to begin with! Your own failure to serialize that parallel process hid that fact from you.
Either of your systems (unless you have invisible parallelism in B) will produce 0.5 units per hour.
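That claim is easy to check with a small queueing sketch. Assumptions (mine, not from the thread): jobs are always available to A, A and B each process one job at a time, and B can only start a job after both B is free and the job has cleared A.

```python
def pipeline_throughput(a_hours, b_hours, n_jobs=100):
    """Long-run throughput (units/hour) of jobs flowing serially through A then B."""
    a_done = 0.0  # time A finishes its latest job
    b_done = 0.0  # time B finishes its latest job
    for _ in range(n_jobs):
        a_done += a_hours                         # A finishes the next job
        b_done = max(a_done, b_done) + b_hours    # B starts when it and the job are ready
    return n_jobs / b_done

print(round(pipeline_throughput(1.0, 2.0), 2))  # 0.5
print(round(pipeline_throughput(0.5, 2.0), 2))  # 0.5 -- speeding up A changed nothing
```

Throughput stays pinned at B's rate either way; the only visible effect of a faster A is a longer queue in front of B.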
If you're saying to yourself, "well this is a process that runs only once per day, so there's no backlog anywhere in here," then congrats: you've just discovered that the true constraint sits upstream of A!
Speeding this up might be a nice quality of life improvement for the people involved, but it will not yield different outcomes for the system as a whole, because there's not enough work coming into A to consume the capacity of B anyway.
Your entire argument is about throughput in sequential pipelines or parallel systems, on systems not shared with other independent tasks.
Yes, in those simple cases there is one bottleneck at any given time.
But most tasks do not fit those conditions, and both economically and technically they are best optimized against objectives other than raw throughput.
Often, parallel or sequentially pipelined groups of tasks have optional subtasks, so there will be as many bottlenecks as there are components that may run while others don’t.
Many tasks run intermittently, or for resource reasons need to run as an uninterrupted sequence with no opportunity to pipeline, and are optimized for latency, which means any speed-up of any subtask has value.
Many systems run multiple independent tasks to maximize return on resource costs, so tasks are optimized to minimize active computational time, and any speed-up anywhere can improve that.
In all those cases, multiple modules can be usefully optimized at any given time.
And many factors are relevant to making that choice: the relative benefit of an optimization versus the time to achieve it, the cost of making it, and project risk all come into play.
In practical reality, there is a long tail of such factors. For instance, the skill and interest levels of available developers, relative to the module optimization options.