I think the point was that a normal application doesn't utilise the CPU all the time, which you can see in the Task Manager as a CPU usage of X%. Any extra processing needed to fix this bug has to come out of the remaining (100 - X)%. That's fine as long as you have enough spare capacity and can afford the extra power usage for that processing.
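To make the headroom point concrete, here's a trivial sketch; the 40% usage and 5% overhead figures are invented for illustration, not taken from any measurement:

```python
# Back-of-the-envelope headroom check. The usage and overhead numbers
# below are made-up assumptions, not measurements.
current_usage = 40.0              # X: CPU usage seen in Task Manager, in %
fix_overhead = 5.0                # extra CPU the fix is expected to cost, in %
headroom = 100.0 - current_usage  # the (100 - X)% left over

if fix_overhead <= headroom:
    print(f"OK: the {fix_overhead}% overhead fits in {headroom}% spare CPU")
else:
    print("Not enough headroom: the fix will visibly slow things down")
```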
Virtualization is one popular way to drive up CPU utilization: the more diverse the workloads running on a given server, the more even the aggregate CPU usage tends to be. If you have 100 workloads that each peak at 100% but average 1%, the combined CPU usage tends to sit smoothly around 100%, and any overallocation smooths out over time (a job that would normally take 1 second may take up to 10).
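A rough way to sanity-check that arithmetic is a toy simulation; this is a sketch under made-up assumptions (independent random bursts rather than real workload traces):

```python
# Toy model of statistical multiplexing: 100 workloads, each busy ~1% of
# the time and demanding a full core when busy. All numbers are
# illustrative assumptions, not measurements.
import random

random.seed(0)

N_WORKLOADS = 100   # co-located workloads
N_TICKS = 1000      # simulated time slices
P_BUSY = 0.01       # each workload averages 1% CPU

demand = []
for _ in range(N_TICKS):
    # Each workload independently needs a full core with probability P_BUSY.
    busy = sum(1 for _ in range(N_WORKLOADS) if random.random() < P_BUSY)
    demand.append(busy)

mean = sum(demand) / N_TICKS
print(f"mean demand: {mean:.2f} cores")             # ~1.0, i.e. ~100% of one core
print(f"peak demand: {max(demand)} cores")          # occasional bursts above 1
over = sum(1 for d in demand if d > 1) / N_TICKS
print(f"ticks over capacity (1 core): {over:.0%}")  # bursts queue up, which is
                                                    # why a 1 s job can stretch out
```

The mean demand lands near one full core (100 × 1%), but a noticeable fraction of ticks exceeds it; under overallocation those bursts get queued, which is exactly the 1-second-job-takes-10 effect.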