It's actually more complicated. With HTTP you don't know whether requests can be coalesced until you receive the response headers, specifically Cache-Control and Vary. So if your website takes a few seconds to respond, most CDNs will send every single request in that period through.
In theory a CDN could optimistically coalesce requests and then replay them once the headers of the first one come back. But this is very complex and rarely done in practice.
This can also occur any time a cached entry goes stale and needs to be refetched.
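To make that concrete: the shareability decision depends entirely on response metadata the proxy hasn't seen yet. Here's a rough Go sketch of the kind of header check a proxy might make once the first response lands (hypothetical logic for illustration, not any particular CDN's rules):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// shareable reports whether a response could be handed to other
// coalesced requests, based only on its headers. Deliberately
// simplified; real caches apply the much fuller rules of RFC 9111.
func shareable(h http.Header) bool {
	cc := strings.ToLower(h.Get("Cache-Control"))
	if strings.Contains(cc, "private") || strings.Contains(cc, "no-store") {
		return false // response is tied to a single client
	}
	// Vary: * means no two requests are guaranteed equivalent,
	// so nothing can be safely shared.
	if h.Get("Vary") == "*" {
		return false
	}
	return true
}

func main() {
	h := http.Header{}
	h.Set("Cache-Control", "private, max-age=60")
	fmt.Println(shareable(h)) // false: can't fan this out to other waiters
}
```

Until that first response arrives, a naive proxy has nothing to run this check against, which is exactly the window the comment above describes.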
> most CDNs will send every single request in that period through.
I don't think this is true. It certainly isn't for any CDN that I've worked for or on.
Cloudflare don't do this either - they use a cache lock: the first request acts as a blocker for all the others, which wait for its response (if it's cacheable they serve that response to the waiters; if not, the waiting requests proceed to origin).
It's normally configurable, but most sane CDNs have it enabled by default, precisely because big bursts tend to be sharp and a cache miss at that point can be origin-breaking.
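For anyone curious what the lock mechanism boils down to, here's a rough in-process Go sketch of the same idea (essentially what golang.org/x/sync/singleflight does, inlined; a real CDN would additionally check the response is shareable before handing it to the waiters, per the parenthetical above):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// group collapses concurrent fetches for the same key: the first
// caller does the work, the rest block and share its result.
type group struct {
	mu    sync.Mutex
	calls map[string]*call
}

type call struct {
	done chan struct{}
	val  string
}

func (g *group) do(key string, fetch func() string) string {
	g.mu.Lock()
	if c, ok := g.calls[key]; ok {
		g.mu.Unlock()
		<-c.done // wait on the in-flight request (the "cache lock")
		return c.val
	}
	c := &call{done: make(chan struct{})}
	g.calls[key] = c
	g.mu.Unlock()

	c.val = fetch() // only the first request hits the origin
	close(c.done)

	g.mu.Lock()
	delete(g.calls, key)
	g.mu.Unlock()
	return c.val
}

func main() {
	g := &group{calls: map[string]*call{}}
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// All five goroutines share one slow origin fetch.
			fmt.Println(g.do("/index.html", func() string {
				time.Sleep(100 * time.Millisecond) // simulated slow origin
				return "response body"
			}))
		}()
	}
	wg.Wait()
}
```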
Just for completeness's sake, Nginx's HTTP proxy module can do this too (the setting is proxy_cache_lock), though it's off by default there.
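A minimal config fragment enabling it might look like this (cache path, zone name and upstream address are placeholders; this goes inside the http {} block):

```nginx
# Placeholder cache path and zone name.
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;  # placeholder origin
        proxy_cache app_cache;
        proxy_cache_lock on;               # only one request per key goes upstream
        proxy_cache_lock_timeout 5s;       # others wait up to this long (5s is the default)
    }
}
```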