I'm not understanding why beam hopping limits how many cells Starlink can serve. What's driving the analyst's choices of "TDM split" and "beam spread" values? I naively expected the system to have much more granularity than 10% TDM splits.
Beam hopping adds latency and jitter because packets have to wait for the beam to come back to their cell. For example, if they use a ~1 ms superframe and want to cap jitter at 20 ms, a cell's revisit interval can't exceed 20 superframes, so they couldn't split a beam more than 20:1.
Beam hopping also cuts available throughput: each cell only gets its share of the beam's capacity. If they're promising 100 Mbps per cell, they can't slice the beam too finely.
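To make the arithmetic concrete, here's a back-of-envelope sketch of both constraints. All the numbers (the 1 ms superframe, the 20 ms jitter budget, and especially the beam capacity) are illustrative assumptions, not actual Starlink figures:

```python
# Back-of-envelope beam-hopping limits. All values are assumptions
# for illustration, not Starlink specifics.

slot_ms = 1.0            # assumed dwell time per cell (~1 ms superframe)
jitter_budget_ms = 20.0  # assumed worst-case extra wait we tolerate

# With an N:1 TDM split, a cell is revisited every N superframes, so the
# worst-case packet wait is roughly N * slot_ms. The jitter budget caps N:
max_split_jitter = int(jitter_budget_ms // slot_ms)
print(max_split_jitter)  # 20 -> can't split a beam more than 20:1

# Throughput side: per-cell rate is beam capacity divided by the split.
beam_capacity_mbps = 2000.0  # hypothetical aggregate beam capacity
promised_mbps = 100.0
max_split_rate = int(beam_capacity_mbps // promised_mbps)
print(max_split_rate)  # 20 -> more cells than this drops below 100 Mbps
```

Whichever of the two limits is smaller binds the split, which is presumably why the analyst works in coarse steps rather than arbitrarily fine ones.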
Thanks for the explanation. That raises the next question of how small a superframe can be; I imagine that as superframes shrink, efficient scheduling becomes a harder problem.