I agree with the parent poster that it's probably more about inference, not training.
If ML developers can assume that consumer machines (at least "proper consumer machines, like those made by Apple") will have support for doing small-scale ML calculations efficiently, then that enables including various ML-based thingies in random consumer apps.
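For a concrete sense of what that looks like from an app developer's side, here's a minimal sketch (not from the original discussion) of running a small on-device model through Core ML in Swift. The model name "SmallClassifier" and the "text" input are hypothetical; the point is just that the app asks for acceleration generically and lets the OS pick whatever hardware the machine has.

```swift
import CoreML

// A minimal sketch, assuming a small pre-compiled Core ML model
// ("SmallClassifier.mlmodelc", a hypothetical name) is bundled with the app.
do {
    let config = MLModelConfiguration()
    // Let Core ML dispatch to the Neural Engine / GPU / CPU as available;
    // the app doesn't need to know which accelerator this machine actually has.
    config.computeUnits = .all

    guard let url = Bundle.main.url(forResource: "SmallClassifier",
                                    withExtension: "mlmodelc") else {
        fatalError("model not bundled")
    }
    let model = try MLModel(contentsOf: url, configuration: config)

    // One small-scale inference; the input keys depend on the model's schema.
    let input = try MLDictionaryFeatureProvider(dictionary: ["text": "some user input"])
    let output = try model.prediction(from: input)
    print(output.featureNames)
} catch {
    print("inference failed: \(error)")
}
```

If developers can count on something like this running efficiently on most of their users' machines, shipping the feature stops being a gamble on the hardware.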