Yes, that would be great. But without the ability for us to verify this, who's to say they won't use the edge resources (your computer and electricity) to process data (your data) and then send the results to their data center? It would certainly save them a lot of money.
They already do this. It's called federated learning, and it's a way for them to use your data to help personalize the model for you and also (to a much lesser extent) improve the global model for everyone, whilst still respecting your data privacy. It's not to save money; it's so they can keep your data private on device and still use ML.
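For anyone curious, the core idea (federated averaging) is roughly the following. This is a toy sketch with made-up function names, not Apple's or Google's actual implementation: each device fine-tunes a copy of the model on its own data and only the resulting weights go back to the server, which averages them.

    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])  # the relationship the devices' private data follows

    def make_device_data(n=50):
        # private (x, y) samples that never leave the device
        xs = rng.normal(size=(n, 2))
        ys = xs @ true_w + rng.normal(0, 0.1, size=n)
        return list(zip(xs, ys))

    def local_update(global_weights, local_data, lr=0.01):
        # device fine-tunes its copy of the model on its own data (toy linear model, SGD)
        w = global_weights.copy()
        for x, y in local_data:
            grad = 2 * (w @ x - y) * x
            w -= lr * grad
        return w  # only the weights are sent back, not the data

    def federated_average(global_weights, devices):
        # server aggregates weight updates from participating devices
        updates = [local_update(global_weights, d) for d in devices]
        return np.mean(updates, axis=0)

    devices = [make_device_data() for _ in range(3)]
    w = np.zeros(2)
    for _ in range(20):
        w = federated_average(w, devices)
    print(w)  # converges near true_w without any device sharing raw samples

Real deployments add things like secure aggregation and differential privacy on top, so the server can't even inspect an individual device's update, but the data-stays-on-device structure is the same.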
When you can do all inference at the edge, you can keep it disconnected from the network if you don't trust the data handling.
I happen to think they wouldn't, simply because sending this data back to Apple in any form they could digest is not aligned with their current privacy-first strategy. But if they make a device that still works while it stays disconnected, the neat thing is that you can just...keep it disconnected. You don't have to trust them.
Except that's an unreasonable scenario for a smartphone. It doesn't prove that the minute the user goes online it won't be egressing data, willingly or not.
I don't disagree, although when I composed my comment I had desktop/laptop in mind, as I think genuinely useful on-device smartphone AI is a ways off yet, and who knows what company Apple will be by then.
+1 The idea that because it's on device it's privacy-preserving is Apple's marketing machine speaking, and that doesn't fly anymore. They have to do better to convince any security and privacy expert worth their salt that their claims and guarantees can be independently verified on behalf of iOS users.
Google did some of that on Android, which meant open-sourcing their on-device TEE implementation, publishing a paper about it, etc.