But that's silly, it can do all that and also have a killer app.
I can already do all that traditional stuff on a MacBook or iPad, and "have a screen anywhere", and those devices are more portable because I can just set one on the table instead of having to untangle A CABLE from my clothing and then find somewhere to put a headset that doesn't sit flat like an iPad or MacBook.
Then there's the fact that ipads and macbooks have keyboards.
With no killer app, this is a boondoggle, I think.
I suspect that, like self-driving, the last 10%, 1%, 0.1% will be both functionally essential and exponentially difficult.
Video calls work great (well, once we've sorted out the eye-contact issue, now there's a real problem that genuinely needs solving[1]). Even with all the ML in the world, avatars will be just a pale reflection of the real thing.
[1] You need a screen that is also a composite camera array, so that software can track the eyes in the incoming video feed and place the virtual camera for the outgoing feed at that (moving) location, sort of like a phased array for light. Then, when you look at someone's eyes, they see you looking directly down the camera.
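To make the idea concrete, here's a minimal sketch of the tracking-to-camera mapping step, assuming a hypothetical regular grid of camera elements embedded behind the screen and an eye position already tracked in screen pixel coordinates (the function name and geometry are my assumptions, not any real product's API):

```python
# Hypothetical sketch: map the tracked on-screen eye position of the
# remote person to the nearest camera element in a grid embedded in
# the display, so the outgoing feed is captured "at" their eyes.

def nearest_camera(eye_xy, screen_px, grid_dims):
    """Return the flat index of the grid camera element closest to
    eye_xy (pixels) on a screen of screen_px size, for a grid of
    grid_dims = (cols, rows) elements spanning the screen."""
    x, y = eye_xy
    w, h = screen_px
    cols, rows = grid_dims
    # Convert pixel coordinates to grid cell indices, clamped in range.
    col = min(cols - 1, max(0, int(x / w * cols)))
    row = min(rows - 1, max(0, int(y / h * rows)))
    return row * cols + col

# Example: eyes at the centre of a 1920x1080 screen, 8x4 camera grid.
print(nearest_camera((960, 540), (1920, 1080), (8, 4)))  # -> 20
```

A real implementation would presumably blend the feeds of several neighbouring elements (the "phased array" part) rather than switching hard between them, so the viewpoint moves smoothly as the eyes do.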
Looking at the WWDC video, to me it looks like it's come through the uncanny valley and started the final climb toward realism, but it's still not quite right.
I've noticed different people have the valley in different places, so I'm not at all surprised if it creeps some people out.