The bot can gather this restaurant-specific information over several conversations with the restaurant, which wasn't possible before. And the domain isn't too wide.
You are probably right that in principle one could eventually come up with a full catalog of features of a reservation. There would be about, say, 100 of those.
I seriously doubt they will proceed to define and collect them, since reservations that need them are probably 10% or less of the total, but let's say they would.
Even then, the conversation in which you make the reservation is a process in which you make decisions.
Say, there is a place inside at 20:00 or a place in the garden at 20:30. Are you going to let Google choose between the two options for you?
Do you imagine there would be an API in which you specify your preferences to the assistant, at that level of granularity, before it makes the call?
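Just to make the granularity point concrete, here is a purely hypothetical sketch of what such a preference spec would have to look like. Every field name here is invented for illustration; no such API exists:

```python
# Hypothetical preference spec an assistant would need up front.
# All keys and values are made up to illustrate the granularity problem.
reservation_prefs = {
    "party_size": 2,
    "time_window": ("20:00", "20:45"),
    "seating": [                                # ranked fallbacks
        {"area": "inside", "earliest": "20:00"},
        {"area": "garden", "earliest": "20:30"},
    ],
    "tradeoff": "prefer_earlier_over_seating",  # tie-breaking rule
}
```

And this still only covers the one inside-vs-garden decision; every other feature in that 100-item catalog would need its own tie-breaking rule.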
One method of creating adversarial examples in a "black box" setting is to train a local model as a stand-in for the actual model, using the actual model's inputs and outputs as training data [1]. So the answer is "no", but a qualified "no", since in practice this approach seems to work. The second part, being able to forward many images, is also a qualified "no".
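A minimal sketch of that substitute-model idea, assuming a PyTorch setup and 28x28 grayscale inputs; `query_black_box` is a placeholder for the remote model, and the architecture is arbitrary:

```python
# Sketch of the substitute-model attack: label local data with the
# black box's outputs, fit a local stand-in, then craft adversarial
# examples against the stand-in and rely on them transferring.
import torch
import torch.nn as nn

def query_black_box(x):
    # Placeholder: in practice this sends x to the target model and
    # returns its predicted class labels (a LongTensor of indices).
    raise NotImplementedError

substitute = nn.Sequential(
    nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
)
opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_substitute(inputs):
    # Train the local model to mimic the black box on these inputs.
    labels = query_black_box(inputs)
    for _ in range(10):
        opt.zero_grad()
        loss = loss_fn(substitute(inputs), labels)
        loss.backward()
        opt.step()

def fgsm(x, y, eps=0.1):
    # Fast gradient sign method against the *local* substitute; the
    # perturbation often transfers to the actual black-box model.
    x = x.clone().requires_grad_(True)
    loss_fn(substitute(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```

The qualification in the second "no" is visible here too: `train_substitute` only needs enough queries to fit the stand-in, not unlimited access, and the attack itself never touches the black box again.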
Thanks. I found that when they transitioned to Hardware2/AP2 they cut out most Autopilot functionality, and have since been gradually releasing updates that bring it back.
My summary of the slides: to achieve "common sense", Unsupervised Learning is the next step. Use all the information in the data to predict everything you can, including the past. Use adversarial networks to do that.
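If I read the adversarial part right, the idea is roughly the standard GAN recipe applied to prediction: a generator predicts the missing piece (say, the next frame) and a discriminator judges whether the (context, prediction) pair looks plausible, instead of minimizing a blurry pixel-wise loss. A minimal sketch, with placeholder shapes and architectures that are mine, not the slides':

```python
# Adversarial prediction sketch: generator maps past -> next frame,
# discriminator scores (past, next) pairs as real vs. predicted.
import torch
import torch.nn as nn

FRAME = 64  # flattened frame size (placeholder)

gen = nn.Sequential(nn.Linear(FRAME, 128), nn.ReLU(), nn.Linear(128, FRAME))
disc = nn.Sequential(nn.Linear(2 * FRAME, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(past, future):
    # Discriminator: real (past, future) pairs vs. predicted ones.
    fake = gen(past)
    d_real = disc(torch.cat([past, future], dim=1))
    d_fake = disc(torch.cat([past, fake.detach()], dim=1))
    d_loss = (bce(d_real, torch.ones_like(d_real)) +
              bce(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: produce predictions the discriminator accepts as real.
    d_fake = disc(torch.cat([past, gen(past)], dim=1))
    g_loss = bce(d_fake, torch.ones_like(d_fake))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The same setup predicts "the past" by just swapping which side of the sequence is the context and which is the target.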