The bot can gather this restaurant-specific information over several conversations with the restaurant. That wasn't possible before, and the domain isn't too broad.


You are probably right that in principle one could eventually come up with a full catalog of features of a reservation. There would be about, say, 100 of those.

I seriously doubt they will proceed to define and collect them, since reservations that need that level of detail are probably 10% or less of the total, but let's say they would.

Even then, the conversation in which you make the reservation is itself the process in which you make the decision.

Say there is a table inside at 20:00 or a table in the garden at 20:30. Are you going to let Google choose between the two options for you?

Do you imagine there would be an API in which you specify your preferences to the assistant, before it makes the call, at that level of granularity?
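
Purely for illustration, a preference spec at that granularity might look something like this (every field name here is hypothetical; I'm not aware of any real API like it):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ReservationPreferences:
        party_size: int
        earliest: str = "20:00"                  # acceptable time window
        latest: str = "20:45"
        seating_ranked: List[str] = field(default_factory=lambda: ["garden", "indoor"])
        max_wait_minutes: int = 15
        ask_me_if_ambiguous: bool = True         # hand the choice back to the human

    prefs = ReservationPreferences(party_size=2)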


You can use a character-level RNN that reads the fragmented input character by character, and feed its output to your NN.
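
A minimal sketch of what I mean, assuming PyTorch (all names and sizes are arbitrary): a GRU reads the noisy string character by character, and its final hidden state becomes the feature vector you feed into the rest of your network.

    import torch
    import torch.nn as nn

    class CharEncoder(nn.Module):
        def __init__(self, vocab_size=256, emb_dim=32, hidden_dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

        def forward(self, char_ids):              # (batch, seq_len) int64
            x = self.embed(char_ids)
            _, h = self.rnn(x)                    # h: (1, batch, hidden_dim)
            return h.squeeze(0)                   # (batch, hidden_dim)

    encoder = CharEncoder()
    ids = torch.tensor([[ord(c) for c in "rservaton 4 ppl"]])   # fragmented input
    features = encoder(ids)                       # feed this into your existing NN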


Is it possible to create an adversarial example without access to the weights of a model, and without being able to forward many images through it?


One method of creating adversarial examples in a "black box" setting is to train a local substitute model as a stand-in for the actual model, using the actual model's inputs and outputs, and then craft the example against that substitute [1] (sketched below). So to the first part, the answer is a qualified "no, you don't need the weights", since in practice this transfer works. To the second part, needing to forward many images through the model, the answer is also a qualified "no": you still have to query the target enough times to train the substitute, but that is all the access you need.

[1] https://arxiv.org/abs/1602.02697
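
Roughly, the substitute step looks like this (a simplified sketch assuming PyTorch; the paper also uses Jacobian-based dataset augmentation to choose the query images, which is omitted here):

    import torch
    import torch.nn.functional as F

    def train_substitute(substitute, query_target, images, epochs=5):
        """query_target(images) -> label tensor from the black-box model's outputs."""
        labels = query_target(images)                       # only oracle access needed
        opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
        for _ in range(epochs):
            opt.zero_grad()
            loss = F.cross_entropy(substitute(images), labels)
            loss.backward()
            opt.step()
        return substitute

    def fgsm_on_substitute(substitute, image, label, eps=0.03):
        """Craft the perturbation locally, then send the result to the target model."""
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(substitute(image), label)
        loss.backward()
        return (image + eps * image.grad.sign()).clamp(0, 1).detach()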


Can autonomous vehicles work without deep reinforcement learning? I thought that things like negotiating entry into a crossroad required DRL.


Of course they can. DRL is one very specific set of techniques for training decision-making over multiple timesteps.
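
As a toy illustration (numbers and thresholds entirely made up, not how any real stack works), even a hand-written gap-acceptance rule makes a sequential decision about entering a crossroad with no RL in it:

    def safe_to_enter(oncoming_cars, crossing_time_s=4.0, margin_s=2.0):
        """oncoming_cars: list of (distance_m, speed_m_s) for cars with right of way."""
        for distance_m, speed_m_s in oncoming_cars:
            time_to_arrival = distance_m / max(speed_m_s, 0.1)
            if time_to_arrival < crossing_time_s + margin_s:
                return False              # gap too small, keep waiting this timestep
        return True

    print(safe_to_enter([(80.0, 10.0), (150.0, 15.0)]))   # True: both gaps exceed 6 s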


I couldn't find a reference for this: "Tesla no longer sells cars with automatic driving capability." Can you point to one?


Search Google for "Tesla disables autopilot".


Thanks. I found that when they transitioned to Hardware 2/AP2 they cut out most Autopilot functionality, and have since been gradually releasing updates that bring it back.


Very interesting. I wonder if they tried to predict part-of-speech tags.


That would probably work. Karpathy's character-based RNN could pick up the semantics of text and code: http://karpathy.github.io/2015/05/21/rnn-effectiveness/
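
A sketch of how the part-of-speech idea could look, assuming PyTorch and a 17-tag universal tagset (all sizes arbitrary): predict a tag at every character position and train with cross-entropy against the per-character tags.

    import torch
    import torch.nn as nn

    class CharPosTagger(nn.Module):
        def __init__(self, vocab_size=256, emb_dim=32, hidden_dim=128, n_tags=17):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, n_tags)

        def forward(self, char_ids):               # (batch, seq_len)
            out, _ = self.rnn(self.embed(char_ids))
            return self.head(out)                  # (batch, seq_len, n_tags)

    tagger = CharPosTagger()
    ids = torch.tensor([[ord(c) for c in "the cat sat"]])
    tag_logits = tagger(ids)                       # logits for a tag at every character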


The article says it's 5 years old, so it's probably not deep learning.


This is the relevant chart:

http://www.jcmit.com/mem2015.htm

DIMM prices seem to have been falling more slowly lately.


Would Apple?


My summary of the slides: to achieve "common sense", unsupervised learning is the next step. Use all the information in the data to predict everything you can, including the past. Use adversarial networks to do that.
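
One rough way to read "use adversarial networks to do that" (a sketch assuming PyTorch; shapes and architectures are made up): a predictor fills in a held-out step from its context, and a critic is trained to tell real completions from predicted ones, so the predictor can't get away with blurry averages.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    predictor = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
    critic = nn.Sequential(nn.Linear(64 + 32, 128), nn.ReLU(), nn.Linear(128, 1))
    opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-4)
    opt_c = torch.optim.Adam(critic.parameters(), lr=1e-4)

    def adversarial_prediction_step(context, target):
        # context: (batch, 64) observed steps; target: (batch, 32) the held-out step
        pred = predictor(context)

        # critic learns to score real (context, target) pairs above predicted ones
        real = critic(torch.cat([context, target], dim=1))
        fake = critic(torch.cat([context, pred.detach()], dim=1))
        c_loss = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real))
                  + F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
        opt_c.zero_grad(); c_loss.backward(); opt_c.step()

        # predictor tries to make its completion indistinguishable from the real one
        p_loss = F.binary_cross_entropy_with_logits(
            critic(torch.cat([context, pred], dim=1)), torch.ones_like(real))
        opt_p.zero_grad(); p_loss.backward(); opt_p.step()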

