
What? You are comparing the speed of processing a NN. How is that comparing them on their "own terms"?



Did the old methods use neural networks? I wouldn't be surprised if they did, but I would be surprised if the networks were as deep as what people use today.

That is, I am interested in comparing them on speed of transcription, speech synthesis, error rates, etc., not on speed of network execution.


No, the old method did NOT use a NN. I hope Google writes a paper and shares more details.

It is hard to believe they are able to do 16k samples a second through a NN, even with the TPUs.

So I'd be curious to see whether that rate is reduced, and by how much.

If they really do have the ability to do 16k samples a second at scale, that opens the door for all kinds of other applications.
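
For what it's worth, here's a rough back-of-the-envelope sketch. The layer count and channel width below are guesses for a WaveNet-style autoregressive model, not anything Google has published:

    # Illustrative only: assumed model sizes, not published numbers.
    SAMPLE_RATE = 16_000      # audio samples generated per second
    LAYERS = 30               # assumed number of dilated conv layers
    CHANNELS = 64             # assumed residual channel width

    # Rough multiply-adds per layer per sample (two 1x1-ish matmuls).
    ops_per_layer = 2 * CHANNELS * CHANNELS * 2
    ops_per_sample = LAYERS * ops_per_layer
    ops_per_second = SAMPLE_RATE * ops_per_sample
    latency_budget_us = 1e6 / SAMPLE_RATE   # time allowed per sequential pass

    print(f"~{ops_per_sample/1e3:.0f}k multiply-adds per sample")      # ~492k
    print(f"~{ops_per_second/1e9:.2f} GFLOP/s for real-time output")   # ~7.9
    print(f"~{latency_budget_us:.1f} us per sequential forward pass")  # ~62.5

Under those assumptions the raw FLOP count is tiny for a TPU; the hard part (if the model is autoregressive) is that every sample depends on the previous one, so you only get about 62 microseconds of sequential latency per forward pass.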





