
Gluon is an attempt by Microsoft and Amazon to regain some influence in AI tools. Keras looked like it was going to become the standard high-level API, but Theano is now dead, CNTK and MXNet are controlled by Google's rivals, and those rivals are ganging up against Google's tools. Francois Chollet is committed to keeping Keras neutral, but he's still a Google engineer, and that probably makes Microsoft and Amazon nervous. This is the equivalent of Microsoft creating C# in response to Java. The company that controls the API has enormous influence on the ecosystem built atop it, just like Google has had with Android, or MSFT with Windows. MSFT and AMZN are carving out their own user base, or trying to, at the price of fragmenting the Python community.



Unlike Keras and TensorFlow, Gluon is a define-by-run deep learning framework, like PyTorch and Chainer. Network definition, debugging, and flexibility are all better with a dynamic (define-by-run) network. That's why Facebook seems to use PyTorch for research and Caffe2 for deployment. Gluon/MXNet can do both: define-by-run with the Gluon API and "standard" define-and-run with its Module API. See the rough sketch below.
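
To make the distinction concrete, here's a minimal sketch of the define-by-run style with the Gluon API. The layer sizes, optimizer settings, and dummy batch are illustrative choices, not anything from the announcement:

    import mxnet as mx
    from mxnet import nd, autograd, gluon

    # Define the network imperatively: layers are plain Python objects.
    net = gluon.nn.Sequential()
    with net.name_scope():
        net.add(gluon.nn.Dense(64, activation='relu'))
        net.add(gluon.nn.Dense(10))
    net.initialize(mx.init.Xavier())

    loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
    trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

    # Dummy batch: 32 samples of 784 features, class labels in [0, 10).
    x = nd.ones((32, 784))
    y = nd.array([i % 10 for i in range(32)])

    # The graph is recorded as the Python code executes, so you can drop a
    # breakpoint or print an intermediate NDArray anywhere inside this block.
    with autograd.record():
        output = net(x)
        loss = loss_fn(output, y)
    loss.backward()
    trainer.step(batch_size=32)

The define-and-run path is the usual mx.sym / mx.mod.Module workflow, where the whole symbolic graph is declared up front and bound before any data flows through it.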


I think you're both right here. Competition will force each to innovate. I don't think end users lose by there being multiple interfaces, even if fragmentation is ultimately what happens here.

Standard formats and interop will help fix that.


I think you had a justifiable devil's-advocate argument until your claim about "fragmenting."

What exactly is so bad about competition?


What is so bad about standardization?


Choice.


Data scientists arguably have too much choice. 10 data scientists will have 50 different tools, can't share work or build on one another's experiments, or even remember what the results of an experiment were. Those are some of the reasons why most data science projects fail. That, and integrations. Standardization has real benefits.


Of course standardization has benefits, but how do you choose? Standardization only works if choice is eliminated, so choice is a barrier to achieving standardization.


It often just comes down to project requirements. E.g., what kind of model is required? How hard would it be to build with tool X?

For example, a big reason why a lot of computer vision research was built on Caffe (and sorta still is, because of momentum) was its pre-existing model zoos.

A big reason why people choose TF (despite it lacking dynamic graphs) is just the existing community.

Requirements for both papers and industry will continue to evolve. Each framework will have its own trade-offs.


There are trade-offs to choice. In the case of another commenter, "too much choice" means a ton of churn and a lot of friction when it comes to building models.

I think there's always a trade-off between innovation and stability that people should be thinking about here.

Granted, things like standard model formats should help long term, but for now we're going to be dealing with a ton of churn on APIs.

I'm sure another thing like dynamic graphs will come along and we'll need to update the APIs.

I suspect Keras will respond to this at some point by adding primitives for eager mode and the like.

I know both data scientists who need more advanced models and others who prefer the Keras API, just building off-the-shelf models.



