120 km/h looks like the typical global speed limit, though I haven't verified that. It holds for China, India, Brazil, and several countries in the Middle East and Africa.
If you're trying to learn about deep learning, I highly suggest using Python (Theano) or Lua (Torch). They're free and used by the experts in the field for research.
Even if you don't want to use the frameworks, you'll still have access to fast linear algebra routines.
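To illustrate the point about fast linear algebra: in Python, NumPy (which Theano builds on) dispatches matrix operations to optimized BLAS routines. This is a minimal sketch, not from the original comment, and the array sizes are arbitrary:

```python
import numpy as np

# NumPy hands this product off to an optimized BLAS routine,
# so it runs orders of magnitude faster than hand-written
# Python loops over the same arrays.
a = np.random.rand(500, 300)
b = np.random.rand(300, 400)

c = a @ b          # matrix multiply via BLAS
print(c.shape)     # (500, 400)
```

The same routines are what the deep learning frameworks call internally, so you get the speed even without adopting a full framework.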
Could someone recommend a book about deep learning and/or machine learning that uses one of these open-source libraries? I don't have any background in ML or DL.
Although it's not for machine learning beginners (learn that first), there is a book on deep learning currently in pre-publication, and it's being written by some big names in the field.
I'm taking the Coursera course right now. The course page at Stanford has a lot of student projects. The breadth of applications is pretty huge, definitely worth a check if you're looking for an idea.
I am working through Ng's course currently. It is hitting the right notes against my math-phobia, keeping me constantly in that state of semi-understanding that is intuition, a term Ng uses often.
His choice of Octave/MATLAB simplifies issues of dependencies, in particular the soft ones of documentation and community. This is something a lot of academic contexts get wrong with software: either the tools are too open-ended and students wind up manipulating matrices with for-loops, or there's an inflexible stack of professional tools that requires massive effort to learn and comes with an orthogonal community, or there's a toy IDE based on a senior thesis.
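The for-loop complaint is the same in any array language. A quick sketch of the contrast in Python/NumPy (my own example, not from the comment; the array contents are arbitrary):

```python
import numpy as np

x = np.arange(12.0).reshape(3, 4)

# The for-loop style students fall back on when the
# environment doesn't nudge them toward array operations:
total_loop = 0.0
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        total_loop += x[i, j] ** 2

# The vectorized style Octave/MATLAB-like environments encourage:
total_vec = np.sum(x ** 2)

assert abs(total_loop - total_vec) < 1e-9  # same result, far less code
```

The vectorized form is both shorter and the one that actually uses the fast numeric routines underneath.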
Octave more or less follows the Unix philosophy of doing one thing well, and can thus meet many people where they are rather than imposing one true way.
That's right; we can't know for sure and I'm sure you'll get some backlash for this comment. As with everything though, we need a balance.
If we say that we can't know for sure and give up, then no advancements would ever be made in any field. On the other hand, we should recognize that science does not claim to provide us with certainty. Many of us speaking "objectively" with scientific "facts" often become emotional and arrogant when speaking to anyone who doesn't agree with our conclusions.
I can say pretty certainly that all the text they've gathered through the Google Books project is in use in their language models and other AI models for their search engine, speech recognition, etc.
They got what they wanted. I can't see what incentive they have as a business to grant access to the books that justifies paying employees for it.
Just my personal opinion, but when you have an indexed copy of the whole web, a few million OCRed-but-not-corrected books from previous centuries added to your LM are not going to improve 2015 speech recognition quality.
It would illustrate how language and ideas evolve over time. It would illustrate how language and ideas from different geographical sources might differ or be similar, especially during pre-Internet periods. It would provide the source material that is being referenced in contemporary works. It would provide many, many other benefits.
Way, way more than a corpus of a few million published books, that's for sure. Hell, there are individual message boards with a higher word count than millions of books. Wikipedia arbitration cases (these aren't articles, but rather an esoteric back channel for handling disputes between users) frequently reach novel length.
The average quality is going to be lower, of course.
The least interesting thing about the Mexican–American War is what type of dash you use between "Mexican" and "American". There are over twenty thousand words about that dash on wiki meta.
15,000 words would be okay if, at the end of it, there were some kind of consensus, or something that could be transferred to other articles.
Future people are going to have a skewed image of us if they think meta wiki is representative.
Gather data on search queries and highlighted phrases for books? There is value in knowing which subset of the corpus is more valuable.
Apple acquired a "Pandora for Books" recommendation startup which had permission from publishers, who provided text for indexing, http://www.businessinsider.com/apple-buys-booklamp-2014-7 . Their machine classification made it possible to search books for topics whose words were not present in the book.
I've found reading the right books to be the best way. Lectures tend to be too broad for me, although they can serve as good overviews.
My philosophy is that if I want to learn something, I get a few suggestions for books, briefly skim through their contents, and pick one. After that I'll take notes in it, reread it, and basically squeeze every ounce of knowledge that I possibly can from it until it's ingrained in my brain.
A good experiment might be to watch 1,000 hours of educational videos and see if you've actually learned enough to test out of an introductory course on the subject. I could see that happening with, say, Khan Academy, but less so with the various popular "wow, science!" types of videos.
Do you think if you had read 1,000 hours of woodworking books that you'd be any better off?
There clearly comes a point where even in-depth studying of a topic has limited ability to translate into real-world skill. Very few things translate well from study to skill.
Just like going to all the lectures and not doing any homework isn't going to prepare you for finals. Videos introduce and explain ideas, but you need to puzzle through problems yourself to gain a real understanding.
It's not about knowledge, it's about grasping. When the thing is made easy to grasp, you increase knowledge, not global understanding. To understand better, you have to practice understanding. (supposedly)
And so rather than watching digested videos, I recommend reading the works and essays of great minds, and struggling (reasonably) to embrace their way of thought while appreciating the content. Understand some, then go again, and understand more. Socrates taught Plato, and Plato taught Aristotle. Is it a coincidence? I would seek out the greatest mind I can find, first among other strategies. Personally, I've started reading Emerson every day, or Montesquieu. I write down parts I like, on paper, and along the way try to understand better why he says what he says, and why he writes how he writes.
But this is supposing I'm smart and have something worth saying... Well, let us settle on a compromise then, and experiment? Because different people are at different levels of grasping, and they also have different definitions of 'smart', so you should choose depending on what you want to become, and which 'they' you want to be a part of.
Personally, I don't care that much, it just feels good to be so near a great mind, though surely everyone has his own view of what that is.
the ability to grasp is more valuable than the knowledge though-- this is why my alma mater didn't really worry too much about making the material digestible for us.
not joking. Frankly, it makes my degree more valuable
I didn't say grasping is superior to knowledge. I said the optimum path to increase grasping (intelligence) may be different from the optimum path to increase knowledge, and that if this is the case, then I would train with hard-to-grasp material rather than easy-to-digest videos. I admit my first sentence was badly worded (I'm not a native English speaker, though that's no excuse).
I'm with you for the most part. However, learning bits and pieces of something without any indication that the source is credible (which is often the case with youtube videos), could be less helpful than learning nothing at all.
Thus 'getting smarter' could be an overstatement in this case.