My prediction is that whoever comes up with the next leap forward in AI will be someone who, at minimum, has a firm grasp of the various branches of undergraduate-level maths. Naively tinkering with heuristic statistical ML methods like neural nets and hoping that higher-level intelligence somehow magically pops out isn't the way forward. We need a more sophisticated approach.
Pragmatically speaking, the majority of machine learning researchers right now are not trying to make a leap in AI; they're just trying to get in on the money while the current funding frenzy lasts.
That is, for example, why you can find people suggesting, presumably in earnest, that you:
3. Flashcard the Deep Learning Book (4-6m)
4. Flashcard ~100 papers in a niche (2m)
As a method to "bootstrap yourself into deep learning research".
I mean, it's clear to me that the language deployed in the article is ostensibly about teaching yourself to do machine learning research, when what it's really discussing is how to get hired by one of the companies that are currently paying six-figure salaries for machine learning engineers etc.
Or I'm just old and cynical. Wait, let me find my false teeth so I can chew that over.
This is already being done in places such as the University of Arizona (Chomsky and his former students). The subject is narrower of course (computational linguistics and some neuroscience), but they are taking an approach that is more Galilean in nature, designing experiments that reduce externalities rather than simply looking at massive amounts of data. I think that's what's going to be the most useful, at least in areas that continue to be challenging for the current trends in AI, namely language.
This is logically independent of any claim about the value of formal education. I can say from experience that an undergraduate degree is not necessary in order to gain a firm grasp of undergraduate-level math. Happy to elaborate if that is desired.
I'm sure it's possible to learn on your own, but I think most people would benefit from taking a few years of their lives to dedicate to learning surrounded by a community of teachers and like-minded classmates. Learning on your own requires a lot of discipline and dealing with solitude.
The OP is highlighting maths because deep learning in particular makes use of some light calculus and linear algebra, and the OP is probably mixing together AI, machine learning and deep learning (as is common today, unfortunately, and I can't blame the OP for that; everyone's doing it).
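To make "light calculus and linear algebra" concrete, here's a minimal sketch (plain NumPy, made-up toy data, nothing taken from the article) of a tiny two-layer network trained by gradient descent: the forward pass is matrix multiplication, and backpropagation is just the chain rule written out by hand.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))             # toy inputs: 200 examples, 2 features
    y = np.sin(X[:, 0]) + X[:, 1] ** 2        # toy nonlinear target to fit

    W1 = rng.normal(scale=0.5, size=(2, 16))  # hidden-layer weights
    W2 = rng.normal(scale=0.5, size=(16,))    # output-layer weights
    lr = 0.05                                 # learning rate

    for _ in range(2000):
        h = np.tanh(X @ W1)                   # forward pass: matrix multiply + nonlinearity
        err = h @ W2 - y                      # prediction error
        # gradients of 1/2 * mean squared error, derived with the chain rule
        gW2 = h.T @ err / len(y)
        gW1 = X.T @ ((err[:, None] * W2) * (1 - h ** 2)) / len(y)
        W1 -= lr * gW1                        # gradient descent step
        W2 -= lr * gW2

    print(np.mean(err ** 2))                  # the fit error shrinks as training runs

That's more or less the whole mathematical toolkit a first deep learning project demands, which is exactly why it gets equated with "AI maths".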
However, there is a lot more to AI than high-school maths, and I don't just mean more maths. I mean knowledge, lore if you like. It's a field with a long history, stretching back even to the 1930s (before it was actually named "AI" at Dartmouth, in the 1950s). A lot of very capable people have worked on AI for a very long time and have actually advanced their respective sub-fields by leaps and bounds, and it's not very sensible to expect new leaps while being completely clueless about what has been achieved before. You can't stand on the shoulders of giants if you don't know that there are giants and that they have shoulders you can stand on.
Unfortunately, most people who enter the field today know nothing of all that, or even that there was an "all that" before 2012 (if they even know what happened in 2012; and to be honest, one wouldn't understand what 2012 means without knowing what came before). So on the one hand they are not capable of making leaps, and on the other hand they don't even know what a leap would look like. They probably think a "leap" is a 10% improvement over the state of the art on a standard classification benchmark.
I agree with you though that what is needed to make leaps in AI is curiosity. Lots and lots of curiosity. Vast amounts of curiosity. Curiosity of the kind that you only find in people who are a bit zbouked in the head. Or just people who have a lot of time on their hands, to study whatever their fancy tells them to.
So, not the kind of person who flashcards The Deep Learning Book, if nothing else because that means the person doesn't have the time to, you know, actually read the damn book well enough to grok it.
I mean seriously, what the fuck is it with the bloody flashcards?
Actually, I found it to take much less discipline. I find an unstructured and curiosity-driven approach to learning math to be much more enjoyable and effective than the typical school approach. You are right about the solitude issue, although I'm unsure whether this approach to learning is intrinsically a lonely pursuit or whether there's a possible society where it isn't.
I know I'm just speaking from my own experience, and what works for me doesn't necessarily work for everybody. But my claim isn't that everyone should do as I did; my claim is that you're wrong that a self-taught ML researcher could only ever make superficial contributions because they'd necessarily be bad at math.