While I respect Elon for his accomplishments, as an AI enthusiast I don't like his view on AI. My guess is that he sees AI from a physics and sci-fi point of view.
Just because something is possible in theory doesn't mean it is in practice.
Interestingly, many of the pessimistic views on AI come from non-computer scientists such as economists, physicists, and philosophers. I wonder whether they have ever actually read a textbook on AI or machine learning instead of just thinking about it at a high level. If they had, they would appreciate how hard AI actually is.
The thing I, as a former AI student, don't like about these kinds of discussions is that "AI" is taken to mean Strong AI, human-level AI. It's not. We already have lots of AI, and none of it is human level. And we don't need human-like AI, because we already have billions of human-level intelligences in this world. We're better off making computers do the stuff we hate and are bad at, not the stuff we're good at or enjoy.
Just comparing the magnitudes of scale at which our brains operate compared to transistors tells you that AI doesn't stand a chance of coming anywhere close. The difference is that we are processing directly on the physical laws, while computers have an additional layer of abstraction in between: transistors.
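For a rough sense of the gap being claimed here, a back-of-envelope sketch (the synapse and transistor counts below are commonly cited order-of-magnitude estimates I'm assuming, not figures from this thread):

    import math

    # Assumed, commonly cited figures (not from the thread):
    # the human brain has on the order of 1e14-1e15 synapses,
    # while a large modern chip has on the order of 1e10-1e11 transistors.
    brain_synapses = 1e14      # low-end synapse estimate
    chip_transistors = 5e10    # roughly a large GPU/CPU die

    ratio = brain_synapses / chip_transistors
    print(f"synapses per transistor on a single chip: ~{ratio:,.0f}")
    print(f"orders of magnitude apart: ~{math.log10(ratio):.1f}")

Even with these generous numbers for the chip, the counts are a few orders of magnitude apart, and that's before considering that a synapse is far richer than an on/off switch.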
Very interesting. Once they start stacking those in layers (3D) there might be some progress. In the end we might not ever require programmers anymore, and we will have something awesome that we don't understand, just like our brain. :-)
AI doesn't need the same scale of ability to effectively take over many and varied human roles.
This is evident in examples such as chess. What is also often forgotten is that a computer doesn't need to be perfect to be able to beat 99.9% of humanity.
However, the idea of an AI being able to replicate a human in all its complexity is so far off that it should, at this point, be considered effectively 'impossible'.
Max Tegmark, a famous physicist, argues that AI is possible. http://www.huffingtonpost.com/max-tegmark/humanity-in-jeopar...