As someone who has just started learning machine learning, I can say that, in my experience, PyTorch is way more beginner friendly than TF. I had a hard time setting up TF to work with my GPU; PyTorch, on the other hand, was a breeze.
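For comparison, this is roughly all it takes to confirm PyTorch sees the GPU (a minimal sketch, assuming a CUDA build of PyTorch is installed):

    import torch

    # True if PyTorch was built with CUDA and a GPU/driver is visible
    print(torch.cuda.is_available())
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))

    # create a tensor directly on the GPU (falls back to CPU otherwise)
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(3, 3, device=device)
    print(x.device)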
JAX feels like the new kid on the block. Every time I see it, it's because it has made some project a lot faster (e.g. WhisperJAX or Stable Diffusion in JAX).
Can anyone explain to me why it's so fast? Is it because of parallelism?
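My understanding is that it's less about parallelism per se and more about compilation: jax.jit traces a function once and hands the whole computation to XLA, which fuses the operations into a few optimized kernels (parallelism via vmap/pmap helps on top of that). A minimal sketch of the pattern:

    import jax
    import jax.numpy as jnp

    def f(x):
        # several elementwise ops that XLA can fuse into one kernel
        return jnp.sum(jnp.tanh(x) ** 2 + 0.5 * x)

    x = jnp.ones((1000, 1000))

    f_jit = jax.jit(f)            # compile with XLA
    f_jit(x).block_until_ready()  # first call pays the compilation cost

    # later calls reuse the cached, fused kernel
    result = f_jit(x).block_until_ready()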
Every Tom, Dick, and Harry has done that course, and the collective inability of most of them to understand ML afterwards suggests the course isn't that great after all.
It is not. It doesn't give you the statistical abilities needed to work in the field, and it doesn't give you a proper introduction to continue on your own. Andrew Ng's inability to teach is very obvious when he starts talking about backpropagation. His tone is also not the right one for conveying information at all.
Every time this course is suggested, a kitty dies. There are better books and courses about ML and DL on Udemy, for example. Those courses usually focus on production or theory, not both at the same time like Andrew does.
You may be talking about the old course and not the revamped one. The revamped version is now a Coursera "specialization", i.e. multiple courses that are either theory-based or project-based.
Also, my understanding is that TensorFlow is becoming less and less popular [1][2], so you might want to focus on PyTorch. Curious if others agree?
[1] See: https://phaseai.com/resources/free-resources-ai-ml-2024
[2] For example, scroll down to see the chart here: https://www.assemblyai.com/blog/pytorch-vs-tensorflow-in-202...