Everything you say is true, but in context the important point is that the biases ML models pick up are overwhelmingly related to insufficiencies in the training data, and are often incredibly difficult to spot unless you already know they exist. For a practical example, see the recent Twitter image-cropping oddities (https://twitter.com/bascule/status/1307440596668182528).
The idea (as quoted) that models are routinely picking up biases directly from researchers is complete nonsense.
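To make that concrete, here's a toy sketch (synthetic data, hypothetical names, not any real system) of how a model inherits bias from the data it happened to be trained on rather than from anything a researcher wrote: a nuisance attribute is correlated with the label in the collected sample, the model latches onto it, and aggregate accuracy hides the problem until the correlation goes away.

```python
# Toy illustration: bias picked up from a skewed training sample, not from the code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, leak):
    """'signal' is a weak genuine predictor; 'group' is a nuisance attribute
    that correlates with the label only when the sample is skewed (leak > 0)."""
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0, 2.0, n)
    group = np.where(rng.random(n) < leak, y, rng.integers(0, 2, n))
    return np.column_stack([signal, group]), y

X_train, y_train = make_data(5000, leak=0.9)   # skewed collection process
X_test, y_test = make_data(5000, leak=0.0)     # deployment: correlation gone

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on data like the training set:", model.score(*make_data(5000, leak=0.9)))
print("accuracy once the spurious correlation disappears:", model.score(X_test, y_test))
```

The model looks fine on held-out data drawn the same way as the training set; nothing in the code points at the problem, which is why you only find it if you go looking.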
Right, that's some kind of human-interest journalist fudging, and it's not true. But bias and surprising wrong answers in ML are obviously a real problem, and fixing the data is not always the right answer. You might not be able to tell what's wrong with the data, or where you could get any more of it, and you might be reusing a model for a new problem without the capability to retrain it.
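In that situation about the best you can do is audit the frozen model's outputs. A minimal sketch, assuming you only have the model's predictions and some per-example metadata (the column names here are hypothetical): slice the error rate by subgroup and look for slices that lag the overall number.

```python
# Post-hoc audit of a model you can't retrain: per-slice accuracy.
import pandas as pd

results = pd.DataFrame({
    "subgroup": ["A", "A", "B", "B", "B", "C"],   # e.g. image category, region, ...
    "correct":  [1,   1,   0,   1,   0,   1],     # did the frozen model get it right?
})

per_slice = results.groupby("subgroup")["correct"].agg(["mean", "count"])
print(per_slice)                                  # a low-mean slice with a decent count
print("overall:", results["correct"].mean())      # is a red flag even if this looks fine
```

It doesn't fix anything by itself, but it tells you where the model can and can't be trusted, which is often all you can do when retraining isn't on the table.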