I remember how, 10 years ago, I was so interested in and hopeful that learning algorithms would revolutionize diagnosis and treatment in medicine. Instead, it looks like all these algorithms, after being trained by biased apes, are only cementing bad policy behind faceless decision makers. The government loves them because adopting them removes accountability and access, especially when it can use private entities to do so.
I think there are at least two things that must be done:
1. Make individual people accountable for the decisions that 'AI' systems make.
2. Foster a culture of critique within AI development and deployment.
And, assuming you accept that there really is a "problem" here, what exactly is the solution? To encumber the technology? Encumber it how, exactly?