I think all of those are great, but I don't think that has much to do with AI tbh. How you get to the outcome, and the standards that it meets, should be all that matters.
You don’t think the black-box nature of much of AI has any bearing on its suitability for safety-critical applications?
Just look at the diverse and haphazard way AI has been used in autonomous driving. I would argue it’s a misapplication of the “move fast and break things” ethos (in some cases at least) that has no place in public-facing safety-critical applications.
At the very least, it raises difficult questions about the adequacy of testing when the underpinnings are not very interpretable.
Ah, my understanding was that this discussion is about AI as a dev-time tool, where the output is code, which gets reviewed, merged, and deployed like any other bit of code, whether written internally or installed via a library.
Using LLMs or other ML as components in the systems themselves is a whole other thing, and I agree with you wholeheartedly.
That’s a real distinction, but auto-generated code brings special complications to safety-critical code review as well. It’s also not limited to ML/AI; that just adds yet another complication to good verification and validation.