Ah, my understanding was that this discussion is about AI as a dev-time tool, where the output is code that gets reviewed, then merged/deployed like any other bit of code, whether written internally or installed via a library.
Using LLMs or other ML as components in systems themselves is a whole other thing, and I agree with you wholeheartedly.
That’s a real distinction, but auto-generated code has special complications in safety-critical code review as well. And it isn’t unique to ML/AI; ML/AI just adds yet another complication to good verification and validation.