That is exactly what regulation should address, not sidestep. Politics exists to make those kinds of moral decisions. If you want certain behaviour from corporations using AI for something, it might be better to police that behaviour directly rather than indirectly by peering inside their tools.
Creating mountains of documentation about the model for a regulator will not do that.
Similarly, using input/training data quality as a vehicle to prevent, e.g., discrimination is weaker than directly targeting the outcome.
For some things, perhaps? For others, target accuracy/precision measures with fines for failing them. In the EU, individual harm can be quite cheap for corporations in some countries; that might also be an avenue.
There's no magic objective-morality box you can feed a decision into, have it weigh the consequences, and get a little green or red light.
Good for whom? Under what assumptions was it made? In what context does it hold?
All of that is transparency.