Unfortunately, there is no real objectivity here. Machine learning systems are fed large numbers of human judgments, are tuned based on human judgment, and then are deployed when other humans think them ready.
The way you get reasonable consistency, whether it's humans or machines, is by establishing clear standards, using them for training, and then continuously monitoring results. It's not perfect, of course. But nothing is.
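To make that concrete: one common way to do the monitoring part is to keep a small gold-standard set labeled to the written standard and periodically compare the system's output against it. The sketch below is only an illustration under assumed inputs, not anyone's production setup; the label values, the weekly-check framing, and the helper name agreement_and_kappa are all hypothetical. It just computes raw agreement and Cohen's kappa, so drift away from the standard shows up as falling numbers.

```python
# Minimal sketch (assumptions: a hypothetical gold-standard set labeled to an
# agreed written standard, plus the deployed model's labels for the same items).
# Monitoring here means recomputing agreement on a schedule and watching the trend.

from collections import Counter

def agreement_and_kappa(gold, predicted):
    """Return (raw agreement, Cohen's kappa) for two equal-length label lists."""
    assert gold and len(gold) == len(predicted)
    n = len(gold)

    # Fraction of items where the model matches the human-adjudicated label.
    observed = sum(g == p for g, p in zip(gold, predicted)) / n

    # Agreement expected by chance if labels were assigned independently
    # at each side's observed rates.
    gold_counts, pred_counts = Counter(gold), Counter(predicted)
    labels = set(gold) | set(predicted)
    expected = sum(gold_counts[l] / n * pred_counts[l] / n for l in labels)

    kappa = 1.0 if expected == 1.0 else (observed - expected) / (1 - expected)
    return observed, kappa

# Hypothetical periodic check: human-adjudicated labels vs. the model's labels.
gold      = ["spam", "ok", "ok", "spam", "ok", "ok", "spam", "ok"]
predicted = ["spam", "ok", "spam", "spam", "ok", "ok", "ok", "ok"]

observed, kappa = agreement_and_kappa(gold, predicted)
print(f"raw agreement: {observed:.2f}  kappa: {kappa:.2f}")
```

The same comparison works for two human raters, which is the point: you hold people and machines to the same written standard and the same measurement.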