
I think that a world where that comment holds up to scrutiny would be very different to the one we've got.

There's no magic objective morality box that you can put a decision into and have it weigh the consequences, with a little green or red light turning on.

Good for whom? Under what assumptions was it made? In what context does it hold?

All of that is transparency.



That is exactly what regulation should address, not sidestep. Politics exists to make precisely these kinds of moral decisions. If you want certain behaviour from corporations using AI, it might be better to police that behaviour directly rather than indirectly by peering inside their tools.

Creating mountains of documentation about the model for a regulator will not achieve that.

Similarly, using input/training-data quality as a vehicle to avoid, e.g., discrimination is weaker than directly targeting the outcome.


Like a ban?

That could be a reasonable decision for people to make. If observability comes with accountability, then it might eventually end up being the same thing.


For some things, perhaps? For others, target accuracy/precision measures with fines for failing them. In some EU countries, individual harm can be quite cheap for corporations; addressing that might also be an avenue.


Right, I can see your point now; my interpretation in my first reply wasn't very accurate at all. Sorry about that.


No problem. I probably wasn't quite clear enough given the many angles of the problem.



