What happens when AI reaches provably better decision outcomes whose chain of logic is opaque to mere humans? Now you have a trusted black box that cannot be audited by humans.


How many people can figure out what's going on in my Toyota's on-board computer? Even Toyota's techs had trouble reading their own code when the stuck accelerator issue happened.

We're already in this black-box hypothetical. I don't think it's such a big change. I mean, can you audit the code in Google's self-driving car? Who can?



