
> overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

I wrote the essay linked below [0] a few months ago, and it is very relevant here. I argue that asking an ML model for explanations forces you to accept a dumbed-down version of the result, just as asking any human expert to explain every subtlety of what they are doing forces them to simplify. Asking for explanations is a kind of micromanagement. There are instances where explanations are important (research, for example), but much less so in model deployment.

The better way is to focus on the results the models provide, and on confidence that the model is making supported decisions, i.e. that it is not extrapolating or predicting on out-of-distribution data. This is how we would use other kinds of experts: validate their expertise and trust them when they are working within their area.

[0] https://www.willows.ai/blog/getting-more
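
To make "supported decisions" concrete, here is a rough sketch of one way to check whether an input is close enough to the training distribution to trust the prediction. This is only an illustration under my own assumptions (a k-NN distance with a calibrated threshold; the feature matrix, k, and quantile are placeholders), not the method from the essay.

    # Sketch: flag predictions on inputs far from the training data,
    # instead of asking the model to explain itself.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    class SupportChecker:
        def __init__(self, train_features, k=10, quantile=0.99):
            # Index the features the model was trained on.
            self.nn = NearestNeighbors(n_neighbors=k).fit(train_features)
            # Calibrate a distance threshold on the training data itself.
            dists, _ = self.nn.kneighbors(train_features)
            self.threshold = np.quantile(dists.mean(axis=1), quantile)

        def is_supported(self, x):
            # "Supported" = about as close to known data as training points
            # are to each other; otherwise flag for human review.
            d, _ = self.nn.kneighbors(np.atleast_2d(x))
            return float(d.mean()) <= self.threshold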



When I meet a new person, say in a hiring situation, I would question that person to learn how she thinks -- how she arrives at her conclusions. Once she has earned the trust of her collaborators, she can work mostly off her intuition.

Why should that be different for ML models? I would expect to be able to switch between a result-based mode (intuition) and a more thorough explanatory mode (XAI) to assess the soundness of the reasoning. And I am fully aware that complexity increases when I turn on explanations.


Well, I'd argue that if you take this approach to hiring, you can only hire people who share your area of expertise and think the way you do. That is not generally how ML models work. You could create an explainable model that shows you the things you want to see as part of its decisions, but (a) that won't take advantage of the full strengths of ML; it's more like a hard-coded model, (b) it just pushes the problem down a level -- great, it said it's a dog because it has teeth, but how does it know what teeth are? -- and (c) it is still subject to failure on out-of-distribution or extrapolated sample data. Like I say, there is a place for explanations, especially for the ML scientist working on building a model. But for "managing" the model as I imagine one would in a healthcare setting, I think explainability as commonly construed (feature attribution) is the wrong focus.
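
For concreteness, "feature attribution" here means something like the following -- a hedged sketch using scikit-learn's permutation importance on a hypothetical fitted model, with placeholder names. It tells you which inputs move the score, not why the model reasons the way it does.

    from sklearn.inspection import permutation_importance

    def attribute(model, X_val, y_val, feature_names):
        # Shuffle each feature and measure how much the score degrades.
        result = permutation_importance(model, X_val, y_val,
                                        n_repeats=10, random_state=0)
        ranked = sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: t[1], reverse=True)
        for name, importance in ranked:
            print(f"{name}: {importance:.3f}")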


> Well, I'd argue that if you take this approach to hiring, you can only hire people who share your area of expertise and think the way you do.

What? Why?

There are different modes of reasoning. When I am to assess the quality of other people's reasoning, I do not necessarily need to be an expert myself, or have them think like I do.

Think semantics. You have declarative and operational semantics. It is perfectly alright to have multiple implementations of the same type declaration, multiple proofs of the same proposition. It is not really the proof itself that is of interest, but what mode of reasoning was used to arrive there.
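
As a toy illustration (the names are mine, purely illustrative): one declaration, multiple implementations that inhabit it, and the interesting question is how each arrives at the result, not just that the results agree.

    from typing import Protocol

    class Sorter(Protocol):
        def sort(self, xs: list[int]) -> list[int]: ...

    class InsertionSorter:
        # One way of inhabiting the declaration: build the result step by step.
        def sort(self, xs: list[int]) -> list[int]:
            out: list[int] = []
            for x in xs:
                i = 0
                while i < len(out) and out[i] < x:
                    i += 1
                out.insert(i, x)
            return out

    class BuiltinSorter:
        # Another inhabitant: delegate to the built-in sort.
        def sort(self, xs: list[int]) -> list[int]:
            return sorted(xs)

    # Both satisfy the same declaration; their modes of reasoning differ.
    assert InsertionSorter().sort([3, 1, 2]) == BuiltinSorter().sort([3, 1, 2])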

In a hiring situation, I declare the role I need. The person being interviewed tells me how she would inhabit that role, and I assess whether that actually constitutes an inhabitation.

Though it appears that we fundamentally agree, as I read from your last sentence that the "removal" of the explainable parts is only a last step before deployment.

> But for "managing" the model ...




