I believe he is referring to something like expert system explanations, which were the holy grail 20 years ago (I don't know if that was ever achieved), as opposed to neural networks, which are more like black boxes (at least to me).
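Roughly, the "explanation" in a classic rule-based expert system is just a trace of which rules fired and on which facts. A toy sketch in Python (the rule names and facts here are made up for illustration, not taken from any real system):

```python
# Minimal forward-chaining rule engine that records a justification trace.
# Each rule: (name, premises that must all be known, conclusion it adds).
RULES = [
    ("R1", {"has_fever", "has_cough"}, "possible_flu"),
    ("R2", {"possible_flu", "recent_exposure"}, "recommend_test"),
]

def forward_chain(facts):
    """Apply rules until nothing new is derived, recording which rule fired and why."""
    known = set(facts)
    trace = []  # (rule_name, premises_used, conclusion)
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in RULES:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                trace.append((name, premises, conclusion))
                changed = True
    return known, trace

def explain(trace):
    """Print the human-readable justification for each conclusion."""
    for name, premises, conclusion in trace:
        print(f"{conclusion} because rule {name} fired on {sorted(premises)}")

known, trace = forward_chain({"has_fever", "has_cough", "recent_exposure"})
explain(trace)
# possible_flu because rule R1 fired on ['has_cough', 'has_fever']
# recommend_test because rule R2 fired on ['possible_flu', 'recent_exposure']
```

The point is that the system's "reasoning" is the rule trace itself, so the explanation falls out for free, whereas a trained network has no comparable record to show.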
Ah I see, that's quite interesting. So the idea of a system that could explain its own decision-making and inferences?
Neural networks definitely are black boxes, at least at the level of individual decisions. Sure, the general concept stays the same, but the internals differ from case to case and remain hidden.