https://arxiv.org/pdf/1710.00794.pdf
Abstract.
We characterize three notions of explainable AI that cut across research fields: opaque systems that offer no insight into their algorithmic mechanisms; interpretable systems whose algorithmic mechanisms users can mathematically analyze; and comprehensible systems that emit symbols enabling user-driven explanations of how a conclusion is reached. The paper is motivated by a corpus analysis of NIPS, ACL, COGSCI, and ICCV/ECCV paper titles showing differences in how work on explainable AI is positioned across these fields. We close by introducing a fourth notion: truly explainable systems, in which automated reasoning is central to producing crafted explanations without requiring human post-processing as the final step of the generative process.
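The corpus analysis mentioned above could, in principle, look something like the following minimal sketch: counting explainability-related terms in paper titles per venue. The CSV file name, column names, and keyword list are illustrative assumptions, not the authors' actual methodology.

```python
# Minimal sketch of a title-based corpus analysis, assuming a CSV with
# columns "venue" and "title" (file name, columns, and keyword list are
# illustrative assumptions, not the paper's exact method).
import csv
from collections import Counter

KEYWORDS = ["explainable", "interpretable", "comprehensible", "transparent"]

def keyword_counts_by_venue(path: str) -> dict[str, Counter]:
    """Count explainability-related keywords in paper titles, per venue."""
    counts: dict[str, Counter] = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            venue = row["venue"]
            title = row["title"].lower()
            venue_counts = counts.setdefault(venue, Counter())
            for kw in KEYWORDS:
                if kw in title:
                    venue_counts[kw] += 1
    return counts

if __name__ == "__main__":
    # Hypothetical input file listing paper titles with their venues.
    for venue, c in keyword_counts_by_venue("paper_titles.csv").items():
        print(venue, dict(c))
```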