Generative causal explanations of black-box classifiers
by M. O'Shaughnessy, G. Canal, M. Connor, M. Davenport and C. Rozell
Abstract:
We develop a method for generating causal post-hoc explanations of black-box classifiers based on a learned low-dimensional representation of the data. The explanation is causal in the sense that changing learned latent factors produces a change in the classifier output statistics. To construct these explanations, we design a learning framework that leverages a generative model and information-theoretic measures of causal influence. Our objective function encourages both the generative model to faithfully represent the data distribution and the latent factors to have a large causal influence on the classifier output. Our method learns both global and local explanations, is compatible with any classifier that admits class probabilities and a gradient, and does not require labeled attributes or knowledge of causal structure. Using carefully controlled test cases, we provide intuition that illuminates the function of our causal objective. We then demonstrate the practical utility of our method on image recognition tasks.
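The objective described above couples a data-fidelity term with an information-theoretic measure of causal influence. As a concrete illustration only (not the authors' released implementation), the PyTorch sketch below pairs a VAE-style fidelity loss with a Monte Carlo estimate of the mutual information between a designated block of "causal" latent factors and the classifier output. The network sizes, the mutual-information estimator, and the trade-off weight (0.1) are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

DIM_X, K_CAUSAL, K_OTHER, N_CLASSES = 20, 2, 4, 3

# Generative model: encoder/decoder pair over latents z = (alpha, beta),
# where alpha is the block we want to be causally influential.
decoder = nn.Sequential(nn.Linear(K_CAUSAL + K_OTHER, 64), nn.ReLU(),
                        nn.Linear(64, DIM_X))
encoder = nn.Sequential(nn.Linear(DIM_X, 64), nn.ReLU(),
                        nn.Linear(64, 2 * (K_CAUSAL + K_OTHER)))

# Stand-in "black box": any classifier exposing class probabilities and a gradient.
classifier = nn.Sequential(nn.Linear(DIM_X, 32), nn.ReLU(),
                           nn.Linear(32, N_CLASSES))
for p in classifier.parameters():
    p.requires_grad_(False)  # its weights are never updated

def fidelity_loss(x):
    """VAE-style data-fidelity term: reconstruction plus KL to a standard normal."""
    mu, logvar = encoder(x).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    recon = F.mse_loss(decoder(z), x)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    return recon + kl

def causal_influence(n_alpha=64, n_beta=64):
    """Monte Carlo estimate of I(alpha; Yhat): mutual information between the
    causal latent block alpha and the classifier output on generated samples."""
    alpha = torch.randn(n_alpha, 1, K_CAUSAL).expand(-1, n_beta, -1)
    beta = torch.randn(n_alpha, n_beta, K_OTHER)
    probs = classifier(decoder(torch.cat([alpha, beta], dim=-1))).softmax(-1)
    p_y_given_a = probs.mean(dim=1)              # p(Yhat | alpha), beta averaged out
    p_y = p_y_given_a.mean(dim=0, keepdim=True)  # marginal p(Yhat)
    kl_terms = p_y_given_a * (p_y_given_a.clamp_min(1e-9).log()
                              - p_y.clamp_min(1e-9).log())
    return kl_terms.sum(-1).mean()

# One optimization step on the combined objective:
# fidelity minus (weight) * causal influence.
x = torch.randn(128, DIM_X)  # placeholder data batch
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()),
                       lr=1e-3)
loss = fidelity_loss(x) - 0.1 * causal_influence()
opt.zero_grad()
loss.backward()
opt.step()

Note that the classifier is treated strictly as a black box: its parameters are frozen, yet gradients still flow through it to the generator, matching the abstract's requirement that the classifier admit class probabilities and a gradient.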
Reference:
Generative causal explanations of black-box classifiers. M. O'Shaughnessy, G. Canal, M. Connor, M. Davenport and C. Rozell. In Neural Information Processing Systems (NeurIPS), December 2020. (Acceptance rate 20%)
Bibtex Entry:
@InProceedings{oshaughnessy.20,
  author    = {O'Shaughnessy, M. and Canal, G. and Connor, M. and Davenport, M. and Rozell, C.},
  title     = {Generative causal explanations of black-box classifiers},
  booktitle = {Neural Information Processing Systems ({NeurIPS})},
  year      = 2020,
  month     = dec,
  address   = {Virtual meeting},
  note      = {(Acceptance rate 20\%)},
  abstract  = {We develop a method for generating causal post-hoc explanations of black-box classifiers based on a learned low-dimensional representation of the data. The explanation is causal in the sense that changing learned latent factors produces a change in the classifier output statistics. To construct these explanations, we design a learning framework that leverages a generative model and information-theoretic measures of causal influence. Our objective function encourages both the generative model to faithfully represent the data distribution and the latent factors to have a large causal influence on the classifier output. Our method learns both global and local explanations, is compatible with any classifier that admits class probabilities and a gradient, and does not require labeled attributes or knowledge of causal structure. Using carefully controlled test cases, we provide intuition that illuminates the function of our causal objective. We then demonstrate the practical utility of our method on image recognition tasks.},
  url       = {https://arxiv.org/abs/2006.13913}
}