

Poster

Model Explainability with Localized Soft Completeness

Ziv Haddad Haddad · Oren Barkan · Yehonatan Elisha · Noam Koenigstein


Abstract:

Completeness is a widely discussed property in explainability research, requiring that the attributions sum to the model's response to the input. While completeness intuitively suggests that the model's prediction is "completely explained" by the attributions, its global formulation alone is insufficient to ensure meaningful explanations. We contend that promoting completeness locally within attribution subregions, in a soft manner, can serve as a standalone guiding principle for producing high-quality attributions. To this end, we introduce the concept of the completeness gap as a flexible measure of completeness and propose an optimization procedure that minimizes this gap across subregions within the attribution map. Extensive evaluations across various model architectures demonstrate that our method outperforms state-of-the-art explanation methods on multiple benchmarks.
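As a rough illustration only (the abstract does not give the authors' exact formulation), global completeness can be measured as the absolute difference between the sum of all attributions and the model's output, and a local variant restricts the sum to a subregion and compares it against a reference response for that region. The `region_response` argument below is a hypothetical placeholder for whatever per-region target the method defines:

```python
import numpy as np

def global_completeness_gap(attributions, model_output):
    # |sum of attributions - model output|; zero means the
    # attribution map is "complete" in the global sense.
    return abs(attributions.sum() - model_output)

def local_completeness_gap(attributions, region_mask, region_response):
    # Restrict the sum to one subregion and compare against a
    # reference response for that region. How region_response is
    # defined is an assumption here, not taken from the abstract.
    return abs(attributions[region_mask].sum() - region_response)

# Toy example: a 4x4 attribution map with uniform attributions.
attr = np.full((4, 4), 0.5)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True  # top-left 2x2 subregion

print(global_completeness_gap(attr, 8.0))       # 0.0: sums exactly to the output
print(local_completeness_gap(attr, mask, 2.0))  # 0.0 for this subregion
```

A soft, local objective in this spirit would sum (or average) such gaps over many subregions and minimize the total, rather than enforcing the global constraint exactly.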
