Poster
SIC: Similarity-Based Interpretable Image Classification with Neural Networks
Tom Wolf · Emre Kavak · Fabian Bongratz · Christian Wachinger
The deployment of deep learning models in critical domains necessitates a balance between high accuracy and interpretability. We introduce SIC, an inherently interpretable neural network that provides local and global explanations of its decision-making process. Leveraging the concept of case-based reasoning, SIC extracts class-representative support vectors from training images, ensuring they capture relevant features while suppressing irrelevant ones. Classification decisions are made by computing similarity scores between these support vectors and the input's latent feature vector and aggregating them into class scores. We employ B-cos transformations, which align model weights with inputs, to yield coherent pixel-level explanations in addition to the global explanations of case-based reasoning. We evaluate SIC on three tasks: fine-grained classification on Stanford Dogs and FunnyBirds, multi-label classification on Pascal VOC, and pathology detection on the RSNA dataset. Results indicate that SIC not only achieves competitive accuracy compared to state-of-the-art black-box and inherently interpretable models but also offers insightful explanations, verified through practical evaluation on the FunnyBirds benchmark. Our theoretical analysis proves that these explanations fulfill established explanation axioms. Our findings underscore SIC's potential for applications where understanding model decisions is as critical as the decisions themselves.
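For intuition, a minimal PyTorch sketch of the similarity-and-aggregation step described above might look as follows. This is an illustrative assumption, not the authors' implementation: the support-vector extraction, the B-cos backbone, and SIC's exact aggregation scheme are omitted, and the function name `sic_logits`, the cosine similarity, the mean aggregation, and all tensor shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def sic_logits(z: torch.Tensor, support: torch.Tensor) -> torch.Tensor:
    """Score an input against class-representative support vectors.

    z:        (D,)      latent feature vector of the input image
    support:  (C, K, D) K support vectors for each of C classes

    Returns per-class logits obtained by aggregating similarity scores.
    (Sketch only: the paper's similarity measure and aggregation may differ.)
    """
    # Cosine similarity between the input feature and every support vector,
    # broadcast over classes and support slots -> shape (C, K).
    sims = F.cosine_similarity(z.view(1, 1, -1), support, dim=-1)
    # Aggregate the K per-class similarities into one score per class
    # (mean used here as a placeholder aggregation).
    return sims.mean(dim=-1)  # shape (C,)

# Toy usage with random tensors; shapes are illustrative assumptions.
z = torch.randn(128)               # latent feature of one input
support = torch.randn(10, 5, 128)  # 10 classes, 5 support vectors each
print(sic_logits(z, support).shape)  # torch.Size([10])
```

Because each logit decomposes into individual support-vector similarities, inspecting which support vectors (i.e., which training cases) contributed most to a prediction is what yields the case-based, global explanation the abstract describes.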