A new method for identifying the key evidence behind AI predictions and its link to accuracy
A study conducted by Keisuke Kawano and Takuro Kutsuna in collaboration with Denso has been published in Neural Networks.
While artificial intelligence (AI) applications such as image recognition are becoming increasingly widespread, it remains difficult for humans to understand why AI systems make certain decisions. In particular, the reason why deep neural networks (DNNs) can maintain high prediction accuracy on unseen images is still not fully understood.
Previous studies have proposed various methods to estimate which parts of an image a DNN relies on for its predictions. However, identifying evidence that is strongly correlated with prediction accuracy remains a major challenge.
In this work, given a DNN and an input image, we define a Minimal Sufficient View (MSV) as a minimal region of the image that, on its own, preserves the model’s prediction. We also propose an algorithm to estimate MSVs. For instance, when applied to an image classified as a “cat,” the algorithm may identify the eyes, the ears, or the facial stripes as separate MSVs. Through experiments, we demonstrate that the number of MSVs found for an image is positively correlated with the prediction accuracy of the DNN.
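To make the idea concrete, the following is a minimal sketch of how one such minimal region could be estimated greedily; it is an illustration under simplifying assumptions (a fixed patch grid, zero-masking, and a generic `predict_fn` callable), not the algorithm proposed in the paper.

```python
import numpy as np

def estimate_msv(image, predict_fn, grid=4, seed=0):
    """Greedily estimate one Minimal Sufficient View (MSV) -- illustrative sketch.

    The image is split into a grid of patches, and patches are dropped one at a
    time (in random order) as long as the predicted class stays the same.  The
    surviving patches form a minimal region that still preserves the prediction.
    `predict_fn`, `grid`, and zero-masking are assumptions for this sketch.
    """
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    target = predict_fn(image)  # class predicted from the full image

    def render(patches):
        # Keep only the listed grid patches; mask everything else to zero.
        out = np.zeros_like(image)
        for a, b in patches:
            out[a * ph:(a + 1) * ph, b * pw:(b + 1) * pw] = \
                image[a * ph:(a + 1) * ph, b * pw:(b + 1) * pw]
        return out

    active = {(i, j) for i in range(grid) for j in range(grid)}
    rng = np.random.default_rng(seed)
    for i, j in rng.permutation(sorted(active)).tolist():
        candidate = active - {(i, j)}
        if candidate and predict_fn(render(candidate)) == target:
            active = candidate  # this patch was not needed to keep the prediction
    return active  # grid indices of one minimal sufficient region
```

Rerunning such a procedure with different patch orderings can surface different minimal regions for the same image (for example, one covering the eyes and another the ears), and the number of distinct regions found plays the role of the MSV count discussed above.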
Importantly, our method does not require ground-truth labels, enabling model selection without access to labeled evaluation data. This approach has the potential to contribute to the development of more reliable and interpretable AI systems.
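One way to picture this label-free model selection is sketched below: candidate models are ranked by the average number of MSVs supporting their predictions on unlabeled images. The helper `count_msvs` is hypothetical (for example, repeated runs of the `estimate_msv` sketch above with different seeds) and is not part of the paper.

```python
def select_model(models, images, count_msvs):
    """Label-free model selection sketch.

    Ranks models by the average number of MSVs supporting their predictions,
    using the reported correlation between MSV count and accuracy as a proxy.
    `models` maps a model name to a predict_fn; `images` needs no labels;
    `count_msvs(image, predict_fn)` is a hypothetical MSV-counting helper.
    """
    scores = {
        name: sum(count_msvs(img, fn) for img in images) / len(images)
        for name, fn in models.items()
    }
    best = max(scores, key=scores.get)  # model whose predictions rest on the most evidence
    return best, scores
```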
Title: Minimal sufficient views: A DNN model making predictions with more evidence has higher accuracy
Authors: Kawano, K., Kutsuna, T., Sano, K.
Journal Name: Neural Networks
Published: May 28, 2025
https://doi.org/10.1016/j.neunet.2025.107610