Improving SHAP Interpretability via Subgroup Discovery

Jan 1, 2026 · Maëlle Moranges · 2 min read

Role: Postdoctoral Researcher at Inria in AIstroSight team

Collaborator: Thomas Guyet

Scientific and Technological Objectives

Predictive models in medicine often achieve high performance but remain opaque, limiting their clinical adoption. This research aims to produce explanations that are:

  • Model‑agnostic
  • Clinically interpretable
  • Capable of integrating interactions between variables
  • Sensitive to individual variability
  • Both global and local

To achieve this, we propose a post hoc, model‑agnostic approach that combines:

  • SHAP (SHapley Additive exPlanations) to explain the model's internal reasoning, both locally and globally
  • Subgroup Discovery to extract explicit IF–THEN rules describing data patterns that actually contribute to the model's predictions

The goal is to generate explicit rule‑based explanations that go beyond standard importance scores and offer richer insights into variable interactions and decision logic.
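To make the idea concrete, here is a minimal, self-contained sketch of the general principle: given per-instance attribution scores (standing in for SHAP values), search simple IF conditions whose subgroup shows unusually high attribution, yielding an IF–THEN rule. The data, feature names, and quality measure below are hypothetical illustrations, not the actual pipeline or datasets used in this work.

```python
from itertools import product

# Toy cohort: two features (age, bmi) and a hypothetical per-patient
# attribution "shap_age" (the contribution of age to a risk prediction).
patients = [
    {"age": 72, "bmi": 31, "shap_age": 0.40},
    {"age": 68, "bmi": 24, "shap_age": 0.35},
    {"age": 45, "bmi": 29, "shap_age": 0.05},
    {"age": 39, "bmi": 22, "shap_age": 0.02},
    {"age": 80, "bmi": 27, "shap_age": 0.50},
    {"age": 55, "bmi": 33, "shap_age": 0.10},
]

def quality(subgroup, population, key="shap_age"):
    """Weighted mean-shift quality: |subgroup| * (mean_subgroup - mean_population).

    A common subgroup-discovery measure: it rewards subgroups that are
    both large and strongly deviating on the target (here, an attribution).
    """
    if not subgroup:
        return 0.0
    mean_pop = sum(p[key] for p in population) / len(population)
    mean_sg = sum(p[key] for p in subgroup) / len(subgroup)
    return len(subgroup) * (mean_sg - mean_pop)

# Candidate IF conditions: simple thresholds on each feature.
conditions = [
    (f"{feat} >= {t}", lambda p, f=feat, t=t: p[f] >= t)
    for feat, t in product(["age", "bmi"], [25, 30, 50, 65])
]

# Exhaustive search for the condition with the best quality.
rule, pred = max(
    conditions,
    key=lambda c: quality([p for p in patients if c[1](p)], patients),
)
q = quality([p for p in patients if pred(p)], patients)
print(f"IF {rule} THEN age-attribution is elevated (quality={q:.2f})")
# → IF age >= 65 THEN age-attribution is elevated (quality=0.54)
```

In the toy data, elderly patients carry the largest age attributions, so the search recovers the rule "IF age >= 65", which is more informative to a clinician than a single global importance score for age.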

Challenges

  • Faithful Explanations: Ensuring generated explanations accurately reflect the model’s internal behavior.
  • Beyond Importance Scores: Providing explanations that are more precise and informative than simple feature-importance scores.
  • Clinical Alignment: Aligning explainability outputs with clinical reasoning to support decision making.

Contributions

  • Developed a hybrid XAI + Data Mining framework that synthesizes SHAP values and Subgroup Discovery rules to produce comprehensible IF–THEN explanations.
  • Demonstrated that the approach yields both global summaries and local explanations with high fidelity to the predictive model.
  • Validated the method on multiple medical datasets, illustrating broader applicability beyond the clinical domain.

Dissemination and Publications

Scientific Presentations

📄 Poster – EGC 2026: “Enhancing SHAP Explanation Interpretability Using Subgroup Discovery” · Conference website

🎤 Workshop – EXPLAIN AI 2026: “Improving the Interpretability of SHAP Explanations through Subgroup Discovery” (presented in French) · Workshop website

Science Communication

Co-hosted a roundtable on explainability with Victoria Bourgeais, featuring interviews with Luis Galarraga and Olivier Teste.