MI4MedFM | MICCAI 2026

Mechanistic Interpretability for Medical Foundation Models

MI4MedFM is a workshop held in conjunction with MICCAI 2026. It focuses on understanding how medical foundation models compute, where they fail, and how mechanistic interpretability can make them safer, more robust, and more reliable for clinical deployment.

MI4MedFM Pipeline: Clinical Images → Foundation Model with Interpretability Lens → Feature Attribution + Trustworthy Diagnosis
News

Apr 19, 2026
Submissions are Open!

We are excited to announce that the MI4MedFM submission portal is now officially open on OpenReview. We look forward to your innovative contributions! Submit here.

Mar 30, 2026
Website Launched

The official website for the MI4MedFM workshop at MICCAI 2026 is now live. Call for papers and important dates have been announced.

Mar 25, 2026
Workshop Accepted

MI4MedFM has been officially accepted as a workshop at MICCAI 2026 in Abu Dhabi.

Overview

Why this workshop matters now

Medical foundation models are becoming central to imaging and multimodal clinical AI, but high-stakes deployment needs more than saliency maps or attention plots. MI4MedFM centers mechanistic interpretability: understanding the internal computations that generate model behavior.

The workshop creates a focused MICCAI forum for methods such as sparse autoencoders, circuit discovery, activation patching, causal tracing, steering, and weight-space analysis — while keeping the discussion tightly connected to clinical validity, robustness, and deployment safety.
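To make one of these methods concrete, here is a minimal, self-contained sketch of activation patching on a toy two-layer network (all weights, inputs, and names are illustrative, not from any real medical foundation model): the hidden activation cached from a "clean" run is spliced into a "corrupted" run, and the fraction of the clean output restored measures how much of the behavior that layer causally mediates.

```python
import numpy as np

# Toy 2-layer network: x -> h = relu(W1 @ x) -> y = W2 @ h
# (illustrative weights; a real study would patch inside a foundation model)
W1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [1.0, 1.0, 1.0]])
W2 = np.array([[1.0, 1.0, 1.0, 1.0]])

def forward(x, patch_h=None):
    """Run the model; optionally splice in a stored hidden activation."""
    h = np.maximum(W1 @ x, 0.0)
    if patch_h is not None:
        h = patch_h  # activation patching: overwrite with the cached clean state
    return float((W2 @ h)[0]), h

x_clean = np.array([1.0, 0.5, 0.2])     # input that elicits the behavior
x_corrupt = np.array([0.0, 0.0, 0.0])   # input that does not

y_clean, h_clean = forward(x_clean)
y_corrupt, _ = forward(x_corrupt)
y_patched, _ = forward(x_corrupt, patch_h=h_clean)

# Fraction of the clean behavior restored by patching this layer:
restored = (y_patched - y_corrupt) / (y_clean - y_corrupt)
print(restored)  # 1.0 here, since the hidden layer fully mediates the output
```

In practice the same patch is applied per layer (or per attention head) to localize which components carry a clinically relevant computation.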

Workshop Focus

Four Core Pillars of MI4MedFM

Our scientific agenda is structured around four interlocking goals that translate mechanistic interpretability into robust clinical practice.

Pillar 1

Inspect internal model mechanisms

Understand what features, subcircuits, and latent representations medical foundation models rely on when producing clinically relevant behavior.

  • Sparse autoencoders and feature discovery
  • Circuit analysis and causal tracing
  • Activation-level understanding of decision pathways
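As a toy illustration of the first bullet, a sparse autoencoder decomposes a model activation into an overcomplete set of non-negative feature activations. This sketch uses random weights and hypothetical dimensions purely to show the encode/decode structure and the reconstruction-plus-sparsity objective:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 8, 32   # hypothetical sizes: the dictionary is 4x overcomplete
W_enc = rng.normal(scale=0.1, size=(d_sae, d_model))
W_dec = rng.normal(scale=0.1, size=(d_model, d_sae))
b_enc = np.zeros(d_sae)

def sae_forward(activation, l1_coef=1e-3):
    """Encode an activation into sparse features, then reconstruct it."""
    feats = np.maximum(W_enc @ activation + b_enc, 0.0)  # ReLU -> non-negative codes
    recon = W_dec @ feats
    # Training would minimize reconstruction error plus an L1 sparsity penalty:
    loss = np.sum((recon - activation) ** 2) + l1_coef * np.sum(np.abs(feats))
    return feats, recon, loss

activation = rng.normal(size=d_model)     # stand-in for a residual-stream vector
feats, recon, loss = sae_forward(activation)
active_features = np.flatnonzero(feats)   # dictionary entries that "fired"
```

After training on many activations, individual dictionary entries often correspond to human-interpretable features, which is what makes this a feature-discovery tool.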

Topics

Topics of Interest

We welcome submissions across a wide range of themes bridging foundation model interpretability with safety and deployment.

Mechanisms

Features, circuits, and internal representations

Methods that uncover what medical models internally encode and how those computations are organized.

Faithfulness

Clinically meaningful validation

Tests that determine whether discovered mechanisms align with trusted medical concepts and downstream behavior.

Failures

Spurious cues, shortcuts, and hallucinations

Mechanistic debugging for brittle reasoning, domain shift, bias, and unreliable multimodal outputs.

Intervention

Editing, steering, and control

Using mechanism-level insight to improve robustness, correct behavior, and reduce harmful failure modes.

Deployment

Monitoring unsafe states in practice

Interpretability-guided monitoring and uncertainty-aware signals for safer clinical deployment settings.

Tooling

Scalable, usable open infrastructure

Frameworks and workflows that make mechanistic analysis practical for medical imaging and multimodal MedFMs.

Program

Workshop Program

A focused 2-hour program featuring a keynote, oral presentations, and an interactive poster session with networking.

Keynote

An invited talk framing trustworthy medical AI through the lens of mechanism and evidence.

Oral Presentations

Selected papers presented in concise oral sessions built for cross-disciplinary accessibility.

Poster & Networking

Poster-first visibility, direct discussion, and opportunities for clinicians and ML researchers to connect.

Awards Ceremony

Presentation of the Best Paper and Best Presentation awards towards the end of the event.

09:00

Opening & Framing

Setting the agenda, scientific motivation, and workshop goals for the MICCAI audience.

09:10

Keynote Session

Flagship invited talk on trustworthy medical AI, interpretability, and clinical reliability.

09:50

Oral Presentations

Curated short talks presenting methods, evaluations, and lessons for medical foundation models.

10:30

Awards Ceremony

Announcement of the Best Paper and Best Presentation awards, followed by concluding remarks.

10:40

Coffee Break & Networking

Poster interaction, demos, and in-depth discussion around methods and applications.

Keynote Speaker

We are honored to feature a leading pioneer driving the future of trustworthy and transparent clinical AI.

Prof. Klaus Maier-Hein

Division Head at the German Cancer Research Center (DKFZ) and Full Professor at Heidelberg University

A renowned pioneer in medical image computing, bridging deep learning with clinical translation. Known for state-of-the-art frameworks such as nnU-Net, his work sets critical benchmarks for robustness and reliable AI deployment.

Call for Papers

We invite original research contributions on mechanistic interpretability for medical foundation models. Submissions are welcomed across two tracks.

Important Dates (all deadlines are Anywhere on Earth, AoE)

Paper Submission Deadline

July 15, 2026

Author Notification

August 5, 2026

Camera-Ready Deadline

August 18, 2026

Workshop Date

October 2026

Workshop Awards

🏆 Best Paper Award
🎤 Best Presentation Award
Track 1

Proceedings Track

Original, unpublished research intended for the MICCAI satellite-events LNCS proceedings.

  • Format: 8–10 pages (LNCS style) + references
  • Review: Double-blind peer review
  • Publication: Springer LNCS proceedings
Track 2

Non-Archival Track

Extended abstracts, previously published work, or work currently under review elsewhere.

  • Format: 4-page extended abstract + references
  • Review: Light review for relevance and quality
  • Publication: Not included in proceedings (non-archival)

Submission Guidelines

Template

Use the official Springer LNCS LaTeX template

Review Process

Double-blind with conflict-of-interest safeguards

Evaluation Criteria

Relevance, rigor, clarity, validation, and reproducibility

Submission Portal

OpenReview Submission Link

Topics include, but are not limited to:

Internal Mechanisms: Sparse autoencoders, circuit analysis, feature discovery
Clinical Validation: Faithfulness, concept alignment, clinically grounded benchmarks
Failure Modes: Tracing shortcuts, multimodal hallucinations, bias detection
Interventions: Feature steering, debugging, robust editing, inference monitoring
Organizers

Organizing Committee

Our international committee unites expertise in medical image analysis, foundation models, and trustworthy artificial intelligence.

Faculty

  • Mohammad Yaqub, Associate Professor, MBZUAI
  • Muhammad Haris, Assistant Professor, MBZUAI
  • Muhammad Bilal, Professor, Birmingham City University
  • Dwarikanath Mahapatra, Assistant Professor, Khalifa University
  • Imran Razzak, Associate Professor, MBZUAI
  • Yutong Xie, Assistant Professor, MBZUAI

Researchers

  • Ufaq Khan, Ph.D. Student, MBZUAI
  • Rishabh Lalla, Ph.D. Student, MBZUAI
  • Umair Nawaz, Ph.D. Student, MBZUAI
  • Namrah Rehman, Ph.D. Student, MBZUAI
  • Satyajit Kishore Tourani, Ph.D. Student, MBZUAI
  • Tausifa Jan Saleem, Postdoctoral Associate, MBZUAI
Queries

FAQ & Contact

Common questions about the workshop and how to reach the organizing committee.

When is the workshop taking place?

The MI4MedFM workshop is part of the MICCAI 2026 satellite events in Abu Dhabi. The exact date in October is TBA.

Will proceedings be published?

Yes. Submissions accepted under Track 1 (Proceedings Track) will be published in the official Springer LNCS MICCAI 2026 Workshop Proceedings.

Can I submit previously published work?

For Track 1 (Proceedings Track), submissions must be original and unpublished. For Track 2 (Non-Archival Track), you are welcome to submit extended abstracts of recently published work or work currently under review elsewhere.

Contact Us

If you have any further questions regarding the workshop, submission guidelines, or sponsorship opportunities, please feel free to email the organizing committee.

Email the organizers: mi4medfm.info@gmail.com