MIND: Modality-Informed Knowledge Distillation Framework for Multimodal Clinical Prediction Tasks

Alejandro Guerra-Manzanares, Farah E. Shamout

Research output: Journal Publication › Article › peer-review

Abstract

Multimodal fusion leverages information across modalities to learn better feature representations, with the goal of improving performance in fusion-based tasks. However, multimodal datasets, especially in medical settings, are typically smaller than their unimodal counterparts, which can impede the performance of multimodal models. Additionally, an increase in the number of modalities is often associated with an overall increase in the size of the multimodal network, which may be undesirable in medical use cases. Utilizing smaller unimodal encoders may lead to sub-optimal performance, particularly when dealing with high-dimensional clinical data. In this paper, we propose the Modality-INformed knowledge Distillation (MIND) framework, a multimodal model compression approach based on knowledge distillation that transfers knowledge from ensembles of pre-trained deep neural networks of varying sizes into a smaller multimodal student. The teacher models consist of unimodal networks, allowing the student to learn from diverse representations. MIND employs multi-head joint fusion models, as opposed to single-head models, enabling the use of unimodal encoders for unimodal samples without requiring imputation or masking of absent modalities. As a result, MIND generates an optimized multimodal model, enhancing both multimodal and unimodal representations. It can also be leveraged to balance multimodal learning during training. We evaluate MIND on binary classification and multilabel clinical prediction tasks using clinical time series data and chest X-ray images extracted from publicly available datasets. Additionally, we assess the generalizability of the MIND framework on three non-medical multimodal multiclass benchmark datasets. The experimental results demonstrate that MIND improves the performance of the smaller multimodal network across all five tasks, as well as across various fusion methods and multimodal network architectures, compared to several state-of-the-art baselines.
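Since the abstract describes the distillation setup only at a high level, the following is a minimal PyTorch sketch of the core idea: ensembles of pre-trained unimodal teachers distilling into a smaller multi-head multimodal student. All names, dimensions, and loss weightings here (`MultiHeadStudent`, `kd_loss`, `alpha`, the temperature `T`) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: unimodal teacher ensembles -> small multi-head student.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadStudent(nn.Module):
    """Small multimodal student with one head per modality plus a fused head,
    so unimodal samples can be scored without imputing the missing modality."""
    def __init__(self, ts_dim=76, img_dim=512, hidden=64, n_classes=2):
        super().__init__()
        self.ts_enc = nn.Sequential(nn.Linear(ts_dim, hidden), nn.ReLU())
        self.img_enc = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.ts_head = nn.Linear(hidden, n_classes)
        self.img_head = nn.Linear(hidden, n_classes)
        self.fused_head = nn.Linear(2 * hidden, n_classes)  # joint fusion by concatenation

    def forward(self, ts, img):
        z_ts, z_img = self.ts_enc(ts), self.img_enc(img)
        return (self.ts_head(z_ts), self.img_head(z_img),
                self.fused_head(torch.cat([z_ts, z_img], dim=-1)))

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label distillation: KL divergence between temperature-scaled distributions."""
    return F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T

def train_step(student, ts_teachers, img_teachers, batch, opt, alpha=0.5):
    """One hypothetical step: each unimodal head distills from the ensemble of
    teachers for its modality, while the fused head fits the task labels."""
    ts, img, y = batch
    logit_ts, logit_img, logit_fused = student(ts, img)
    with torch.no_grad():
        t_ts = torch.stack([t(ts) for t in ts_teachers]).mean(0)    # ensemble-averaged logits
        t_img = torch.stack([t(img) for t in img_teachers]).mean(0)
    loss = (F.cross_entropy(logit_fused, y)
            + alpha * (kd_loss(logit_ts, t_ts) + kd_loss(logit_img, t_img)))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

In this sketch, the per-modality heads are what allow inference on unimodal samples (only the relevant encoder and head are used), matching the abstract's point about avoiding imputation or masking of absent modalities.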

Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2025
Publication status: Published - 2025

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
