Multimodal fine-tuning of clinical language models for predicting COVID-19 outcomes.

Henriksson A, Pawar Y, Hedberg P, Nauclér P

Artif Intell Med 146: 102695 [2023-12; online 2023-10-31]

Clinical prediction models tend to incorporate only structured healthcare data, ignoring information recorded in other data modalities, including free-text clinical notes. Here, we demonstrate how multimodal models that effectively leverage both structured and unstructured data can be developed for predicting COVID-19 outcomes. The models are trained end-to-end using a technique we refer to as multimodal fine-tuning, whereby a pre-trained language model is updated based on both structured and unstructured data. The multimodal models are trained and evaluated using a multicenter cohort of COVID-19 patients encompassing all encounters at the emergency departments of six hospitals. Experimental results show that multimodal models, leveraging the notion of multimodal fine-tuning and trained to predict (i) 30-day mortality, (ii) safe discharge and (iii) readmission, outperform unimodal models trained using only structured or unstructured healthcare data on all three outcomes. Sensitivity analyses are performed to better understand how well the multimodal models perform on different patient groups, while an ablation study is conducted to investigate the impact of different types of clinical notes on model performance. We argue that multimodal models that make effective use of routinely collected healthcare data to predict COVID-19 outcomes may facilitate patient management and contribute to the effective use of limited healthcare resources.
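
The multimodal fine-tuning described in the abstract, where a pre-trained clinical language model is updated end-to-end together with structured features, can be sketched as follows. This is an illustrative sketch, not the authors' published implementation: the model name (Bio_ClinicalBERT), the feature dimensions, and fusion by concatenating the language model's [CLS] representation with an encoding of the structured variables before a classification head are all assumptions.

# Hypothetical sketch of multimodal fine-tuning: a pre-trained clinical
# language model and an encoder for structured features are trained jointly,
# so gradients from the outcome loss also update the language model.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class MultimodalOutcomeModel(nn.Module):
    def __init__(self, lm_name="emilyalsentzer/Bio_ClinicalBERT",
                 n_structured=64, n_outcomes=1):
        super().__init__()
        self.lm = AutoModel.from_pretrained(lm_name)   # fine-tuned, not frozen
        hidden = self.lm.config.hidden_size
        self.structured_encoder = nn.Sequential(
            nn.Linear(n_structured, 128), nn.ReLU()
        )
        self.classifier = nn.Linear(hidden + 128, n_outcomes)

    def forward(self, input_ids, attention_mask, structured):
        # [CLS] representation of the free-text clinical notes
        text_repr = self.lm(input_ids=input_ids,
                            attention_mask=attention_mask).last_hidden_state[:, 0]
        struct_repr = self.structured_encoder(structured)
        fused = torch.cat([text_repr, struct_repr], dim=-1)
        return self.classifier(fused)  # logits for e.g. 30-day mortality


# Usage sketch: tokenize notes, pair them with structured features,
# and backpropagate an outcome loss through the whole model.
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = MultimodalOutcomeModel()
batch = tokenizer(["patient presents with dyspnea ..."],
                  return_tensors="pt", truncation=True, padding=True)
structured = torch.randn(1, 64)          # placeholder structured features
logits = model(batch["input_ids"], batch["attention_mask"], structured)
loss = nn.BCEWithLogitsLoss()(logits, torch.ones(1, 1))
loss.backward()                          # gradients flow into the language model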

Category: Health

Category: Other

Funder: VR

Type: Journal article

PubMed 38042595

DOI 10.1016/j.artmed.2023.102695


pii: S0933-3657(23)00209-9

