Strittmatter A, Schad LR, Zöllner FG. Deep Learning-Based Affine Medical Image Registration for Multimodal Minimal-Invasive Image-Guided Interventions – A Comparative Study on Generalizability. Z Med Phys, 2024, 34 (2), pp. 291-317.
https://doi.org/10.1016/j.zemedi.2023.05.003
Multimodal image registration is widely used in medical image analysis because it allows complementary data from multiple imaging modalities to be integrated. In recent years, numerous neural network-based approaches to medical image registration have been proposed, but because they were evaluated on different datasets, a fair comparison between them is not possible. In this study, 20 different neural networks for affine registration of medical images were implemented. The networks’ performance and their generalizability to new datasets were evaluated on two multimodal datasets – a synthetic and a real patient dataset – of three-dimensional CT and MR images of the liver. The networks were first trained semi-supervised on the synthetic dataset and then evaluated on both the synthetic dataset and the unseen patient dataset. Afterwards, the networks were finetuned on the patient dataset and subsequently evaluated on it. The networks were compared against our own CNN as a benchmark and against a conventional affine registration with SimpleElastix as a baseline. Six networks significantly improved the pre-registration Dice coefficient on the synthetic dataset (p-value < 0.05), and nine networks significantly improved it on the patient dataset; these networks are therefore able to generalize to the new datasets used in our experiments. Many machine learning-based methods have been proposed for affine multimodal medical image registration, but few generalize to new data and applications. Further research is therefore needed to develop medical image registration techniques that can be applied more widely.
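The abstract evaluates registration quality via the Dice coefficient between segmentation masks before and after registration. As a minimal illustration (not the authors' evaluation code; the function name and NumPy-based implementation are our own), the metric on binary masks can be computed as:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks.

    Dice = 2 * |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        # Both masks empty: conventionally defined as perfect agreement.
        return 1.0
    return 2.0 * intersection / total

# Toy example: two 1D masks overlapping in one of two foreground voxels each.
a = np.array([1, 1, 0, 0])
b = np.array([0, 1, 1, 0])
print(dice_coefficient(a, b))  # → 0.5
```

In a registration study such as this one, the Dice coefficient would typically be computed between the (fixed-image) liver mask and the warped moving-image liver mask, with the pre-registration value serving as the reference to improve upon.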