AIMC Topic: Multimodal Imaging

Showing 91 to 100 of 289 articles

BrainSegFounder: Towards 3D foundation models for neuroimage segmentation.

Medical image analysis
The burgeoning field of brain health research increasingly leverages artificial intelligence (AI) to analyze and interpret neuroimaging data. Medical foundation models have shown promise, delivering superior performance with better sample efficiency. This wor...

Preoperative Prediction of Axillary Lymph Node Metastasis in Patients With Breast Cancer Through Multimodal Deep Learning Based on Ultrasound and Magnetic Resonance Imaging Images.

Academic radiology
RATIONALE AND OBJECTIVES: Deep learning can enhance the performance of multimodal image analysis, known for its noninvasive nature and complementary strengths, in predicting axillary lymph node (ALN) metastasis. Therefore, we established ...

Validation of a novel AI-based automated multimodal image registration of CBCT and intraoral scan aiding presurgical implant planning.

Clinical oral implants research
OBJECTIVES: The objective of this study is to assess the accuracy, time-efficiency, and consistency of a novel artificial intelligence (AI)-driven automated tool for cone-beam computed tomography (CBCT) and intraoral scan (IOS) registration compared with ...
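At its core, CBCT-to-IOS registration estimates a rigid transform between corresponding 3D points. A minimal Kabsch-style sketch in numpy (illustrative only; not the evaluated tool's actual method, and all names are made up):

```python
import numpy as np

def rigid_align(P, Q):
    """Kabsch algorithm: rotation R and translation t minimising
    ||R @ P + t - Q||^2 over corresponding 3 x N point sets P, Q."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T          # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Recover a known 90-degree rotation about z plus a translation
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
P = np.random.default_rng(0).standard_normal((3, 8))
R, t = rigid_align(P, R_true @ P + t_true)
```

Real CBCT/IOS pipelines must first establish the correspondences (surface matching), which is where the AI tool's contribution lies; the closed-form alignment above is only the final step.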

Enhancing the reliability of deep learning-based head and neck tumour segmentation using uncertainty estimation with multi-modal images.

Physics in medicine and biology
Deep learning shows promise in autosegmentation of head and neck cancer (HNC) primary tumours (GTV-T) and nodal metastases (GTV-N). However, errors such as including non-tumour regions or missing nodal metastases still occur. Conventional methods oft...
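The uncertainty-estimation idea behind such methods can be sketched as ensemble disagreement. A minimal numpy version, assuming repeated stochastic forward passes (e.g. Monte Carlo dropout) have already produced foreground-probability maps; function and variable names are illustrative:

```python
import numpy as np

def voxelwise_uncertainty(prob_maps):
    """Normalised binary entropy of the mean predicted probability
    across an ensemble.

    prob_maps: shape (n_samples, *volume_shape), each entry a foreground
    probability from one stochastic forward pass. Returns a map in
    [0, 1]: high where the ensemble disagrees, low where it agrees.
    """
    p = np.clip(prob_maps.mean(axis=0), 1e-7, 1 - 1e-7)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Voxel 0: passes agree (p ~ 0.9) -> confident.
# Voxel 1: passes disagree (mean p = 0.5) -> maximally uncertain.
maps = np.array([[0.9, 0.5], [0.95, 0.4], [0.9, 0.6]])
u = voxelwise_uncertainty(maps)
```

Thresholding such a map flags voxels (e.g. possible missed nodal metastases or included non-tumour regions) for manual review.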

A multibranch and multiscale neural network based on semantic perception for multimodal medical image fusion.

Scientific reports
Medical imaging is indispensable for accurate diagnosis and effective treatment, with modalities like MRI and CT providing diverse yet complementary information. Traditional image fusion methods, while essential in consolidating information from mult...
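Two of the simplest traditional fusion rules such work builds on are pixel-wise weighted averaging and maximum selection. A hedged numpy sketch with toy 2x2 "images" (names illustrative, not the paper's method):

```python
import numpy as np

def fuse_max_abs(a, b):
    """Maximum-selection fusion: keep, per pixel, the source value with
    the larger magnitude (a crude stand-in for 'most salient')."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

def fuse_weighted(a, b, w=0.5):
    """Weighted-average fusion: blend both modalities per pixel."""
    return w * a + (1.0 - w) * b

mri = np.array([[0.8, 0.1], [0.2, 0.9]])
ct  = np.array([[0.3, 0.7], [0.6, 0.4]])
f_max = fuse_max_abs(mri, ct)   # picks the stronger response per pixel
f_avg = fuse_weighted(mri, ct)  # equal blend of both modalities
```

Learned fusion networks replace these fixed rules with spatially varying, semantically informed weights, which is the gap the multibranch/multiscale design targets.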

Synthetic temporal bone CT generation from UTE-MRI using a cycleGAN-based deep learning model: advancing beyond CT-MR imaging fusion.

European radiology
OBJECTIVES: The aim of this study is to develop a deep-learning model to create synthetic temporal bone computed tomography (CT) images from ultrashort echo-time magnetic resonance imaging (MRI) scans, thereby addressing the intrinsic limitations of ...
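CycleGAN training hinges on cycle consistency: mapping MR to CT and back should reproduce the input. A toy numpy illustration with invertible stand-in "generators" (everything here is hypothetical and far simpler than the study's model):

```python
import numpy as np

def cycle_consistency_loss(a, b, G_ab, G_ba):
    """L1 cycle-consistency loss: translating a -> b -> a (and
    b -> a -> b) should reproduce the inputs. G_ab and G_ba are any
    callables standing in for the two generators."""
    return (np.abs(G_ba(G_ab(a)) - a).mean()
            + np.abs(G_ab(G_ba(b)) - b).mean())

# Toy 'generators' that are exact inverses give (near-)zero loss
G_ab = lambda x: 2.0 * x + 1.0
G_ba = lambda x: (x - 1.0) / 2.0
mri, ct = np.array([0.2, 0.5]), np.array([1.4, 2.0])
loss = cycle_consistency_loss(mri, ct, G_ab, G_ba)
```

This loss is what lets cycleGANs train on unpaired MR and CT volumes: it penalises translations that discard anatomical content even when no matched CT exists for a given MR scan.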

Multi-Modal Electrophysiological Source Imaging With Attention Neural Networks Based on Deep Fusion of EEG and MEG.

IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society
The process of reconstructing underlying cortical and subcortical electrical activities from Electroencephalography (EEG) or Magnetoencephalography (MEG) recordings is called Electrophysiological Source Imaging (ESI). Given the complementarity betwee...
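A classical ESI baseline that already admits EEG/MEG fusion is the regularised minimum-norm estimate, where the two modalities are combined simply by stacking their leadfields and measurements. A numpy sketch with made-up dimensions:

```python
import numpy as np

def minimum_norm_estimate(L, y, lam=1e-2):
    """Regularised minimum-norm source estimate:
    x = L^T (L L^T + lam * I)^{-1} y,
    with L the (sensors x sources) leadfield and y the sensor data."""
    gram = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(gram, y)

rng = np.random.default_rng(0)
L_eeg = rng.standard_normal((32, 100))   # 32 EEG channels, 100 sources
L_meg = rng.standard_normal((64, 100))   # 64 MEG channels, same sources
x_true = np.zeros(100)
x_true[10] = 1.0                         # one active cortical source

# Fuse modalities by stacking leadfields and measurements
L = np.vstack([L_eeg, L_meg])
y = L @ x_true
x_hat = minimum_norm_estimate(L, y)
```

In practice the two sensor blocks would be noise-whitened before stacking so neither modality dominates; attention-based deep fusion models instead learn that weighting from data.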

A comprehensive approach for evaluating lymphovascular invasion in invasive breast cancer: Leveraging multimodal MRI findings, radiomics, and deep learning analysis of intra- and peritumoral regions.

Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society
PURPOSE: To evaluate lymphovascular invasion (LVI) in breast cancer by comparing the diagnostic performance of preoperative multimodal magnetic resonance imaging (MRI)-based radiomics and deep-learning (DL) models.

EMOST: A dual-branch hybrid network for medical image fusion via efficient model module and sparse transformer.

Computers in biology and medicine
Multimodal medical image fusion combines images from different modalities to provide more comprehensive, integrated diagnostic information. However, current multimodal image fusion methods cannot effectively model non-local contextual feature relat...