Abstract:
Deep learning-based approaches have demonstrated strong results in handling
the complexities of multimodal data and learning informative representations
from heterogeneous modalities. Multimodal fusion techniques have therefore
attracted considerable attention for their role in integrating information
from different data modalities. In computer-aided diagnosis (CAD) systems,
combining information extracted from heterogeneous modalities, such as
medical images, clinical data, genetic data, or textual reports, can provide
a more comprehensive and reliable assessment of diseases or conditions. This
review article examines advances in deep multimodal fusion using
heterogeneous neural networks for medical CAD systems.