CHEN Peng, LI Qing, ZHANG De-zheng, YANG Yu-hang, CAI Zheng, LU Zi-yi. A survey of multimodal machine learning[J]. Chinese Journal of Engineering, 2020, 42(5): 557-569. DOI: 10.13374/j.issn2095-9389.2019.03.21.003

A survey of multimodal machine learning

  • “Big data” is collected from many different sources with different data structures. With the rapid development of information technologies, today’s most valuable data resources are characteristically multimodal. As a result, multi-modal learning built on classical machine learning strategies has become a valuable research topic, enabling computers to process and understand “big data”. Human cognition involves perception through different sense organs: signals from the eyes, ears, nose, and hands (tactile sense) together constitute a person’s understanding of a specific scene or of the world as a whole. It is therefore reasonable to believe that multi-modal methods, with their greater ability to process complex heterogeneous data, can further advance information technologies. The concept of multimodality originated in psychology and pedagogy hundreds of years ago and has become popular in computer science over the past decade. In contrast to the concept of “media”, a “mode” is a more fine-grained concept associated with a particular data source or data form. Effective use of multi-modal data can help a computer understand a specific environment in a more holistic way. In this context, we first introduce the definition and main tasks of multi-modal learning. On this basis, the mechanism and origin of multi-modal machine learning are briefly introduced. Subsequently, statistical learning methods and deep learning methods for multi-modal tasks are comprehensively summarized. We also introduce the main styles of data fusion in multi-modal perception tasks, including feature representation, shared mapping, and co-training, and review novel adversarial learning strategies for cross-modal matching and generation. The main methods for multi-modal learning are outlined in this paper, with a focus on future research issues in the field.
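The fusion styles named in the abstract can be illustrated with a brief sketch. The Python snippet below is not taken from the survey; it is a minimal, self-contained illustration (NumPy only) of feature-level fusion through a shared mapping into a common space, contrasted with decision-level fusion by averaging per-modality scores. All dimensions, weights, and variable names are illustrative assumptions.

```python
# Minimal sketch of two multi-modal fusion styles (feature-level vs. decision-level).
# All feature sizes, projection matrices, and modality names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Toy per-modality features for one sample (e.g., an image descriptor and a text descriptor).
image_feat = rng.normal(size=128)   # hypothetical visual feature vector
text_feat = rng.normal(size=300)    # hypothetical textual feature vector

# --- Feature-level (early) fusion: project each modality into a shared space, then concatenate. ---
W_img = rng.normal(size=(64, 128)) * 0.1   # shared mapping for the visual modality
W_txt = rng.normal(size=(64, 300)) * 0.1   # shared mapping for the textual modality
shared_img = np.tanh(W_img @ image_feat)
shared_txt = np.tanh(W_txt @ text_feat)
early_fused = np.concatenate([shared_img, shared_txt])   # joint multi-modal representation

# --- Decision-level (late) fusion: score each modality separately, then combine the scores. ---
w_img_clf = rng.normal(size=64) * 0.1
w_txt_clf = rng.normal(size=64) * 0.1
p_img = 1 / (1 + np.exp(-(w_img_clf @ shared_img)))   # modality-specific prediction
p_txt = 1 / (1 + np.exp(-(w_txt_clf @ shared_txt)))
late_fused_score = 0.5 * p_img + 0.5 * p_txt           # simple average of per-modality scores

print(early_fused.shape, round(float(late_fused_score), 3))
```

In practice the projections would be learned jointly (e.g., by a neural network), but the structural difference remains the same: early fusion builds one joint representation, while late fusion keeps the modalities separate until the decision stage.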