Authors
Anil Rahate, Rahee Walambe, Sheela Ramanna, Ketan Kotecha
Abstract
Multimodal deep learning systems that employ multiple modalities such as text, image, audio, and video are showing better performance than unimodal (single-modality) systems. Multimodal machine learning involves multiple aspects: representation, translation, alignment, fusion, and co-learning. The current state of multimodal machine learning assumes that all modalities are present, aligned, and noiseless during training and testing. However, in real-world tasks it is typically observed that one or more modalities are missing, noisy, lack annotated data, have unreliable labels, or are scarce during training, testing, or both. This challenge is addressed by a learning paradigm called multimodal co-learning: the modeling of a resource-poor modality is aided by exploiting knowledge from another, resource-rich modality through the transfer of knowledge between modalities, including their representations and predictive models. Because co-learning is an emerging area, there are no dedicated reviews explicitly focusing on all the challenges it addresses. To that end, in this work we provide a comprehensive survey of the emerging area of multimodal co-learning, which has not yet been explored in its entirety. We review implementations that overcome one or more co-learning challenges without explicitly treating them as such. We present a comprehensive taxonomy of multimodal co-learning based on the challenges addressed and their associated implementations. The various techniques, including the latest ones, are reviewed along with representative applications and datasets. Additionally, we review techniques that appear similar to multimodal co-learning but are used primarily in unimodal or multi-view learning, and we document the distinction between them. Our final goal is to discuss challenges and perspectives, along with important ideas and directions for future work, that we hope will benefit the entire research community focusing on this exciting domain.
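To make the abstract's notion of cross-modal knowledge transfer concrete, the following is a minimal, hypothetical sketch of one common co-learning pattern: a teacher network trained on a resource-rich modality (e.g., image features) guides a student network for a resource-poor modality (e.g., text features) by distilling its soft predictions. This is an illustrative assumption, not the survey's prescribed method; all module names, feature dimensions, and loss weightings below are invented for the example.

```python
# Illustrative sketch (an assumption, not the survey's method): cross-modal
# knowledge distillation, where a teacher on a resource-rich modality guides
# a student on a resource-poor modality via soft-label matching.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Simple MLP classifier standing in for a modality-specific model."""
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, num_classes))

    def forward(self, x):
        return self.net(x)  # unnormalized class logits

# Hypothetical dimensions: image features (resource-rich) and text features
# (resource-poor), paired on the same 5-class task.
teacher = ModalityEncoder(in_dim=512, num_classes=5)   # e.g., image modality
student = ModalityEncoder(in_dim=300, num_classes=5)   # e.g., text modality
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

image_feats = torch.randn(32, 512)     # resource-rich modality batch
text_feats = torch.randn(32, 300)      # paired resource-poor modality batch
labels = torch.randint(0, 5, (32,))    # possibly scarce / unreliable labels

temperature, alpha = 2.0, 0.5          # illustrative distillation settings
teacher.eval()
with torch.no_grad():
    soft_targets = F.softmax(teacher(image_feats) / temperature, dim=-1)

student_logits = student(text_feats)
# Distillation term: pull the student's distribution toward the teacher's.
kd_loss = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                   soft_targets, reduction="batchmean") * temperature ** 2
# Supervised term on whatever labels exist for the resource-poor modality.
ce_loss = F.cross_entropy(student_logits, labels)

loss = alpha * kd_loss + (1 - alpha) * ce_loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In such a setup the student can operate on the resource-poor modality alone at test time, which is one way co-learning implementations cope with missing, noisy, or scarce modalities.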