Multimodal deep representation learning for video classification

H Tian, Y Tao, S Pouyanfar, SC Chen, ML Shyu - World Wide Web, 2019 - Springer
Abstract
Real-world applications usually encounter data in various modalities, each containing valuable information. To enhance these applications, it is essential to analyze the information extracted from all data modalities effectively, yet most existing learning models ignore some data types and focus only on a single modality. This paper presents a new multimodal deep learning framework for event detection from videos that leverages recent advances in deep neural networks. First, several deep learning models are used to extract useful information from multiple modalities, including pre-trained Convolutional Neural Networks (CNNs) for visual and audio feature extraction and a word embedding model for textual analysis. Then, a novel fusion technique is proposed that integrates the different data representations at two levels, namely the frame level and the video level. Unlike existing multimodal learning algorithms, the proposed framework can reason about a missing data type using the other available data modalities. The framework is applied to a new video dataset containing natural disaster classes. The experimental results demonstrate its effectiveness compared to single-modality deep learning models as well as conventional fusion techniques: the final accuracy is improved by more than 16% and 7% over the best single-modality and fusion baselines, respectively.
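The abstract describes frame-level fusion of pre-trained CNN visual and audio features followed by video-level fusion with a word-embedding representation of the text. The sketch below is only an illustration of that general two-level, concatenation-based fusion idea, not the paper's actual architecture: the feature dimensions (2048-d visual, 128-d audio, 300-d text), the layer sizes, the temporal average pooling, and the class count are all assumptions, and PyTorch is used purely for concreteness.

```python
import torch
import torch.nn as nn

class TwoLevelFusion(nn.Module):
    """Hypothetical two-level (frame-level and video-level) fusion model.

    All dimensions and layer choices are illustrative assumptions; the
    paper's exact architecture is not specified in the abstract.
    """
    def __init__(self, visual_dim=2048, audio_dim=128, text_dim=300,
                 hidden=512, num_classes=10):
        super().__init__()
        # Frame-level fusion: combine per-frame visual and audio features
        # by concatenation and project them into a shared space.
        self.frame_fusion = nn.Sequential(
            nn.Linear(visual_dim + audio_dim, hidden),
            nn.ReLU(),
        )
        # Video-level fusion: combine the pooled frame representation
        # with the video-level textual embedding and classify.
        self.video_fusion = nn.Sequential(
            nn.Linear(hidden + text_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, visual, audio, text):
        # visual: (batch, frames, visual_dim) from a pre-trained visual CNN
        # audio:  (batch, frames, audio_dim)  from a pre-trained audio CNN
        # text:   (batch, text_dim)           e.g. averaged word embeddings
        frame_feats = self.frame_fusion(torch.cat([visual, audio], dim=-1))
        video_feat = frame_feats.mean(dim=1)  # temporal average pooling
        logits = self.video_fusion(torch.cat([video_feat, text], dim=-1))
        return logits

# Usage with random tensors standing in for extracted features.
model = TwoLevelFusion()
v = torch.randn(4, 30, 2048)   # 4 videos, 30 frames of visual features
a = torch.randn(4, 30, 128)    # matching per-frame audio features
t = torch.randn(4, 300)        # word-embedding representation of metadata
print(model(v, a, t).shape)    # -> torch.Size([4, 10])
```

In such a setup, a missing modality could be handled by substituting a representation inferred from the available modalities before fusion; how the paper actually performs this reasoning is not detailed in the abstract.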