
Data-Driven Representation Learning in Multimodal Feature Fusion

Abstract Modern machine learning systems leverage data and features from multiple modalities to gain predictive power. In most scenarios, the modalities differ substantially and the acquired data are heterogeneous in nature. Consequently, building effective fusion algorithms is central to achieving improved model robustness and inference performance. This dissertation focuses on representation learning approaches as the fusion strategy. Specifically, the objective is to learn a shared latent representation that jointly exploits the structural information encoded in all modalities, such that a straightforward learning model can be adopted to obtain the prediction.
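The shared-latent-representation idea described above can be illustrated with a minimal baseline sketch: standardize the features of each modality, concatenate them, and take a truncated SVD as the joint latent space. This is a generic fusion baseline under assumed toy dimensions, not the dissertation's actual method; the names `x_a`, `x_b`, and `shared_representation` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two modalities observing the same 100 samples through
# different feature spaces, both driven by a common latent factor z.
# Dimensions are illustrative assumptions, not from the dissertation.
z = rng.normal(size=(100, 4))          # shared latent factors
x_a = z @ rng.normal(size=(4, 16))     # modality A features (e.g., one sensor)
x_b = z @ rng.normal(size=(4, 10))     # modality B features (e.g., another sensor)

def shared_representation(a, b, k=4):
    """Fuse two modalities into a k-dim latent space via truncated SVD
    on the concatenated, standardized features (a simple fusion baseline)."""
    fused = np.hstack([a - a.mean(0), b - b.mean(0)])
    fused /= fused.std(0) + 1e-8
    u, s, _ = np.linalg.svd(fused, full_matrices=False)
    return u[:, :k] * s[:k]            # one joint representation per sample

latent = shared_representation(x_a, x_b)
print(latent.shape)  # (100, 4)
```

A downstream classifier or regressor would then be trained on `latent` rather than on the raw per-modality features, which is the "straightforward learning model" step the abstract refers to.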

We first consider sensor fusion, a typical multimodal fusion pro...
Created Date 2018
Contributor Song, Huan (Author) / Spanias, Andreas (Advisor) / Thiagarajan, Jayaraman (Committee member) / Berisha, Visar (Committee member) / Tepedelenlioglu, Cihan (Committee member) / Arizona State University (Publisher)
Subject Computer science / Deep Learning / Feature Fusion / Multimodal Learning / Representation Learning
Type Doctoral Dissertation
Extent 164 pages
Language English
Note Doctoral Dissertation Electrical Engineering 2018
Collaborating Institutions Graduate College / ASU Library
Additional Formats MODS / OAI Dublin Core / RIS

Full Text 7.3 MB (application/pdf)

Description Dissertation/Thesis