ASU Electronic Theses and Dissertations
Humans perceive the environment through multiple modalities such as vision, speech (language), touch, taste, and smell. The knowledge obtained from one modality usually complements the others, and learning through several modalities helps construct an accurate model of the environment. Most current vision and language models are modality-specific and, in many cases, rely extensively on deep-learning-based attention mechanisms to learn powerful representations. This work discusses the role of attention in associating vision and language to generate a shared representation. A Language Image Transformer (LIT) is proposed for learning multi-modal representations of the environment. It uses a training objective based on Contrastive Predictive …
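The abstract mentions a contrastive training objective for aligning image and language representations. As an illustrative sketch only (not the thesis's actual implementation), an InfoNCE-style contrastive loss over a batch of paired image/text embeddings can be written as follows; the function name, temperature value, and NumPy formulation are assumptions for demonstration:

```python
import numpy as np

def info_nce_loss(image_emb, text_emb, temperature=0.07):
    """InfoNCE-style contrastive loss over paired image/text embeddings.

    Hypothetical sketch: matched pairs sit on the diagonal of the
    similarity matrix and are treated as positives; all other pairs
    in the batch serve as negatives.
    """
    # L2-normalize so dot products are cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = image_emb @ text_emb.T / temperature  # shape: (batch, batch)
    # Cross-entropy against the diagonal (the correct pairings)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy usage: text embeddings that nearly match their images give a low loss
rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
txt = img + 0.01 * rng.normal(size=(4, 8))
matched_loss = info_nce_loss(img, txt)
random_loss = info_nce_loss(img, rng.normal(size=(4, 8)))
```

Minimizing such a loss pulls each image embedding toward its paired text embedding while pushing it away from the other captions in the batch, which is the general mechanism behind contrastive vision-language pretraining.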
- Ramakrishnan, Raghavendran, Panchanathan, Sethuraman, Venkateswara, Hemanth Kumar, et al.