Context Recognition Methods using Audio Signals for Human-Machine Interaction


Abstract Audio signals, such as speech and ambient sounds, convey rich information pertaining to a user's activity, mood, or intent. Enabling machines to understand this contextual information is necessary to bridge the gap in human-machine interaction. This is challenging due to the subjective nature of such information and hence requires sophisticated techniques. This dissertation presents a set of computational methods that generalize well across different conditions, for speech-based applications involving emotion recognition and keyword detection, and for ambient-sound-based applications such as lifelogging.

The expression and perception of emotions vary across speakers and cultures; thus, determining features and classification methods that generalize well to diffe…
Created Date 2015
Contributor Shah, Mohit (Author) / Spanias, Andreas (Advisor) / Chakrabarti, Chaitali (Advisor) / Berisha, Visar (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Subject Electrical engineering / Computer science / articulation / emotion recognition / lifelogging / speech analysis
Type Doctoral Dissertation
Extent 162 pages
Language English
Copyright
Reuse Permissions All Rights Reserved
Note Doctoral Dissertation Electrical Engineering 2015
Collaborating Institutions Graduate College / ASU Library
Additional Formats MODS / OAI Dublin Core / RIS


Full Text
5.4 MB application/pdf
Download Count: 7570

Description Dissertation/Thesis