
ASU Electronic Theses and Dissertations


This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about each dissertation or thesis includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.




In recent years, several methods have been proposed to encode sentences into fixed-length continuous vectors, called sentence representations or sentence embeddings. With the recent advancements in deep learning methods applied to Natural Language Processing (NLP), these representations play a crucial role in tasks such as named entity recognition, question answering, and sentence classification. Traditionally, a sentence's vector representation is learned from its constituent word representations, also known as word embeddings. Various methods to learn the distributed representation (embedding) of words have been proposed using the notion of Distributional Semantics, i.e. “meaning of a word is characterized by the company …
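The simplest instance of building a sentence representation from constituent word embeddings is averaging. The sketch below uses tiny hand-made vectors purely for illustration; in practice the word vectors would come from a trained model such as word2vec or GloVe, and the thesis explores more sophisticated composition methods.

```python
import numpy as np

# Toy word embeddings (hypothetical values; real ones come from training).
word_vectors = {
    "the": np.array([0.1, 0.3, 0.0]),
    "cat": np.array([0.9, 0.1, 0.4]),
    "sat": np.array([0.2, 0.8, 0.5]),
}

def sentence_embedding(sentence):
    """Average the constituent word vectors into one fixed-length vector."""
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0)

emb = sentence_embedding("The cat sat")
print(emb)  # [0.4 0.4 0.3]
```

The resulting vector has the same dimensionality regardless of sentence length, which is what makes such representations usable as fixed-size inputs to downstream classifiers.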

Contributors
Rath, Trideep, Baral, Chitta, Li, Baoxin, et al.
Created Date
2017

This work presents a communication paradigm, using a context-aware mixed reality approach, for instructing human workers when collaborating with robots. The main objective of this approach is to utilize the physical work environment as a canvas to communicate task-related instructions and robot intentions in the form of visual cues. A vision-based object tracking algorithm is used to precisely determine the pose and state of physical objects in and around the workspace. A projection mapping technique is used to overlay visual cues on tracked objects and the workspace. Simultaneous tracking and projection onto objects enables the system to provide just-in-time instructions …
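Overlaying a visual cue on a tracked object requires mapping the object's 3D pose into the projector's image plane. The following sketch assumes a simple pinhole projector model with made-up intrinsics; the actual system's calibration and tracking pipeline is more involved.

```python
import numpy as np

# Hypothetical projector intrinsics: focal lengths and principal point.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Tracked object's 3D position, expressed in the projector's coordinate frame.
point = np.array([0.1, -0.05, 1.0])

# Pinhole projection: homogeneous image coordinates, then perspective divide.
uvw = K @ point
pixel = uvw[:2] / uvw[2]
print(pixel)  # [400. 200.]
```

Rendering the cue at that pixel makes it land on the physical object, and re-running the projection every frame keeps the cue attached as the object moves.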

Contributors
Kalpagam Ganesan, Ramsundar, Ben Amor, Hani, Yang, Yezhou, et al.
Created Date
2017

To ensure system integrity, robots need to proactively avoid any unwanted physical perturbation that may cause damage to the underlying hardware. In this thesis work, we investigate a machine learning approach that allows robots to anticipate impending physical perturbations from perceptual cues. In contrast to other approaches that require knowledge about sources of perturbation to be encoded before deployment, our method is based on experiential learning. Robots learn to associate visual cues with subsequent physical perturbations and contacts. In turn, these extracted visual cues are then used to predict potential future perturbations acting on the robot. To this end, we …
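The experiential-learning idea — associate past visual cues with the perturbations that followed them, then predict from new cues — can be caricatured with a nearest-neighbor vote over remembered experiences. All data below is synthetic and the feature vectors are stand-ins for real perceptual features; the thesis uses a learned model rather than this toy memory lookup.

```python
import numpy as np

# Synthetic experience: cue feature vectors and whether a perturbation followed.
rng = np.random.default_rng(0)
cues_perturb = rng.normal(loc=2.0, size=(50, 4))   # cues followed by a push
cues_safe    = rng.normal(loc=-2.0, size=(50, 4))  # cues with no contact
X = np.vstack([cues_perturb, cues_safe])
y = np.array([1] * 50 + [0] * 50)

def predict(cue, k=5):
    """Vote among the k most similar past cues to anticipate a perturbation."""
    distances = np.linalg.norm(X - cue, axis=1)
    nearest_labels = y[np.argsort(distances)[:k]]
    return int(nearest_labels.mean() > 0.5)

print(predict(np.array([2.1, 1.9, 2.0, 2.2])))  # 1: perturbation anticipated
```

A prediction of 1 would let the robot brace or reposition before the contact actually occurs.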

Contributors
Sur, Indranil, Amor, Heni B, Fainekos, Georgios, et al.
Created Date
2017

Visual Question Answering (VQA) is a new research area involving technologies ranging from computer vision and natural language processing to other sub-fields of artificial intelligence such as knowledge representation. The fundamental task is to take as input one image and one question (in text) related to the given image, and to generate a textual answer to the input question. There are two key research problems in VQA: image understanding and question answering. My research mainly focuses on developing solutions to support solving these two problems. In image understanding, one important research area is semantic segmentation, which takes images as input …

Contributors
Tian, Qiongjie, Li, Baoxin, Tong, Hanghang, et al.
Created Date
2017

LPMLN is a recent probabilistic logic programming language that combines Answer Set Programming (ASP) and Markov Logic. It is a proper extension of Answer Set Programs that allows for reasoning about uncertainty using weighted rules under the stable model semantics, with a weight scheme adopted from Markov Logic. LPMLN has been shown to be related to several formalisms from the knowledge representation (KR) side, such as ASP and P-Log, and from the statistical relational learning (SRL) side, such as Markov Logic Networks (MLN), ProbLog, and Pearl’s causal models (PCM). Formalisms like ASP, P-Log, ProbLog, MLN, and PCM have all …
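The Markov Logic weight scheme that LPMLN adopts assigns each world an unnormalized weight of exp(sum of weights of the formulas it satisfies). The sketch below enumerates classical propositional worlds for two hypothetical weighted formulas; note that LPMLN additionally restricts attention to stable models of the rules, a step this toy omits.

```python
import math
from itertools import product

# Hypothetical weighted formulas over two atoms, Markov Logic style.
atoms = ["bird", "flies"]
rules = [
    (2.0, lambda w: (not w["bird"]) or w["flies"]),  # weight 2: bird -> flies
    (1.0, lambda w: w["bird"]),                      # weight 1: bird
]

# Weight of a world: exp of the total weight of the formulas it satisfies.
worlds = [dict(zip(atoms, vals)) for vals in product([False, True], repeat=2)]
weights = [math.exp(sum(wt for wt, f in rules if f(w))) for w in worlds]
Z = sum(weights)  # normalization constant

for w, wt in zip(worlds, weights):
    print(w, round(wt / Z, 3))
```

The world satisfying both formulas (bird is true and flies) gets the highest probability, illustrating how soft weighted rules bias rather than force conclusions.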

Contributors
Talsania, Samidh, Lee, Joohyung, et al.
Created Date
2017

Light field imaging is limited by the computational demands of high sampling in both the spatial and angular dimensions. Single-shot light field cameras sacrifice spatial resolution to sample angular viewpoints, typically by multiplexing incoming rays onto a 2D sensor array. While this resolution can be recovered using compressive sensing, these iterative solutions are slow in processing a light field. We present a deep learning approach using a new, two-branch network architecture, consisting jointly of an autoencoder and a 4D CNN, to recover a high-resolution 4D light field from a single coded 2D image. This network decreases reconstruction time …
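The single-shot forward model being inverted can be sketched as follows: a 4D light field L(u, v, x, y) is modulated by a coded mask and summed over the angular dimensions onto a 2D sensor. The sizes and mask below are made up for illustration; the network described in the abstract takes the coded 2D image as input and regresses the full 4D volume.

```python
import numpy as np

rng = np.random.default_rng(1)
U = V = 3   # angular resolution (viewpoints)
X = Y = 8   # spatial resolution

# Toy 4D light field and a binary coded-aperture pattern (assumed model).
light_field = rng.random((U, V, X, Y))
mask = rng.integers(0, 2, size=(U, V, X, Y))

# Single-shot measurement: modulate each view, sum over angles onto the sensor.
coded_image = (mask * light_field).sum(axis=(0, 1))
print(coded_image.shape)  # (8, 8)
```

Because many 4D light fields map to the same 2D measurement, recovery needs a strong prior — here supplied by the learned autoencoder/4D-CNN network rather than an iterative compressive-sensing solver.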

Contributors
Gupta, Mayank, Turaga, Pavan, Yang, Yezhou, et al.
Created Date
2017

In this thesis, I propose a new technique for aligning English sentence words with their semantic representation using Inductive Logic Programming (ILP). My work focuses on Abstract Meaning Representation (AMR). AMR is a semantic formalism for English natural language. It encodes the meaning of a sentence in a rooted graph. This representation has gained attention for its simplicity and expressive power. An AMR aligner aligns words in a sentence to nodes (concepts) in its AMR graph. As AMR annotation has no explicit alignment with the words in an English sentence, automatic alignment becomes a requirement for training AMR parsers. The aligner in this work comprises …
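The alignment task itself can be illustrated with one hand-written heuristic: match a word to a concept whose name shares the word's stem. The sentence, AMR fragment, and matching rule below are illustrative only; the point of the thesis is to learn such alignment rules with ILP rather than hand-code them.

```python
# Sentence and the concepts of its (simplified) AMR graph:
# (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-01))
sentence = "The boy wants to go".lower().split()
concepts = {"w": "want-01", "b": "boy", "g": "go-01"}

def align(words, concepts):
    """Align word i to a concept whose name starts with the word's first letters."""
    alignment = {}
    for i, word in enumerate(words):
        for var, name in concepts.items():
            if name.split("-")[0].startswith(word[:4]):
                alignment[i] = var
    return alignment

print(align(sentence, concepts))  # {1: 'b', 2: 'w', 4: 'g'}
```

Even this crude rule recovers the alignment here ("boy" -> b, "wants" -> w, "go" -> g), but it fails on non-compositional cases, which is why learned rules are needed.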

Contributors
Agarwal, Shubham, Baral, Chitta, Li, Baoxin, et al.
Created Date
2017

Compressive sensing theory allows signals and images to be sensed and reconstructed at sampling rates below the Nyquist rate. Applications in resource-constrained environments stand to benefit from this theory, which at the same time opens up many possibilities for new applications. The traditional inference pipeline for computer vision begins by reconstructing the image from the compressive measurements. However, reconstruction is a computationally expensive step that also produces poor results at high compression rates. There have been several successful attempts to perform inference tasks, such as activity recognition, directly on compressive measurements. In this thesis, I am interested in tackling a more challenging vision problem …
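The measurement model underlying both pipelines is y = Φx, where the sensing matrix Φ has far fewer rows than the signal has entries. The sketch below builds a random Gaussian sensing matrix and measures a sparse toy signal; the dimensions are made up for illustration. Inference "directly on compressive measurements" means feeding y, not a reconstructed x, to the downstream model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256   # signal dimension (e.g. number of pixels)
m = 32    # number of compressive measurements, m << n

# Random Gaussian sensing matrix, a standard choice in compressive sensing.
Phi = rng.normal(size=(m, n)) / np.sqrt(m)

# A sparse toy signal: only three nonzero entries.
x = np.zeros(n)
x[[10, 50, 200]] = [1.0, -2.0, 0.5]

y = Phi @ x   # what the sensor actually records
print(y.shape)  # (32,)
```

Reconstruction would solve an expensive sparse inverse problem to recover x from y; reconstruction-free inference trains a classifier on vectors like y, skipping that step entirely.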

Contributors
Huang, Li-chi, Turaga, Pavan, Yang, Yezhou, et al.
Created Date
2017

The performance of most visual computing tasks depends on the quality of the features extracted from the raw data. An insightful feature representation increases the performance of many learning algorithms by exposing the underlying explanatory factors of the output for the unobserved input. A good representation should also handle anomalies in the data, such as missing samples and noisy input caused by undesired external factors of variation. It should also reduce data redundancy. Over the years, many feature extraction processes have been invented to produce good representations of raw images and videos. The feature extraction processes can …

Contributors
Chandakkar, Parag Shridhar, Li, Baoxin, Yang, Yezhou, et al.
Created Date
2017

In a collaborative environment where multiple robots and human beings are expected to collaborate to perform a task, it becomes essential for a robot to be aware of the multiple agents working in its environment. A robot must also learn to adapt to different agents in the workspace and conduct its interactions based on the presence of these agents. A theoretical framework called Interaction Primitives was introduced, which performs interaction learning from demonstrations in a two-agent work environment. This document is an in-depth description of a new, state-of-the-art Python framework for Interaction Primitives between …
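The core idea of learning interactions from two-agent demonstrations — observe one agent, predict the partner's matching response — can be caricatured as regression over paired trajectories. The data and the linear human-to-robot relation below are entirely synthetic; Interaction Primitives proper use probabilistic, phase-aligned trajectory models rather than this least-squares toy.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 20  # time steps per demonstration

# 30 paired demonstrations: human trajectories and the robot responses
# demonstrated alongside them (toy linear relation, for illustration only).
demos_human = rng.random((30, T))
demos_robot = 0.5 * demos_human + 0.1

# Fit the robot response as a linear function of the observed human trajectory.
A = np.hstack([demos_human, np.ones((30, 1))])  # add a bias column
W, *_ = np.linalg.lstsq(A, demos_robot, rcond=None)

# At run time: observe a new human trajectory, predict the robot's response.
new_human = rng.random(T)
predicted_robot = np.hstack([new_human, 1.0]) @ W
print(predicted_robot.shape)  # (20,)
```

In a real system the prediction would also be conditioned on partial observations mid-interaction, letting the robot commit to a response before the human finishes moving.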

Contributors
Kumar, Ashish, Amor, Hani Ben, Zhang, Yu, et al.
Created Date
2017