ASU Electronic Theses and Dissertations


This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about each dissertation or thesis includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses in the ASU Digital Repository, ASU Theses and Dissertations can also be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.




Artificial general intelligence consists of many components, one of which is Natural Language Understanding (NLU). One application of NLU is Reading Comprehension, where a system is expected to understand all aspects of a text. Further, understanding natural procedure-describing text, which requires reasoning and inference about the existence of entities and the effects of actions on those entities, is a particularly difficult task. A recent natural language dataset by the Allen Institute for Artificial Intelligence, ProPara, attempted to address the challenges of determining entity existence and tracking entities in natural text. As part of …
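
As a concrete picture of the entity-tracking task described in this abstract, here is a small hypothetical Python sketch that records the state of one entity across the steps of a short procedure. The procedure text, entity name, and state labels are invented for illustration and are not items from the ProPara dataset.

# Hypothetical sketch of entity tracking over procedural text.
# The procedure, entity, and states below are invented examples,
# not drawn from the ProPara data.

procedure = [
    "Water evaporates from the ocean.",
    "The vapor rises and cools.",
    "The vapor condenses into clouds.",
]

# For each step, record whether the tracked entity exists and where it is.
entity = "water vapor"
states = [
    {"step": 1, "exists": True,  "location": "above the ocean"},
    {"step": 2, "exists": True,  "location": "upper atmosphere"},
    {"step": 3, "exists": False, "location": None},  # converted into clouds
]

for step_text, state in zip(procedure, states):
    status = state["location"] if state["exists"] else "no longer exists"
    print(f'Step {state["step"]}: "{step_text}" -> {entity}: {status}')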

Contributors
Bhattacharjee, Aurgho, Baral, Chitta, Yang, Yezhou, et al.
Created Date
2019

Question answering is a challenging problem and a long-term goal of Artificial Intelligence. Many approaches have been proposed to solve this problem, including end-to-end machine learning systems, Information Retrieval based approaches, and Textual Entailment. Despite being popular, these methods have difficulty with problems that require multi-level reasoning and combining independent pieces of knowledge. For example, a question like “What adaptation is necessary in intertidal ecosystems but not in reef ecosystems?” requires the system to consider the qualities, behaviour, or features of an organism living in an intertidal ecosystem and compare them with those of an organism in …
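
To make the kind of knowledge combination described here concrete, the hypothetical Python sketch below answers a "needed in X but not in Y" question by taking the set difference of qualities attributed to each setting. The listed qualities are invented placeholders, not facts from the thesis or its knowledge sources.

# Hypothetical sketch: answering a "required in X but not in Y" question
# by combining independent facts and taking a set difference.
# The qualities below are illustrative placeholders only.

adaptations = {
    "intertidal ecosystem": {"tolerate air exposure", "anchor against waves"},
    "reef ecosystem": {"anchor against waves"},
}

def needed_in_but_not_in(required_in: str, not_required_in: str) -> set:
    """Return adaptations attributed to one setting but not the other."""
    return adaptations[required_in] - adaptations[not_required_in]

print(needed_in_but_not_in("intertidal ecosystem", "reef ecosystem"))
# -> {'tolerate air exposure'}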

Contributors
Batni, Vaishnavi, Baral, Chitta, Anwar, Saadat, et al.
Created Date
2019

In this thesis, I present two new datasets and a modification to existing models in the form of a novel attention mechanism for Natural Language Inference (NLI). The new datasets have been carefully synthesized from various existing corpora released for different tasks. The task of NLI is to determine whether a sentence referred to as the “Hypothesis” can be true given that another sentence referred to as the “Premise” is true. In other words, the task is to identify whether the “Premise” entails, contradicts, or remains neutral with regard to the “Hypothesis”. NLI is a precursor to solving many Natural …
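
As a minimal illustration of the premise/hypothesis setup described above, the Python sketch below encodes a few invented NLI examples with the three standard labels. The sentence pairs are placeholders, not items from the datasets introduced in the thesis.

# Minimal illustration of the NLI task: given a premise, decide whether a
# hypothesis is entailed, contradicted, or neither (neutral).
# The sentence pairs below are invented placeholders.

from dataclasses import dataclass

@dataclass
class NLIExample:
    premise: str
    hypothesis: str
    label: str  # one of "entailment", "contradiction", "neutral"

examples = [
    NLIExample("A man is playing a guitar on stage.",
               "A person is performing music.", "entailment"),
    NLIExample("A man is playing a guitar on stage.",
               "The stage is empty.", "contradiction"),
    NLIExample("A man is playing a guitar on stage.",
               "The man is a professional musician.", "neutral"),
]

for ex in examples:
    print(f"{ex.label:13s} | P: {ex.premise} | H: {ex.hypothesis}")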

Contributors
Shrivastava, Ishan, Baral, Chitta, Anwar, Saadat, et al.
Created Date
2019

Reasoning with commonsense knowledge is an integral component of human behavior. It is due to this capability that people know that a weak person may not be able to lift someone. It has been a long-standing goal of the Artificial Intelligence community to simulate such commonsense reasoning abilities in machines. Over the years, many advances have been made and various challenges have been proposed to test these abilities. The Winograd Schema Challenge (WSC) is one such Natural Language Understanding (NLU) task, which was also proposed as an alternative to the Turing Test. It is made up of textual question …
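
A Winograd schema item of the kind referred to above can be pictured with the hypothetical Python sketch below. The sentence is the widely cited trophy/suitcase example from the WSC literature; the data layout itself is an assumption made for illustration, not the format used in the thesis.

# Hypothetical layout of a Winograd schema item: a sentence with an
# ambiguous pronoun, two candidate referents, and a "special word" whose
# alternate flips the answer. The sentence is the well-known
# trophy/suitcase schema from the WSC literature.

schema = {
    "sentence": "The trophy doesn't fit in the brown suitcase because it is too {word}.",
    "pronoun": "it",
    "candidates": ["the trophy", "the suitcase"],
    "word": "big",
    "alternate_word": "small",
    "answer": {"big": "the trophy", "small": "the suitcase"},
}

for w in (schema["word"], schema["alternate_word"]):
    print(schema["sentence"].format(word=w), "->", schema["answer"][w])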

Contributors
Sharma, Arpit, Baral, Chitta, Lee, Joohyung, et al.
Created Date
2019

Virtual digital assistants are automated software systems that assist humans by understanding natural languages such as English, in either voice or textual form. In recent times, many digital applications have shifted toward providing a user experience through a natural language interface. This change has been driven by the ease with which virtual digital assistants such as Google Assistant and Amazon Alexa can be integrated into an application. These assistants make use of a Natural Language Understanding (NLU) system, which acts as an interface to translate unstructured natural language data into a structured form. Such an NLU …
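
The "structured form" mentioned above is commonly an intent plus a set of slots. The Python sketch below shows one hypothetical parse of a spoken request; the intent name, slot names, and confidence value are invented for illustration and are not the actual output format of Google Assistant, Amazon Alexa, or the system in this thesis.

# Hypothetical NLU output: an utterance mapped to an intent and slots.
# The intent, slots, and confidence value are invented for illustration.

utterance = "Set an alarm for 7 am tomorrow"

parse = {
    "intent": "create_alarm",
    "slots": {
        "time": "07:00",
        "date": "tomorrow",
    },
    "confidence": 0.93,  # illustrative value only
}

print(f"'{utterance}' -> intent={parse['intent']}, slots={parse['slots']}")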

Contributors
Garg, Prashant, Baral, Chitta, Kumar, Hemanth, et al.
Created Date
2018

Multimodal Representation Learning is a multi-disciplinary research field which aims to integrate information from multiple communicative modalities in a meaningful manner to help solve some downstream task. These modalities can be visual, acoustic, linguistic, haptic, etc. The interpretation of “meaningful integration of information from different modalities” remains modality- and task-dependent. The downstream task can range from understanding one modality in the presence of information from other modalities to translating input from one modality to another. In this thesis, the utility of multimodal representation learning for understanding one modality vis-à-vis Image Understanding for Visual Reasoning given corresponding information …
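
One simple reading of "meaningful integration of information from different modalities" is late fusion of per-modality feature vectors. The NumPy sketch below concatenates an invented image feature and text feature into one joint representation; this is an illustrative assumption, not the fusion method used in the thesis, and random vectors stand in for real features.

# Illustrative late-fusion sketch: concatenate feature vectors from two
# modalities into a single representation for a downstream task.
# Random vectors stand in for real visual and linguistic features.

import numpy as np

rng = np.random.default_rng(0)
image_features = rng.standard_normal(2048)   # e.g. pooled CNN features
text_features = rng.standard_normal(300)     # e.g. a sentence embedding

joint_representation = np.concatenate([image_features, text_features])
print(joint_representation.shape)  # (2348,)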

Contributors
Saha, Rudra, Yang, Yezhou, Singh, Maneesh Kumar, et al.
Created Date
2018

Image Understanding is a long-established discipline in computer vision, which encompasses a body of advanced image processing techniques used to locate (“where”), characterize, and recognize (“what”) objects, regions, and their attributes in an image. However, the notion of “understanding” (and the goal of artificially intelligent machines) goes beyond factual recall of the recognized components and includes reasoning and thinking beyond what can be seen (or perceived). Understanding is often evaluated by asking questions of increasing difficulty. Thus, the expected functionalities of an intelligent Image Understanding system can be expressed in terms of the functionalities that are required to …
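
The contrast between factual recall ("what"/"where") and reasoning beyond what is seen can be sketched in a small hypothetical Python example. The scene description and questions below are invented for illustration and are not examples from the thesis.

# Hypothetical contrast between factual recall ("what"/"where") and a
# question that requires reasoning beyond what is directly seen.
# The scene and questions are invented for illustration.

scene = {
    "objects": [
        {"label": "person", "bbox": [40, 60, 120, 260]},    # "what" + "where"
        {"label": "umbrella", "bbox": [30, 10, 140, 80]},
    ],
}

recall_question = "What objects are in the image?"
reasoning_question = "Is it likely to be raining?"  # needs inference, not recall

print(recall_question, "->", [o["label"] for o in scene["objects"]])
print(reasoning_question, "-> requires reasoning over the detected objects")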

Contributors
Aditya, Somak, Baral, Chitta, Yang, Yezhou, et al.
Created Date
2018

LPMLN is a recent probabilistic logic programming language that combines Answer Set Programming (ASP) and Markov Logic. It is a proper extension of Answer Set Programs that allows for reasoning about uncertainty, using weighted rules under the stable model semantics with a weight scheme adopted from Markov Logic. LPMLN has been shown to be related to several formalisms from the knowledge representation (KR) side, such as ASP and P-Log, and from the statistical relational learning (SRL) side, such as Markov Logic Networks (MLN), Problog, and Pearl’s causal models (PCM). Formalisms like ASP, P-Log, Problog, MLN, and PCM have all …
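
As a rough sketch of the weight scheme adopted from Markov Logic, the unnormalized weight of an interpretation I under an LPMLN program Π of weighted rules w : R is usually presented along the following lines (a paraphrase of the standard LPMLN semantics from memory, not a formula quoted from the thesis):

\[
W_{\Pi}(I) =
\begin{cases}
\exp\Bigl(\sum_{\,w:R \,\in\, \Pi,\; I \,\models\, R} w\Bigr) & \text{if } I \text{ is a stable model of the rules of } \Pi \text{ that it satisfies,}\\
0 & \text{otherwise,}
\end{cases}
\qquad
P_{\Pi}(I) = \frac{W_{\Pi}(I)}{\sum_{J} W_{\Pi}(J)}.
\]

Rules with larger weights thus contribute more to the probability of the (soft) stable models that satisfy them, which is how uncertainty enters the stable model semantics.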

Contributors
Talsania, Samidh, Lee, Joohyung, et al.
Created Date
2017

In this thesis, I propose a new technique for aligning words in an English sentence with its semantic representation using Inductive Logic Programming (ILP). My work focuses on Abstract Meaning Representation (AMR). AMR is a semantic formalism for English natural language. It encodes the meaning of a sentence in a rooted graph. This representation has gained attention for its simplicity and expressive power. An AMR aligner aligns words in a sentence to nodes (concepts) in its AMR graph. As AMR annotation has no explicit alignment with the words of the English sentence, automatic alignment becomes a requirement for training AMR parsers. The aligner in this work comprises …
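
To picture the alignment task described above, the hypothetical Python sketch below pairs a sentence with a toy AMR graph (in PENMAN-style notation) and a word-to-node alignment. The AMR is the commonly cited "The boy wants to go" example; the alignment dictionary format is an assumption for illustration, not the representation used in the thesis.

# Hypothetical illustration of AMR alignment: words in the sentence are
# mapped to node (concept) variables in the AMR graph. The AMR is the
# commonly cited "boy wants to go" example in PENMAN-style notation.

sentence = "The boy wants to go".split()

amr = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
"""

# word index -> AMR concept variable (illustrative alignment)
alignment = {1: "b", 2: "w", 4: "g"}  # boy -> b, wants -> w, go -> g

for idx, var in alignment.items():
    print(f"{sentence[idx]!r:10s} -> node {var}")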

Contributors
Agarwal, Shubham, Baral, Chitta, Li, Baoxin, et al.
Created Date
2017

In recent years, several methods have been proposed to encode sentences into fixed-length continuous vectors called sentence representations or sentence embeddings. With the recent advancements in various deep learning methods applied in Natural Language Processing (NLP), these representations play a crucial role in tasks such as named entity recognition, question answering, and sentence classification. Traditionally, sentence vector representations are learnt from their constituent word representations, also known as word embeddings. Various methods to learn the distributed representation (embedding) of words have been proposed using the notion of Distributional Semantics, i.e. “meaning of a word is characterized by the company …
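
A simple baseline for composing a sentence representation from its constituent word representations, as mentioned above, is to average the word vectors. The NumPy sketch below does this with a tiny invented vocabulary; random vectors stand in for trained word embeddings, and this baseline is only an illustration, not the method studied in the thesis.

# Illustrative baseline: build a fixed-length sentence embedding by
# averaging the embeddings of its words. Random vectors stand in for
# trained word embeddings such as word2vec or GloVe.

import numpy as np

rng = np.random.default_rng(0)
dim = 50
vocab = ["the", "cat", "sat", "on", "mat"]
word_embeddings = {w: rng.standard_normal(dim) for w in vocab}

def sentence_embedding(sentence: str) -> np.ndarray:
    """Average the embeddings of known words in the sentence."""
    vectors = [word_embeddings[w] for w in sentence.lower().split()
               if w in word_embeddings]
    return np.mean(vectors, axis=0)

print(sentence_embedding("The cat sat on the mat").shape)  # (50,)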

Contributors
Rath, Trideep, Baral, Chitta, Li, Baoxin, et al.
Created Date
2017