ASU Electronic Theses and Dissertations


This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.


Previous research has shown that people can implicitly learn repeated visual contexts and use this information when locating relevant items. For example, when people are presented with repeated spatial configurations of distractor items or distractor identities in visual search, they become faster to find target stimuli in these repeated contexts over time (Chun and Jiang, 1998; 1999). Given that people learn these repeated distractor configurations and identities, might they also implicitly encode semantic information about distractors, if this information is predictive of the target location? We investigated this question with a series of visual search experiments using real-world stimuli within …

Contributors
Walenchok, Stephen Charles, Goldinger, Stephen D, Azuma, Tamiko, et al.
Created Date
2014

The label-feedback hypothesis (Lupyan, 2007) proposes that language can modulate low- and high-level visual processing, such as "priming" a visual object. Lupyan and Swingley (2012) found that repeating target names facilitates visual search, resulting in shorter reaction times (RTs) and higher accuracy. However, a design limitation made their results challenging to assess. This study evaluated whether self-directed speech influences target locating (i.e., attentional guidance) or target identification after location (i.e., decision time), testing whether the label-feedback effect reflects changes in visual attention or some other mechanism (e.g., template maintenance in working memory). Across three experiments, search RTs and eye …

Contributors
Hebert, Katherine Paige, Goldinger, Stephen D, Rogalsky, Corianne, et al.
Created Date
2016

It is commonly known that the left hemisphere of the brain is more efficient in the processing of verbal information, compared to the right hemisphere. One proposal suggests that hemispheric asymmetries in verbal processing are due in part to the efficient use of top-down mechanisms by the left hemisphere. Most evidence for this comes from hemispheric semantic priming, though fewer studies have investigated verbal memory in the cerebral hemispheres. The goal of the current investigations is to examine how top-down mechanisms influence hemispheric asymmetries in verbal memory, and determine the specific nature of hypothesized top-down mechanisms. Five experiments were conducted …

Contributors
Tat, Michael Jon, Azuma, Tamiko, Goldinger, Stephen D, et al.
Created Date
2013

Previous research from Rajsic et al. (2015, 2017) suggests that a visual form of confirmation bias arises during visual search for simple stimuli, under certain conditions, wherein people are biased to seek stimuli matching an initial cue color even when this strategy is not optimal. Furthermore, recent research from our lab suggests that varying the prevalence of cue-colored targets does not attenuate the visual confirmation bias, although people still fail to detect rare targets regardless of whether they match the initial cue (Walenchok et al., under review). The present investigation examines the boundary conditions of the visual confirmation bias under …

Contributors
Walenchok, Stephen Charles, Goldinger, Stephen D, Azuma, Tamiko, et al.
Created Date
2018

Recent research has shown that reward-related stimuli capture attention in an automatic and involuntary manner, a phenomenon termed reward salience (Le Pelley, Pearson, Griffiths, & Beesley, 2015). Although patterns of oculomotor behavior have been examined in recent experiments, questions surrounding a potential neural signal of reward remain. Consequently, this study used pupillometry to investigate how reward-related stimuli affect pupil size and attention. Across three experiments, response time, accuracy, and pupil size were measured as participants searched for targets among distractors. Participants were informed that singleton distractors indicated the magnitude of a potential gain/loss available in a trial. Two visual search conditions were included …

Contributors
Phifer, Casey, Goldinger, Stephen D, Homa, Donald J, et al.
Created Date
2017

Current theoretical debate, crossing the bounds of memory theory and mental imagery, surrounds the role of eye movements in successful encoding and retrieval. Although the eyes have been shown to revisit previously-viewed locations during retrieval, the functional role of these saccades is not known. Understanding the potential role of eye movements may help address classic questions in recognition memory. Specifically, are episodic traces rich and detailed, characterized by a single strength-driven recognition process, or are they better described by two separate processes, one for vague information and one for the retrieval of detail? Three experiments are reported, in which participants …

Contributors
Papesh, Megan H, Goldinger, Stephen D, Brewer, Gene A, et al.
Created Date
2012

When people look for things in their environment, they use a target template - a mental representation of the object they are attempting to locate - to guide their attention around a scene and to assess incoming visual input, determining whether they have found what they are searching for. However, unlike participants in laboratory experiments, searchers in the real world rarely have perfect knowledge of their target's appearance. In five experiments (with nearly 1,000 participants), we examined how the precision of the observer's template affects their ability to conduct visual search. Specifically, we simulated template imprecision in two ways: …

Contributors
Hout, Michael Craig, Goldinger, Stephen D, Azuma, Tamiko, et al.
Created Date
2013