Categorical Contextual Cueing in Visual Search


Abstract Previous research has shown that people can implicitly learn repeated visual contexts and use this information when locating relevant items. For example, when people are presented with repeated spatial configurations of distractor items or distractor identities in visual search, they become faster to find target stimuli in these repeated contexts over time (Chun & Jiang, 1998, 1999). Given that people learn these repeated distractor configurations and identities, might they also implicitly encode semantic information about distractors, if this information is predictive of the target location? We investigated this question with a series of visual search experiments using real-world stimuli within a contextual cueing paradigm (Chun & Jiang, …)
Created Date 2014
Contributor Walenchok, Stephen Charles (Author) / Goldinger, Stephen D (Advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / Arizona State University (Publisher)
Subject Cognitive psychology / Psychology / categorization / contextual cueing / visual search
Type Masters Thesis
Extent 51 pages
Language English
Copyright
Reuse Permissions All Rights Reserved
Note Masters Thesis Psychology 2014
Collaborating Institutions Graduate College / ASU Library
Additional Formats MODS / OAI Dublin Core / RIS


Full Text 1.1 MB (application/pdf)

Description Dissertation/Thesis