My Research

My primary (funded) research focus is on speech perception and memory. With my students, however, I conduct research on several different topics. The links below jump to descriptions of our various research areas. Cited papers can also be downloaded from the Vita / Publications page.

Word Processing | Face Processing | Visual Search and Attention
Working Memory Capacity | Recognition Memory | Single-Cell Recordings

Word Processing

In a continuing line of research, we study memory for "surface" aspects of spoken or printed words, such as voices or fonts. This research intersects perception and memory: Most theories presume that words are represented as ideals in memory, devoid of perceptual "noise." Multiple-trace theories, however, propose that perceptual details are not forgotten. Instead, they are stored, and help mediate later perception.

Spoken Word Perception / Recognition

My research on spoken word perception and recognition has largely focused on testing the heuristic value of a multiple-trace approach to the mental lexicon, emphasizing storage of detailed episodes, rather than abstract lexical nodes. In several studies (see Goldinger et al., 1991; Palmeri et al., 1993), we established that voice information is reliably encoded into memory during perception. I later found that episodic traces are stored with impressive detail, lasting at least a week (Goldinger, 1996).

In another experiment, I tested Hintzman's (1986) MINERVA 2 memory model against speech production data from a single-word shadowing task (Goldinger, 1998). Although the typical measure in shadowing is response time, an alternative, rarely used measure is the acoustic content of the spoken responses: I examined changes in speech acoustics during shadowing. The words (and nonwords) that subjects shadowed were produced by multiple speakers, and people spontaneously imitated the stimulus voices. Moreover, the model correctly predicted that the strength of imitation was affected by "abstract" characteristics of the stimuli. For example, rare words promoted stronger imitation than more common words. The acoustics of speech in shadowing reflect a complex interplay of the stimuli and their traces in episodic memory. We later extended these findings to reading (Goldinger & Azuma, 2004).
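For readers curious about the model's mechanics, here is a minimal sketch of MINERVA 2's core retrieval computation, following Hintzman's (1986) equations (the traces and probe below are random stand-ins, invented purely for illustration):

```python
import numpy as np

def echo(probe, traces):
    """MINERVA 2 retrieval: every stored trace is activated in parallel by
    its similarity to the probe (cubed, which preserves sign but sharply
    favors close matches), and the activated traces sum into an 'echo'."""
    # Similarity is a dot product normalized by the number of features
    # that are nonzero in either the probe or the trace
    relevant = (traces != 0) | (probe != 0)
    n_relevant = np.maximum(relevant.sum(axis=1), 1)
    sims = (traces * probe).sum(axis=1) / n_relevant
    activations = sims ** 3
    intensity = activations.sum()      # echo intensity -> familiarity signal
    content = activations @ traces     # echo content -> retrieved pattern
    return intensity, content

# Toy demo: 20 episodic traces over 12 features with values -1, 0, +1
rng = np.random.default_rng(0)
traces = rng.choice([-1, 0, 1], size=(20, 12))
intensity, content = echo(traces[3], traces)   # probe with a stored item
print(intensity)                               # high intensity = "familiar"
```

Because every trace contributes to the echo, retrieval blends the probed episode with its similar neighbors, which is how the model captures both episodic detail and apparent abstraction.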

We have recently been examining voice specificity effects using eye movements and pupil dilations as additional dependent measures. In one experiment, participants are trained to make lexical decisions to spoken words by clicking on a blue cross or a red X, each of which is assigned to one horizontal half of the screen, but appears in a random location on a trial-by-trial basis. Each word is initially spoken by one of four speakers (two male, two female). Upon their second presentation, words are spoken in either the same or a different voice (half within-gender and half across-gender changes). When we examine either the time to initiate a saccade or the time to issue a behavioral response, we find clear voice effects - everything is faster when the voice is repeated.

Voice effects are also revealed through pupillary reflexes. In a study of spoken word memory, participants studied a series of words spoken by male and female speakers. In a later test, words were presented in the same voice, a different (new) voice, or a familiar voice (one of the other studied voices). As you can see in the graph to the right, participants' pupils dilated significantly more when they were processing words spoken by a new speaker. When the voice was old or even just familiar, they expended less cognitive effort.

Voice effects are revealed through pupillary reflexes
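For readers unfamiliar with pupillometry, the dependent measure is conceptually simple. Here is a minimal sketch of one common scoring convention (the sampling rate, baseline window, and simulated trace are assumptions for illustration, not our lab's actual pipeline):

```python
import numpy as np

def peak_dilation(trace, sr=60, baseline_ms=200):
    """Baseline-correct a single-trial pupil trace (diameter over time),
    then return the peak dilation across the remaining response window."""
    n_base = int(sr * baseline_ms / 1000)     # samples in the baseline window
    baseline = trace[:n_base].mean()          # pre-stimulus pupil diameter
    return (trace - baseline).max()           # peak task-evoked dilation

# Fake 3-second trace sampled at 60 Hz, diameter in millimeters
rng = np.random.default_rng(1)
trial = 3.5 + np.cumsum(rng.normal(0, 0.005, size=180))   # slow random drift
print(peak_dilation(trial))
```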

Printed Word Perception / Recognition

Our lab also conducts studies of printed word perception. In one project, we examined inner speech in silent reading (Abramson & Goldinger, 1997). This is a classic topic, reaching back to Titchener and Watson. In our study, we tested the effects of implicit variations in words' vowel lengths. Lexical decisions were slower for phonetically longer stimuli, despite equal letter lengths (Lukatela et al., 2004, later replicated our finding). The data suggested that acoustic representations activated in silent reading are inner speech, rather than abstract phonological codes. These inner speech effects were most strongly expressed in slower readers (see also Lewellen et al., 1993).

Many of our reading projects have been empirical follow-ups to several theoretical articles I wrote with Guy Van Orden about a resonance framework for printed word perception (e.g., Van Orden & Goldinger, 1994). In this theory, word perception occurs as orthographic, phonologic, and semantic features come into stable, resonant feedback loops (an idea that is shared with many other theories). The schematic figure below depicts the hypothesized processes that occur during perception of the inconsistent word PINT.

The process of perceiving the word PINT
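To make the feedback idea concrete, here is a toy resonance loop (purely our illustration: the layer sizes, random weights, and update rule are invented, and the actual theory is far richer). Bottom-up input drives phonological features, which feed back to the orthographic layer until the two settle into a stable state:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(0.0, 0.2, size=(6, 6))   # orthography <-> phonology weights
inp = rng.uniform(0.0, 1.0, size=6)     # bottom-up visual input (e.g., PINT)
ortho = inp.copy()
phono = np.zeros(6)

for step in range(100):
    phono_new = np.tanh(W @ ortho)              # letters activate sounds
    ortho_new = np.tanh(inp + W.T @ phono_new)  # sounds feed back to letters
    if np.allclose(ortho_new, ortho, atol=1e-5):
        break                                   # a stable resonance
    ortho, phono = ortho_new, phono_new

print(f"settled after {step} iterations")       # "perception" = settled state
```

In the full theory, inconsistent words like PINT are slower to settle, because the feedback from phonology initially conflicts with the dominant spelling-sound pattern.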

This theoretical framework motivated numerous experiments in our lab and others. In one study (Goldinger et al., 1997), we studied the role of attention and working memory in delayed naming. We found that word frequency effects in delayed naming partly reflect different attention demands across words. We recently extended this finding, showing that low-frequency words create greater pupil dilation than high-frequency words, again in delayed naming.

Other projects have focused on the interactions among knowledge sources in word reading. For example, in one study (Gottlob et al., 1999), we examined the dynamics that emerge among words' spellings, pronunciations, and meanings. We did this by comparing control words, homonyms, and homographs (see figure below) in different experimental contexts.

Our research on printed word perception is ongoing. One of our new research topics is how lexical access differs when people read handwritten words, relative to printed words. In one study (Barnhart & Goldinger, 2010), we replicated a series of well-known experiments, always comparing computer print and human cursive (as in the example stimuli shown). Our results were very consistent: Whenever a lexical variable was most naturally considered "top-down," its effects were increased for handwritten words. For example, when words are more common in the language (i.e., high-frequency), they are recognized more quickly and accurately. This frequency effect was far stronger with handwriting (see sample data), as were consistency and semantic effects.

We are currently conducting many different studies using handwriting. For example, we have been testing whether the cerebral hemispheres are differentially responsible for processing words in print versus handwriting. We also have very interesting new data about the perceptual effects of rotating handwritten words. More to come soon!


Face Processing

Although much of our research involves verbal materials, we also conduct research on face memory. Some studies are standard face-learning/recognition experiments; others are closer to eyewitness memory.

In one experiment (Kleider & Goldinger, 2001), students came to a classroom to watch a film about emotion and memory. They were asked to watch carefully, preparing for a later questionnaire. While these instructions were being delivered, two people entered the room, asking to retrieve a slide projector and a full slide carousel. After receiving permission, they crossed in front of the classroom to the equipment. One took the projector; one took the slides. As they headed out, the latter person dropped the carousel, making a loud noise and scattering slides across the floor. He (or she) then picked up the slides, apologized for the disruption, and both people exited the room. After this event, subjects watched the film and received the questionnaire. Then, as a surprise, they were asked to recognize "the dropper" from a photo lineup. (In post-test questions, only two people reported suspicion about the staged event.)

There were many conditions in this research (we tested > 950 participants, dropping our poor slides again and again). The main point is easily illustrated by Experiment 1: In control conditions, both confederates were White. In experimental conditions, only the "dropper" was White; the other confederate was African-American (Black). The results were rather striking, showing that the mere presence of a Black confederate reduced memory for the dropper. The results suggest that people allocated undue attention toward the Black confederate, reducing memory for other aspects of the witnessed event. (This was true regardless of the subjects' own race.) Another experiment suggested that the effect was truly specific to race; it was not caused by other salient physical features. A final experiment verified that extra attention was directed toward the Black confederates.

A lot of our research examines the other-race bias in face memory, the relative inability to discriminate between members of another (generally minority) race. To investigate a well-known theory of this phenomenon (Valentine's Multidimensional Face Space), we recently conducted a multidimensional scaling analysis of own and other-race face spaces (Papesh & Goldinger, 2010). Using FaceGen Modeler, we created two sets of physically identical faces, and each set was saved with both a Black skin tone and a White skin tone. Participants viewed two faces at a time and made speeded "same-different" judgments, which were used as a measure of similarity. The overall plot, shown to the right, confirms Valentine's predictions: Inter-face distance was greater in the White set, relative to the Black set, despite their structural identity.
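For illustration, the analysis pipeline can be sketched in a few lines (a toy version with an invented dissimilarity matrix, not our actual data or code): treat each pair's rate of "different" judgments as a dissimilarity, then submit the matrix to MDS.

```python
import numpy as np
from sklearn.manifold import MDS

# Toy dissimilarities: proportion of "different" responses per face pair
rng = np.random.default_rng(3)
n_faces = 8
d = rng.uniform(0.2, 1.0, size=(n_faces, n_faces))
d = (d + d.T) / 2                  # same pair, either presentation order
np.fill_diagonal(d, 0.0)           # a face is identical to itself

# Recover a two-dimensional "face space" from the judgments
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(d)

# Mean inter-face distance: a larger value means a more spread-out space
dists = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
print(dists[np.triu_indices(n_faces, k=1)].mean())
```

Comparing that mean inter-face distance across the White and Black sets is what reveals the more spread-out White face space in the plot.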

We also recently tested other-race effects in face perception using both eye-tracking and pupillometry. Participants' eyes were monitored while they studied a series of Asian and White faces. The graph to the right (adapted from Goldinger, He, & Papesh, 2009) shows the pupillary reflexes from a group of White students (top graph) and Asian students (bottom graph). Across both groups, pupils dilated more in response to the more cognitively challenging, cross-race faces. Examining only the lines representing studying cross-race faces, you can see that participants who eventually scored well on the recognition memory test expended greater cognitive effort, relative to their low-scoring counterparts. Follow-up experiments are currently underway.


Visual Search and Attention

In a new line of research, we have been investigating the incidental learning that occurs when people perform visual search for objects. Some of our findings have been quite surprising. For example, Hout and Goldinger (2010) had participants search for pictures of objects against a background of distractor objects. On any given trial, people either kept a single target in mind or looked for one of three potential targets (the example on the right shows a trial with three potential targets). In different conditions, the same background objects were used over and over again (for 40 trials). Our findings were neat: First, even when the background objects were scrambled from trial to trial, people searched faster as they gained experience. They also learned the background objects, as shown in a recognition test.

Of particular interest, people learned the background objects better when they had more potential targets in mind! (Sample data are shown to the right.) When we first discovered this, we worried that people remembered more because they were searching more slowly. We then replicated the finding using a "stream" of images, shown one at a time, for constant amounts of time. People still learned more about the background objects when they had more potential targets in mind. Our hypothesis (at this point) is that people engage in "deeper encoding" when more complicated evaluations are involved.

In our current research, we are conducting similar experiments using eye-tracking. Our goal is to relate object learning to different aspects of oculomotor behavior, such as the number of times the eyes fall on an object. We are hoping that, by monitoring eye-movements, we can better understand how object learning improves performance in visual search.


Working Memory Capacity

Another line of our research involves the assessment of individual differences in working memory capacity (WMC), and their implications for mental control. In one of our earlier studies on WMC (Goldinger et al., 2003), we examined individual differences in counterfactual thinking - the nearly automatic "if only" thinking that occurs following an unexpected (usually negative) event. For example, imagine that you usually play the lottery, always with the same numbers. One day, you decide to switch the numbers, just to keep the fates guessing. Then, your usual, unselected numbers are drawn, and you win nothing. The "if only" thoughts would likely be unbearable.

In our study, counterfactual thinking was defined as victim blaming, which varied as a function of people's WMC. Mock juries were exposed to scenarios, always ending with some disaster for the main actor. We contrasted control and counterfactual versions of the stories - control stories had no salient event that might have "undone" the negative event (e.g., in our scenario above, you played your usual lottery numbers and lost). Counterfactual stories all had salient decisions by the actors that may have led to different outcomes. For example, in one story, Paul attends a basketball game, sitting in his usual seat. He sees an open seat closer to the court and decides to take it. In the control story, he stays in his usual spot. In both stories, a light fixture falls from the ceiling, breaking Paul's foot. He sues the management company and you are on the jury deciding on the award.

Obviously, Paul's decision to move has no real bearing on the outcome (the stadium is still responsible), nor should it affect jury decisions. However, people tended to blame the victim when they read the counterfactual stories. This tendency occurred only among people with lower working memory spans who were holding other information in mind when they made their decisions (as shown in the figure to the right).

More recently, we (Hansen & Goldinger, 2009) examined individual differences in working memory capacity by asking high and low span individuals to play the game Taboo™. We chose Taboo™ because it's fun, and it requires cognitive control. One person delivers clues to a teammate about a secret word, while keeping in mind a list of five "taboo" words that cannot be used as clues. We found that high-span participants made fewer "taboo" errors than their low-span counterparts. In fact, high-span individuals were better all-around players; they gave better clues, made better guesses, and repeated incorrect guesses less often (see figure on the left).

In brand-new work, we have discovered that individual differences in WMC affect performance in a task that requires no memory and no judgments, only simple perceptual-motor tracking. In two new experiments, participants were screened for WMC, and then performed a simple video-game-style task. Specifically, a ball moved around on the computer screen (as shown in the figure), and people had to track it with the mouse (we made the cursor into a circle; their job was to keep the ball inside). The ball randomly changed speed and direction, making this challenging.
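A rough sketch of the task mechanics appears below (our reconstruction for illustration only; the sizes, speeds, and scoring are guesses, not the experiment's actual parameters):

```python
import math
import random

import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
clock = pygame.time.Clock()
x, y, angle, speed = 400.0, 300.0, 0.0, 3.0
inside_frames = total_frames = 0

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # The ball randomly changes direction and speed every frame
    angle += random.uniform(-0.4, 0.4)
    speed = min(8.0, max(1.0, speed + random.uniform(-0.5, 0.5)))
    x = (x + speed * math.cos(angle)) % 800
    y = (y + speed * math.sin(angle)) % 600

    # Accuracy = fraction of frames the ball stays inside the cursor circle
    mx, my = pygame.mouse.get_pos()
    inside_frames += math.hypot(mx - x, my - y) < 40
    total_frames += 1

    screen.fill((0, 0, 0))
    pygame.draw.circle(screen, (200, 200, 200), (mx, my), 40, 2)     # cursor ring
    pygame.draw.circle(screen, (255, 80, 80), (int(x), int(y)), 10)  # the ball
    pygame.display.flip()
    clock.tick(60)

print("tracking accuracy:", inside_frames / max(total_frames, 1))
pygame.quit()
```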

Without getting into details (the manuscript is still under review), we found that WMC strongly predicted people's accuracy in ball-tracking, from the very first moments of the task. This result was surprising, as there is essentially nothing for participants to remember or manipulate. We are continuing this research now.


Recognition Memory

When people perform a recognition memory ("old-new" discrimination) task, they use different sources of information. For example, they may recall specific learning episodes. Alternatively, they may rely on general feelings of familiarity, without specific recollection. Research using words has shown that "old" responses are increased by perceptually enhancing words during testing, for example, by showing an occasional word slightly brighter on the screen. When a test word seems to "jump off the screen," relative to other words, people often mistake this for prior experience - an illusion of memory. Conversely, actual memory for a word can create the reverse illusion, such that perception seems better for "old" items.

We tested these ideas in a study of face recognition (Kleider & Goldinger, 2004). In some experiments, people initially studied clear faces. During testing, some faces were clear and others were blurry (see the image below). We found the same basic result every time - although these perceptual changes were quite obvious, people increased their "old" responses to the clear faces. We also observed the opposite pattern: Previously seen faces created the illusion of being easier to see.

We have also examined the "feelings" (quite literally) that accompany memory decisions using a subliminal ass-buzz (Goldinger & Hansen, 2005). As noted above, people use different sources of information in recognition memory decisions. In this experiment, we examined whether we could create feelings of memory by pairing test items with a subliminal buzz presented to their butts (using hidden speakers under the chair, as shown).

Participants first memorized words, pictures, and faces (in separate blocks). After a distractor task, they performed a recognition memory test. In half of the test trials (old and new), test items were presented with a subliminal buzz. People made "old-new" decisions, then estimated confidence on a 7-point scale. In a control experiment, the buzz was stronger and easier to perceive. (Imagine a spastic woodpecker under your chair...)

Finally, what about contrasting clear recollection versus "gut feelings" of familiarity? Although we cannot directly create these states in people, we contrasted easy- and hard-to-remember items. For example, easy and hard faces were photos of celebrities and medical students, respectively.

Our results (shown in the figure on the right) were similar for words, pictures, and faces. Given the buzz, people were more likely to respond "old," increasing hits and false alarms (Panel A). In terms of confidence, the buzz had opposite effects, depending on the accuracy of memory. Given hits, the buzz reduced confidence. But given false alarms, the buzz elicited relatively high confidence. When you truly have no memory, the buzz gives you a tingle of confidence, but when you have a memory, the buzz gives you a tingle of doubt. These findings were in line with the predictions of a model called SCAPE, as the same signal created a different memorial interpretation, based on context (Whittlesea & Williams, 2001). They were also funny.
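Jointly increased hits and false alarms are the classic signature of a criterion shift rather than a change in sensitivity. A quick signal-detection illustration (the rates here are invented for the example, not our data):

```python
from scipy.stats import norm

def sdt(hit_rate, fa_rate):
    """Standard signal-detection indices from hit and false-alarm rates."""
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)            # sensitivity
    criterion = -(norm.ppf(hit_rate) + norm.ppf(fa_rate)) / 2   # response bias
    return d_prime, criterion

print(sdt(0.70, 0.20))  # no buzz:   d' ~ 1.37, c ~  0.16
print(sdt(0.78, 0.30))  # with buzz: d' ~ 1.30, c ~ -0.12 (more liberal)
```

In this hypothetical, the buzz leaves sensitivity nearly intact while liberalizing the response criterion, so people say "old" more often to targets and lures alike.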

In another series of experiments, Heather Kleider and I tested whether people can form false memories for witnessed actions. In particular, we were interested in the effects of schemas (stereotypes) on false memories. People viewed slide shows depicting people performing either schema-consistent or schema-inconsistent actions (as shown in the examples). As shown in the sample data, people were rarely fooled in an immediate test. However, with the passage of time, false memories were selectively increased for schema-consistent actions.

In new research, we are testing memory using physiological indices - specifically, the pupillary reflex. An old finding in cognitive psychology is that, as people expend greater cognitive effort, their pupils enlarge. Although we have several investigations still underway, one that we have replicated several times is the pupillary "remember-know" effect. We had participants first memorize a series of own- and other-race faces while we tracked their pupil sizes. During a later test, we asked them to give "remember/know/new" decisions. We have repeatedly observed that peak pupil diameters during subsequent "remember" responses are smaller, relative to subsequent "know" responses (see figure to the right). Thus far, the results suggest that detailed memories are "easy" to encode. The results from this experiment, and one using Jacoby's process-dissociation procedure, are being prepared for a manuscript now.


Single-Cell Recordings

I am currently involved in two funded projects with Dr. Peter Steinmetz from the Barrow Neurological Institute (BNI) at St. Joseph's Hospital in Phoenix. Dr. Steinmetz works in an epilepsy unit where patients with medically intractable epilepsy can be surgically implanted with depth electrodes to localize the focal point of their seizures. Using modified microwires (see figure), researchers can record from single neurons during cognitive processing.

We currently have several ongoing projects in various stages of completion. In one project (that is nearly done), we have been testing format-specific memory for words using continuous recognition memory. In this experiment, patients see or hear each word twice, after lags of 1, 2, 4, 8, 16, or 32 intervening trials. Upon its second presentation, the word is in either its original form, or some change occurs (e.g., voice or font). The data currently indicate that memories are sensitive to repetitions of stimulus format, with no change over lags (see raster plot).
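For the curious, scheduling such a trial list is a small constraint-satisfaction problem. A minimal sketch of one way to do it (our own scheduling logic, assumed for illustration; it is not the actual experimental code):

```python
import random

def build_sequence(words, lags=(1, 2, 4, 8, 16, 32), n_extra=40):
    """Place each word twice so its second presentation follows the first
    after the assigned lag (= number of intervening trials)."""
    n_slots = len(words) * 2 + n_extra       # extra room eases placement
    while True:                              # restart if placement dead-ends
        slots = [None] * n_slots
        try:
            for word in words:
                lag = random.choice(lags)
                openings = [i for i in range(n_slots - lag - 1)
                            if slots[i] is None and slots[i + lag + 1] is None]
                first = random.choice(openings)   # IndexError if none left
                slots[first] = (word, "first")
                slots[first + lag + 1] = (word, "second")
            break
        except IndexError:
            continue
    # Unused slots become filler trials so the realized lags stay intact
    return [s if s is not None else ("filler", "filler") for s in slots]

trials = build_sequence([f"word{i:02d}" for i in range(12)])
```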

In another ongoing project, we are presenting different faces to patients. For example, in one study, patients are shown synthetic faces (as in the example set). As shown, the faces in each set "morph" from White to Black, while their emotional expressions are held constant. Our goal is to measure activity in the hippocampus and amygdala while people classify the expressions. The research is ongoing, but our question of interest is whether perceived race will interact with the brain's responses to different emotions. Stay tuned!


Stephen Goldinger

Contact Information:

  • Stephen D. Goldinger
  • Arizona State University
  • Department of Psychology
  • P.O. Box 871104
  • Tempe, AZ 85287-1104
  • E-mail: goldinger@asu.edu
  • Phone: 480-965-0127
  • Fax: 480-965-8544