Speech interfaces are becoming increasingly widespread. Yet, our spoken interactions with dialogue systems, smartphones, and other devices are not as natural as conversing with another human. Computers lack the ability to sense the affective cues that enable humans to converse efficiently.
I develop technologies for speech interfaces that analyze the acoustics and intonation of a person's speech to augment traditional speech recognition, essentially teaching computers to attend not only to the words a person says but also to how those words are said.
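As a rough illustration, the sketch below (in Python, assuming the librosa library and a hypothetical recording named utterance.wav) computes simple pitch and energy features of the kind such systems might consider alongside a recognizer's word output. It is a minimal example under those assumptions, not a description of any specific system of mine.

```python
# Illustrative sketch: simple prosodic features (pitch and energy) for one
# utterance. Assumes librosa is installed; "utterance.wav" is hypothetical.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=None)

# Frame-level fundamental frequency (intonation contour); unvoiced frames are NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# Frame-level loudness via root-mean-square energy.
rms = librosa.feature.rms(y=y)[0]

# Utterance-level summaries of the kind often used as affective cues.
features = {
    "pitch_mean_hz": float(np.nanmean(f0)),
    "pitch_std_hz": float(np.nanstd(f0)),
    "pitch_range_hz": float(np.nanmax(f0) - np.nanmin(f0)),
    "energy_mean": float(rms.mean()),
    "energy_std": float(rms.std()),
    "voiced_fraction": float(np.mean(voiced_flag)),
}
print(features)
```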
This work can be applied to a wide range of systems. I work primarily on educational technology, such as adaptive, personalized intelligent tutoring systems. Other applications include understanding user intent in virtual assistants (e.g., Siri), hands-free communication (e.g., in a car), and communication with robots.
Interested in joining our Speech and Natural Language Processing Research Group? Please see my note to prospective students.
Students:
Nichola Lubold (PhD)
Arun Reddy Nelakurthi (MS/PhD)
Terrance Williams (Honors Thesis)
Office Hours: Wednesdays from 2:30-3:30 PM or by appointment.
Office Location: BYENG M1-39, on the Mezzanine level of the Brickyard (directions).