Electrical, Computer, and Energy Engineering
Speech and Hearing Science
Arizona State University
(480) 727 - 6455
visar ((at)) asu ((dot)) edu
I joined Arizona State University in Fall 2013 as an Assistant Professor with a joint appointment in the School of Electrical, Computer and Energy Engineering and the Department of Speech and Hearing Science. The overarching goal of research conducted in my lab is to develop and apply new machine learning and statistical signal processing tools to better understand and model signal perception. With a focus on speech, the goal is to develop reliable, data-driven models that can mimic aspects of human cognition. Some recent projects include developing non-parametric signal processing methods and using them to model pathological speech perception, developing auditory perception models based on psychoacoustics, and developing new machine learning tools for use with behavioral experiments.
Making progress toward these goals requires an interdisciplinary approach that spans multiple fields. My contributions to this effort are centered on developing computational models guided by collected behavioral data. If you have any interest in these topics, please feel free to contact me!
Feel free to browse my publications for the latest developments and new results.
Listen to my NPR interview on our work.
Results from a paper we published in the Journal of Alzheimer's Disease were featured in the Science section of the New York Times.
Check out our new preprint on arXiv on direct estimation of functionals of distributions.
The NIH has funded an R01 to extend our objective model for the perception of dysarthric speech. This is joint work with Julie M. Liss.
The NIH has funded our R21 project on developing a computational model of conversational entrainment in clinical populations. The PI of the project is Stephanie Borrie at Utah State University.
Our group will present two papers at ICASSP 2017 this year. Stop by and say hi.
The Kern Center for the Science of Healthcare Delivery at Mayo Clinic has funded a new project on using speech changes as predictors of migraine onset.
Google has funded a new project in our lab to develop tools for passive monitoring of motoric abilities using speech and motion data.
We will be presenting two papers at the Interspeech 2016 conference this year: "Modeling influence in group conversations" and "Accent classification using a combination of recurrent and deep neural networks."
Raytheon Co. has funded a new project in our lab to use information theory and statistical signal processing to reduce the complexity of deep networks.
Aural Analytics, a startup company I co-founded, was featured in a recent USA Today article on wearable technology.