Visar Berisha

Assistant Professor

ASU Fulton Entrepreneurial Professor
Electrical, Computer, and Energy Engineering
Speech and Hearing Science
Arizona State University
(480) 727-6455
visar ((at)) asu ((dot)) edu


About Me

I joined Arizona State University in Fall 2013 as an Assistant Professor with a joint appointment in the School of Electrical, Computer and Energy Engineering and the Department of Speech and Hearing Science. The overarching goal of the research in my lab is to develop and apply new machine learning and statistical signal processing tools to better understand and model signal perception. With a focus on speech, the aim is to develop reliable, data-driven models that mimic aspects of human cognition. Recent projects include developing non-parametric signal processing methods and using them to model the perception of pathological speech, developing auditory perception models based on psychoacoustics, and developing new machine learning tools for use with behavioral experiments.

Making progress toward these goals requires an interdisciplinary approach that spans multiple fields. My contributions to this effort center on developing computational models guided by behavioral data. If you have any interest in these topics, please feel free to contact me!

Feel free to browse my publications for the latest developments and new results.

PhD and Postdoc Positions Available. Click to apply!

In the News

Listen to my NPR interview on our work.

Read our op-ed in the Wall Street Journal on changes in Muhammad Ali's speech.

Check out our editorial in Slate on using cell phones for early detection of neurological disease.

ESPN's Outside the Lines did a story on our Interspeech 2017 paper on changes in Muhammad Ali's speech as an early indicator of his Parkinson's syndrome diagnosis.

Our 2017 Brain and Language paper on declining language complexity in NFL players as a potential pre-clinical biomarker for CTE was featured in the Science section of the New York Times.

Results from a paper we published in the Journal of Alzheimer's Disease were featured in the Science section of the New York Times.

Recent News

Our paper with Liss and Dorman was awarded the 2016 Editor's Award from the Journal of Speech, Language, and Hearing Research.

Read our latest paper in the Journal of Headache and Pain on differences between post-traumatic headache and migraine. This was collaborative work with Mayo Clinic.

Berisha was selected as an ASU Fulton Entrepreneurial Professor (2017-2019).

We have a new collaboration with Boehringer Ingelheim to develop speech-based pre-clinical indicators of psychosis.

Berisha was selected as a 2017 Mayo Clinic Alliance Fellow!

Check out our new preprint on arXiv on direct estimation of functionals of distributions.

The NIH has funded an R01 to extend our objective model for the perception of dysarthric speech. This is joint work with Julie M. Liss.

The NIH has funded our R21 project on developing a computational model of conversational entrainment in clinical populations. The PI of the project is Stephanie Borrie at Utah State University.

Our group will present two papers at ICASSP 2017 this year. Stop by and say hi.

The Kern Center for the Science of Healthcare Delivery at Mayo Clinic has funded a new project on using speech changes as predictors of migraine onset.

Google has funded a new project in our lab to develop tools for passive monitoring of motoric abilities using speech and motion data.

We will be presenting two papers at the Interspeech 2016 conference this year: "Modeling influence in group conversations" and "Accent classification using a combination of recurrent and deep neural networks."

Raytheon Co. has funded a new project in our lab to use information theory and statistical signal processing to reduce the complexity of deep networks.

Aural Analytics, a startup company I co-founded, was featured in a recent USA Today article on wearable technology.