Visar Berisha

Associate Professor

ASU Fulton Entrepreneurial Professor
Electrical, Computer, and Energy Engineering
Speech and Hearing Science
Arizona State University
(480) 727 - 6455
visar ((at)) asu ((dot)) edu


About Me

I joined Arizona State University in Fall 2013 with a joint appointment in the School of Electrical, Computer and Energy Engineering and the College of Health Solutions. At its core, my work is interdisciplinary and use-inspired; it lies at the intersection of engineering and human communication and is driven by a desire to use technology to improve the human condition. Our recent focus has been on understanding how various neurological conditions impact human behavior. With a focus on speech, we have developed new machine learning and statistical signal processing tools for digitally capturing behavioral changes, and translated them from the lab to the clinic.

Making progress toward these goals requires an interdisciplinary approach. Our contributions to this effort center on developing computational models guided by collected behavioral data. If you have any interest in these topics, please feel free to contact me!

Feel free to browse my publications for the latest developments and new results.

In the news

Listen to my NPR interview on our work.

Read our op-ed in the Wall Street Journal on changes in Muhammad Ali's speech.

Check out our editorial in Slate on using the cell phone for early detection of neurological disease.

ESPN's Outside the Lines did a story on our Interspeech 2017 paper on Muhammad Ali's speech changes as an early indicator of his Parkinson's syndrome diagnosis.

Our 2017 Brain and Language paper on declining language complexity in NFL players as a potential pre-clinical biomarker for CTE was featured in the Science section of the New York Times.

Results from a paper we published in the Journal of Alzheimer's Disease were featured in the Science section of the New York Times.

Recent News

We are organizing the first-ever Signal Analytics for Motor Speech workshop in Santa Barbara on Feb 19, 2020, the day before the Motor Speech Conference. We expected ~30 attendees; more than 120 people have already confirmed! We hope to see you there too.

Weizhi Li will present our paper on better regularization for neural networks at the AISTATS 2020 conference this year. Here is a link to the paper. This is joint work with Gautam Dasarathy.

The NIH has funded our project on using clinical and neuroimaging data to develop predictive models for persistent headache (joint work with Jing Li, Cat Chong, and Teresa Wu).

Come say hi to us at ICASSP. Our lab is presenting work on hypernasality detection in speech, tremor detection in voice, and the reliability of word embeddings.

The NIH funded our project on objective measures of articulation and hypernasality in speech from children with cleft-lip/palate. This is work in collaboration with Profs. Nancy Scherer and Julie Liss.

Dr. Ming Tu successfully defended his PhD and is now a Speech Research Scientist at JD Research. Congrats, Ming!

Our lab is presenting 3 papers at the 2018 Interspeech conference. Stop by and say hello.

Our paper with Liss and Dorman was awarded the 2016 Editor's Award from the Journal of Speech, Language, and Hearing Research.

Read our latest paper in the Journal of Headache and Pain on differences between post-traumatic headache and migraine. This was collaborative work with Mayo Clinic.

Berisha was selected as an ASU Fulton Entrepreneurial Professor (2017 - 2019).

We have a new collaboration with Boehringer Ingelheim to develop speech-based pre-clinical indicators of psychosis.

Berisha was selected as a 2017 Mayo Clinic Alliance Fellow!

Check out our new pre-print on arXiv on direct estimation of functionals of distributions.

The NIH has funded an R01 to extend our objective model for the perception of dysarthric speech. This is joint work with Julie M. Liss.

The NIH has funded our R21 project on developing a computational model of conversational entrainment in clinical populations. The PI of the project is Stephanie Borrie at Utah State University.

Our group will present two papers at ICASSP 2017 this year. Stop by and say hi.

The Kern Center for the Science of Healthcare Delivery at Mayo Clinic has funded a new project on using speech changes as predictors of migraine onset.

Google has funded a new project in our lab to develop tools for passive monitoring of motoric abilities using speech and motion data.

We will be presenting two papers at the Interspeech 2016 conference this year: Modeling influence in group conversations and Accent classification using a combination of recurrent and deep neural networks.

Raytheon Co. has funded a new project in our lab to use information theory and statistical signal processing to reduce the complexity of deep networks.

Aural Analytics, a startup company I co-founded, was featured in a recent USA Today article on wearable technology.