New Models of Human Hearing via Machine Learning

Date: May 26, 2022
Location: Psychology 1312
Josh McDermott, Massachusetts Institute of Technology

Description

Josh McDermott is Associate Professor in the Department of Brain and Cognitive Sciences and Director of the Laboratory for Computational Audition at the Massachusetts Institute of Technology. After finishing a BA in Brain and Cognitive Science at Harvard University, Dr. McDermott studied at the newly formed Gatsby Computational Neuroscience Unit in London, where he earned an MPhil in Computational Neuroscience. He returned to the US for a PhD in Brain and Cognitive Sciences from MIT. He did postdoctoral training in psychoacoustics at the University of Minnesota and in computational neuroscience at NYU, and in 2013 he joined the Department of Brain and Cognitive Sciences at MIT. Dr. McDermott’s research investigates auditory phenomena at the intersection of psychology, neuroscience, and engineering. His long-term goals are to understand how humans derive information from sound, to improve treatments for those whose hearing is impaired, and to enable the design of machine systems that mirror human abilities to interpret sound. His awards and honors include the NSF CAREER Award (2015), the APAN Young Investigator Award (2017), and the Troland Research Award from the National Academy of Sciences (2018).

Abstract

Humans derive an enormous amount of information about the world from sound. This talk will describe our recent efforts to leverage contemporary machine learning to build neural network models of our auditory abilities and their instantiation in the brain. Such models have enabled a qualitative step forward in our ability to account for real-world auditory behavior and to illuminate function within the auditory cortex. But they also exhibit substantial discrepancies from human perception that we are currently trying to understand and eliminate.