Neural Circuit Theory - Sage Center and Mind and Machine Intelligence Co-Sponsored Lecture

Date: January 28, 2026
Location: Psychology 1312
Speaker: David Heeger, New York University

Description

David J. Heeger is a Silver Professor and Professor of Psychology, Neural Science, and Data Science at New York University. His research spans an interdisciplinary cross-section of neuroscience, psychology, and AI. He was elected to the National Academy of Sciences in 2013. Heeger is Chief Scientific Officer of Statespace Labs, a tech startup that develops neuroscience-based video games to optimize human performance. He is co-founder and Chief Scientific Officer of Epistemic AI, which powers discovery in biomedical R&D. He is also co-founder and Chief Scientific Officer of The Sequel Institute, a commercial research lab developing technology based on neuroscience research.


Abstract:

Neuroscience needs a theory of neural circuit function, like Maxwell's equations for the brain, to predict outcomes precisely and quantitatively, and to enable the development of an applied science and engineering of neurotechnology. There is ample evidence that the brain relies on a set of canonical neural microcircuits and computations, repeated across brain regions and modalities. In addition, a neural circuit theory has the potential to transform ML/AI: the human brain operates on 10-20 watts, whereas state-of-the-art AI technology is headed toward relying on gigawatt power plants to train models, a huge disconnect that is ripe for disruption.

I will describe such a theory, called ORGaNICs (Oscillatory Recurrent Gated Neural Integrator Circuits). ORGaNICs are a class of recurrent neural networks (RNNs), analogous in some ways to the Long Short-Term Memory units (LSTMs) and Transformer models that are widely used in machine learning, but with critical advantages. The theory is a system of stochastic, nonlinear differential equations that express the responses of a population of neurons as dynamical processes evolving over time in a recurrent circuit, commensurate with the highly recurrent structure of neural circuits in the brain. The theory is built on two a priori constraints: (i) a hierarchy of brain areas with recurrent, feedforward, and feedback connections, and (ii) recurrently implemented normalization within each cortical area. The equations of the theory follow from these two constraints. Emergent properties of the theory, some of which are derived analytically, recapitulate a wide range (and perhaps the full range) of brain functions across different neural systems and sensorimotor and cognitive processes.

I will also report preliminary results from applying the theory to machine learning.
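Constraint (ii), normalization implemented through recurrent dynamics, can be illustrated with a toy simulation. The sketch below is not the ORGaNICs equations themselves; it is a minimal hypothetical recurrent circuit (all function names and parameter values are illustrative) whose steady state matches Heeger-style divisive normalization: each response equals its squared input drive divided by a constant plus the pooled activity of the population.

```python
import numpy as np

def recurrent_normalization(z, sigma=1.0, tau=10.0, dt=0.1, steps=5000):
    """Euler-integrate a toy recurrent circuit whose fixed point satisfies
    y_j * (sigma^2 + sum_k y_k) = z_j^2  (divisive normalization).

    Illustrative sketch only, not the ORGaNICs model itself.
    """
    y = np.zeros_like(z, dtype=float)       # population responses
    for _ in range(steps):
        pool = sigma**2 + y.sum()           # recurrent pooled inhibition
        dy = (z**2 - y * pool) / tau        # input drive minus gated response
        y = y + dt * dy                     # forward Euler step
    return y

z = np.array([1.0, 2.0, 3.0])               # input drives to three neurons
y = recurrent_normalization(z)
print(y)
```

At the fixed point, each response is y_j = z_j^2 / (sigma^2 + S), where the pooled activity S solves S(sigma^2 + S) = sum_k z_k^2, so the relative responses track the squared drives (y[1]/y[0] ≈ 4, y[2]/y[0] ≈ 9) while the shared pool divisively suppresses overall gain.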

This lecture will take place in Psychology 1312 at 2 pm.