We are offering an AI seminar series on NeuroAI and GenAI throughout 2024!
PMAG Nexus Series: Interpreting foundation models of the brain [RSVP]
Foundation models in AI are models trained on vast amounts of data, enabling applications across multiple domains. A brain foundation model is inspired by the same concept: it is trained on vast amounts of… you guessed it, brain data!
Join us for a tour of the mind as we explore the most interesting aspects of the brain!
Inspired by recent breakthroughs in artificial intelligence, where foundation models—trained on vast quantities of data—have demonstrated remarkable capabilities and generalization, we developed a “foundation model” of the mouse visual cortex: a deep neural network trained on large amounts of neuronal responses to ecological videos from multiple visual cortical areas and mice.
— Towards a Foundation Model of the Mouse Visual Cortex
Speaker:
Prof. Xaq Pitkow, Assistant Professor of Neuroscience, Baylor College of Medicine; Assistant Professor of Electrical and Computer Engineering, Rice University; Associate Director, NSF AI Institute for Artificial and Natural Intelligence.
Bio:
Prof. Xaq Pitkow is a computational neuroscientist who develops mathematical theories of the brain and general principles of intelligent systems. He focuses on how distributed nonlinear neural computation uses statistical reasoning to guide action in naturalistic tasks.
Abstract:
We build state-of-the-art predictive models of visual responses in the mouse brain, exposing richer feature preferences than conventional models. We can then perform unlimited experiments on these models to find Most Exciting Inputs (MEIs). We show these MEIs back to the brain and find that, indeed, for most neurons they evoke greater responses than any other stimuli we tried. We call this method “inception” (after the movie of the same name) because it implants a desired response (or “idea”) into the brain. We also identify ensembles of stimuli that all evoke high responses (Diverse Exciting Inputs, or DEIs), revealing invariances in neural tuning that we again validate in the brain. Analyzing these invariances, we discover image features that are informative about properties such as object boundaries and relative depth, which are interpretable causal features of behavioral relevance. These approaches illustrate how we can discover ways the brain might perform scene analysis using nonlinear combinations of sensory inputs that relate statistically to causal variables. I will discuss how analyzing the joint statistics of inputs, neural activity, and behavior can help us understand behaviorally relevant neural computations.
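For attendees new to MEIs, the core idea is simple: once a differentiable predictive model of neural responses is trained, the stimulus itself can be optimized by gradient ascent to maximize a chosen model neuron's predicted response. Below is a minimal, hypothetical PyTorch sketch of that loop; the `model` interface, its output shape, the image dimensions, and the norm constraint are all illustrative assumptions, not the speaker's actual pipeline.

```python
# Minimal sketch of MEI synthesis via gradient ascent on the input,
# assuming a frozen differentiable response model `model` that maps a
# batch of images to per-neuron predicted firing rates (batch, neurons).
import torch

def synthesize_mei(model, neuron_idx, image_shape=(1, 1, 36, 64),
                   steps=200, lr=0.1, norm_budget=10.0):
    """Optimize an image to maximize one model neuron's predicted response."""
    img = torch.randn(image_shape, requires_grad=True)  # random initialization
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        response = model(img)[0, neuron_idx]  # predicted rate of target neuron
        (-response).backward()                # ascend by minimizing the negative
        opt.step()
        with torch.no_grad():                 # project back into a fixed-norm ball
            n = img.norm()
            if n > norm_budget:
                img.mul_(norm_budget / n)
    return img.detach()
```

DEIs extend this idea from a single optimized image to an ensemble: a set of images is optimized so that each evokes a high predicted response while the set stays mutually dissimilar, exposing the invariances in a neuron's tuning that the talk describes.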