Version 1.2
Lecture: Can mechanistic interpretability help us understand the brain?

Recent studies show that LLMs can predict patterns of brain activity recorded while people read or listen to language. Similar findings have been reported for vision and multimodal models.
But here’s the catch: we still don’t really know which parts of these models make them “brain-like.” Meanwhile, the field of Mechanistic Interpretability has made some advances in reverse-engineering LLMs to figure out how they process different tasks internally.
In my thesis, I broadly aim to explore how these two areas, cognitive neuroscience and mechanistic interpretability, can benefit from each other. I identify circuits inside LLMs that are responsible for different NLP tasks, such as sentiment analysis or gender agreement, and then test whether and how manipulating these circuits changes the model's alignment with different brain regions. If a causal relationship between these interventions and brain alignment can be established, it could help us formulate more fine-grained hypotheses about language processing in the brain and its alignment with these models.
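To make the setup concrete, here is a minimal toy sketch, assuming synthetic arrays in place of real model activations and fMRI recordings: brain alignment is scored with a standard cross-validated ridge-regression encoding model (scikit-learn), and "ablating a circuit" is approximated by zeroing a subset of feature columns. The names brain_alignment and circuit_dims, and all numbers, are hypothetical stand-ins; an actual intervention would patch or ablate components inside the model itself rather than its exported features.

# Toy sketch: compare "brain alignment" of model activations before and after
# ablating a hypothetical circuit, using a cross-validated ridge encoding model.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Stand-ins for real data: rows = stimuli (e.g. sentences), columns = features/voxels.
n_stim, n_feat, n_voxels = 200, 64, 30
X = rng.standard_normal((n_stim, n_feat))             # model activations (synthetic)
W = rng.standard_normal((n_feat, n_voxels))           # unknown "true" mapping
Y = X @ W + rng.standard_normal((n_stim, n_voxels))   # simulated fMRI responses

circuit_dims = np.arange(0, 16)  # hypothetical components of an identified circuit

def brain_alignment(features, responses, n_splits=5):
    """Mean held-out voxel-wise correlation between ridge predictions and responses."""
    scores = []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in kf.split(features):
        model = Ridge(alpha=1.0).fit(features[train], responses[train])
        pred = model.predict(features[test])
        r = [np.corrcoef(pred[:, v], responses[test][:, v])[0, 1]
             for v in range(responses.shape[1])]
        scores.append(np.mean(r))
    return float(np.mean(scores))

# "Ablate" the circuit by zeroing its feature columns (a crude proxy for
# activation patching / head ablation inside the model).
X_ablated = X.copy()
X_ablated[:, circuit_dims] = 0.0

print("alignment (intact)  :", round(brain_alignment(X, Y), 3))
print("alignment (ablated) :", round(brain_alignment(X_ablated, Y), 3))

The interesting quantity in practice is the drop in alignment for specific brain regions after a task-specific circuit is removed, which is what the causal analysis in the thesis targets.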
In this talk, I'll give a quick tour of the field, share some (very small!) results of my work, and outline the steps I plan to take next.
Info
Day: 2025-11-14
Start time: 11:30
Duration: 00:30
Room: M2.31
Track: Computational Linguistics
Language: en
Speakers
Nursulu Sagimbayeva