Greta Tuckute

Email gretatu@mit.edu
GitHub gretatuckute
Scholar Greta Tuckute
Twitter @GretaTuckute
CV

Hi, I am Greta. Thank you for visiting my page. I am a PhD candidate in the Department of Brain and Cognitive Sciences at MIT, working with Dr. Evelina Fedorenko. I completed my BSc and MSc degrees in Molecular Biomedicine at KU/DTU (neuroscience/computer science focus), with coursework and research at MIT/Caltech/Hokkaido University. I work at the intersection of neuroscience, artificial intelligence, and cognitive science. I am passionate about semantic processing and how the representations learned by artificial systems compare to the ones learned by humans – specifically, in the domain of language. I also like thinking about approaches for neural control, geometric manifolds, memory representations, and high-bandwidth recordings from the human brain. When I am not doing science, I enjoy photography, high altitudes, mornings, writing toolboxes, and magic realism books.


Below is a subset of updates on ongoing projects and collaborations:

ANNs as Models of Language Processing in the Brain

October 2020 I gave a workshop talk at the Center for Cognitive and Behavioral Brain Imaging (CCBBI) at The Ohio State University on artificial neural networks as models of language processing. Part of the talk was based on the work by Schrimpf et al. (2020), while another part focused on methodological considerations in comparing neural network models to brain representations. The talk can be found on OnNeuro.

Linguistic and conceptual processing are dissociated during sentence comprehension

September 2020 This work is a great collaboration with Cory Shain, Idan A. Blank, Mingye Wang, and Ev Fedorenko. The human mind stores a vast array of linguistic knowledge, including word meanings, word frequencies, co-occurrence patterns, and syntactic constructions. These different kinds of knowledge have to be accessed efficiently during incremental language comprehension. In this work, we ask how dissociable the memory stores and processing mechanisms for these different kinds of knowledge are. Moreover, do the different knowledge representations, and the processes that operate over them, rely on language-specific networks in the human brain, domain-general networks, or both? To address these questions, we used representational similarity analysis (RSA) to relate measures of linguistic knowledge and processing to the neural data.
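For intuition, here is a minimal RSA sketch in Python. This is not our actual analysis pipeline: the sentence counts, feature sets, and variable names below are illustrative assumptions. The idea is to build a representational dissimilarity matrix (RDM) for each measure and then correlate the upper triangles of the two RDMs.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: 50 sentences described by 10 linguistic features
# (e.g., frequency and syntactic measures) and by the responses of 200 voxels.
linguistic_features = rng.standard_normal((50, 10))
voxel_responses = rng.standard_normal((50, 200))

def rdm(data, metric="correlation"):
    # Representational dissimilarity matrix: pairwise distances between items.
    return squareform(pdist(data, metric=metric))

def rsa_score(rdm_a, rdm_b):
    # Spearman correlation between the upper triangles of two RDMs.
    iu = np.triu_indices_from(rdm_a, k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho

print(rsa_score(rdm(linguistic_features), rdm(voxel_responses)))

With real data, the feature RDM would come from the linguistic measures of interest and the neural RDM from the fMRI responses in a given functional network.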

I will be presenting this ongoing work (poster) at SNL 2020 in October: Poster Session A, Board 29, Wednesday, October 21, 12:00 pm PDT.

Figure: Left panel: methodology of brain-to-ANN comparisons. Right panel: brain predictivity correlates with computational accounts of predictive processing (next-word prediction).

Artificial Neural Networks Accurately Predict Language Processing in the Brain

July 2020 This work is a great collaboration with Martin Schrimpf (lead), Idan A. Blank, Carina Kauf, and Eghbal A. Hosseini, supervised by Nancy Kanwisher, Josh Tenenbaum, and Ev Fedorenko. In recent years, great progress has been made in modeling sensory systems with artificial neural networks (ANNs) to provide mechanistic accounts of brain processing. In this work, we investigate whether ANNs can also inform us about higher-level cognitive functions in the human brain – specifically, language processing. We ask which language models best capture human neural (fMRI/ECoG) and behavioral responses, how this relates to computational accounts of predictive processing, and how much intrinsic model architecture alone contributes to brain predictivity.

We tested 43 state-of-the-art language models spanning embedding, recurrent, and transformer architectures. In brief, certain transformer families (GPT-2) show consistently high predictivity across all neural datasets investigated. These models' performance on neural data correlates with language modeling performance (next-word prediction) – but not with the other benchmarks in the General Language Understanding Evaluation (GLUE) suite – suggesting that a drive to predict upcoming inputs may shape human language processing. Thus, both the human language system and successful ANNs appear to be optimized for prediction in the service of efficiently extracting meaning. Lastly, model architecture alone (random weights, no training) can reliably predict brain activity, possibly suggesting that these untrained representational spaces already provide enough structure to constrain and predict a given input, analogous to evolution-driven optimization.
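As a rough illustration of the general "brain predictivity" recipe (not the exact pipeline used in the paper), the sketch below regresses simulated voxel responses onto simulated model activations with cross-validated ridge regression and scores the held-out predictions; all shapes, names, and hyperparameters are assumptions for illustration.

import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Hypothetical data: language-model activations for 200 sentences (768 dims)
# and fMRI responses of 100 voxels to the same sentences.
model_activations = rng.standard_normal((200, 768))
voxel_responses = (model_activations @ rng.standard_normal((768, 100))
                   + rng.standard_normal((200, 100)))

def brain_predictivity(features, neural, n_splits=5, alpha=1.0):
    # Cross-validated correlation between predicted and observed voxel
    # responses, averaged across voxels and folds.
    scores = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(features):
        predicted = Ridge(alpha=alpha).fit(features[train], neural[train]).predict(features[test])
        scores.append(np.mean([pearsonr(predicted[:, v], neural[test][:, v])[0]
                               for v in range(neural.shape[1])]))
    return float(np.mean(scores))

print(brain_predictivity(model_activations, voxel_responses))

Repeating this for activations from different models and layers yields per-model predictivity scores that can then be compared against next-word-prediction performance.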

The pre-print can be found here: Schrimpf, M., Blank, I., Tuckute, G., Kauf, C., Hosseini, E. A., Kanwisher, N., Tenenbaum, J., & Fedorenko, E. (2020). Artificial Neural Networks Accurately Predict Language Processing in the Brain. bioRxiv 2020.06.26.174482; doi: https://doi.org/10.1101/2020.06.26.174482.

Martin Schrimpf will also be presenting this work (slide) at SNL 2020 in October (SNL 2020 Merit Award Honorable Mention).