Greta Tuckute

Email gretatu@mit.edu
GitHub gretatuckute
Scholar Greta Tuckute
Twitter @GretaTuckute
CV

Hi, I am Greta. Thank you for visiting my page. I am a PhD candidate at the Department of Brain and Cognitive Sciences at MIT working with Dr. Ev Fedorenko. I am very grateful to be supported by the Amazon Alexa Fellowship (from the Science Hub, administered by the MIT Schwarzman College of Computing) and the AAUW International Doctorate Fellowship. I completed my BSc and MSc degrees at KU/DTU with coursework and research at MIT/CALTECH/Hokkaido University. I work at the intersection of neuroscience, artificial intelligence (AI), and cognitive science. I am interested in how language is processed in the biological brain, and how representations learned by artificial systems compare to the ones learned by humans. I am also interested in how we can leverage insights about the brain within the field of AI. When I don’t do science, I enjoy photography, tennis, high altitudes, mornings, and magic realism books.


Below is a subset of updates on ongoing/recently finished projects and collaborations:

The scatter plot shows that DNN models where a speech-related task was part of the training regime match cortical speech-selective responses better than networks not trained on speech tasks.


Preprint on deep neural network models of the auditory system is out

September 2022 We released the preprint of our work on how deep neural networks (DNNs) for audio can account for brain responses in the human auditory cortex. This project is co-led with Jenelle Feather, and in collaboration with Dana Boebinger and Josh McDermott.

We evaluated brain-model correspondence for 19 DNNs (9 publicly available models and 10 models trained by us, spanning four tasks) on two fMRI datasets (n=8, n=20), using two different evaluation metrics (regression and representational similarity analysis, RSA). We make the following five main claims: 1) Most DNN models (but not all!) outperformed traditional models of the auditory cortex. Results were highly consistent between datasets and evaluation metrics. The overall best DNN model was trained on multiple tasks (word, speaker, and environmental sound recognition). 2) This brain-DNN similarity was strictly dependent on task optimization: DNNs with permuted weights (which destroys the structure learned during model training) performed below the baseline model. 3) Most DNNs exhibited systematic correspondence with the hierarchical organization of the auditory cortex, with earlier DNN stages best matching primary auditory cortex and later stages best matching non-primary cortex. This was not true for permuted networks. 4) The task a DNN model is trained on influences its match to the brain, with, e.g., speech-trained models best matching cortical speech responses (scatter plot on the right). 5) Finally, in light of recent discussion suggesting that the dimensionality of a model’s representation correlates with regression-based brain predictions, we evaluated how the effective dimensionality (ED) of each network stage correlated with both the regression and RSA metrics. There was a modest correlation between ED and brain-model similarity, but it was significantly weaker than the consistency between the two datasets or the two similarity measures. Thus, ED does not seem to explain most of the variance across DNNs in our datasets.
Overall, we demonstrate that many, but not all, DNN models account for responses in the human auditory cortex with hierarchical stage-region correspondence, and provide some hints of how to improve brain-model matches for future models.
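As a reference point for the ED analysis: one common way to operationalize effective dimensionality is the participation ratio of the eigenvalues of the activation covariance matrix. The NumPy sketch below is my illustration of that idea, not necessarily the exact estimator used in the paper:

```python
import numpy as np

def effective_dimensionality(activations):
    """Participation ratio of the eigenvalues of the activation
    covariance matrix: ED = (sum(lambda))^2 / sum(lambda^2).
    `activations` has shape (n_stimuli, n_units)."""
    centered = activations - activations.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)
    eigvals = np.clip(eigvals, 0, None)  # guard against tiny negative values
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

# A low-rank representation has low ED even with many units:
rng = np.random.default_rng(0)
low_rank = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 100))
print(effective_dimensionality(low_rank))  # close to 3, despite 100 units
```

The appeal of this measure is that it is cheap to compute per network stage, which is what makes the correlation with brain-model similarity easy to check.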

The preprint can be found here: Tuckute, G.*, Feather, J.*, Boebinger, D., McDermott, J. (2022): Many but not all deep neural network audio models capture brain responses and exhibit hierarchical region correspondence, bioRxiv 2022.09.06.506680; doi: https://doi.org/10.1101/2022.09.06.506680.

The LanA Language Atlas is published

August 2022 Our probabilistic language atlas, LanA, is now published in Nature Scientific Data and can be openly accessed here! We also have a website http://evlabwebapps.mit.edu/langatlas/ that contains easy access to data download, visualizations, and additional information.
In brief, the LanA language atlas provides the probability that any location in the brain (volume/surface) is language-selective. The atlas was derived from >800 individuals based on functional localization (a contrast between processing of sentences and a linguistically/acoustically degraded condition, such as non-word strings).

Citation: Benjamin Lipkin, Greta Tuckute, Josef Affourtit, Hannah Small, Zachary Mineroff, Hope Kean, Olessia Jouravlev, Lara Rakocevic, Brianna Pritchett, Matthew Siegelman, Caitlyn Hoeflin, Alvincé Pongos, Idan Blank, Melissa Kline Shruhl, Anna Ivanova, Steven Shannon, Aalok Sathe, Malte Hoffmann, Alfonso Nieto-Castañón, Evelina Fedorenko (2022): LanA (Language Atlas): Probabilistic atlas for the language network based on precision fMRI data from >800 individuals. Sci Data 9, 529; doi: https://doi.org/10.1038/s41597-022-01645-3.

The coordinate system shows some of the adversarial axes that we will focus on during our GAC workshop. Each dot is a speaker's opinion on a set of questions, e.g., to what extent we are still in the dark ages of neuroscience and more work is needed before we start collecting data at a grain that can be leveraged for building artificial neural network models of brain activity and behavior.

Conference on Cognitive Computational Neuroscience 2022

August 2022 Excited to take part in the Conference on Cognitive Computational Neuroscience (CCN) this year where I am co-organizing a Generative Adversarial Collaboration (GAC) as well as presenting a poster.

The GAC workshop takes place Friday August 26 (1.30-4.15pm PT) and aims to tackle how we can optimally use neuroscience data to guide the next generation of brain models. Current use of data is often limited to post-hoc model evaluation or vague ‘inspirations’ for model development. Here, we ask: Can we use neuroscience data more efficiently for model development? Is it even the right time in neuroscience to do this? How much data is enough? What type of data should we collect?
The GAC team (and speakers) include Ko Kar (York University, MIT), Joel Zylberberg (York University), SueYeon Chung (NYU), Alona Fyshe (University of Alberta), Ev Fedorenko (MIT), Konrad Kording (University of Pennsylvania), Nikolaus Kriegeskorte (Columbia University), Jacob Yates (UC Berkeley), and Kalanit Grill-Spector (Stanford University).
I will be giving a talk on how to optimize data collection for model development within language. Specifically, I will try to answer why many existing neuroscience datasets within language are not ideal for model development – and I will provide ideas for ways forward.

I will be presenting a poster on Friday August 26 (7.30-9.30pm PT) and our work (with Jenelle Feather*, Dana Boebinger, and Josh McDermott) is on how several auditory networks with diverse architectures trained for diverse tasks capture human brain responses to natural sounds. The poster will focus on how robust our findings are to the model evaluation metric of interest (regression versus representational similarity analysis) as well as how our findings might be affected by latent variables such as effective dimensionality of network activations.
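For readers unfamiliar with the two evaluation metrics contrasted on the poster, here is a minimal NumPy sketch of the RSA side of the comparison (an illustration under my own simplifying choices of dissimilarity and rank correlation, not the analysis code from the paper):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of every pair of stimuli.
    `patterns` has shape (n_stimuli, n_features)."""
    return 1 - np.corrcoef(patterns)

def rsa_score(model_patterns, brain_patterns):
    """Spearman correlation between the upper triangles of the model
    and brain RDMs (rank-based; assumes no ties, which is fine for
    continuous activations)."""
    iu = np.triu_indices(len(model_patterns), k=1)
    a, b = rdm(model_patterns)[iu], rdm(brain_patterns)[iu]
    rank = lambda x: x.argsort().argsort().astype(float)
    return np.corrcoef(rank(a), rank(b))[0, 1]

# Identical representational geometry gives a score of 1:
X = np.random.default_rng(0).normal(size=(20, 50))
print(rsa_score(X, X))  # → 1.0 (up to floating point)
```

Unlike regression, this comparison needs no fitted mapping between model units and voxels, which is exactly why agreement between the two metrics is a meaningful robustness check.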

We investigated why certain words are more memorable than others. For instance, number of synonyms (x-axis) correlates negatively with word recognition performance (y-axis): words with many synonyms are more forgettable, possibly because any of a word's synonyms could have generated the relevant meaning in semantic memory (see the preprint for 12 other predictors).

Intrinsically memorable words have unique associations with their meanings

July 2022 This project is the result of a big joint effort with Kyle Mahowald (co-lead), Phillip Isola, Aude Oliva, Edward Gibson, and Ev Fedorenko.

PINEAPPLE, LIGHT, HAPPY, AVALANCHE, BURDEN

Some of these words are consistently remembered better than others. Why is that? In this project, we provide a simple Bayesian account and show that it explains >80% of variance in word memorability.
Building on past work that suggested that words are encoded by their meanings, we hypothesize that words that uniquely pick out a meaning in semantic memory (i.e., unambiguous words with no/few synonyms) are more memorable. We evaluated our account in two behavioral experiments (each with >600 participants and 2,222 target words), similar to past work on image memorability. Participants viewed a sequence of words and pressed a button whenever they encountered a repeat (critical memory repeats occurred 91-109 words apart).
Key findings: 1) Words are as memorable as images. In our experiments, the hit rate was ~68% and the false alarm rate was ~10% which is on par with images (e.g., Isola et al., 2011 CVPR). There does not appear to be a memory advantage for images compared to words. 2) Certain words are consistently remembered better than others across participants – so although individuals differ in their exposure to the amount and kinds of linguistic information across their lifetimes, memorability is largely an intrinsic word property. 3) Critically, most memorable words have a one-to-one relationship with their meaning (such as PINEAPPLE or AVALANCHE). They uniquely pick out a particular meaning in semantic memory, in contrast to ambiguous words (e.g., LIGHT which could mean a fixture in a house, the opposite of heavy, cigarette lighter, etc.) or words with many synonyms (e.g., HAPPY with synonyms CHEERFUL, JOYFUL, GLAD, etc.). Number of synonyms was a more important predictor than number of meanings.
Given that our critical predictors (number of synonyms and meanings) can be estimated from language corpora, this simple account provides a scalable model that can make predictions about memorability of newly encountered words in any language where large corpora are available. Memorability can be used to answer cool questions about how the mind and brain prioritizes and organizes information during semantic memory encoding. Understanding which words lead to longer-lasting memory traces can be leveraged to enable more effective information sharing.
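As a side note on the behavioral measures: the hit and false-alarm rates reported above can be combined into a single sensitivity index using standard signal detection theory. A quick stdlib sketch (my illustration; the paper may report different summary statistics):

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Rates on the order of those in the word-memorability experiments
# (~68% hits, ~10% false alarms):
print(round(dprime(0.68, 0.10), 2))  # ≈ 1.75
```

A d' near zero would indicate chance performance; values well above one, as here, indicate that participants reliably distinguished repeats from new words.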

The preprint can be found here: Tuckute, G.*, Mahowald, K.*, Isola, P., Oliva, A., Gibson, E., Fedorenko, E. (2022). Intrinsically memorable words have unique associations with their meanings, PsyArXiv, doi: https://doi.org/10.31234/osf.io/p6kv9. (This is a revival of a project that got started back in 2011, and we are excited to share a new and improved version of the manuscript, along with the data and analysis scripts.)

We present SentSpace: a framework for streamlined evaluation of text using cognitively motivated linguistic features. This enables comparison of text from, e.g., artificial language models and humans, as demonstrated above.

SentSpace: Large-scale benchmarking and evaluation of text using cognitively motivated lexical, syntactic, and semantic features

June 2022 SentSpace would not exist without Aalok Sathe* (co-lead), Mingye (Christina) Wang, Harley Yoder, Cory Shain and Ev Fedorenko.
Imagine that you want to quantify a sentence using a large set of interpretable features. Maybe you are interested in features that relate to the sentiment of the sentence, or in features known to cause language processing difficulty (such as frequency or age of acquisition). With SentSpace, we introduce such a system: streamlined evaluation of any textual input. SentSpace characterizes textual input using diverse lexical, syntactic, and semantic features derived from corpora and psycholinguistic experiments. These features fall into two main domains (sentence spaces, hence the name): lexical and contextual. Lexical features operate on individual lexical items (words) and include features such as concreteness, age of acquisition, lexical decision latency, and contextual diversity. Because several properties of a sentence cannot be attributed to individual words, the contextual module quantifies the sentence as a whole, with features such as syntactic storage and integration cost, center embedding depth, and sentiment. Hence, SentSpace provides an interpretable sentence embedding with features that have been shown to affect language processing.
SentSpace allows for quantification and comparison of different types of text and can be useful for answering questions like: How does text generated by an artificial language model compare to that of humans? How do utterances produced by neurotypical individuals compare to those of individuals with communication disorders? What psycholinguistic information do high-dimensional vector representations from artificial language models capture?
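To make the lexical module concrete, here is a toy sketch of how a per-word norm can be aggregated into a sentence-level feature. The feature table and its values below are invented for illustration; this is not SentSpace's actual API:

```python
# Hypothetical miniature norm table; real values come from corpora and
# psycholinguistic experiments (these numbers are made up):
AGE_OF_ACQUISITION = {"the": 2.5, "cat": 3.1, "slept": 4.0}

def lexical_feature(sentence, norms, default=None):
    """Average a per-word norm over the words of a sentence,
    skipping words missing from the norm table."""
    values = [norms[w] for w in sentence.lower().split() if w in norms]
    return sum(values) / len(values) if values else default

print(lexical_feature("The cat slept", AGE_OF_ACQUISITION))  # mean of 2.5, 3.1, 4.0
```

Contextual features (e.g., syntactic integration cost) cannot be computed this way, since they depend on the sentence as a whole rather than on a per-word lookup; that is the motivation for the separate contextual module.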

Aalok and I will be demonstrating the current (first!) version of SentSpace at NAACL 2022 in Seattle July 10-15 (System Demonstration poster session July 12). We would love feedback, so please don't hesitate to reach out! The proceedings paper can be found here: Tuckute, G.*, Sathe, A., Wang, M., Yoder, H., Shain, C., and Fedorenko, E (2022). SentSpace: Large-Scale Benchmarking and Evaluation of Text using Cognitively Motivated Lexical, Syntactic, and Semantic Features. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations. Association for Computational Linguistics.
The SentSpace Python package can be accessed at sentspace.github.io/sentspace and the hosted frontend website at sentspace.github.io/hosted.

A surface map of the probabilistic language atlas (LanA). Lighter color indicates higher probability of that part of the brain being language-selective. Evidently, the language network falls (mostly) within the left-hemisphere frontal and temporal lobes.

LanA (Language Atlas): A probabilistic atlas for the language network based on fMRI data from >800 individuals

March 2022 This work is a massive effort (14yrs of data collection!) in collaboration with Benjamin Lipkin (lead), Ev Fedorenko, and a bunch of brilliant current/former lab members of EvLab.
Given any location in the brain, what is the probability of that particular location being selective for language? We present a probabilistic language atlas (LanA) that answers exactly this question: for any 3D pixel (voxel/vertex) in volumetric or surface brain coordinate space, how likely is that pixel to fall within the language network? The atlas was obtained from >800 individuals based on functional localization (a contrast between processing of sentences and a linguistically/acoustically degraded condition, such as non-word strings). Thus, across these ~800 individuals, we provide a group-average map that allows one to quantify and visualize where ‘the average’ language network resides.
Example use cases of LanA include: 1) a common reference frame for analyzing group-level activation peaks from past/future fMRI studies, 2) interpreting lesion locations in individual brains, 3) interpreting electrode locations in intracranial ECoG/SEEG investigations, 4) functional mapping during brain surgery when fMRI is not possible, and others (please see the paper introduction). The atlas will be made publicly available (along with individual contrast/significance maps and demographic data) by publication.
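Conceptually, a probabilistic atlas of this kind can be computed by binarizing each individual's localizer contrast map and averaging across individuals. A simplified NumPy sketch (the actual LanA pipeline involves per-subject thresholding choices not shown here):

```python
import numpy as np

def probabilistic_atlas(individual_maps, threshold):
    """Fraction of individuals whose thresholded, binarized
    language-localizer map includes each voxel.
    `individual_maps` has shape (n_subjects, n_voxels)."""
    binarized = (np.asarray(individual_maps) > threshold).astype(float)
    return binarized.mean(axis=0)  # per-voxel probability in [0, 1]

# Three toy subjects, three voxels of contrast values:
maps = np.array([[1.2, 0.1, 3.0],
                 [0.9, 0.2, 2.5],
                 [0.0, 0.3, 1.8]])
print(probabilistic_atlas(maps, threshold=0.5))  # → fractions [2/3, 0, 1]
```

The resulting per-voxel fractions are exactly the quantity a probabilistic atlas visualizes: how often a location is language-selective across the population.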

The preprint can be found here: Benjamin Lipkin, Greta Tuckute, Josef Affourtit, Hannah Small, Zachary Mineroff, Hope Kean, Olessia Jouravlev, Lara Rakocevic, Brianna Pritchett, Matthew Siegelman, Caitlyn Hoeflin, Alvincé Pongos, Idan Blank, Melissa Kline Shruhl, Anna Ivanova, Steven Shannon, Aalok Sathe, Malte Hoffmann, Alfonso Nieto-Castañón, Evelina Fedorenko (2022): LanA (Language Atlas): A probabilistic atlas for the language network based on fMRI data from >800 individuals. bioRxiv2022.03.06.483177; doi: https://doi.org/10.1101/2022.03.06.483177.

A surface map showing which layer of VGGish (a model trained for environmental sound classification) best predicts each vertex of the surface (aggregated over n=20 participants). Earlier layers of the model in green, later layers in red. The outline marks primary auditory cortex. A relationship between the model layer hierarchy and the auditory cortex is evident.

Hierarchical layer-region correspondence of deep neural networks for audition

February 2022 This work is a great collaboration with Jenelle Feather*, Dana Boebinger, and Josh McDermott.
An overarching aim of neuroscience is to build quantitatively accurate computational models of sensory systems. Deep neural networks provide such candidate models. To consider these neural networks as serious candidate models, they must at least 1) Perform a task that is relevant to the real world, 2) Be predictive of brain data, and 3) Be mappable (meaning that earlier layers of the network map onto earlier parts of the cortical hierarchy in the brain, and later layers onto later parts).
Such models are relatively well explored within vision (convolutional neural networks trained for image classification) (e.g., Yamins et al., 2014), but less explored in audition. Kell et al. (2018) showed that a particular neural network architecture was predictive of brain responses and had a degree of correspondence between model stages and brain regions. However, it was unclear whether these results generalize to other neural network models. In our work, we evaluated brain-model correspondence for publicly available audio neural network models along with in-house models trained on five different tasks. We used two independent datasets (Norman-Haignere et al., 2015, n=8; Boebinger et al., 2021, n=20) of participants listening to natural sounds in the fMRI scanner. Most tested models were more predictive of brain responses than traditional spectrotemporal models of auditory cortex and exhibited a systematic relationship between the model layer hierarchy and the cortical hierarchy in the human brain. However, this was not true for all tested models: not all state-of-the-art models were predictive or mappable. This work helps us understand which parameters are necessary to yield a computationally accurate model of the human auditory cortex and substantiates our knowledge of the hierarchical organization of the auditory cortex.
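The layer-to-region mapping logic behind the surface map above can be sketched as: regress each voxel's responses on each layer's activations and record which layer predicts best. The NumPy sketch below is illustrative only; the paper's analyses use cross-validated regularized regression rather than the in-sample fit shown here:

```python
import numpy as np

def best_layer_map(layer_activations, voxel_responses, alpha=1.0):
    """For each voxel, fit ridge regression from each model layer's
    activations (list of (n_stimuli, n_units) arrays) to the voxel
    responses (n_stimuli, n_voxels), and return the index of the
    best-predicting layer per voxel. Uses in-sample R^2 for brevity;
    real analyses would evaluate on held-out stimuli."""
    Y = voxel_responses - voxel_responses.mean(axis=0)
    r2 = np.zeros((len(layer_activations), Y.shape[1]))
    for i, X in enumerate(layer_activations):
        X = X - X.mean(axis=0)
        # closed-form ridge solution: W = (X'X + alpha*I)^(-1) X'Y
        W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
        resid = Y - X @ W
        r2[i] = 1 - resid.var(axis=0) / Y.var(axis=0)
    return r2.argmax(axis=0)  # per-voxel index of the best layer
```

Plotting these per-voxel best-layer indices on the cortical surface is what yields maps like the one shown for VGGish, with early layers dominating primary auditory cortex and later layers dominating non-primary regions.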

I will be discussing these findings and other aspects of the work at Cosyne 2022 in Lisbon, Portugal, March 17-20 (poster session 2).

The neural architecture of language: Integrative modeling converges on predictive processing

November 2021 Our paper on artificial neural networks (ANNs) as models of language comprehension is now out in PNAS, and it received some nice coverage, for instance by Scientific American. I want to emphasize two points from this paper: 1) We show that better-performing language models (based on next-word prediction) also match the brain better. Critically, this link did not hold for performance on other linguistic benchmarks (GLUE), suggesting that a drive to predict future inputs may shape human language processing. Thus, both the human language system and successful ANNs seem to be optimized for predictivity to efficiently extract meaning. 2) Model architecture alone (initialization weights, no training) can reliably predict brain activity, possibly suggesting that these untrained representational spaces already provide enough structure to constrain and predict a given input.
I think these two points open up multiple exciting research questions: Given that better-performing models are more brain-like, how can we engineer more brain-inspired models? Most state-of-the-art language models are inefficient (requiring billions of parameters and training samples, resulting in massive energy expenditure), not robust (they can be fooled by adversarial input), and not very interpretable (making it challenging to localize the causes of successes or unwanted capabilities). How can we exploit principles from the human brain that allow us to process language efficiently and robustly? Can we modularize or constrain language model representations using human data? In which scenarios do interpretability and performance go hand in hand? Lastly, which human and ANN benchmarks would be most meaningful for evaluating some of the aforementioned questions?

The paper can be found here: Schrimpf, M., Blank, I.*, Tuckute, G.*, Kauf, C.*, Hosseini, E. A., Kanwisher, N., Tenenbaum^, J., Fedorenko^, E (2021): The neural architecture of language: Integrative modeling converges on predictive processing, PNAS Vol. 118, Issue 45; doi: https://doi.org/10.1073/pnas.2105646118.

Can we use transformer models to drive language regions in the brain?

July 2021 I gave an informal 'poster' presentation at the Boston/Cambridge CogSci 2021 meet-up on exploiting transformer language models to drive regions in the human brain. I presented ideas and preliminary data on whether and how that is feasible, and if so, what we can learn from it. Thanks for the great discussions! This is ongoing work with Mingye Wang, Elizabeth Lee, Martin Schrimpf, Noga Zaslavsky, and Ev Fedorenko. More soon!

We investigated a woman living without her left temporal lobe, most likely as a result of pre/perinatal stroke.

Frontal language areas do not emerge in the absence of temporal language areas

May 2021 This work is a joint effort and brilliant collaboration with Alexander Paunov, Hope Kean, Hannah Small, Zachary Mineroff, Idan Blank, and Ev Fedorenko. High-level language processing is supported by a left-lateralized fronto-temporal brain network. In this work, we investigated whether frontal language areas emerge in the absence of temporal language areas. To do so, we examined language processing in the brain of an individual (EG) born without a left temporal lobe. Using fMRI, we established that EG's right-hemisphere language network is similar to the left-hemisphere language network in controls. The critical question, however, was whether EG's intact left lateral frontal lobe contained language-responsive areas. We found no reliable response to language in EG's intact left frontal lobe, suggesting that temporal language areas are a prerequisite for the emergence of language areas in the frontal lobe.

The paper can be found here: Tuckute, G., Paunov, A., Kean, H., Small, H., Mineroff, Z., Blank, I., and Fedorenko, E. (2021): Frontal language areas do not emerge in the absence of temporal language areas: A case study of an individual born without a left temporal lobe, bioRxiv 2021.05.28.446230; doi: https://doi.org/10.1101/2021.05.28.446230.

We link behavioral task performance to neural EEG states (effect only significant in the neurofeedback group and not controls).

Real-time decoding of visual attention using closed-loop EEG neurofeedback

March 2021 Happy to share that my MSc thesis work from DTU is now published (with Sofie T. Hansen, Troels W. Kjaer and Lars K. Hansen).
Neurofeedback is a powerful tool for linking neural states to behavior. In this project, we asked i) whether we can decode covert states of visual attention using a closed-loop EEG system, and ii) whether a single neurofeedback training session can improve sustained attention abilities. We implemented an attention training paradigm designed by deBettencourt et al. (2015) in EEG. In a double-blinded design, we trained twenty-two participants on the attention paradigm within a single neurofeedback session, with behavioral pretraining and posttraining sessions.
We demonstrate that we are able to decode covert visual attention in real time. First, we report a mean classifier decoding error rate of 34.3% (chance = 50%). Second, we link this decoding performance to behavioral states: within the neurofeedback group, there was a greater level of task-relevant attentional information decoded in the participant's brain before a correct behavioral response than before an incorrect response (not evident in the control group; interaction p=7.23e−4). This indicates that we achieved a meaningful measure of subjective attentional state in real time, and that this measure related to participants' behavior during the neurofeedback session. Lastly, we do not find conclusive evidence that a single neurofeedback session per se provides lasting effects on sustained attention abilities.

The paper can be found here: Tuckute, G., Hansen, S.T., Kjaer, Troels W., Hansen, L. K. (2021): Real-Time Decoding of Attentional States Using Closed-Loop EEG Neurofeedback, Neural Computation Vol. 33, Issue 4; doi: https://doi.org/10.1162/neco_a_01363.
A video of the neurofeedback system is available here. The code and sample data for the neurofeedback framework are available on GitHub.

Correlation between connectivity among brain networks and phantom limb sensation. We show that individuals with a low degree of phantom sensation (i.e., low neuroprosthetic controllability) have strong connectivity between visual and sensorimotor networks, possibly as a compensatory mechanism.

Biological closed-loop feedback preserves proprioceptive sensorimotor signaling

December 2020 This work is a great collaboration with Shriya Srinivasan (lead), Jasmine Zou, Samantha Gutierrez-Arango, Hyungeun Song, Robert L. Barry, and Hugh Herr.
The brain undergoes marked changes in function after limb loss and amputation. In this work, we investigate individuals with a traditional lower-limb amputation, individuals without amputation, and individuals who underwent a novel amputation procedure that preserves physiological central-peripheral signaling mechanisms. We demonstrate that the proprioceptive signaling enabled by the novel amputation procedure restores sensorimotor feedback in the brain. We also investigate changes in functional connectivity and show that the lack of proprioceptive feedback results in strong coupling between visual and sensorimotor networks, suggesting a heavy reliance on visual information when no sensory feedback is available, possibly as a compensatory mechanism. In conclusion, we demonstrate that closed-loop proprioceptive feedback can enable desired neuroplastic changes toward improved neuroprosthetic capability.

The paper can be found here: Srinivasan, S. S., Tuckute, G., Zou, J., Gutierrez-Arango, S., Song, H., Barry, R. L., Herr, H (2020): AMI Amputation Preserves Proprioceptive Sensorimotor Neurophysiology, Science Translational Medicine, Vol. 12, Issue 573, doi: 10.1126/scitranslmed.abc5926.

ANNs as models of language processing in the brain

October 2020 I gave a workshop talk at the Center for Cognitive and Behavioral Brain Imaging (CCBBI) at The Ohio State University on artificial neural networks as models of language processing. Part of the talk was based on the work by Schrimpf et al., 2020, while another part focused on methodological considerations in comparing neural network models to brain representations. The talk can be found on OnNeuro.

Linguistic and Conceptual Processing are Dissociated During Sentence Comprehension

September 2020 This work is a great collaboration with Cory Shain, Idan A. Blank, Mingye Wang, and Ev Fedorenko.
The human mind stores a vast array of linguistic knowledge, including word meanings, word frequencies, and co-occurrence patterns, as well as syntactic constructions. These different kinds of knowledge have to be efficiently accessed during incremental language comprehension. In this work, we ask how dissociable the memory stores and processing mechanisms for these different types of knowledge are. Moreover, do different types of knowledge representation and processing rely on language-specific networks in the human brain, domain-general networks, or both? To address these questions, we used representational similarity analysis (RSA) to relate linguistic knowledge and processing to neural data.

I will be presenting this ongoing work (poster) at SNL 2020 in October. Poster Session: A, Board #: 29, Wednesday, October 21, 12:00 pm PDT.

Left panel: Methodology of brain to ANN comparisons. Right panel: Brain predictivity correlates with computational accounts of predictive processing (next-word prediction).

Artificial neural networks accurately predict language processing in the brain

July 2020 This work is a great collaboration with Martin Schrimpf (lead), Idan A. Blank, Carina Kauf, Eghbal A. Hosseini, supervised by Nancy Kanwisher, Josh Tenenbaum and Ev Fedorenko.
In recent years, great progress has been made in modeling sensory systems with artificial neural networks (ANNs) to provide mechanistic accounts of brain processing. In this work, we investigate whether we can exploit ANNs to inform us about higher-level cognitive functions in the human brain – specifically, language processing. Here, we ask which language models best capture human neural (fMRI/ECoG) and behavioral responses. Moreover, we investigate how this links to computational accounts of predictive processing. Lastly, we examine the contribution of intrinsic model architecture to brain predictivity. We tested 43 state-of-the-art language models spanning a diverse set of embedding, recurrent, and transformer models. In brief, certain transformer families (GPT2) demonstrated consistently high predictivity across all neural datasets investigated. These models' performance on neural data correlates with language modeling performance (next-word prediction), but not with performance on other General Language Understanding Evaluation (GLUE) benchmarks, suggesting that a drive to predict future inputs may shape human language processing. Thus, both the human language system and successful ANNs seem to be optimized for predictivity to efficiently extract meaning. Lastly, model architecture alone (random weights, no training) can reliably predict brain activity, possibly suggesting that these untrained representational spaces already provide enough structure to constrain and predict a given input, analogous to evolutionary optimization.

The pre-print can be found here: Schrimpf, M., Blank, I., Tuckute, G., Kauf, C., Hosseini, E. A., Kanwisher, N., Tenenbaum, J., Fedorenko, E (2020): Artificial Neural Networks Accurately Predict Language Processing in the Brain, bioRxiv 2020.06.26.174482; doi: https://doi.org/10.1101/2020.06.26.174482.

Martin Schrimpf will also be presenting this work (slide) at SNL 2020 in October (SNL 2020 Merit Award Honorable Mention).