Research Directions

My work focuses on understanding how language is processed in biological hardware (the human brain), how the resulting representations and algorithms compare to those learned by artificial systems (neural network language models), and how to develop biologically plausible, principled artificial systems that perform linguistic computations more like humans do. See below for how my work fits into these three broad categories:

Investigating language processing in humans

This line of work investigates the cognitive and neural mechanisms that enable humans to comprehend and produce language. Some of these studies leverage traditional methods from cognitive science and neuroscience, while others leverage artificial neural network models to ask questions about language that were previously out of reach.

  • Tuckute, G., Sathe, A., Srikant, S., Taliaferro, M., Wang, M., Schrimpf, M., Kay, K., Fedorenko, E. (2024): Driving and suppressing the human language network using large language models, Nature Human Behaviour 8, doi: https://doi.org/10.1038/s41562-023-01783-7.

  • Tuckute*, G., Lee*, E.J., Sathe, A., Fedorenko, E. (2024): A 3.5-minute-long reading-based fMRI localizer for the language network, bioRxiv, doi: https://doi.org/10.1101/2024.07.02.601683.

  • Tuckute, G., Paunov, A., Kean, H., Small, H., Mineroff, Z., Blank, I.A., Fedorenko, E. (2022): Frontal language areas do not emerge in the absence of temporal language areas: A case study of an individual born without a left temporal lobe, Neuropsychologia 169, doi: https://doi.org/10.1016/j.neuropsychologia.2022.108184.

  • Tuckute*, G., Mahowald*, K., Isola, P., Oliva, A., Gibson, E., Fedorenko, E. (2022): Intrinsically memorable words have unique associations with their meanings, PsyArXiv, doi: https://doi.org/10.31234/osf.io/p6kv9.

  • Lipkin, B., Tuckute, G., Affourtit, J., Small, H., Mineroff, Z., Kean, H., Jouravlev, O., Rakocevic, L., Pritchett, B., Siegelman, M., Hoeflin, C., Pongos, A., Blank, I.A., Struhl, M.K., Ivanova, A., Shannon, S., Sathe, A., Hoffmann, M., Nieto-Castañón, A., Fedorenko, E. (2022): Probabilistic atlas for the language network based on precision fMRI data from >800 individuals, Scientific Data 9(1), doi: https://doi.org/10.1038/s41597-022-01645-3.

Investigating how language representations and computations in humans compare to those in artificial models

This line of work investigates the extent to which we, as humans, share representations and computational principles with artificial neural network models, despite the two having emerged in completely different ways.

  • Tuckute, G., Kanwisher, N., Fedorenko, E. (2024): Language in Brains, Minds, and Machines, Annual Review of Neuroscience 47, doi: https://doi.org/10.1146/annurev-neuro-120623-101142.

  • AlKhamissi, B., Tuckute, G., Bosselut^, A., Schrimpf^, M. (2024): Brain-Like Language Processing via a Shallow Untrained Multihead Attention Network, arXiv, doi: https://doi.org/10.48550/arXiv.2406.15109.

  • Tuckute, G., Finzi, D., Margalit, E., Zylberberg, J., Chung, S.Y., Fyshe, A., Fedorenko, E., Kriegeskorte, N., Yates, J., Grill-Spector, K., Kar, K. (2024): How to optimize neuroscience data utilization and experiment design for advancing primate visual and linguistic brain models?, arXiv, doi: https://doi.org/10.48550/arXiv.2401.03376.

  • Tucker*, M., Tuckute*, G. (2023): Increasing Brain-LLM Alignment via Information-Theoretic Compression, 37th Conference on Neural Information Processing Systems (NeurIPS 2023), UniReps Workshop, url: https://openreview.net/forum?id=WcfVyzzJOS.

  • Tuckute*, G., Feather*, J., Boebinger, D., McDermott, J. (2023): Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions, PLoS Biology 21(12), doi: https://doi.org/10.1371/journal.pbio.3002366.

  • Kauf*, C., Tuckute*, G., Levy, R., Andreas, J., Fedorenko, E. (2023): Lexical semantic content, not syntactic structure, is the main contributor to ANN-brain similarity of fMRI responses in the language network, Neurobiology of Language 5(1), doi: https://doi.org/10.1162/nol_a_00116.

  • Schrimpf, M., Blank, I.*, Tuckute, G.*, Kauf, C.*, Hosseini, E. A., Kanwisher, N., Tenenbaum^, J., Fedorenko^, E. (2021): The neural architecture of language: Integrative modeling converges on predictive processing, PNAS 118(45), doi: https://doi.org/10.1073/pnas.2105646118.

Development of biologically plausible, principled artificial models that perform linguistic computations more like humans

This line of work uses insights from human language processing to develop more biologically plausible neural network models, as well as tools to compare representations or outputs between humans and neural networks (note that this is a relatively new direction, so most of these projects are in preparation or under review).

  • BinHuraib, T., Tuckute, G., Blauch, N.M. (2024): Topoformer: brain-like topographic organization in Transformer language models through spatial querying and reweighting, International Conference on Learning Representations (ICLR 2024), Re-Align Workshop, url: https://openreview.net/forum?id=3pLMzgoZSA.

  • Wolf, L., Tuckute, G., Kotar, K., Hosseini, E., Regev, E., Wilcox, E., Warstadt, A. (2023): WhisBERT: Multimodal Text-Audio Language Modeling on 100M Words, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), CoNLL-CMCL Shared Task BabyLM Challenge, doi: https://doi.org/10.48550/arXiv.2312.02931.

  • Tuckute*, G., Sathe*, A., Wang, M., Yoder, H., Shain, C., Fedorenko, E. (2022): SentSpace: Large-scale benchmarking and evaluation of text using cognitively motivated lexical, syntactic, and semantic features, Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations, pages 99–113. Association for Computational Linguistics (ACL), url: https://aclanthology.org/2022.naacl-demo.11.

Earlier Research

I was introduced to the fields of neuroscience and artificial intelligence through the domain of vision, where I worked on decoding semantic features from EEG signatures (Tuckute et al., 2019: Single Trial Decoding of Scalp EEG Under Natural Conditions) and on decoding attentional states using real-time EEG neurofeedback (Tuckute et al., 2021: Real-Time Decoding of Attentional States Using Closed-Loop EEG Neurofeedback).

Before that, in late high school, I was fascinated by quantum physics. I worked on one project on quantum tunneling in Bose-Einstein condensates and another on the sequential storage and readout of laser light in a diamond for quantum relays (supervised by Dr. Jacob Broe and Dr. Klaus Moelmer). I was a finalist in two national research competitions: "The Junior Researcher's Project" by the University of Copenhagen (December 2012) and the "Young Researchers" competition by Danish Science Factory (April 2013).