
Lost in Translation

by Sloane Goldberg

art by Jasmine Zhang

Why does my brain draw on Spanish words when I'm trying to speak Hebrew? Why does my father remember the Japanese he learned in college instead of the Italian he's learning now? After the third time I answered a question in Hebrew class with the Spanish “sí,” I realized something must be happening in my brain. It wasn’t that I forgot the Hebrew word for yes (it’s “ken,” if you’re wondering), or that I didn’t understand the question. What happened was that when I opened my mouth to speak in Hebrew, a mixture of the Italian and Spanish that I had been learning for the past fifteen years came out instead. The same thing had to be happening to my father, who often ended Italian sentences with the Japanese “onegaishimasu,” meaning “please,” despite it bearing no resemblance to the Italian “per favore.” Something strange was happening to both of us within our brains’ neural pathways, conflating languages and jumbling our speech.

To answer this vague question of “what is happening in my brain,” we must first investigate how humans learn language. The process of second and third language acquisition later in life is notably different from the way we learn a language, or languages, as children [1]. But why? The main reason is that babies are born with an innate ability to identify phonemes, the units of sound that make up all languages [1]. In English, “th,” “sh,” and “l” are all examples of phonemes. Part of what allows someone to speak a language fluently is the lack of an accent: that is, producing the phonemes of a language the way a native speaker would [1]. Sounds like the rolled “r” in Spanish, the scratchy “h” produced in the back of the throat in Hebrew, or the aspirated consonants in English (think the aspirated “p” in “pit” versus the unaspirated “p” in “spit”) are all examples of phonemes that are harder to learn later in life.

After six months, babies begin a natural process called synaptic pruning, in which weaker synapses, or junctions between nerve cells, are removed and the resources are used to strengthen more frequently used synapses [2]. The process, which continues into adulthood, is evolutionarily favorable; it increases the brain’s capacity to carry information, and more specifically, to carry relevant information [2]. For example, a baby born in the United States does not need the brain capacity to differentiate the tones of Mandarin but instead the capacity to learn and remember English words, so pruning unused or rarely used synapses makes sense. However, this process is not conducive to second language learning later in life. By one year of age, infants have a greater ability to identify the phonemes that make up their own language, but have lost much of their ability to recognize phonemes that haven’t surrounded them during their first year of life [1]. This phenomenon is present in babies from both bilingual and monolingual households: there is an increased ability to identify the phonemes of languages spoken around them in their first few months of life, but a decreased ability to identify other, foreign, phonemes. This process has been described as transitioning from a “citizen of the world” to a “native language specialist” [1].

This theory that babies have the potential to recognize any sound spoken around them was strengthened by a study conducted in 2003 [3]. Researchers exposed one group of 9-month-olds from English-speaking households to a tutor speaking Mandarin Chinese, while a second group was exposed only to tutors speaking English. After 12 sessions, the group exposed to the Mandarin tutor showed an increased ability to differentiate between English and Mandarin sounds when compared to the control group [3].

But what does language learning look like after this critical age window has passed? Perhaps one of the most interesting things about second language acquisition is that it depends so heavily on the structure of the first language one acquires [4]. This theory, called cross-linguistic influence, states that languages aren’t easy or hard to learn because of their innate sentence structure. Instead, their difficulty stems from how the learner’s second language (L2) compares to the learner’s first, or native, language (L1). Linguists disagree on how to properly categorize how “difficult” a language is; looking at the alphabet used, the length of the words, the tones or lack thereof, or the grammatical cases only tells a fraction of the story. The only constant in language complexity is that the perceived “difficulty” depends heavily on the L1 in question [4]. One 2019 study analyzed languages by morphological complexity in order to determine their relative difficulty [5]. Morphological complexity is determined by looking at the number of building blocks a language uses to compose meaningful words. A word like “cat” has one morpheme, while a word like “unhappiness” has three morphemes: the prefix “un” meaning not, the root word “happy,” and the suffix “ness” [4]. The researchers found that learners whose L1 was morphologically simpler than their L2 had a harder time learning the new language [5]. Specifically, when participants learned Dutch, a morphologically complex language, those who spoke languages of comparable complexity to Dutch found it easier to learn than those whose first languages were considered less complex [5]. On the whole, English is a less morphologically complex language, so L1 English speakers tend to struggle more with learning Dutch than, say, a Czech speaker [5].
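The idea of counting morphemes can be made concrete with a toy sketch in Python. Everything here is invented for illustration: the tiny affix lists and the `count_morphemes` helper are not a real morphological analyzer, which would also have to handle spelling changes (stripping “ness” from “happiness” leaves “happi,” not “happy”) and many other complications.

```python
# Toy morpheme counter: strip known prefixes and suffixes from a word
# and count each stripped affix plus the remaining root as one morpheme.
# The affix lists below are a tiny invented sample, for illustration only.
PREFIXES = ["un", "re", "dis"]
SUFFIXES = ["ness", "ing", "ed"]

def count_morphemes(word):
    count = 1  # the root itself counts as one morpheme
    changed = True
    while changed:
        changed = False
        for p in PREFIXES:
            # length guard keeps us from stripping an "affix" that is
            # really just part of a short root
            if word.startswith(p) and len(word) > len(p) + 2:
                word = word[len(p):]
                count += 1
                changed = True
        for s in SUFFIXES:
            if word.endswith(s) and len(word) > len(s) + 2:
                word = word[:-len(s)]
                count += 1
                changed = True
    return count
```

On this toy scheme, “cat” yields 1 morpheme, while “unhappiness” yields 3 (“un” + root + “ness”), mirroring the decomposition described above.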

This concept makes sense: if you are used to saying “I don’t want to do anything,” then the Spanish double negation “no quiero hacer nada” or the Hebrew “ani lo rotzah lasot shum davar” (I no want to do nothing) might be a hard adjustment to make. However, the more surprising element of second language acquisition is that the L2 can work backwards, leaving lasting marks on the speaker’s native language, regardless of the age of acquisition [6]. In a 2007 study, the way adult bilingual English and Spanish speakers interpreted the same complex sentence differed based on their circumstances [7]. The sentence, “El policía arrestó a la hermana del criado que estaba enferma desde hacía tiempo,” translates in English to “The police arrested the sister of the young man who was ill for some time.” Native Spanish speakers who primarily lived in Spanish-speaking environments identified the sister as the one who was sick. However, native Spanish speakers who primarily lived in English-speaking environments identified the young man as the one who was sick. These findings indicated that L1 syntax, or the way words and phrases are arranged to make meaningful sentences, is an example of brain plasticity. The syntax learned from birth is easily shaped by new experiences or a changing environment. Thus, grammar and sentence structure are not neurally inherent; rather, they are a product of one’s environment [7]. This process, called parameter resetting, operates independently of the learner’s conscious control [8]. The subjects in the 2007 study could not explain why they interpreted the sentence in that manner; they just did [8].

One theory as to why L1 and L2 can influence each other is that learning syntax is a form of implicit, not explicit, memory [8]. The best way to see this is to engage a child in conversation. We grow accustomed to correcting the grammatical errors children make by misapplying English rules: it’s moose, not meese, even though goose becomes geese. By the time a child is old enough to speak fluently, they’ll understand these differences and use them consistently in language production. But when you ask a child why moose stays moose while goose becomes geese and mouse becomes mice, they won’t be able to explain why, just that it is. This is the definition of implicit memory: things we know, but don’t know how we know [8]. Second language acquisition draws on a completely different learning process. Aside from learning through complete immersion, adults learn a second language through conscious, explicit learning [9].

Another study from 2019 expands on this idea of language learning as an implicit process, stating that because adults learn languages explicitly, they struggle to automatize the language the way early learners do [9]. This difference between implicit and explicit language learning leads to key differences in the development of neural networks, as identified in the study. Researchers split learners into three groups: early bilinguals who learned both L1 and L2 sequentially before the age of 6, late bilinguals who learned L2 after the age of 6, and finally very early bilinguals who learned both L1 and L2 simultaneously before the age of 6 [9].

In the early bilinguals, functional magnetic resonance imaging (fMRI) showed increased blood flow (which neuroscientists use as a proxy for neural activity in specific brain regions) to a highly complicated network on the left side of the brain [9]. This supports the theory that language production and understanding are generally housed in the left brain [9, 10]. Yet the systems activated tell us far more about how the brain sees language: as a task requiring the constant modulation of input from different brain areas. Among the systems activated were the inferior parietal lobe, which has been linked to attention and language processing, as well as the fusiform area [9]. The fusiform area, in the context of language, helps differentiate written words from non-meaningful scribbles. Finally, on the left side, the study also mentioned the dorsolateral prefrontal cortex, responsible for higher-level cortical tasks like learning, attention, and memory [9, 11, 12]. Higher-level cortical tasks are more complex because they use different regions of the brain and require that those regions communicate with each other. For comparison, a simple neuronal task like withdrawing your hand from a hot stove has an automatic response that originates in the spinal cord and requires less cognitive input [13]. The right side of the brain showed blood flow in the cerebellum, which in recent years has been implicated in understanding grammar, producing fluent speech, and correcting mistakes in language [9, 14]. Bilaterally, the study showed blood flow in the higher auditory cortex, the posterior-medial frontal gyri, and the insulae, which help coordinate information transmission across many brain areas [9].

Late bilinguals showed activation in the fusiform area as well, and in the superior parietal lobe, which is responsible for higher-order working memory and attention tasks [9]. There was also activation in the middle temporal gyrus, which has been implicated in semantic control, the ability to access and differentiate meaningful information, specifically in the context of language. In the right hemisphere, the study showed blood flow in the angular gyrus (also relevant to semantic control), the cerebellum, and the bilateral middle occipital cortex, which is part of the vision processing system. Both early and late bilinguals activated Broca’s area, best known for its role in language production and connecting thoughts to fluent speech [9, 15].

Most interestingly, very early bilinguals showed globally less activation [9]. The fMRI showed some activity in the left medial temporal lobe and in both hemispheres of the cerebellum, but nothing compared to the widespread networks implicated in early and late bilinguals [9]. This finding supports the initial idea that very early bilinguals learn language implicitly; the lack of a complex network shows that switching between languages does not fall under the category of a high-level cortical task for people who have been bilingual since birth, or close to it.

So, what does this mean? These differences likely reflect the shift between implicit and explicit language learning. The very early bilinguals showed little functional brain activation in this study, except in the cerebellum, which facilitates bilateral brain communication [9]. The early and late bilinguals, however, activated a more complex network. The middle temporal gyrus and the auditory cortex play a role in distinguishing between phonemes and sounds, while the fusiform area distinguishes words from other images. The parietal lobe is relevant to working memory and is likely active in bilinguals trying to recall words or structure [9].

Most relevant to the original question of what was happening in my brain when learning Hebrew, the study showed dorsolateral prefrontal cortex (DPFC) activation, which is usually linked to high-level cognitive tasks and processing [9]. In this case, the DPFC works to promote switching between languages and to inhibit the language not in use [8]. So why do I mix up languages that have seemingly nothing in common? The answer appears to be that my DPFC is not correctly inhibiting the language not in use. A 2014 study backs up this concept, stating that in a bilingual brain, both languages are simultaneously activated, regardless of the language being spoken [6]. One study showed that bilinguals had greater difficulty recognizing interlingual homographs (words that exist in two languages but with different meanings, like “piano,” which means “floor” in Italian), but were faster to recognize cognates (words with the same meaning in both languages, like “hospital,” which means the same thing in Spanish and English) [16]. Monolinguals do not show the same behavior, indicating that bilinguals have different recognition speeds because both languages are simultaneously active in their brains [6].

A similar study in 2003 replicated the effect by testing listening comprehension [17]. The study asked Russian-English bilinguals to listen to English instructions and pick up the corresponding item. Researchers then examined how their eyes moved in response to the spoken instructions. It took longer for bilinguals to pick up an object that had a competing, phonologically similar object in the non-target language than objects that had no competitor, further supporting the idea of parallel activation [6, 17]. For example, when instructed to move the speaker, the set of objects in front of the subject contained both a speaker and spichki (the Russian word for matches). Russian-English bilinguals looked back and forth between the speaker and the unrelated matches, showing both languages were active at the same time, while English monolinguals did not [17].

Another piece of evidence that fits the dual-language activation theory is a 2012 study showing that bilinguals have strengthened neural systems for dealing with cognitive conflicts [18]. This study implicated the anterior cingulate cortex (ACC), which usually handles conflicting tasks. Because bilinguals must constantly avoid using words from the other language that express the same concept (unlike me with my jumbled “yesses”), they showed higher proficiency with unrelated high-level cognitive tasks. Consistent with the researchers’ hypothesis, the study showed that increased gray matter in the ACC, reflecting increased use of its neural networks, leads to lower levels of neural conflict in bilinguals. By constantly inhibiting one language in different situations, bilinguals strengthen neural networks in the ACC and DPFC, making language switching almost unconscious [18].

Yet for me, someone who is still learning and doesn’t switch between English and Hebrew on a daily basis, these networks are not as well developed, allowing for frequent slip-ups. The jumbled mess of words in my brain, from two completely distinct language families and regions of the world, isn’t a sign of poor learning but a sign that my brain is trying. The neurons in my brain are firing, and despite the occasional lapses, they have managed to inhibit one language so I can speak another. When you truly think about how my English-speaking brain has the potential to speak in a completely different language, it really is an incredible thing.


1. Ramirez, N. F., & Kuhl, P. (2017). The Brain Science of Bilingualism. YC Young Children, 72(2), 38–44.

2. Faust, T., Gunner, G., & Schafer, D. P. (2021). Mechanisms governing activity-dependent synaptic pruning in the mammalian CNS. Nature reviews. Neuroscience, 22(11), 657–673.

3. Kuhl, P. K., Tsao, F.-M., & Liu, H.-M. (2003). Foreign-language experience in infancy: effects of short-term exposure and social interaction on phonetic learning. Proceedings of the National Academy of Sciences of the United States of America, 100(15), 9096–9101.

4. Kuiken, F. (2023). Linguistic complexity in second language acquisition. Linguistics Vanguard, 9(s1), 83–93.

5. van der Slik, F., van Hout, R., & Schepens, J. (2019). The role of morphological complexity in predicting the learnability of an additional language: The case of La (additional language) Dutch. Second Language Research, 35(1), 47–70.

6. Kroll, J. F., Bobb, S. C., & Hoshino, N. (2014). Two Languages in Mind: Bilingualism as a Tool to Investigate Language, Cognition, and the Brain. Current Directions in Psychological Science, 23(3), 159–163.

7. Dussias, P. E., & Sagarra, N. (2007). The effect of exposure on syntactic parsing in Spanish-English bilinguals. Bilingualism, 10(1), 101–116.

8. Sanz, C., & Leow, R. P. (2010). Implicit and Explicit Language Learning: Conditions, Processes, and Knowledge in SLA and Bilingualism. Washington, DC: Georgetown University Press.

9. Cargnelutti, E., Tomasino, B., & Fabbro, F. (2019). Language Brain Representation in Bilinguals With Different Age of Appropriation and Proficiency of the Second Language: A Meta-Analysis of Functional Imaging Studies. Frontiers in Human Neuroscience, 13.

10. Güntürkün, O., Ströckens, F., & Ocklenburg, S. (2020). Brain Lateralization: A Comparative Perspective. Physiological Reviews, 100(3), 1019–1063.

11. Bzdok, D., Hartwigsen, G., Reid, A., Laird, A. R., Fox, P. T., & Eickhoff, S. B. (2016). Left inferior parietal lobe engagement in social cognition and language. Neuroscience and biobehavioral reviews, 68, 319–334.

12. Sturm, V. E., Haase, C. M., & Levenson, R. W. (2016). Emotional Dysfunction in Psychopathology and Neuropathology. In Genomics, Circuits, and Pathways in Clinical Neuropsychiatry (pp. 345–364). Elsevier.

13. Moini, J., & Piran, P. (2020). Cerebral cortex. In Functional and Clinical Neuroanatomy (pp. 177–240). Elsevier.

14. Starowicz-Filip, A., Chrobak, A. A., Moskała, M., Krzyżewski, R. M., Kwinta, B., Kwiatkowski, S., … Zielińska, D. (2017). The role of the cerebellum in the regulation of language functions. Psychiatria Polska, 51(4), 661–671.

15. Stinnett, T. J., Reddy, V., & Zabel, M. K. (2023). Neuroanatomy, Broca Area. In StatPearls. Treasure Island (FL): StatPearls Publishing.

16. Dijkstra, T., Grainger, J., & Van Heuven, W. J. B. (1999). Recognition of Cognates and Interlingual Homographs: The Neglected Role of Phonology. Journal of Memory and Language, 41(4), 496–518.

17. Marian, V., & Spivey, M. (2003). Competing activation in bilingual language processing: Within- and between-language competition. Bilingualism, 6(2), 97–115.

18. Abutalebi, J., Della Rosa, P. A., Green, D. W., Hernandez, M., Scifo, P., Keim, R., … Costa, A. (2012). Bilingualism Tunes the Anterior Cingulate Cortex for Conflict Monitoring. Cerebral Cortex, 22(9), 2076–2086.
