Neural Logic Re-Wired
by Kevin Rostam
art by Leslie Yao
As we venture through time, we are each accompanied by a neurobiological masterpiece weighing only three pounds that allows us to absorb information and perceive the world: the brain. While human beings are unique in a multitude of aspects, we are all remarkably similar in the neural processes that characterize our ability to voluntarily move our muscles, process visual information, and experience emotions. The only way such capabilities can be common to all humans is through a set of fundamental principles governing our brains' function, a sort of neural code shared between us all. While this philosophy has long been the backbone of scientific research, the onset of the 21st century ushered in a renaissance: increasingly accessible computational machines equipped neuroscientists across the globe with a new approach to investigating the mechanisms that allow the brain to guide us through life. Approaches to understanding such biological phenomena include the fields of "systems biology," "computational biology," and "theoretical neuroscience." While these fields differ in their focus and methods, their synergistic end goal is one: to understand the neural logic boards running through our brains.
Inception and Continuity
The field of "computational neuroscience," a subdiscipline of neuroscience employing mathematical models and computer simulations to understand the dynamics of the nervous system, was established by Eric L. Schwartz, who organized the field's first conference in 1985; later that same year, the world's first doctoral program in the discipline was established at the California Institute of Technology [3, 4]. Fast forward to the present, and countless academic institutions offer doctoral study in the derivatives of neural computation. The field's impact in augmenting our understanding of the brain has become apparent in recent years, as research groups increasingly collaborate with mathematical experts to make sense of probabilistic phenomena and large datasets. The field currently regards the human brain as a probabilistic machine that makes assumptions about real-world data inputs to enable our perception of reality. While traditional approaches to biological research will always remain relevant, it comes as no surprise that the next generation of neuroscientists will be expected to demonstrate fluency in computation.
The Bayesian Approach to the Brain
The reality is that neuroscientists, while having made remarkable progress, are still working toward an all-encompassing explanation of brain function. What distinguishes the physiological processes of the nervous system from other biological systems is the inherent unpredictability, or stochasticity, of the many variables governing any neural function. To account for this unpredictability, computationalists employ Bayesian statistics, a method of predicting outcomes based on prior occurrences, to construct a mathematical forecast of future events. Applied to neuroscience, the approach is tasked with predicting the brain's cognitive abilities, such as recognition of objects or acquisition of knowledge, from statistical probabilities. The first major application of Bayesian theory to modeling brain function emerged in 1983 from Fahlman, Hinton, and Sejnowski. The trio proposed that brains are akin to machines in making decisions about uncertainties of the external world, much like an algorithm makes decisions in spite of unknown variables. One example is a model developed by Dayan and Hinton known as the Helmholtz machine. This model is a variant of a statistical learning method known as an artificial neural network, which can accurately reconstruct an original dataset while identifying its hidden elements and patterns. Imagine your wardrobe such that the articles of clothing you own constitute the dataset. An artificial neural network would take in your array of clothing as the data inputs and then recognize your preferences for certain styles or colors, revealing the patterns in the dataset. The network would then proceed to generate a new wardrobe that closely resembles your actual wardrobe. Thus, the Helmholtz machine pairs a "recognition network," which infers the hidden causes behind the incoming data, with a "generative network," which uses those inferred causes to reproduce the original dataset with high accuracy.
The intersection between artificial neural networks and Bayesian theory is that the generation of values for unknown variables comes from patterns learned from prior datasets.
Let us consider another real-world example to understand the Bayesian brain through the neural computations involved in vision. Imagine yourself at one end of a football field while someone at the opposite end holds up either an apple or an orange, and your task is to announce the identification made by your brain. If the photoreceptors in the back of your eye detect the color as orange, then, based on prior experiences of oranges being orange, your brain will identify the fruit as an orange. From this classification of color, your brain may further assume the unknown variables of shape and texture to be round and bumpy, despite not being able to directly visualize these characteristics. This notion that the brain registers quantities of information and then makes learned judgments about unknowns based on prior experience underlies much of the approach of computational neuroscience.
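The fruit scenario can be written out as a literal application of Bayes' rule. The sketch below is an illustration only: the prior and likelihood numbers are invented, not measured values.

```python
# Bayes' rule applied to the football-field fruit example.
# All probabilities here are invented for illustration.

priors = {"apple": 0.5, "orange": 0.5}  # P(fruit) before any evidence arrives

# P(perceived color is orange | fruit): oranges are almost always orange,
# while a few apples show an orange hue at a distance.
likelihood_orange_color = {"apple": 0.05, "orange": 0.95}

def posterior(likelihood, priors):
    """Return P(fruit | evidence) by normalizing likelihood x prior."""
    unnorm = {f: likelihood[f] * priors[f] for f in priors}
    total = sum(unnorm.values())
    return {f: p / total for f, p in unnorm.items()}

post = posterior(likelihood_orange_color, priors)
print(post)  # the brain's "announcement": orange, with 95% confidence
```

The posterior then licenses the further guesses about shape and texture: having settled on "orange," the brain assumes the characteristics that historically accompany that label.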
Single-Neuron Modeling
Hodgkin, Huxley, and Katz's landmark paper in 1952 demonstrated the dynamics of the "action potential," which occurs when a neuron sends an electrical signal down its axon. Building on the foundation laid by Cajal and his successors, who established that neurons can be organized as functional units, this development showed that neurons perform biological "computations" by responding and reacting to messages from neighboring neurons. In understanding neuronal communication, it is important to differentiate between the filament-like extensions known as axons and dendrites. An axon can be understood as a single long extension responsible for "output," whereas dendrites are more numerous and are responsible for receiving "input." Consistent with this, a neuron transmits an electrical signal down its axon and receives communications through its dendrites, which interface with other neurons' axons. In response to receiving such electrical signals, neurons open their voltage-sensitive ion channels and propagate the signal forward through further action potentials, relaying neuronal messages to higher-order brain centers. It is important to note that action potentials follow the "all-or-none" law: once a certain threshold is surpassed, a neuron fires with an amplitude and velocity independent of the stimulus that created it. Think of reaching the threshold as a binary outcome in terms of 1s and 0s, firing or not firing. What stimulus intensity does modulate is the rate at which a neuron fires: while the amplitude and velocity of each individual action potential remain unaltered, the frequency of repeated action potentials scales with the strength of the stimulus. This property of firing frequency has been implicated in memory formation, a topic discussed later.
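The pairing of the all-or-none law with rate coding can be sketched as a bare-bones integrate-and-fire-style neuron. This is a caricature, not the Hodgkin-Huxley model: the threshold, step count, and stimulus values are arbitrary illustration numbers.

```python
# Minimal sketch of the all-or-none law and rate coding.
# Threshold and stimulus values are arbitrary illustration numbers.

THRESHOLD = 1.0  # firing threshold (arbitrary units)

def simulate(stimulus_intensity, steps=100):
    """Integrate a constant stimulus; emit a spike whenever the membrane
    variable crosses threshold, then reset. Every spike is identical
    (all-or-none); only the COUNT of spikes depends on intensity."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += stimulus_intensity
        if v >= THRESHOLD:
            spikes += 1  # spike amplitude is implicit and fixed
            v = 0.0      # reset after firing
    return spikes

weak, strong = simulate(0.05), simulate(0.20)
print(weak, strong)  # stronger stimulus -> higher firing rate, same spikes
```

Doubling the stimulus does not produce a "bigger" spike anywhere in this loop; it only shortens the interval between threshold crossings, which is exactly the frequency code described above.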
While other computational approaches are concerned with the dynamics of neural networks, single-neuron modeling seeks to understand the biophysical characteristics of individual neurons that define their function. The makeup of the brain is distinctive in that different brain regions are composed of subsets of neurons that differ in both structure and electrical properties [17, 18]. While various models have been developed linking either the proteins created in neurons to the neuron's electrical properties, or a neuron's structure to its electrical properties, the notion that a single computational model could reliably portray all three of these elements was highly disputed until recently [18, 19]. Initiatives taken by the Blue Brain Project, spearheaded by École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, were the first to succeed in doing so, as recently as 2022. The group began by computationally reconstructing the first neuroanatomically accurate model of the mouse visual cortex, the region of the brain that processes neuronal signals received from retinal neurons in the back of the eyes. After recording the electrical activity of single neurons, a technique known as electrophysiology, they created a dataset of the electrical characteristics of different neuronal populations. They then demonstrated that their computational model succeeded in predicting the type of neuron from data inputs of the electrophysiological properties of the actual neurons. The groundbreaking nature of this innovation was that the model could further distinguish between the different electrical properties of neurons in different brain regions and then accurately predict the gene regulation of the actual neurons.
In doing so, the model was able to take in a biologically derived dataset and accurately depict one aspect of the neurobiology, the neuron type, and then use this computationally derived dataset to accurately compute a second aspect, the neuron's gene regulation. While this result bears significance in and of itself, more important is the demonstrated power of computation in connecting the patterns of three interacting yet mechanistically separate biological processes, further bolstering the role computation will play in painting a holistic picture of neural dynamics.
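The flavor of predicting a neuron's type from its electrical signature can be conveyed with a drastically simplified nearest-neighbor sketch. The feature vectors (firing rate in Hz, spike width in ms) and the two reference signatures below are invented for illustration; the actual models link far richer electrophysiological, morphological, and transcriptomic data.

```python
import math

# Toy illustration of predicting neuron type from electrophysiological
# features. Feature vectors are (firing rate in Hz, spike width in ms);
# all numbers are invented, and real models use far richer data.
reference = {
    "fast-spiking interneuron": (80.0, 0.3),
    "pyramidal cell": (10.0, 1.0),
}

def classify(features):
    """Assign the type whose reference signature is nearest in feature space."""
    return min(reference, key=lambda t: math.dist(features, reference[t]))

print(classify((75.0, 0.35)))  # -> fast-spiking interneuron
print(classify((12.0, 0.9)))   # -> pyramidal cell
```

The real achievement described above goes a step further: the predicted type is itself used as input for predicting gene regulation, chaining one computational inference into another.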
Neuron Development and Axon Targeting
In progressing from single-neuron modeling, the question of how neurons reach their proper targets arises. Given that humans come from a plethora of varied backgrounds, it is remarkable that the neurons composing the brain's structure largely make the same connections in the same places throughout development. The question is twofold: neurons must possess one system for controlling their ability to make connections, and another for mediating where those connections are made. It has been proposed that axons form so as to optimize the ratio between the cost of biological maintenance and maximal information storage during development, a theory dubbed "The Wiring Optimization Hypothesis" [21, 22]. In doing so, individual neurons are presumed to optimize brain connectivity in terms of axonal and dendritic extensions, holistically referred to as "wire length." This proposal seems rather intuitive, as the human brain consists of 86 billion neurons packed into a mass approximately the size of a cantaloupe. Indeed, the theory was anticipated as far back as 1899 by the father of modern neuroscience and Nobel laureate Santiago Ramón y Cajal, who stated that the structure of axons is designed to save both space and time [13, 24].
Over 120 years later, the technologies to tackle these questions have emerged through a computational approach known as graph theory. To visualize graph theory, imagine a set of five scattered circles, where each circle connects to two other circles via line segments. In doing so, you conceive a neural network where the circles represent neurons and the line segments represent axons that contact other neurons. To understand the composition of a neuron itself, one can visualize a tree with branches projecting above the ground and roots anchoring into it, representative of dendritic branches and axonal roots, which receive and send information respectively. Combined with graph theory, a neuron's cell body serves as the trunk of the tree, giving rise to the axonal and dendritic extensions that connect with other neurons.
Employing graph theory, computational neuroscientists at the Polytechnic University of Madrid sought to determine whether neurons make the most efficient connections by minimizing wire length. To do so, they first carried out biological experimentation and microscopy to visualize the locations of neurons in brain regions of interest, along with the connections between neurons enabled by axons and dendrites in the biological network. Through graph theory, they not only designed an algorithm specifying the connections made in the biological specimens, but also designed a simulation to return the most optimal wire lengths of all neuronal connections, constraining the model such that the numbers and locations of connections made in the biological specimens were preserved. The computationally derived three-dimensional reconstructions of neuronal axons and dendrites, together known as a neuronal "tree," exhibited wire lengths significantly similar to those of the biologically derived specimens. This finding bolstered the notion that axonal and dendritic development is characterized by an optimal balance between the biological cost of maintenance and the number of connections made, a computationally aided proof of principle that neurons have secondary roles as economists.
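The comparison at the heart of this study can be caricatured in a few lines: fix the neuron positions, measure the total wire length of an observed wiring, and compare it against an optimum computed over the same neurons. The sketch below uses a minimum spanning tree (Prim's algorithm) as the optimum, which is a simplification of the constrained optimization in the actual work, and the neuron positions are invented.

```python
import math

# Toy wiring-optimization check: compare an observed wiring's total length
# against the minimum spanning tree over the same neurons.
# Neuron positions (in arbitrary units) are invented for illustration.
neurons = {"A": (0, 0), "B": (3, 0), "C": (3, 4), "D": (0, 4)}

def length(wiring):
    """Total wire length of a list of (neuron, neuron) connections."""
    return sum(math.dist(neurons[u], neurons[v]) for u, v in wiring)

def minimum_spanning_tree(nodes):
    """Prim's algorithm: grow the cheapest tree connecting every neuron."""
    connected, tree = {next(iter(nodes))}, []
    while len(connected) < len(nodes):
        u, v = min(((u, v) for u in connected for v in nodes if v not in connected),
                   key=lambda e: math.dist(nodes[e[0]], nodes[e[1]]))
        tree.append((u, v))
        connected.add(v)
    return tree

observed = [("A", "C"), ("B", "C"), ("C", "D")]  # one plausible wiring
optimal = minimum_spanning_tree(neurons)
print(length(observed), length(optimal))  # observed uses more wire than optimal
```

The finding described above amounts to the biological analogue of this gap being small: real dendritic and axonal trees come close to the computed optimum.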
Equally important to the development of neuronal projections are the processes by which neurons reach their intended targets in the brain. A wide variety of protein pathways and their underlying signaling cascades mediate neuron-to-neuron recognition and avoidance. Important to understanding this process are growth cones, which can be visualized as a "nub" on the end of a neuronal axon. The membranes that enclose these growth cones contain charged proteins known as cell adhesion molecules. The charges of these proteins are thought to either guide neuronal growth cones to their proper targets or repel them in order to prevent the formation of incorrect contacts. Many of these protein families are present in both vertebrate and invertebrate systems, indicating a common evolutionary mechanism for neuronal avoidance and attraction. While biological experimentation continues to uncover the existence and combinatorial mechanisms of these cell surface proteins, computational models are able to predict the guidance of growth cones, although it is unlikely that these models account for the entirety of the dynamics underlying axon pathfinding. With neuronal pathfinding among the most pressing questions of modern neuroscience, it is certain that computation will provide insight into its underpinnings and aid experimentalists in their endeavors.
Learning, Memory, and Synaptic Plasticity
From the moment you learn to speak, to your progression through secondary education, to your ability to remember the details of this article as you read it, both learning and memory are at play. To learn is to remember, hence it is no surprise that investigations into how our brains acquire and store information are carried out simultaneously. The processes by which memories are formed and stored were heavily pursued by neuroscientists Eric Kandel, Paul Greengard, and Arvid Carlsson, who implicated synaptic plasticity as the underlying mechanism. A synapse is the junction between two connecting neurons, where chemical messengers known as neurotransmitters are released by the presynaptic neuron, cross the synaptic space, and reach the postsynaptic neuron. It is through this same principle that action potentials are propagated from neuron to neuron. When an action potential travels down an axon and reaches the axon terminal, the electrical activity incites the release of neurotransmitters, which contact the postsynaptic neuron to either open or close ion channels. In the case of excitatory postsynaptic potentials, also known as EPSPs, an excitatory neurotransmitter is released by the presynaptic axon terminal and reaches the postsynaptic neuron to induce the opening of ion channels, causing an influx of positive ions into the cell body of the neuron. If the quantity of positive ions entering the cell body surpasses a threshold, a subsequent action potential is generated in the postsynaptic neuron. The opposing forces are inhibitory postsynaptic potentials, also known as IPSPs, in which inhibitory neurotransmitters close ion channels to prevent a subsequent action potential from being generated.
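The push and pull of EPSPs and IPSPs at the cell body amounts to a thresholded sum, which can be sketched directly. The millivolt values and threshold below are invented for illustration, and real synaptic integration also depends on timing and location, which this sketch ignores.

```python
# Sketch of synaptic integration: EPSPs depolarize, IPSPs counteract them,
# and a spike fires only if the summed potential crosses threshold.
# The millivolt values are invented for illustration.
THRESHOLD_MV = 15.0

def fires(epsps, ipsps):
    """Sum excitatory minus inhibitory postsynaptic potentials (mV)."""
    return sum(epsps) - sum(ipsps) >= THRESHOLD_MV

print(fires(epsps=[6.0, 6.0, 5.0], ipsps=[]))     # excitation alone suffices
print(fires(epsps=[6.0, 6.0, 5.0], ipsps=[4.0]))  # inhibition vetoes the spike
```

The second call shows why IPSPs matter: the same excitatory input that fired the neuron in isolation falls below threshold once inhibition is subtracted.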
Changes in the dynamics of these synapses are referred to as synaptic plasticity and have been demonstrated to be central to memory formation. In a series of elegant experiments, Kandel and his team demonstrated that repeated neuronal activity increases the strength of synaptic connections, a principle underlying both short-term and long-term memory. In the formation of short-term memories, chemical changes occur in synapses, such as an increase in the release of neurotransmitters. In the formation of long-term memories, synaptic activity changes gene expression within neurons, sending new proteins to neuronal membranes and thereby altering their architecture to facilitate increased sensitivity during future neuronal communication [35, 36]. More recently, these fundamental computational principles have gained ground clinically in the understanding of neurodegenerative conditions such as Alzheimer's disease. Clinicians working alongside computationalists to analyze the information-processing centers of the brain have characterized neural patterns that underlie memory loss in this neuropathological state. This work has culminated in one of the first all-encompassing information-processing models spanning neuroanatomy, computational neuroscience, and clinical medicine.
The majority of computational approaches seeking to link learning, memory, and synaptic plasticity employ the philosophy of Hebbian learning: repeated neuronal activity strengthens synaptic connections ("neurons that fire together wire together"), making future transmission more sensitive and efficient. A computational neuroscientist studying Hebbian learning would propose that by repeatedly studying information for an upcoming exam, your brain optimizes for recall and application through the synaptic strengthening of the relevant neuronal populations. Because the biology of these dynamics is well defined, computational approaches in this area have flourished, with the majority of artificial neural networks demonstrating two processes of synaptic plasticity: learning and recall. The learning phase of a model consists of training the network to achieve an intended response to a specified input; its performance in doing so is referred to as synaptic efficiency. In the recall phase, the network is only required to emit responses to the given input, with the synaptic efficiency calculated in the learning phase held constant.
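In its simplest form, the Hebbian adjustment made during a learning phase is a weight increment proportional to the product of presynaptic and postsynaptic activity. The learning rate and activity values below are arbitrary illustration numbers.

```python
# Simplest Hebbian update: a synapse strengthens in proportion to the
# product of pre- and postsynaptic activity (delta_w = eta * pre * post).
ETA = 0.1  # learning rate, an arbitrary illustration value

def hebbian_step(w, pre, post):
    """Return the synaptic weight after one co-activation."""
    return w + ETA * pre * post

w = 0.0
for _ in range(5):  # repeated co-activity, like repeated study sessions
    w = hebbian_step(w, pre=1.0, post=1.0)
print(w)  # the weight grows with every co-activation
```

Note that if either neuron is silent (pre or post equal to zero), the product vanishes and the weight is untouched: the rule rewards only joint activity.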
The Hopfield network is a representative artificial network that encompasses these principles, developed by Professor J. J. Hopfield of the California Institute of Technology. A Hopfield network synthesizes the philosophy of Hebbian learning with the dynamics noted above, employing "iterative" learning phases to generate the synaptic efficiencies, or "weights," of successively stored inputs. A synaptic weight denotes the relative importance of a neuron's output to downstream neurons, not the direct output of the neuron itself as in an all-or-none action potential. The process begins by presenting a first pattern, in which the inputs and required outputs of the neurons comprising the network are specified together. Across successive rounds of learning, the activity of the artificial neurons converges to the desired values, which sets the synaptic efficiencies. The second element of the network, the recall phase, does not alter the synaptic weights and calls for the model to perform complete recall from partial inputs.
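A minimal Hopfield network makes both phases concrete: a Hebbian outer-product rule sets the weights once, and recall then completes a corrupted pattern while the weights stay frozen. The stored pattern and the flipped unit below are arbitrary choices for illustration.

```python
# Minimal Hopfield network: a Hebbian learning phase sets the weights once,
# then a recall phase completes a corrupted pattern with weights frozen.
# Units take values +1/-1; the stored pattern is arbitrary.

def train(patterns):
    """Learning phase: Hebbian outer-product rule, w[i][j] += x_i * x_j."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    """Recall phase: update units from the frozen weights until stable."""
    n = len(state)
    state = list(state)
    for _ in range(steps):
        for i in range(n):
            h = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if h >= 0 else -1  # all-or-none unit output
    return state

stored = [1, 1, -1, -1, 1, -1]
weights = train([stored])
corrupted = [1, -1, -1, -1, 1, -1]   # one unit flipped
print(recall(weights, corrupted))    # recovers the stored pattern
```

The recall call illustrates the "complete recall of partial inputs" described above: the network is given a degraded memory and the frozen weights pull it back to the stored one.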
The Hopfield framework can be further divided into supervised and unsupervised learning approaches, which are thought to be representative of the learning patterns of various neural circuits. The assumption is that certain circuits, such as those regulating breathing, do not require much modification over time, whereas the neurons encompassing the memory centers of the brain are adjusted far more frequently. Supervised learning introduces a constraint by requiring the synaptic weights of neurons in a computational model to reach a desired output over time. Unsupervised learning differs in that changes to synaptic weight occur based only on actual neuronal output, hence the lack of "supervision" by a required output. It is important to note that changes to synaptic weight affect all neurons associated with the system, resulting in dynamic changes over time in both of these learning models.
In concluding this section, it is important to contrast the brain and the neuron. While the brain as a whole allows us to perceive the world, neurons behave as individual entities that contribute to this process only through their summative patterns, a phenomenon denoted "emergence," in which a system demonstrates properties that cannot be discerned by observing its individual parts alone. While single neurons are biologically "programmed" to optimize, it is their communicative behavior that enables our brains to function. In bridging the gap to explaining the dynamics of neurobiology, computational neuroscientists are tasked with producing models representative of specific forms of learning and proposing principles that govern the dynamics of neural circuit formation in our consciousness-endowing supercomputers.
While the renaissance of computational neuroscience has arrived, the reality persists that biological experimentation serves as the ground truth for observing and proving the dynamics of neural phenomena. The rise of computation as a tool that recognizes neural patterns continues to be enormously insightful for researchers, enabling them to address vital questions in creative new ways. Though it has yet to make its way broadly into the curriculum of neuroscience education, it is likely that as its role in research becomes further established, experimentalists aspiring to become the future of the field will be exposed to these computational and mathematical tools. As computational applications become increasingly commonplace in the modern world, the groundbreaking nature of computational neuroscience will continue to emerge: an almost certain yet gradual process that will result in a holistic understanding of neural function and of the principles that govern the interacting neural activity of our brains.
1. Wang, X.-J., Hu, H., Huang, C., Kennedy, H., Li, C. T., Logothetis, N., … Zhou, D. (2020). Computational neuroscience: a frontier of the 21st century. National Science Review, 7(9), 1418–1422. https://doi.org/10.1093/nsr/nwaa129
2. Schutter, E. D. (2008). Why Are Computational Neuroscience and Systems Biology So Separate? PLOS Computational Biology, 4(5), e1000078. https://doi.org/10.1371/journal.pcbi.1000078
3. Schwartz, E. L. (1990). Computational Neuroscience. Cambridge, MA, USA: MIT Press.
4. Bower, J. M. (Ed.). (2013). 20 Years of Computational Neuroscience (Vol. 9). New York, NY: Springer. https://doi.org/10.1007/978-1-4614-1424-7
5. Insel, T. R., Landis, S. C., & Collins, F. S. (2013). The NIH BRAIN Initiative. Science, 340(6133), 687–688. https://doi.org/10.1126/science.1239276
6. Taniguchi, T., Yamakawa, H., Nagai, T., Doya, K., Sakagami, M., Suzuki, M., … Taniguchi, A. (2022). A whole brain probabilistic generative model: Toward realizing cognitive architectures for developmental robots. Neural Networks, 150, 293–312. https://doi.org/10.1016/j.neunet.2022.02.026
7. Guigon, E., Baraduc, P., & Desmurget, M. (2008). Optimality, stochasticity, and variability in motor behavior. Journal of Computational Neuroscience, 24(1), 57–68. https://doi.org/10.1007/s10827-007-0041-y
8. Kording, K. P. (2014). Bayesian statistics: relevant for the brain? Current Opinion in Neurobiology, 25, 130–133. https://doi.org/10.1016/j.conb.2014.01.003
9. Parr, T., Rees, G., & Friston, K. J. (2018). Computational Neuropsychology and Bayesian Inference. Frontiers in Human Neuroscience, 12. Retrieved from https://www.frontiersin.org/articles/10.3389/fnhum.2018.00061
10. Fahlman, S., Hinton, G., & Sejnowski, T. (1983). Massively Parallel Architectures for AI: NETL, Thistle, and Boltzmann Machines (pp. 109–113).
11. Dayan, P., Hinton, G. E., Neal, R. M., & Zemel, R. S. (1995). The Helmholtz Machine. Neural Computation, 7(5), 889–904. https://doi.org/10.1162/neco.1995.7.5.889
12. Hodgkin, A. L., Huxley, A. F., & Katz, B. (1952). Measurement of current-voltage relations in the membrane of the giant axon of Loligo. The Journal of Physiology, 116(4), 424–448. https://doi.org/10.1113/jphysiol.1952.sp004716
13. Cajal, S. R. y. (2018). Textura del Sistema Nervioso del Hombre y de los Vertebrados, 1899–1904. In Textura del Sistema Nervioso del Hombre y de los Vertebrados, 1899–1904 (pp. 216–219). Princeton University Press. https://doi.org/10.1515/9780691183978-016
14. Purves, D. (n.d.). Neuroscience, Sixth Edition - Learning Link. Retrieved February 21, 2023, from https://learninglink.oup.com/access/purves-6e
15. Nabavi, S., Fox, R., Proulx, C. D., Lin, J. Y., Tsien, R. Y., & Malinow, R. (2014). Engineering a memory with LTD and LTP. Nature, 511(7509), 348–352. https://doi.org/10.1038/nature13294
16. Gupta, P., Balasubramaniam, N., Chang, H.-Y., Tseng, F.-G., & Santra, T. S. (2020). A Single-Neuron: Current Trends and Future Prospects. Cells, 9(6), 1528. https://doi.org/10.3390/cells9061528
17. Markram, H., Toledo-Rodriguez, M., Wang, Y., Gupta, A., Silberberg, G., & Wu, C. (2004). Interneurons of the neocortical inhibitory system. Nature Reviews Neuroscience, 5(10), 793–807. https://doi.org/10.1038/nrn1519
18. Gouwens, N. W., Sorensen, S. A., Berg, J., Lee, C., Jarsky, T., Ting, J., … Koch, C. (2019). Classification of electrophysiological and morphological neuron types in the mouse visual cortex. Nature Neuroscience, 22(7), 1182–1195. https://doi.org/10.1038/s41593-019-0417-0
19. Nandi, A., Chartrand, T., Van Geit, W., Buchin, A., Yao, Z., Lee, S. Y., … Anastassiou, C. A. (2022). Single-neuron models linking electrophysiology, morphology, and transcriptomics across cortical cell types. Cell Reports, 40(6), 111176. https://doi.org/10.1016/j.celrep.2022.111176
20. Ackerman, S. (1992). The Development and Shaping of the Brain. Discovering the Brain. National Academies Press (US). Retrieved from https://www.ncbi.nlm.nih.gov/books/NBK234146/
21. Koulakov, A. A., & Chklovskii, D. B. (2001). Orientation Preference Patterns in Mammalian Visual Cortex: A Wire Length Minimization Approach. Neuron, 29(2), 519–527. https://doi.org/10.1016/S0896-6273(01)00223-9
22. Chklovskii, D. B., Schikorski, T., & Stevens, C. F. (2002). Wiring Optimization in Cortical Circuits. Neuron, 34(3), 341–347. https://doi.org/10.1016/S0896-6273(02)00679-7
23. Herculano-Houzel, S. (2012). The remarkable, yet not extraordinary, human brain as a scaled-up primate brain and its associated cost. Proceedings of the National Academy of Sciences, 109(supplement_1), 10661–10668. https://doi.org/10.1073/pnas.1201895109
24. Herculano-Houzel, S. (2009). The Human Brain in Numbers: A Linearly Scaled-up Primate Brain. Frontiers in Human Neuroscience, 3, 31. https://doi.org/10.3389/neuro.09.031.2009
25. Bullmore, E., & Sporns, O. (2009). Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3), 186–198. https://doi.org/10.1038/nrn2575
26. Anton-Sanchez, L., Bielza, C., Benavides-Piccione, R., DeFelipe, J., & Larrañaga, P. (2016). Dendritic and Axonal Wiring Optimization of Cortical GABAergic Interneurons. Neuroinformatics, 14(4), 453–464. https://doi.org/10.1007/s12021-016-9309-6
27. Honig, B., & Shapiro, L. (2020). Adhesion Protein Structure, Molecular Affinities, and Principles of Cell-Cell Recognition. Cell, 181(3), 520–535. https://doi.org/10.1016/j.cell.2020.04.010
28. Geraldo, S., & Gordon-Weeks, P. R. (2009). Cytoskeletal dynamics in growth-cone steering. Journal of Cell Science, 122(20), 3595–3604. https://doi.org/10.1242/jcs.042309
29. Cosmanescu, F., Katsamba, P. S., Sergeeva, A. P., Ahlsen, G., Patel, S. D., Brewer, J. J., … Shapiro, L. (2018). Neuron-Subtype-Specific Expression, Interaction Affinities, and Specificity Determinants of DIP/Dpr Cell Recognition Proteins. Neuron, 100(6), 1385-1400.e6. https://doi.org/10.1016/j.neuron.2018.10.046
30. Lawrence Zipursky, S., & Grueber, W. B. (2013). The Molecular Basis of Self-Avoidance. Annual Review of Neuroscience, 36(1), 547–568. https://doi.org/10.1146/annurev-neuro-062111-150414
31. Roccasalvo, I. M., Micera, S., & Sergi, P. N. (2015). A hybrid computational model to predict chemotactic guidance of growth cones. Scientific Reports, 5(1), 11340. https://doi.org/10.1038/srep11340
32. Südhof, T. C., & Malenka, R. C. (2008). Understanding Synapses: Past, Present, and Future. Neuron, 60(3), 469–476. https://doi.org/10.1016/j.neuron.2008.10.011
33. Magee, J. C., & Grienberger, C. (2020). Synaptic Plasticity Forms and Functions. Annual Review of Neuroscience, 43(1), 95–117. https://doi.org/10.1146/annurev-neuro-090919-022842
34. Kandel, E. R., Dudai, Y., & Mayford, M. R. (2014). The Molecular and Systems Biology of Memory. Cell, 157(1), 163–186. https://doi.org/10.1016/j.cell.2014.03.001
35. Castellucci, V., Pinsker, H., Kupfermann, I., & Kandel, E. R. (1970). Neuronal Mechanisms of Habituation and Dishabituation of the Gill-Withdrawal Reflex in Aplysia. Science, 167(3926), 1745–1748. https://doi.org/10.1126/science.167.3926.1745
36. Asok, A., Leroy, F., Rayman, J. B., & Kandel, E. R. (2019). Molecular Mechanisms of the Memory Trace. Trends in Neurosciences, 42(1), 14–22. https://doi.org/10.1016/j.tins.2018.10.005
37. Jones, D., Lowe, V., Graff-Radford, J., Botha, H., Barnard, L., Wiepert, D., … Jack, C. (2022). A computational model of neurodegeneration in Alzheimer’s disease. Nature Communications, 13(1), 1643. https://doi.org/10.1038/s41467-022-29047-4
38. Munakata, Y., & Pfaffly, J. (2004). Hebbian learning and development. Developmental Science, 7(2), 141–148. https://doi.org/10.1111/j.1467-7687.2004.00331.x
39. Kuriscak, E., Marsalek, P., Stroffek, J., & Toth, P. G. (2015). Biological context of Hebb learning in artificial neural networks, a review. Neurocomputing, 152, 27–35. https://doi.org/10.1016/j.neucom.2014.11.022
40. Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8), 2554–2558. https://doi.org/10.1073/pnas.79.8.2554
41. Zhang, Y., Guo, D., & Li, Z. (2013). Common nature of learning between back-propagation and Hopfield-type neural networks for generalized matrix inversion with simplified models. IEEE transactions on neural networks and learning systems, 24(4), 579–592. https://doi.org/10.1109/TNNLS.2013.2238555
42. Bunge, M. (1977). Emergence and the mind. Neuroscience, 2(4), 501–509. https://doi.org/10.1016/0306-4522(77)90047-1