
Decisions, dopamine, and degeneracy in complex biological systems

Received 22 October 2013

Accepted for publication 17 December 2013

Published 31 January 2014 Volume 2014:3 Pages 11–18

DOI https://doi.org/10.2147/NAN.S32234

Review by Single anonymous peer review




Ciaran M Regan

School of Biomolecular and Biomedical Science, UCD Conway Institute, University College Dublin, Belfield, Dublin, Ireland

Abstract: The neurobiological and computational analysis of value-based decision-making rests within the domain of neuroeconomics, which aims to provide a biological account of human behavior relevant to both the natural and social sciences. This review proposes a framework for investigating different aspects of the theoretical and molecular neurobiology of decision-making. In order to learn how to make good decisions, the brain needs to compute a separate value signal that measures the desirability of the outcomes generated by its previous decisions. The framework presented here combines current ideas about information processing by the hippocampal formation with the phasic midbrain dopaminergic firing that occurs in response to the spatial and motivational aspects of rewarding events in the environment. The activities of hippocampal ensembles are considered to reflect a continuous updating process for attended experiences, defining both regular and irregular stimuli, environments, and actions, which are rapidly encoded as schemas into pre-existing knowledge bases.

Keywords:
hippocampus, schemas, synapse assemblies, cell assemblies, synapse plasticity

Neuroeconomics, probability theory, and game theory

The computations necessary for an organism to execute an optimal course of action require comparison of incoming sensory information with its stored representation of world structure. Mathematical analysis of this behavior falls into the domain of “neuroeconomics.” Ideally, this scientific method aims to provide models of simple reflexes with predictable motor responses that may serve in understanding more complex reflexes with unpredictable motor responses.

Probability theory has also been employed in an attempt to understand efficient decision-making, because we must often act on events for which we have only partial or inaccurate knowledge; its relationship to neural function, however, remains largely unexplored. Bayesian probability, a form of propositional logic, can be used to formulate the most beneficial behavioral outcome using a standard set of procedures designed to calculate and assign a quantity to our current state of knowledge, or to knowledge derived from previously assigned probabilities. Bayes' theorem can, therefore, provide an approach to understanding how current knowledge might predict behavioral actions, against which the neural system might evolve functional capabilities.
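The Bayesian updating described above can be made concrete with a toy sketch. The Beta-Bernoulli model, prior, and outcome sequence below are invented for illustration and are not drawn from the review; they simply show how a belief about an option's reward probability is revised as evidence accumulates.

```python
# Illustrative sketch: Bayesian updating of a belief about an option's
# reward probability, using a hypothetical Beta-Bernoulli conjugate pair.

def update_beta(alpha, beta, rewarded):
    """Update a Beta(alpha, beta) belief after one observed outcome."""
    return (alpha + 1, beta) if rewarded else (alpha, beta + 1)

# Start with a uniform prior Beta(1, 1): complete uncertainty.
alpha, beta = 1, 1
# Observe three rewards and one omission for a given action.
for outcome in [True, True, False, True]:
    alpha, beta = update_beta(alpha, beta, outcome)

# Posterior mean = expected reward probability given the evidence so far.
posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # 4/6 ≈ 0.667
```

Each observation shifts the assigned probability toward the observed frequency, which is the quantitative "updating of our current state of knowledge" the text describes.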

The view that animals evolve behavior that interacts with the probabilistic nature of an inherently uncertain world has also involved game theory. This strategy has been employed in order to identify optimal actions in situations populated by intelligent competitors as opposed to decision-making in a passive environment. Such games require mixed strategies to arrive at an optimal equilibrium using determinate and indeterminate behavioral strategies that are uncertain and unknown to competitors. In the “hawk-dove” game, for example, individuals can behave unpredictably from encounter to encounter or develop unpredictably into a hawk or dove for life.
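The mixed-strategy equilibrium of the hawk-dove game can be computed directly. In the standard formulation, V is the value of the contested resource and C the cost of an escalated fight; the numerical values below are illustrative.

```python
# Illustrative sketch: the mixed-strategy equilibrium of the hawk-dove game,
# where V (resource value) and C (fight cost) are assumed example values.

def payoff_hawk(p, V, C):
    """Expected payoff of playing hawk when a fraction p of opponents play hawk."""
    return p * (V - C) / 2 + (1 - p) * V

def payoff_dove(p, V, C):
    """Expected payoff of playing dove against the same population."""
    return (1 - p) * V / 2

V, C = 2.0, 4.0   # fighting costs more than the resource is worth (C > V)
p_star = V / C    # equilibrium fraction of hawks: neither strategy does better

assert abs(payoff_hawk(p_star, V, C) - payoff_dove(p_star, V, C)) < 1e-12
print(p_star)  # 0.5
```

At p* = V/C the two strategies earn identical payoffs, so a population can sustain either unpredictable play from encounter to encounter or a fixed mixture of lifelong hawks and doves, as the text notes.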

It would seem, therefore, that much of human and animal behavior remains chaotic and unpredictable. Value-based decisions are capricious. They are selected from several possibilities and based on the subjective value an animal places on each possible outcome.

Learned associations and decision-making

Pavlovian, habitual, and goal-directed systems are forms of learning associated with reward evaluation. Ivan Pavlov, for example, hypothesized that such behavioral regularities arise from the experience-dependent formation of sensory-motor linkages, in his case between bell-detecting neurons and the neurons that activate the salivary glands at the sight of food.1 These sensory-motor linkages are the core of the reflex theory of learning. A mechanism for this empirical rule of learning, the enduring association of separate events, was formulated by Donald Hebb.2 He suggested that neurons might store knowledge by changing their synaptic strength in accordance with local activity in sensory-motor reflexes. The biophysical nature of the change in synapse strength was shown in the work of Bliss and Lømo,3 who demonstrated long-lasting synapse strengthening when presynaptic and postsynaptic activities co-occurred, an event they termed long-term potentiation (LTP).
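Hebb's co-activity rule can be sketched in a few lines. The firing rates, learning rate, and repetition count below are illustrative; the point is only that co-occurring pre- and postsynaptic activity strengthens a connection, an LTP-like change.

```python
# Illustrative sketch of Hebb's rule: a synapse strengthens when pre- and
# postsynaptic activity co-occur. All parameter values here are assumptions.

def hebbian_update(w, pre, post, eta=0.1):
    """Return the new weight after one step of the Hebbian co-activity rule."""
    return w + eta * pre * post

w = 0.5
# Repeated co-activation (eg, bell and food presented together) potentiates
# the synapse.
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(w)  # 1.5: strengthened, an LTP-like change

# Without co-activity the weight is unchanged; plain Hebbian learning alone
# never weakens a synapse (weakening requires an LTD-like rule).
print(hebbian_update(w, pre=1.0, post=0.0))  # 1.5
```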

The utility of this sensory-to-motor linkage has been challenged because much determinate behavior is formed by active elements that do not necessarily include sensory stimuli. These would include more complex behavioral systems such as those associated with cognition and volition. Behavior has to be organized around specific goals and these require elements beyond the boundaries of basic sensory-to-motor linkages.

It becomes necessary, therefore, to understand precisely what the neurobiological system is attempting to achieve as a whole, and then how the brain hardware implements these solutions. Initially, a representation of the decision problem must be computed. This may involve analysis of internal (eg, hunger) and external (eg, threat) states, and a possible course of action (eg, secure food). The action to be pursued must have a reliable prediction of value or benefit based on stored information about outcome desirability. The outcome of the selected action, in turn, must be used to update stored representations to improve future decision-making.

In the first instance, it is difficult to define the scale of the computation that constitutes a complete behavior, or how the brain has evolved to process these computational goals. In order to produce adaptive behavioral responses, it is necessary to integrate sensory data with stored knowledge. It has been argued that the brain manages such behaviors through modules that are functionally interrelated but often independent.4

The functional properties of such systems depend largely on the structural connectivity among the neurons of each module, and their exact pattern of specificity is not genetically predetermined with any great precision. No two neurons in a given module have an identical overall shape, and similarly there are no two equivalent neurons between the modules of two individual animals, even if they are genetically identical. This diversity in neuronal connectivity pattern arises in part from the exuberant production of neuronal processes that compete for targets in an activity-dependent manner during development. Neural systems, therefore, are degenerate because they are structurally different but perform the same function or yield the same output depending on the context in which they are expressed. Degeneracy is unlike redundancy, which occurs when two identical systems perform the same function.5
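The distinction between degeneracy and redundancy drawn above can be illustrated with toy linear units. The networks and inputs below are invented for illustration: two structurally different connectivity patterns yield the same output in one context, while an identical copy is merely redundant.

```python
# Illustrative sketch: degeneracy vs redundancy in toy linear units.
# All weights and inputs are assumed example values.

def unit_output(weights, inputs):
    """Weighted-sum output of a single linear unit."""
    return sum(w * x for w, x in zip(weights, inputs))

inputs = [1.0, 2.0, 1.0]
net_a = [0.5, 1.0, 1.5]   # one connectivity pattern
net_b = [2.0, 0.5, 1.0]   # a structurally different pattern
net_c = [0.5, 1.0, 1.5]   # an exact copy of net_a

# Degeneracy: structurally different systems, same output in this context.
print(unit_output(net_a, inputs), unit_output(net_b, inputs))  # 4.0 4.0

# A different context (input) reveals the underlying structural difference.
probe = [1.0, 0.0, 0.0]
print(unit_output(net_a, probe), unit_output(net_b, probe))  # 0.5 2.0

# Redundancy: two identical systems performing the same function.
assert net_c == net_a
```

The context dependence is the key point: degenerate systems coincide only for some inputs, which is why they are "structurally different but perform the same function or yield the same output depending on the context."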

Understanding how these autonomous systems lead to behavioral modification remains a daunting task. Substantial progress in neuroscience now permits us to evaluate the neural events that attend decision-making, how they relate to the learned behaviors of humans and animals, and how they may allow a better understanding of economic behavior.

Prediction errors and behavioral adaptation

Behavior is significantly influenced by predictions of pending reward events. This view is based on the observed relationship between phasic firing in midbrain dopaminergic neurons and associative learning of reward-predictive cues. In this model, the potentiation or depression of connection strengths is based on a neural implementation of a temporal difference rule. This rule computes the difference between one's rational expectation of future rewards and the information that leads to a revision of such expectations, ie, the prediction error.6 This prediction error rule has been related to the activity of midbrain dopaminergic neurons, as their phasic activity is modulated in response to reward.7,8 Outputs from the midbrain dopamine neurons arising in the substantia nigra innervate areas of the frontal cortex involved in planning motor movements, while the medial mesolimbic and mesocortical dopamine systems arising in the ventral tegmental area (VTA) provide the motivational function that completes the reward response.9 This idea is attractive in its simplicity because it provides a framework for how one might achieve a greater number of rewards. It fails, however, to account for the actions of dopamine in maintaining sustained behavior.
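The temporal difference rule of Sutton and Barto6 can be sketched minimally. The cue-delay-reward task, learning rate, and state values below are illustrative assumptions, not taken from the cited work; the sketch shows how the prediction error transfers value from the reward back to the predictive cue, mirroring the shift of phasic dopamine firing from reward to cue.

```python
# Illustrative sketch of the temporal difference (prediction error) rule.
# The task structure and all parameter values are assumptions.

def td_error(r, v_next, v_now, gamma=1.0):
    """delta = r + gamma * V(next) - V(now): the reward prediction error."""
    return r + gamma * v_next - v_now

# States: cue -> delay -> reward. V holds the learned value of each state.
V = {"cue": 0.0, "delay": 0.0, "reward": 0.0}
alpha = 0.5  # learning rate

for _ in range(20):  # repeated cue-reward pairings
    for s, s_next, r in [("cue", "delay", 0.0), ("delay", "reward", 1.0)]:
        delta = td_error(r, V[s_next], V[s])
        V[s] += alpha * delta

# After learning, the cue itself predicts the reward...
print(round(V["cue"], 2))  # close to 1.0
# ...so omitting an expected reward yields a negative prediction error,
# analogous to the dip in dopaminergic firing on reward omission.
print(td_error(0.0, 0.0, V["delay"]) < 0)  # True
```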

More recently, an extended form of reward-predictive striatal dopamine signaling has been observed in rats as they move toward more distant goals.10 This prolonged tonic dopamine signaling gradually increases, or ramps, as animals traverse mazes for the purpose of obtaining more distant rewards. These dopamine signals appear to be related to preferences for rewards in different locations in a manner suggesting that they respond to a spatial cognitive map formed by place cell assemblies within the hippocampal formation. Place cells are activated in sequence as a rodent navigates a pathway and, as such, can be considered as memory amenable to consolidation and retrieval.11 For example, the rhythmic firing of hippocampal theta wave patterns changes in a systematic manner as an animal moves through an environment, a phenomenon known as phase precession, and these patterns alter place cell firing, which improves the accuracy of place coding and flexibility of spatial navigation.12
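One proposed reading of such ramping signals is that tonic dopamine tracks the discounted value of a distant reward, which necessarily rises as the animal closes the distance to the goal. A minimal sketch, with an assumed exponential discount factor and track length:

```python
# Illustrative sketch: a value signal under exponential discounting rises
# monotonically as distance to reward shrinks, producing a "ramp".
# The discount factor, reward magnitude, and distances are assumptions.

def discounted_value(distance_to_reward, reward=1.0, gamma=0.8):
    """Value of a goal `distance_to_reward` steps away, exponentially discounted."""
    return reward * gamma ** distance_to_reward

# Values sampled as the animal approaches the goal (10 steps away -> 0).
ramp = [discounted_value(d) for d in range(10, -1, -1)]
print([round(v, 2) for v in ramp[-4:]])  # [0.51, 0.64, 0.8, 1.0]
```

This is one interpretation among several, but it illustrates how a slowly varying value estimate could coexist with, and differ from, the phasic prediction error signal.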

Given the role of the hippocampal-ventral striatal pathway in regulating motivation and in the acquisition of place-reward associations, these functional projections have the potential to support the learning and recall of place-reward relationships.13,14 The ramping of spiking dopaminergic neurons in the ventral striatum, which occurs during navigation tasks and is linked to hippocampal theta rhythms, therefore suggests a temporal coding mechanism by which spatial and reward signals might be combined and amenable to encoding and retrieval of spatial experience.15 Such midbrain dopaminergic signals might not directly influence a decision-making process, but would certainly represent learned estimations of reward that, in turn, influence behavior over longer periods of time.

Thus, learning based on prediction errors is not only about concepts like value and choice, but also about the role of dopamine in learning and memory consolidation functions that establish the motivational foundation of most goal-directed behavior. Although firing of VTA dopaminergic cells is increased by unexpected rewards and reduced if an expected reward is omitted, their firing can also be triggered by novel stimuli that do not involve reward, and this novelty-dependent dopaminergic activity has been traced back to the hippocampus.9,16 These findings suggest that the VTA may be critical in determining the significance of a reward but that a VTA/hippocampus dopaminergic loop controls the entry and processing of behaviorally significant information into long-term memory.17 Most sensory information derived from the environment is projected by the cortex to the hippocampal dentate gyrus, a major termination point for these unidirectional excitatory projections and the first point in processing information that ultimately gates the conversion of short-term memory into new declarative memories. Activation of this loop can occur through dopaminergic D1 receptor facilitation of hippocampal LTP following detection of novel information.18 Within the hippocampus, the cornu ammonis subfield 1 (CA1) region, acting as a comparator, triggers a process within the dentate and CA3 that predicts the likely outcome of events based on stored memory sequences.19 The resulting novelty signal is then conveyed to the VTA where it contributes to novelty-dependent firing of dopaminergic cells.

Decisions, therefore, may be guided by associative memories based on past experiences, given that receipt of a reward activates two simultaneous and interactive processes, ie, direct learning of stimulus-reward associations in the striatum and, via the hippocampus, their relationship with associated items stored in long-term memory. Hippocampal encoding of associations between rewards and previous events not only facilitates reactivation of their neural representations when one or other item is subsequently encountered, it also provides a mechanism by which positive experiences can alter the value of paired associations not previously rewarded and bias their value when associations are not explicitly remembered.

Therefore, past signals related to value must be correlated with categorized signals from the outside world that are stored as memory in the conceptual areas of the cortex. As cues from the environment enter into this mapping, several sensory modalities lead to behaviors and/or motor responses that over time alter how these signals are perceived. Thus, these mappings are dynamic and change with time and behavior through the alteration of existing schemas or formation of new ones. These ideas are shown in Figure 1. No new modular system is required, only the evolution of anatomical structures selected for these novel functions.

Figure 1 A model of decision-making based on integration of sensory data with stored knowledge. Previous value-related signals, set by internal control systems, are correlated via the ventral tegmental area/hippocampus dopaminergic loop, to current conceptual categorization of environmental signals. Perturbations at different levels can reorganize these conceptual categorizations via the hippocampal/cortical loop through generation of new schemas or modification of existing schemas.

Hippocampal synapse plasticity as a cellular basis for learning

Cell assemblies

The two best studied forms of learning and forgetting are LTP and long-term depression (LTD), and these cellular models of synaptic plasticity have been variously linked to the ideas of Donald Hebb and generally referred to as his cell assembly rule of learning.2 This concept states that cell assemblies are formed by strengthening the connections between neurons that are “repeatedly and persistently active together” and that these strong connections enable the network to perform an associative retrieval of memories. LTP and LTD are observed in several brain regions, and each is linked to an effect of dopamine receptor activation. In the hippocampus, LTP and LTD are associated with excitatory synapses on pyramidal cells. Here LTP is blocked by dopamine D1 receptor antagonists and facilitated by D1 receptor agonists, whereas LTD is potentiated by D1 agonists or D2 antagonists and blocked by D1 antagonists and D2 agonists.18,20 Similar dopaminergic mechanisms are involved in the modulation of LTP and LTD in the VTA.21

Neuromodulators, such as dopamine, noradrenaline, and acetylcholine, control the functional state of the hippocampus during the encoding and recall of memory. Transient dopamine-dependent states in the hippocampus, however, favor memory encoding and synaptic potentiation, possibly by adding motivational significance to experiences.22,23 Hebbian assemblies may also play a role in decision-making models in which, for example, two populations can represent choices A or B.24 Strictly speaking, such assemblies would be fixed and unable to change easily to allow rapid alteration of decision-making strategies based on previous experience. Such information is stored as episodic memories that do not exist in isolation but share features with other closely related memories structured in a flexible relational network that can interleave, update, and consolidate new information.25

Synapse assemblies

Development of relational networks may not necessarily rely on cell assemblies formed by strengthening or weakening of the connections between neurons. Structural plasticity at the axodendritic interface, arising from dendritic and axonal growth and leading to de novo synaptogenesis, may provide mechanisms for information storage that transcend the cell assembly formations predicted by the classical Hebbian learning scheme. Axons, dendrites, and spines are highly dynamic structures that can emerge within minutes in the adult brain, and these structural changes have long been proposed to be an important mechanism for long-term information storage.26–29

Empirical studies support the idea that the structural plasticity of spines is linked to memory-associated circuit reorganization.30 For example, quantitative analysis of spine density in vivo shows change in the somatotopic representation induced by whisker trimming to be associated with stabilization of a new subset of cortical spines over a period of days.31

Dendritic spines are rapidly formed and selectively stabilized as cortical synapses as a result of motor learning, and the magnitude of cortical spine formation has been linearly correlated with the number of successful trials in a reward-based motor reaching task.29,32 Transient spine increases occur in the hippocampal dentate gyrus during natural forms of learning, such as those associated with avoidance conditioning and spatial learning paradigms, and an activity-dependent, competitive stabilization of synapses from this supernumerary population contributes to the evolving memory trace.33–35 These spine density changes have been linked to natural neuronal activity during behavior within the hippocampal circuitry that is active during learning.36

A caveat to be noted is that enduring forms of LTP may also be associated with a proliferation of spines and, conversely, LTD is associated with spine elimination.37–39 Such observations have given rise to the “synapse tagging and capture” hypothesis.40 This hypothesis suggests that LTP identifies synapses in a manner that allows directed delivery of plasticity-related proteins that give rise to increased size and shape of the synapses and/or growth of new synapses within a given cell assembly. In contrast, induction of LTD prevents delivery of plasticity-related proteins and is associated with shrinkage of synapses and their possible retraction from the neural circuit.

Two types of synapse manipulation may therefore be discerned. The first is a form of plasticity in which the strength of existing synapses is retuned, giving rise to the cell assembly hypothesis, in which networks are distinguished by the composition of the cells that are coactivated. The second, the synapse assembly hypothesis, suggests that new synapses are created by experience and incorporated into the network, while redundant supernumerary synapses are eliminated by a pruning mechanism. The latter allows for elaboration of a network of specific groups of novel synapses with a connectivity scheme optimized for each experience.41

Cell adhesion molecules and learning-induced synapse remodeling

Antibodies directed to cell adhesion molecules located in the synapse, such as integrins and those characterized by immunoglobulin-like domains, have proved useful in understanding the temporal mechanisms underpinning learning-associated memory formation. Cell adhesion molecules have been shown to be crucial to the induction and maintenance of LTP and the consolidation of avoidance conditioning and spatial learning paradigms.42,43 Cell adhesion molecules, such as the neural cell adhesion molecule (NCAM), exhibit a unique temporal activity pattern in that they are functionally required during acquisition of information (training, 0 hours) and later in the process of memory consolidation (6–8 hours) when synapses are transiently produced following learning.33,34,44

A significant post-translational modification of NCAM involves the attachment of extended homopolymers of alpha-2,8-linked polysialic acid (PSA).45 NCAM polysialylation appears necessary for activity-dependent synapse remodeling and becomes transiently increased in the infragranular zone of the hippocampal dentate gyrus in the 10–24-hour period following training in a variety of tasks.46–49 This late functional requirement of NCAM PSA may contribute to elimination of the supernumerary synapses generated in the 6–8-hour post-training period of memory consolidation. Most of the newly synthesized PSA generated during memory formation is associated with the synapse-specific NCAM 180 kDa isoform.47 The consequence of this modification with chains of negatively charged polysialic acid is impaired NCAM-NCAM homophilic binding, reduced cell-cell signaling, and the potential to facilitate synapse remodeling.45 Specifically, NCAM PSA appears to modulate glutamatergic transmission through the N-Methyl-D-Aspartate (NMDA) receptor subtype 2B (NR2B), and the restraint imposed by polysialylation on cell-cell signaling is a likely mechanism for the eventual elimination of redundant synapses from the populations transiently produced during memory acquisition and consolidation.50

Not surprisingly, the cell adhesion molecule-based mechanism(s) necessary for circuit reconfiguration during memory formation requires synthesis of growth factor protein. Coincident with the 12-hour post-training increase in polysialylated NCAM, brain-derived neurotrophic factor becomes necessary for memory consolidation, and upregulation of its biosynthesis is mediated by activation of the dopamine D1 receptor.23 Modulation of dopaminergic function requires activation of NMDA receptors in the VTA to establish persistent behaviors, and it is this mechanism that controls the enhancement of brain-derived neurotrophic factor expression at the 12-hour post-training time.23,51 Thus, the control exerted on memory consolidation requires the VTA-hippocampus loop and this directly links phasic firing of midbrain dopaminergic neurons to the synapse remodeling underpinning associative learning of motivationally relevant experiences.

Degenerate synapse assemblies

Pair-associated learning of spatial and reward signals is traditionally accepted as being initiated within the hippocampus and later stabilized in neocortical ensembles.52 Within the cortex these individual ensembles, or modules, are reciprocally interconnected by re-entrant networks of excitatory axons that modulate the arousal level of the brain and the distributed patterns of re-entrant activity that inhibit, suppress, or compete with conflicting alternative response patterns.53 This process facilitates interleaving of novel information between the hippocampal and neocortical ensembles in a manner that is specific to each individual. As a consequence, the modules being selected for information storage are likely to be degenerate, meaning that different assemblies may have the ability to provide a similar behavioral output in a decision-making process. Degeneracy is a feature of many aspects of biological function. It is a prominent property of gene and neural networks and an essential aspect of selectional systems, such as synapse assemblies within the cortical modules.5 However, providing evidence to support a role for degeneracy in behavioral modification is daunting.

The synthetic approach of constructing brain-based devices that autonomously learn to categorize signals from the environment without prior instructions has supported a role for degeneracy in information processing. Brain-based devices, containing visual and head-direction systems, a “hippocampal formation” and “basal forebrain” and an action or selection system associated with a value or reward system, have been developed.54 In these devices, potentiation or depression of plastic connections signals a reward value through the implementation of a temporal difference rule, as described by Sutton and Barto.6 This allows sensory input to be processed, the connection strengths of the plastic “synapses” determined, and the generated motor output assessed. The outcomes of individual iterations indicate that brain-based devices operate as degenerate systems because structurally different assemblies yield similar “behavioral” outcomes. Therefore, it is not unreasonable to expect degeneracy in the neuronal assemblies serving perception and memory. Degeneracy provides a fail-safe system; if one assembly fails another will work. Further, change in the sensory input signals will likely alter the extent of overlap between the contributing circuits, the nature of the associations, and the resultant action outputs.

Schemas and efficient decision-making

The consolidation of pair-associated learning in neocortical assemblies is generally regarded as being a very slow process and not at all consistent with the temporal dynamics required for efficient decision-making.52 However, recent evidence suggests that new memories can undergo a much more rapid form of learning and consolidation, provided the information for storage can be assimilated into pre-existing knowledge assemblies called schemas.25 Rodents form schemas to find food reward locations, and new associations, such as spatial information, may be added in a single trial.55 By contrast, hippocampal learning of reward associations in entirely new environments is much more gradual. Within this framework, the hippocampus can employ schemas to speed the assimilation and consolidation of new information, through hippocampal-cortical re-entrant networks, into preformed memory assemblies. Thus, pre-existing assemblies are altered, as are their associations with other assemblies, maintaining similarities and differences in a relational network of stored information that becomes active during memory recall. Consolidation and reconsolidation of new information into these networks serves to continually update and renew schemas.56 Such schemas have the potential to provide the behavioral modifications necessary for efficient decision-making.

Final comments

A neural system of decision-making requires answering the question of how the subjective values appended to the decisions under consideration are learned, stored, and represented. It is hoped that the framework presented here will provide a starting point. This review envisions the hippocampus as critically involved in the rapid encoding of associations between stimuli and context, linking such episodes into a relational pattern that allows inference, through recall of previously stored representations of behavioral consequences, across a diverse range of responses. Hippocampal activity is therefore viewed as a seamless and automatic representation of experience, in which both rare and common stimuli, places, and events are encoded and interleaved across episodes.

Disclosure

The author is unaware of any affiliations, funding, or financial considerations that might influence the objectivity of this review.


References

1.

Pavlov IP. Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex. Oxford, UK: Oxford University Press (Dover edition, 2003); 1927.

2.

Hebb DO. The Organization of Behaviour: A Neuropsychological Theory. New York, NY, USA: John Wiley & Sons; 1949.

3.

Bliss TV, Lømo T. Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path. J Physiol. 1973;232:331–356.

4.

Fodor J. The Modularity of Mind. Cambridge, MA, USA: MIT Press; 1983.

5.

Edelman GM, Gally JA. Degeneracy and complexity in biological systems. Proc Natl Acad Sci U S A. 2001;98:13763–13768.

6.

Sutton R, Barto A. Reinforcement Learning. Cambridge, MA, USA: MIT Press; 1998.

7.

Montague PR, Dayan P, Sejnowski TJ. A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J Neurosci. 1996;16:1936–1947.

8.

Glimcher PW. Understanding dopamine and reinforcement learning: the dopamine reward prediction error hypothesis. Proc Natl Acad Sci U S A. 2011;108 Suppl 3:15647–15654.

9.

Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science. 1997;275:1593–1599.

10.

Howe MW, Tierney PL, Sandberg SG, Phillips PE, Graybiel AM. Prolonged dopamine signalling in striatum signals proximity and value of distant rewards. Nature. 2013;500:575–579.

11.

Skaggs WE, McNaughton BL, Wilson MA, Barnes CA. Theta phase precession in hippocampal neuronal populations and the compression of temporal sequences. Hippocampus. 1996;6:149–172.

12.

O’Keefe J, Recce ML. Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus. 1993;3:317–330.

13.

Bast T, Feldon J. Hippocampal modulation of sensorimotor processes. Prog Neurobiol. 2003;70:319–345.

14.

Ito R, Robbins TW, Pennartz CM, Everitt BJ. Functional interaction between the hippocampus and nucleus accumbens shell is necessary for the acquisition of appetitive spatial context conditioning. J Neurosci. 2008;28:6950–6959.

15.

van der Meer MA, Redish AD. Theta phase precession in rat ventral striatum links place and reward information. J Neurosci. 2011;31:2843–2854.

16.

Legault M, Wise RA. Novelty-evoked elevations of nucleus accumbens dopamine: dependence on impulse flow from the ventral subiculum and glutamatergic neurotransmission in the ventral tegmental area. Eur J Neurosci. 2001;13:819–828.

17.

Luo AH, Tahsili-Fahadan P, Wise RA, Lupica CR, Aston-Jones G. Linking context with reward: a functional circuit from hippocampal CA3 to ventral tegmental area. Science. 2011;333:353–357.

18. Li S, Cullen WK, Anwyl R, Rowan MJ. Dopamine-dependent facilitation of LTP induction in hippocampal CA1 by exposure to spatial novelty. Nat Neurosci. 2003;6:526–531.

19. Rolls ET, Kesner RP. A computational theory of hippocampal function, and empirical tests of the theory. Prog Neurobiol. 2006;79:1–48.

20. Chen Z, Ito K, Fujii S, et al. Roles of dopamine receptors in long-term depression: enhancement via D1 receptors and inhibition via D2 receptors. Receptors Channels. 1996;4:1–8.

21. Calabresi P, Maj R, Pisani A, Mercuri NB, Bernardi G. Long-term synaptic depression in the striatum: physiological and pharmacological characterization. J Neurosci. 1992;12:4224–4233.

22. Doyle E, Regan CM. Cholinergic and dopaminergic agents which inhibit a passive avoidance response attenuate paradigm-specific increases in NCAM sialylation state. J Neural Transm. 1993;92:33–49.

23. Rossato JI, Bevilaqua LR, Izquierdo I, Medina JH, Cammarota M. Dopamine controls persistence of long-term memory storage. Science. 2009;325:1017–1020.

24. Gerstner W, Sprekeler H, Deco G. Theory and simulation in neuroscience. Science. 2012;338:60–65.

25. McKenzie S, Robinson NT, Herrera L, Churchill JC, Eichenbaum H. Learning causes reorganization of neuronal firing patterns to represent related experiences within a hippocampal schema. J Neurosci. 2013;33:10243–10256.

26. Toni N, Buchs PA, Nikonenko I, Bron CR, Muller D. LTP promotes formation of multiple spine synapses between a single axon terminal and a dendrite. Nature. 1999;402:421–425.

27. Bailey CH, Kandel ER. Structural changes accompanying memory storage. Annu Rev Physiol. 1993;55:397–426.

28. Moser MB, Trommald M, Andersen P. An increase in dendritic spine density on hippocampal CA1 pyramidal cells following spatial learning in adult rats suggests the formation of new synapses. Proc Natl Acad Sci U S A. 1994;91:12673–12675.

29. Yang G, Pan F, Gan WB. Stably maintained dendritic spines are associated with lifelong memories. Nature. 2009;462:920–924.

30. Marrone DF. Ultrastructural plasticity associated with hippocampal-dependent learning: a meta-analysis. Neurobiol Learn Mem. 2007;87:361–371.

31. Holtmaat A, Wilbrecht L, Knott GW, Welker E, Svoboda K. Experience-dependent and cell-type-specific spine growth in the neocortex. Nature. 2006;441:979–983.

32. Xu T, Yu X, Perlik AJ, et al. Rapid formation and selective stabilization of synapses for enduring motor memories. Nature. 2009;462:915–919.

33. O’Malley A, O’Connell C, Regan CM. Ultrastructural analysis reveals avoidance conditioning to induce a transient increase in hippocampal dentate spine density in the 6 h post-training period of consolidation. Neuroscience. 1998;87:607–613.

34. O’Malley A, O’Connell C, Murphy KJ, Regan CM. Transient spine density increases in the mid-molecular layer of hippocampal dentate gyrus accompany consolidation of a spatial learning task in the rodent. Neuroscience. 2000;99:229–232.

35. Doyle E, Nolan PM, Bell R, Regan CM. Neurodevelopmental events underlying information acquisition and storage. Network. 1992;3:89–94.

36. Kitanishi T, Ikegaya Y, Matsuki N, Yamada MK. Experience-dependent, rapid structural changes in hippocampal pyramidal cell spines. Cereb Cortex. 2009;19:2572–2578.

37. Trommald M, Hulleberg G, Andersen P. Long-term potentiation is associated with new excitatory spine synapses on rat dentate granule cells. Learn Mem. 1996;3:218–228.

38. Nägerl UV, Eberhorn N, Cambridge SB, Bonhoeffer T. Bidirectional activity-dependent morphological plasticity in hippocampal neurons. Neuron. 2004;44:759–767.

39. Bastrikova N, Gardner GA, Reece JM, Jeromin A, Dudek SM. Synapse elimination accompanies functional plasticity in hippocampal neurons. Proc Natl Acad Sci U S A. 2008;105:3123–3127.

40. Redondo RL, Morris RGM. Making memories last: the synaptic tagging and capture hypothesis. Nat Rev Neurosci. 2011;12:17–30.

41. Ziv NE, Garner CC. Cellular and molecular mechanisms of presynaptic assembly. Nat Rev Neurosci. 2004;5:385–399.

42. Lüthi A, Laurent JP, Figurov A, Muller D, Schachner M. Hippocampal long-term potentiation and neural cell adhesion molecules L1 and NCAM. Nature. 1994;372:777–779.

43. Arami S, Jucker M, Schachner M, Welzl H. The effect of continuous intraventricular infusion of L1 and NCAM antibodies on spatial learning in rats. Behav Brain Res. 1996;81:81–87.

44. Foley AG, Hartz BP, Gallagher HC, et al. A synthetic peptide ligand of NCAM Ig1 domain prevents NCAM internalization and disrupts passive avoidance learning. J Neurochem. 2000;74:2607–2613.

45. Rutishauser U. Polysialic acid in the plasticity of the developing and adult vertebrate nervous system. Nat Rev Neurosci. 2008;9:26–35.

46. Hoyk ZS, Parducz A, Theodosis DT. The highly sialylated isoform of the neural cell adhesion molecule is required for estradiol-induced morphological plasticity in the adult arcuate nucleus. Eur J Neurosci. 2001;13:649–656.

47. Doyle E, Bell R, Regan CM. Hippocampal NCAM180 transiently increases sialylation during the acquisition and consolidation of a passive avoidance response in the adult rat. J Neurosci Res. 1992;31:513–523.

48. Fox GB, O’Connell AW, Murphy KJ, Regan CM. Memory consolidation induces a transient and time-dependent increase in the frequency of neural cell adhesion molecule polysialylated cells in the adult rat hippocampus. J Neurochem. 1995;65:2796–2799.

49. Murphy KJ, O’Connell AW, Regan CM. Repetitive and transient increases in hippocampal neural cell adhesion molecule polysialylation state following multi-trial spatial training. J Neurochem. 1996;67:1268–1274.

50. Kochlamazashvili G, Senkov O, Grebenyuk S, et al. Neural cell adhesion molecule-associated polysialic acid regulates synaptic plasticity and learning by restraining the signaling through GluN2B-containing NMDA receptors. J Neurosci. 2010;30:4171–4183.

51. Chergui K, Charléty PJ, Akaoka H, et al. Tonic activation of NMDA receptors causes spontaneous burst discharge of rat midbrain dopamine neurons in vivo. Eur J Neurosci. 1993;5:137–144.

52. Alvarez P, Squire LR. Memory consolidation and the medial temporal lobe: a simple network model. Proc Natl Acad Sci U S A. 1994;91:7041–7045.

53. Edelman GM, Gally JA. Reentry: a key mechanism for integration of brain function. Front Integr Neurosci. 2013;7:63.

54. Krichmar JL, Nitz DA, Gally JA, Edelman GM. Characterizing functional hippocampal pathways in a brain-based device as it solves a spatial memory task. Proc Natl Acad Sci U S A. 2005;102:2111–2116.

55. Tse D, Langston RF, Kakeyama M, et al. Schemas and memory consolidation. Science. 2007;316:76–82.

56. Dudai Y. The restless engram: consolidations never end. Annu Rev Neurosci. 2012;35:227–247.
