Free Time, Sharper Mind: A Computational Dive into Working Memory Improvement

Extra free time improves working memory (WM) performance. This free-time benefit becomes larger across successive serial positions, a phenomenon recently labeled the “fanning-out effect”. Different mechanisms can account for this phenomenon. In this study, we implemented these mechanisms computationally and tested them experimentally. We ran three experiments that varied the time people were allowed to encode items, as well as the order in which they recalled them. Experiment 1 manipulated the free-time benefit in a paradigm in which people recalled items either in forward or backward order. Experiment 2 used the same forward–backward recall paradigm coupled with a distractor task at the end of encoding. Experiment 3 used a cued-recall paradigm in which items were tested in random order. In all three experiments, the best-fitting model of the free-time benefit included (1) a consolidation mechanism whereby a just-encoded item continues to be re-encoded as a function of the total free time available and (2) a stabilization mechanism whereby items become more resistant to output interference with extra free time. Mechanisms such as decay and refreshing, as well as models based on the replenishment of encoding resources, were not supported by our data.
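The interplay of the two winning mechanisms can be illustrated with a toy simulation; all function names, parameter names, and values here are illustrative assumptions, not the fitted model from the study:

```python
import math

def simulate_serial_recall(n_items=6, free_time=0.0, c_rate=0.5, s_rate=0.8,
                           interference=0.3):
    """Toy sketch of the best-fitting model: consolidation re-encodes each
    item as a function of free time, and stabilization makes items resist
    output interference accrued during recall."""
    # consolidation: encoding strength saturates with extra free time
    strength = 1.0 * (1.0 + c_rate * (1.0 - math.exp(-free_time)))
    # stabilization: resistance to output interference grows with free time
    resistance = 1.0 - math.exp(-s_rate * free_time)
    probs = []
    for out_pos in range(n_items):
        # later output positions accumulate more interference
        damage = interference * out_pos * (1.0 - resistance)
        probs.append(strength / (strength + 1.0 + damage))
    return probs
```

In this sketch the free-time benefit grows across output positions (the fanning-out effect), because later items suffer more output interference, which stabilization attenuates.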
Competitive plasticity to reduce the energetic costs of learning

The brain is constrained not only by the energy needed to fuel computation but also by the energy needed to form memories. Experiments have shown that learning simple conditioning tasks, which might require only a few synaptic updates, already carries a significant metabolic cost. Yet learning a task like MNIST to 95% accuracy appears to require at least 10^8 synaptic updates. The brain has therefore likely evolved to learn using as little energy as possible. We explored the energy required for learning in feedforward neural networks. Based on a parsimonious energy model, we propose two plasticity-restricting algorithms that save energy: 1) only modify synapses with large updates, and 2) restrict plasticity to subsets of synapses that form a path through the network. In biology, networks are often much larger than the task requires, yet vanilla backprop prescribes updating all synapses. In this case in particular, large savings can be achieved while incurring only a slightly longer learning time. Thus, competitively restricting plasticity helps save the metabolic energy associated with synaptic plasticity. The results might lead to a better understanding of biological plasticity and a better match between artificial and biological learning. Moreover, the algorithms might benefit hardware, because electronic memory storage is also energetically costly.
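The two restriction rules can be sketched as mask generators applied before a weight update; the function names, the single-path choice, and all parameter values are illustrative assumptions, not the paper's implementation:

```python
import random

def largest_updates_mask(grads, frac=0.1):
    """Rule 1: allow plasticity only for the fraction `frac` of synapses
    with the largest proposed |update|. `grads` is a list of weight rows."""
    flat = sorted((abs(g) for row in grads for g in row), reverse=True)
    k = max(1, int(frac * len(flat)))
    thresh = flat[k - 1]
    return [[abs(g) >= thresh for g in row] for row in grads]

def path_mask(shapes, seed=0):
    """Rule 2: allow plasticity only along one random input-to-output path,
    i.e. one synapse per layer. `shapes` lists (n_in, n_out) per layer."""
    rng = random.Random(seed)
    masks, pre = [], rng.randrange(shapes[0][0])
    for n_in, n_out in shapes:
        post = rng.randrange(n_out)                     # next node on the path
        layer = [[False] * n_out for _ in range(n_in)]
        layer[pre][post] = True                         # the one plastic synapse
        masks.append(layer)
        pre = post
    return masks
```

Applying either mask elementwise to the gradient zeroes out most updates, which is where the energy saving in the parsimonious cost model comes from.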
Reinforcement learning when your life depends on it: A neuro-economic theory of learning

Synaptic plasticity enables animals to adapt to their environment, but memory formation can require a substantial amount of metabolic energy, potentially impairing survival. Hence, a neuro-economic dilemma arises as to whether learning is a profitable investment, and the brain must judiciously regulate learning. Indeed, experiments have shown that during starvation, Drosophila suppress the formation of energy-intensive aversive memories. Here we include energy considerations in a reinforcement learning framework. Simulated flies learned to avoid noxious stimuli through synaptic plasticity in either the energy-expensive long-term memory (LTM) pathway or the decaying anesthesia-resistant memory (ARM) pathway. The objective of the flies is to maximize their lifespan, which is calculated with a hazard function. We find that strategies that switch between the LTM and ARM pathways, based on energy reserves and reward prediction error, prolong lifespan. Our study highlights the significance of energy regulation of memory pathways and dopaminergic control for adaptive learning and survival. It might also benefit engineering applications of reinforcement learning under resource constraints.
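A minimal sketch of this setup, assuming a toy one-stimulus world: the hazard shape, the energy costs, the memory dynamics, and the policy threshold are all illustrative assumptions, not the study's fitted values:

```python
import math
import random

def simulate_fly(policy, steps=200, seed=0):
    """Toy fly that learns to avoid a noxious stimulus. Survival is governed
    by a hazard that grows as energy reserves shrink. `policy(energy, rpe)`
    returns 'LTM' or 'ARM'. Returns the number of steps survived."""
    rng = random.Random(seed)
    energy, memory = 1.0, 0.0
    for t in range(steps):
        energy = min(1.0, energy + 0.02)            # foraging income
        avoided = rng.random() < memory             # memory-driven avoidance
        reward = 0.0 if avoided else -1.0           # shock when not avoided
        rpe = reward + (1.0 - memory)               # crude prediction error
        if policy(energy, rpe) == 'LTM':            # costly but persistent
            memory = min(1.0, memory + 0.1)
            energy -= 0.05
        else:                                       # ARM: cheap but decaying
            memory = min(1.0, memory + 0.1) * 0.95
            energy -= 0.01
        if not avoided:
            energy -= 0.05                          # the shock drains reserves
        hazard = 0.02 * math.exp(-4.0 * max(energy, 0.0))
        if rng.random() < hazard:                   # death check
            return t + 1
    return steps

# Example policies: a switching strategy vs. always paying for LTM.
switching = lambda e, rpe: 'LTM' if e > 0.5 and abs(rpe) > 0.2 else 'ARM'
ltm_only = lambda e, rpe: 'LTM'
```

Comparing lifespans of `switching` and `ltm_only` over many seeds reproduces the qualitative point: gating the expensive pathway by energy reserves and prediction error can pay off.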
Learning-dependent gating of hippocampal inputs by frontal interneurons

The hippocampus is a brain region that is essential for the initial encoding of episodic memories. However, the consolidation of these memories is thought to occur in the neocortex, under guidance of the hippocampus, over the course of days and weeks. Communication between the hippocampus and the neocortex during hippocampal sharp wave-ripple oscillations is believed to be critical for this memory consolidation process. Yet, the synaptic and circuit basis of this communication between brain areas is largely unclear. To address this problem, we perform in vivo whole-cell patch-clamp recordings in the frontal neocortex and local field potential recordings in CA1 of head-fixed mice exposed to a virtual-reality environment. In mice trained in a goal-directed spatial task, we observe a depolarization in frontal principal neurons during hippocampal ripple oscillations. Both this ripple-associated depolarization and goal-directed task performance can be disrupted by chemogenetic inactivation of somatostatin-positive (SOM+) interneurons. In untrained mice, a ripple-associated depolarization is not observed, but it emerges when frontal parvalbumin-positive (PV+) interneurons are inactivated. These results support a model where SOM+ interneurons inhibit PV+ interneurons during hippocampal activity, thereby acting as a disinhibitory gate for hippocampal inputs to neocortical principal neurons during learning.
Cortical beta oscillations map to shared brain networks modulated by dopamine

Brain rhythms can facilitate neural communication for the maintenance of brain function. Beta rhythms (13–35 Hz) have been proposed to serve multiple domains of human ability, including motor control, cognition, memory and emotion, but the overarching organisational principles remain unknown. To uncover the circuit architecture of beta oscillations, we leverage normative brain data, analysing over 30 hours of invasive brain signals from 1772 cortical channels in epilepsy patients, to demonstrate that beta is the most distributed cortical brain rhythm. Next, we identify a shared brain network linking beta-dominant areas with deeper brain structures, such as the basal ganglia, by mapping parametrised oscillatory peaks to whole-brain functional and structural MRI connectomes. Finally, we show that these networks share significant overlap with dopamine uptake as indicated by positron emission tomography. Our study suggests that beta oscillations emerge in cortico-subcortical brain networks that are modulated by dopamine. It provides the foundation for a unifying circuit-based conceptualisation of the functional role of beta activity beyond the motor domain and may inspire an extended investigation of beta activity as a feedback signal for closed-loop neurotherapies for dopaminergic disorders.
Dissecting neural correlates of theory of mind and executive functions in behavioral variant frontotemporal dementia

Behavioral variant frontotemporal dementia (bvFTD) is characterized by profound and early deficits in social cognition (SC) and executive functions (EF). To date, it remains unclear whether deficits in these cognitive domains are based on the degeneration of distinct brain regions. In 103 patients with a diagnosis of bvFTD (possible/probable/definite: N = 40/58/5) from the frontotemporal lobar degeneration (FTLD) consortium Germany cohort (age 62.5±9.4 years, gender 38 female/65 male), we applied multimodal structural imaging, i.e. voxel-based morphometry, cortical thickness (CTH) and networks of structural covariance via source-based morphometry. We cross-sectionally investigated associations with performance on a modified Reading the Mind in the Eyes Test (RMET; reflective of theory of mind, ToM) and five different tests reflective of EF (i.e. Hamasch-Five-Point Test, semantic and phonemic fluency, Trail Making Test, Stroop interference). Finally, we investigated the conjunction of RMET correlates with functional networks commonly associated with SC/ToM and with EF, extracted meta-analytically from the Neurosynth database. RMET performance was mainly associated with gray matter volume (GMV) and CTH within temporal and insular cortical regions and less so within the prefrontal cortex (PFC), whereas EF performance was mainly associated with prefrontal regions (GMV and CTH). The overlap of RMET and EF associations was primarily located within the insula, adjacent subcortical structures (i.e. the putamen) and the dorsolateral PFC (dlPFC). These patterns were more pronounced after adjustment for the respective other cognitive domain. Corroborative results were obtained in analyses of structural covariance networks.
The overlap of RMET correlates with meta-analytically extracted functional networks commonly associated with SC, ToM and EF was again primarily located within temporal and insular regions and the dlPFC. In addition, at the meta-analytical level, strong associations were found for temporal cortical RMET correlates with SC and ToM in particular. These data indicate a temporo-frontal dissociation of bvFTD-related disturbances of ToM and EF, with atrophy of the anterior temporal lobe being critically involved in ToM deficits. The consistent overlap within the insular cortex may be attributable to the multimodal and integrative role of this region in socioemotional and cognitive processing.
Reconciliation of weak pairwise spike–train correlations and highly coherent local field potentials across space

Multi-electrode arrays covering several square millimeters of neural tissue provide simultaneous access to population signals such as extracellular potentials and the spiking activity of one hundred or more individual neurons. The interpretation of the recorded data calls for multiscale computational models with corresponding spatial dimensions and signal predictions. Multi-layer spiking neuron network models of local cortical circuits covering about 1 mm² have been developed, integrating experimentally obtained neuron-type-specific connectivity data and reproducing features of observed in-vivo spiking statistics. Local field potentials can be computed from the simulated spiking activity. Here we extend a local network and local field potential model to an area of 4×4 mm², preserving the neuron density and introducing distance-dependent connection probabilities and conduction delays. We find that the upscaling procedure preserves the overall spiking statistics of the original model and reproduces asynchronous irregular spiking across populations and weak pairwise spike–train correlations, in agreement with experimental recordings from sensory cortex. Also compatible with experimental observations, the correlation of local field potential signals is strong and decays over a distance of several hundred micrometers. Enhanced spatial coherence in the low-gamma band around 50 Hz may explain the recent report of an apparent band-pass filter effect in the spatial reach of the local field potential.
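The distance-dependent connectivity and delays introduced by the upscaling can be sketched as follows; the Gaussian profile and all parameter names and values (`p0`, `sigma`, `d0`, `v`) are illustrative assumptions, not the published model's:

```python
import math

def connection_probability(r, p0=0.1, sigma=0.3):
    """Distance-dependent connection probability with a Gaussian profile.
    `r` and `sigma` are in mm; `p0` is the probability at zero distance."""
    return p0 * math.exp(-r ** 2 / (2.0 * sigma ** 2))

def conduction_delay(r, d0=0.5, v=0.3):
    """Distance-dependent conduction delay in ms: a fixed offset `d0` plus
    distance `r` (mm) divided by conduction speed `v` (mm/ms)."""
    return d0 + r / v
```

In an upscaled network build, each candidate pair would be connected with probability `connection_probability(r)` and, if connected, assigned delay `conduction_delay(r)`.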
Evolving choice hysteresis in reinforcement learning: comparing the adaptive value of positivity bias and gradual perseveration

The tendency to repeat past choices more often than expected from the history of outcomes has been observed repeatedly in reinforcement learning experiments. It can be explained by at least two computational processes: asymmetric update and (gradual) choice perseveration. A recent meta-analysis showed that both mechanisms are detectable in human reinforcement learning. However, while their descriptive value seems well established, they have not been compared with regard to their possible adaptive value. In this study, we address this gap by simulating reinforcement learning agents in a variety of environments with a new variant of an evolutionary algorithm. Our results show that positivity bias (in the form of asymmetric update) is evolutionarily stable in many situations, whereas the emergence of gradual perseveration is less systematic and robust. Overall, our results illustrate that biases can be adaptive and selected by evolution in an environment-specific manner.
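The two candidate processes can be written as one-line update rules; the function names and learning-rate values are illustrative assumptions, not the parameterization evolved in the study:

```python
def asymmetric_update(q, reward, alpha_pos=0.3, alpha_neg=0.1):
    """Positivity bias: positive prediction errors are learned from faster
    than negative ones (alpha_pos > alpha_neg)."""
    delta = reward - q
    return q + (alpha_pos if delta > 0 else alpha_neg) * delta

def perseveration_update(trace, chosen, alpha_c=0.2):
    """Gradual perseveration: a decaying choice trace is nudged toward the
    chosen option; added (scaled) to option values at decision time, it
    biases the agent toward repeating past choices."""
    return [c + alpha_c * ((1.0 if i == chosen else 0.0) - c)
            for i, c in enumerate(trace)]
```

Both rules produce choice hysteresis, but through different routes: asymmetric update inflates the value of chosen-and-rewarded options, while the choice trace rewards repetition regardless of outcome.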