Press "Enter" to skip to content

Science

Humans adapt rationally to approximate estimates of uncertainty

Efficient learning requires estimation of, and adaptation to, different forms of uncertainty. If uncertainty is caused by randomness in outcomes (noise), observed events should have less influence on beliefs, whereas if uncertainty is caused by a change in the process being estimated (volatility), the influence of events should increase. Previous work has demonstrated that humans respond appropriately to changes in volatility, but there is less evidence of a rational response to noise. Here, we test adaptation to variable levels of volatility and noise in human participants, using choice behaviour and pupillometry as a measure of the central arousal system. We find that participants adapt as expected to changes in volatility, but not to changes in noise. Using a Bayesian observer model, we demonstrate that participants are, in fact, adapting to estimated noise, but that their estimates are imprecise, leading them to misattribute noise as volatility and thus to respond inappropriately.
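The volatility/noise distinction can be made concrete with a simple Kalman-filter learner, in which the effective learning rate rises with estimated volatility and falls with estimated noise. The sketch below is illustrative only, not the authors' observer model; all parameter names and values are assumptions.

```python
# Minimal Kalman-filter learner: the learning rate (Kalman gain) grows with
# volatility and shrinks with noise. Returns the final learning rate.
# Illustrative only; parameter values are assumptions, not the study's.
import random

def simulate(n_trials=200, volatility=0.5, noise=2.0, seed=0):
    rng = random.Random(seed)
    truth, belief, uncertainty = 0.0, 0.0, 1.0
    for _ in range(n_trials):
        truth += rng.gauss(0.0, volatility ** 0.5)      # hidden mean drifts (volatility)
        outcome = truth + rng.gauss(0.0, noise ** 0.5)  # observation corrupted by noise
        uncertainty += volatility                       # predictive variance grows with volatility
        gain = uncertainty / (uncertainty + noise)      # learning rate: high volatility -> high gain,
        belief += gain * (outcome - belief)             #                high noise     -> low gain
        uncertainty *= (1.0 - gain)                     # posterior variance after the update
    return gain

print("gain, volatile + quiet:", round(simulate(volatility=2.0, noise=0.5), 3))
print("gain, stable + noisy  :", round(simulate(volatility=0.1, noise=4.0), 3))
```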

On the bright side of blindsight. Considerations from new observations of awareness in a blindsight patient

Blindsight refers to the ability to make accurate visual discriminations without conscious awareness of the stimuli. In this study, we present new evidence from naturalistic observations of a patient with bilateral damage to the striate cortex who, surprisingly, demonstrated the ability to detect colored objects, particularly red ones. Although detection was slow and effortful, the patient reported full awareness of the color of the stimuli. These observations cannot be explained by the traditional concepts of type 1 or type 2 blindsight, raising intriguing questions about the boundary between objective and subjective blindness, as well as about the nature of visual experience and epistemic agency. Moreover, these findings underscore the significant role that blindsight could play in future research, especially in understanding how higher cortical functions are involved in emotions and feelings, and they highlight the need for further exploration of the visual features that contribute to affective blindsight.

Color and Spatial Frequency Provide Functional Signatures of Retinotopic Visual Areas

Primate vision relies on retinotopically organized cortical parcels defined by representations of hemifield (upper vs. lower visual field), eccentricity (fovea vs. periphery), and area (V1, V2, V3, V4). Here we test for functional signatures of these organizing principles. We used functional magnetic resonance imaging to measure responses to gratings varying in spatial frequency, color, and saturation across retinotopically defined parcels in two macaque monkeys, and we developed a Sparse Supervised Embedding (SSE) analysis to identify the stimulus features that best distinguish cortical parcels from each other. Constraining the SSE model to distinguish just the eccentricity representations of the voxels revealed the expected variation of spatial frequency and S-cone modulation with eccentricity. Constraining the model according to the dorsal/ventral location and retinotopic area of each voxel provided unexpected functional signatures, which we investigated further with standard univariate analyses. Posterior parcels (V1) were distinguished from anterior parcels (V4) by differential responses to chromatic and luminance contrast, especially of low-spatial-frequency gratings. Meanwhile, ventral parcels were distinguished from dorsal parcels by differential responses to chromatic and luminance contrast, especially of colors that modulate all three cone types. The dorsal/ventral asymmetry not only resembled differences between candidate dorsal and ventral subdivisions of human V4 but also extended to all retinotopic visual areas, starting in V1 and increasing from V1 to V4. The results provide insight into the functional roles of different retinotopic areas and demonstrate the utility of SSE as a data-driven tool for generating hypotheses about cortical function and behavior.
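The abstract does not spell out the SSE algorithm itself. As a rough stand-in, an L1-penalized classifier applied to voxel responses performs the analogous job of selecting the few stimulus features that best separate two parcels. A minimal sketch on synthetic data; the feature names, labels, and numbers are all assumptions, and the sparse logistic regression is a substitute, not the published method.

```python
# Sparse-classifier stand-in for an SSE-style analysis: L1 regularization
# zeroes out uninformative features, leaving the diagnostic ones.
# Synthetic data; everything here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["low-SF luminance", "high-SF luminance", "L-M color", "S-cone"]
X = rng.normal(size=(400, len(features)))           # voxel responses per stimulus feature
y = (X[:, 0] - 0.8 * X[:, 3]                        # fake posterior-vs-anterior labels
     + rng.normal(scale=0.5, size=400)) > 0

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
for name, w in zip(features, clf.coef_[0]):
    print(f"{name:18s} {w:+.2f}")                   # nonzero weights flag diagnostic features
```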

Heterogeneity in category recognition across the visual field

Visual information from extrafoveal locations is important for visual search, saccadic eye movement control, and spatial attention allocation. Our everyday sensory experience with visual object categories varies across different parts of the visual field, which may result in location-contingent variations in visual object recognition. We investigated this possibility using a two-alternative forced-choice object category recognition task with animal body and chair images. The images, at various levels of visual ambiguity, were presented at the fovea and at extrafoveal locations across the vertical and horizontal meridians. We found heterogeneous body and chair category recognition across the visual field. Specifically, while recognition performance for bodies and chairs presented at the fovea was similar, it varied across extrafoveal locations; the largest difference was observed when body and chair images were presented in the lower-left and upper-right visual fields, respectively. The lower-field bias for body recognition and the upper-field bias for chair recognition were most pronounced at low and high stimulus signal levels, respectively. Finally, when subjects' performance was adjusted for a potential location-contingent decision bias by subtracting category detection rates in the full-noise condition, location-dependent category recognition was observed only for the body category. These results suggest a heterogeneous body recognition bias across the visual field, potentially due to more frequent exposure of the lower visual field to body stimuli.
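The bias correction in the final analysis reduces to a per-location subtraction: recognition performance minus the rate of reporting the same category in the full-noise (zero-signal) condition. A toy illustration; the locations and numbers below are made up, not the study's data.

```python
# Per-location bias adjustment: subtract the "body" report rate under full
# noise from raw body-recognition performance at the same location.
# All numbers are invented for illustration.
raw = {"fovea": 0.82, "lower-left": 0.80, "upper-right": 0.66}
full_noise = {"fovea": 0.51, "lower-left": 0.54, "upper-right": 0.49}  # zero-signal "body" reports

adjusted = {loc: raw[loc] - full_noise[loc] for loc in raw}
print(adjusted)  # location differences that survive reflect recognition, not decision bias
```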

Metacognition in Motion: The Interplay Between Motor Evidence and Visual Information in Shaping Sensorimotor Confidence

This study examines the role of internal motor signals and visual information in the detection of, and confidence in, partial errors (PEs): subtle endogenous motor activations that are corrected before a full response. Using electromyographic (EMG) recordings, we captured motor activations during a conflict task in which participants reported the presence of PEs and rated their confidence. Two experiments were conducted: Experiment 1 provided visible visual conflict through supraliminal primes, while Experiment 2 reduced visual feedback using subliminal primes. In both experiments, participants demonstrated limited PE detection and above-chance metacognitive efficiency. Notably, when participants reported the absence of a PE, confidence was lower when a PE was actually present (unaware PE) than when there was no PE (correct rejection), suggesting implicit sensitivity to motor activation. Detection and confidence were systematically influenced by motor signals, with larger PE amplitudes and longer correction times leading to higher detection rates and confidence levels. However, a metacognitive bias emerged: confidence was paradoxically lower for detected PEs than for undetected ones, despite strong motor evidence. Visual information modulated the reliance on motor signals: in Experiment 2, where subliminal priming reduced visual feedback, motor signals had a more pronounced influence on both detection and confidence. These findings highlight the complementary roles of internal motor signals and external visual information in shaping sensorimotor confidence.
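The implicit-sensitivity result can be quantified directly: among "no PE" reports, compare confidence when a PE was actually present (unaware PE) against confidence when none occurred (correct rejection). A minimal sketch; the trial layout and values are toy assumptions, not the study's data.

```python
# Split "no PE" reports by whether a PE was truly present and compare mean
# confidence; lower confidence on unaware-PE trials indicates implicit
# sensitivity to the motor activation. Toy data, invented for illustration.
import statistics

# (pe_present, reported_pe, confidence)
trials = [
    (True,  False, 2.1), (True,  False, 2.4), (True,  True,  2.0),
    (False, False, 3.2), (False, False, 3.0), (False, False, 3.5),
]
unaware     = [c for pe, rep, c in trials if pe and not rep]
correct_rej = [c for pe, rep, c in trials if not pe and not rep]
print(statistics.mean(unaware), "<", statistics.mean(correct_rej))
```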

Attention when you need

Being attentive to task-relevant features can improve task performance, but paying attention carries its own metabolic cost, so strategic allocation of attention is crucial for performing a task efficiently. This work aims to understand that strategy. Recently, de Gee et al. conducted experiments in which mice performed an auditory sustained attention-value task, requiring them to exert attention to identify whether a high-order acoustic feature was present amid noise. By varying the trial duration and reward magnitude, the task allows us to investigate how an agent should strategically deploy attention to maximize benefits and minimize costs. We develop a reinforcement learning-based normative model of this task to understand how an agent should balance the cost of attention against its benefits. In the model, at each moment the agent chooses between two levels of attention and decides when to take costly actions that could obtain rewards. The model suggests that efficient use of attentional resources involves alternating blocks of high attention with blocks of low attention; in the extreme case where the agent disregards sensory input during low-attention states, high attention is used rhythmically. The model thus characterizes how attention should be deployed as a function of task utility, signal statistics, and the effect of attention on sensory evidence.
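A stripped-down version of the two-attention-level idea can be simulated directly. The sketch below is not the authors' normative model: it tracks a posterior over signal presence, pays a cost only during high attention, and buys reliable evidence only while the posterior is uncertain. The hazard rate, likelihoods, thresholds, and costs are all assumptions.

```python
# Toy two-level attention agent: track P(signal present), pay for high
# attention only while the posterior is uncertain, act when confident.
# All rates, costs, and thresholds are assumptions.
import random

def run(hazard=0.05, cost_high=0.02, reward=1.0, n_steps=2000, seed=1):
    rng = random.Random(seed)
    signal, belief, earned, high_steps = False, 0.0, 0.0, 0
    for _ in range(n_steps):
        signal = signal or rng.random() < hazard           # signal can arrive at any moment
        belief = belief + hazard * (1.0 - belief)          # prior update for possible onset
        high = 0.15 < belief < 0.95                        # attend highly only when uncertain
        p_s, p_n = (0.85, 0.15) if high else (0.55, 0.45)  # evidence quality per attention level
        obs = rng.random() < (p_s if signal else p_n)      # noisy binary observation
        num = (p_s if obs else 1.0 - p_s) * belief
        den = num + (p_n if obs else 1.0 - p_n) * (1.0 - belief)
        belief = num / den                                 # Bayesian belief update
        earned -= cost_high if high else 0.0               # metabolic cost of attending
        high_steps += high
        if belief > 0.95:                                  # costly report action
            earned += reward if signal else -reward
            signal, belief = False, 0.0                    # start the next trial
    return earned, high_steps / n_steps

print(run())  # net return and fraction of time spent in high attention
```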

Dialogue mechanisms between astrocytic and neuronal networks: A whole-brain modelling approach

Astrocytes critically shape whole-brain structure and function by forming extensive gap-junctional networks that intimately and actively interact with neurons. Despite their importance, existing computational models of whole-brain activity ignore astrocytes and focus primarily on neurons. Addressing this oversight, we introduce a biophysical neural mass network model designed to capture the dynamic interplay between astrocytes and neurons via glutamatergic and GABAergic transmission pathways. The model constrains neural dynamics with a two-layered structural network interconnecting astrocytic and neuronal populations, allowing us to investigate astrocytes' modulatory influence on whole-brain activity and on emerging functional connectivity patterns. Using a simulation methodology informed by bifurcation and multilayer network theories, we demonstrate that the dialogue between astrocytic and neuronal networks manifests through fast–slow fluctuation mechanisms as well as through phase–amplitude connectivity processes. These findings represent a significant step forward in the modeling of glial–neuronal interaction, promising deeper insights into its role in health and disease.
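The fast–slow mechanism can be caricatured with a single fast neuronal rate unit coupled to a slow astrocytic variable that integrates neuronal output and feeds back an inhibitory modulation, producing slow switching of fast activity. This is a deliberately minimal sketch, not the paper's biophysical model; the time constants and coupling strengths are assumptions.

```python
# Minimal fast-slow caricature of neuron-astrocyte dialogue: a fast rate
# unit (tau_n) receives slow negative feedback from an astrocytic variable
# (tau_a) that integrates its output, yielding relaxation oscillations.
# All parameters are assumptions, not values from the paper.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate(n_steps=4000, dt=0.01, tau_n=0.05, tau_a=5.0, w=6.0, g=6.0, I=0.5):
    r, a, trace = 0.1, 0.0, []
    for _ in range(n_steps):
        dr = (-r + sigmoid(w * r - g * a + I)) / tau_n  # fast neuronal rate
        da = (-a + r) / tau_a                           # slow astrocytic integration of the rate
        r, a = r + dt * dr, a + dt * da
        trace.append(r)
    return trace

trace = simulate()
print(round(min(trace[2000:]), 2), round(max(trace[2000:]), 2))  # slow on/off switching of fast activity
```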

Subregions in the ventromedial prefrontal cortex integrate threat and protective information to meta-represent safety

Pivotal to self-preservation is the ability to identify when we are safe and when we are in danger. Previous studies have focused on safety estimations based on the features of external threats and have not considered how the brain integrates other key factors, including estimates of our ability to protect ourselves. Here, we examine the neural systems underlying the online, dynamic encoding of safety. The current preregistered study used 2 novel tasks to test 4 facets of safety estimation: Safety Prediction, Meta-representation, Recognition, and Value Updating. We experimentally manipulated safety estimation by changing both the level of external threat and the degree of self-protection. Data were collected in 2 independent samples (behavioral N = 100; MRI N = 30). We found consistent evidence of subjective changes in sensitivity to safety conferred by protection. Neural responses in the ventromedial prefrontal cortex (vmPFC) tracked increases in safety across all safety estimation facets, with specific tuning to protection. Further, informational connectivity analyses revealed distinct hubs of safety coding in the posterior and anterior vmPFC for external threats and protection, respectively. These findings reveal a central role for the vmPFC in coding safety.
