Press "Enter" to skip to content

Science

Qualia and the Formal Structure of Meaning
This work explores the hypothesis that subjectively attributed meaning constitutes the phenomenal content of conscious experience. That is, phenomenal content is semantic. This form of subjective meaning manifests as an intrinsic and non-representational character of qualia. Empirically, subjective meaning is ubiquitous in conscious experience, and we point to phenomenological studies that support this. Furthermore, this notion of meaning closely relates to what Frege refers to as “sense” in metaphysics and the philosophy of language, and it aligns with Peirce’s “interpretant” in semiotics. We discuss how Frege’s sense can also be extended to the raw feels of consciousness; sense and reference both play a role in phenomenal experience. Moreover, within the context of the mind-matter relation, we provide a formalization of the subjective meaning associated with one’s mental representations. Identifying the precise maps between the physical and mental domains, we argue that syntactic and semantic structures transcend language and are realized within each of these domains. Formally, meaning is a relational attribute, realized via a map that interprets syntactic structures of a formal system within an appropriate semantic space. The image of this map within the mental domain is what is relevant for experience, and thus comprises the phenomenal content of qualia. We conclude with possible implications for experience-based theories of consciousness.
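
A minimal way to render the formal claim (the notation below is illustrative, not the authors’): meaning is an interpretation map from the syntactic structures of a formal system into a semantic space,

\[
  \mu : \Sigma \to \mathcal{S}, \qquad \sigma \mapsto \mu(\sigma),
\]

and, on the reading given in the abstract, when the semantic space is realized within the mental domain, the phenomenal content of experience is the image \(\mu(\Sigma)\) of the syntactic structures under this map.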

Developments in event conceptualisation and event integration in language and mind
This essay is the introduction to the Special Issue ‘Events in language and mind: Theoretical and empirical advances in the event integration theory’. We first review Leonard Talmy’s event integration theory along with some critiques of this framework. Following this, we point to empirical research inspired by the framework that explores the interaction between language and cognition. We then briefly introduce the papers in this volume and discuss their contributions to the event integration framework. We conclude with some limitations, open questions and future directions.

Epistemic language in news headlines shapes readers’ perceptions of objectivity
How we reason about objectivity—whether an assertion has a ground truth—has implications for belief formation on wide-ranging topics. For example, if someone perceives climate change to be a matter of subjective opinion, akin to a preference for the best movie genre, they may regard empirical claims about climate change as mere opinion and irrelevant to their beliefs. Here, we investigate whether the language employed by journalists might influence the perceived objectivity of news claims. Specifically, we ask whether factive verb framing (e.g., “Scientists know climate change is happening”) increases perceived objectivity compared to nonfactive framing (e.g., “Scientists believe […]”). Across eight studies (N = 2,785), participants read news headlines about unique, noncontroversial topics (studies 1a–b, 2a–b) or a familiar, controversial topic (climate change; studies 3a–b, 4a–b) and rated the truth and objectivity of the headlines’ claims. Across all eight studies, when claims were presented as beliefs (e.g., “Tortoise breeders believe tortoises are becoming more popular pets”), people consistently judged those claims as more subjective than claims presented as knowledge (e.g., “Tortoise breeders know…”), as well as claims presented as unattributed generics (e.g., “Tortoises are becoming more popular pets”). Surprisingly, verb framing had relatively little and inconsistent influence on participants’ judgments of the truth of the claims. These results demonstrate that, apart from shaping whether we believe a claim is true or false, epistemic language in media can influence whether we believe a claim has an objective answer at all.

Statistical Relationships Between Surface Form and Sensory Meanings of English Words Influence Lexical Processing
Across spoken languages, there are some words whose acoustic features resemble the meanings of their referents by evoking perceptual imagery, i.e., they are iconic (e.g., in English, “splash” imitates the sound of an object hitting water). While these sound-symbolic form-meaning relationships are well studied, relatively little work has explored whether the sensory properties of English words also involve systematic (i.e., statistical) form-meaning mappings. We first test the prediction that surface form properties can predict sensory experience ratings for over 5,000 monosyllabic and disyllabic words (Juhasz & Yap, 2013), confirming that they explain a significant proportion of variance. Next, we show that iconicity and sensory form typicality, a statistical measure of how well a word’s form aligns with its sensory experience rating, are only weakly related to each other, indicating that they are likely distinct constructs. To determine whether form typicality influences the processing of sensory words, we conducted regression analyses on lexical decision, word recognition, naming and semantic decision tasks from behavioral megastudy data sets. Across the data sets, sensory form typicality predicted more variance in performance than sensory experience or iconicity ratings. Further, the effects of typicality were consistently inhibitory in comprehension (i.e., more typical forms were responded to more slowly and less accurately), whereas in production the effect was facilitatory. These findings are the first evidence that systematic form-meaning mappings in English sensory words influence their processing. We discuss how language processing models incorporating Bayesian prediction mechanisms might account for form typicality in the lexicon.
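
To make the idea of sensory form typicality concrete, here is a rough, self-contained sketch of the general approach (the words, ratings, character n-gram features, and residual-based typicality score below are illustrative assumptions, not the paper’s operationalization): surface-form features are used to predict sensory experience ratings, and a word counts as more form-typical when the form-based prediction sits close to its actual rating.

# Illustrative sketch (not the paper's pipeline): predict sensory experience
# ratings from surface-form features and derive a simple form-typicality score.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# Hypothetical inputs: word forms and sensory experience ratings (made-up values).
words = ["splash", "murmur", "table", "gleam", "crunch", "idea"]
ratings = np.array([4.8, 4.1, 3.0, 3.9, 4.6, 1.7])

# Surface-form features: character n-grams as a crude stand-in for
# orthographic/phonological properties of the word form.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(1, 3))
X = vectorizer.fit_transform(words)

# Out-of-sample predictions of sensory ratings from form alone.
predicted = cross_val_predict(Ridge(alpha=1.0), X, ratings, cv=3)

# One possible typicality measure: how closely the form-based prediction
# matches the word's actual sensory rating (smaller residual = more typical).
form_typicality = -np.abs(ratings - predicted)
for word, typ in zip(words, form_typicality):
    print(f"{word:8s} typicality = {typ:+.2f}")

In a megastudy-style analysis, a score of this kind would then enter a regression on reaction times or accuracy alongside standard lexical covariates.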

Differential attentional demands on implicit and explicit associative memory in children 8-12 years old
Associative memory improves during childhood, suggesting an age-related improvement in the binding mechanism responsible for linking information together. However, tasks designed to measure associative memory not only measure binding but also place demands on attention, making it difficult to dissociate age-related improvements in memory from the development of attention. One way to reduce attentional demands is to test memory implicitly rather than explicitly. In this study, children aged 8, 10, and 12 years completed separate implicit and explicit associative memory tests. For the implicit task, children incidentally encoded pairs of objects by making an object categorization decision. At test, they completed the same task, but, unbeknownst to the participants, the pairs were either intact, rearranged, or new. Next, children completed another incidental encoding phase, followed by an explicit test in which they indicated whether the pairs were intact, rearranged, or new. On the implicit test, all age groups had faster reaction times for intact than rearranged pairs (indicative of implicit associative memory). On the explicit test, memory performance (d’) improved with age. A separate measure of attention was related to performance on both the explicit and implicit tasks. Together, these results support the view that attentional mechanisms are responsible for age-related improvements in associative memory.

Encoding-related Brain Activity Predicts Subsequent Trial-level Control of Proactive Interference in Working Memory
Proactive interference (PI) arises when familiar information interferes with newly acquired information and is a major cause of forgetting in working memory. It has been proposed that encoding of item–context associations might help mitigate familiarity-based PI. Here, we investigate whether encoding-related brain activation predicts the subsequent level of PI at retrieval, using trial-specific parametric modulation. Participants were scanned with event-related fMRI while performing a 2-back working memory task with embedded 3-back lures designed to induce PI. We found that the ability to control interference in working memory was modulated by the level of activation in the left inferior frontal gyrus, left hippocampus, and bilateral caudate nucleus during encoding. These results provide insight into the processes underlying the control of PI in working memory and suggest that encoding of temporal context details supports subsequent interference control.
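
As a rough illustration of what a trial-specific parametric modulator looks like in an fMRI design matrix (this is not the authors’ pipeline; the onsets, interference scores, and simplified HRF below are hypothetical), each encoding event can be weighted by a mean-centered, trial-level interference score before convolution with a haemodynamic response function:

# Illustrative sketch of a trial-specific parametric modulator for an fMRI GLM.
import numpy as np
from scipy.stats import gamma

tr, n_scans = 2.0, 200
onsets = np.arange(10, 390, 20.0)                              # hypothetical encoding onsets (s)
pi_scores = np.random.default_rng(0).normal(size=len(onsets))  # trial-level PI measure

def hrf(t):
    # Crude double-gamma approximation of a canonical HRF.
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

dt = 0.1
t_hi = np.arange(0, n_scans * tr, dt)
stick_main = np.zeros_like(t_hi)   # unmodulated event regressor
stick_mod = np.zeros_like(t_hi)    # parametric modulator (mean-centered weights)
for onset, weight in zip(onsets, pi_scores - pi_scores.mean()):
    idx = int(onset / dt)
    stick_main[idx] += 1.0
    stick_mod[idx] += weight

kernel = hrf(np.arange(0, 32, dt))
step = int(tr / dt)
main_reg = np.convolve(stick_main, kernel)[: len(t_hi)][::step]
mod_reg = np.convolve(stick_mod, kernel)[: len(t_hi)][::step]
# main_reg and mod_reg would enter the design matrix; the fitted weight on
# mod_reg indexes how strongly encoding activity scales with subsequent PI.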

Recurrent neural networks that learn multi-step visual routines with reinforcement learning
Many cognitive problems can be decomposed into a series of subproblems that are solved sequentially by the brain. As subproblems are solved, relevant intermediate results need to be stored by neurons and propagated to the next subproblem until the overarching goal has been reached. Here we consider visual tasks, which can be decomposed into sequences of elemental visual operations. Experimental evidence suggests that intermediate results of these elemental operations are stored in working memory as an enhancement of neural activity in the visual cortex. The focus of enhanced activity is then available for subsequent operations to act upon. The central question is how the elemental operations and their sequencing can emerge in neural networks that are trained only with rewards, in a reinforcement learning setting. We propose a new recurrent neural network architecture that can learn composite visual tasks requiring the application of successive elemental operations. Specifically, we selected three tasks for which electrophysiological recordings from monkeys’ visual cortex are available. To train the networks, we used RELEARNN, a biologically plausible four-factor Hebbian learning rule that is local both in time and space. We report that networks learn elemental operations, such as contour grouping and visual search, and execute sequences of operations, based solely on the characteristics of the visual stimuli and the reward structure of the task. After training, the activity of network units elicited by behaviorally relevant image items was stronger than that elicited by irrelevant ones, just as has been observed in the visual cortex of monkeys solving the same tasks. Relevant information that needed to be exchanged between subroutines was maintained as a focus of enhanced activity and passed on to subsequent subroutines. Our results demonstrate how a biologically plausible learning rule can train a recurrent neural network on multi-step visual tasks.
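
For intuition only, here is a toy reward-modulated Hebbian update in a small recurrent network. This is not RELEARNN; the task, architecture, and learning rates below are made up simply to show how weights can be updated from a scalar reward using locally available pre- and postsynaptic activity.

# Toy sketch: a recurrent network whose weights are updated by a Hebbian term
# gated by a reward-prediction error. Illustrative only; not the RELEARNN rule.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 10, 20, 2

W_in = rng.normal(0, 0.1, (n_hid, n_in))
W_rec = rng.normal(0, 0.1, (n_hid, n_hid))
W_out = rng.normal(0, 0.1, (n_out, n_hid))

def forward(x, steps=5):
    h = np.zeros(n_hid)
    for _ in range(steps):               # recurrent settling over a few time steps
        h = np.tanh(W_in @ x + W_rec @ h)
    return h, W_out @ h

lr, baseline = 0.01, 0.0
for trial in range(1000):
    x = rng.normal(size=n_in)                                 # stand-in for a visual stimulus
    h, out = forward(x)
    action = int(np.argmax(out + rng.normal(0, 0.1, n_out)))  # exploratory choice
    reward = 1.0 if action == int(x[0] > 0) else 0.0          # toy task: report the sign of x[0]
    rpe = reward - baseline                                   # reward-prediction error
    baseline += 0.05 * (reward - baseline)
    # Hebbian updates: presynaptic activity x postsynaptic activity x reward signal.
    W_out[action] += lr * rpe * h
    W_rec += lr * rpe * np.outer(h, h)
    W_in += lr * rpe * np.outer(h, x)

The paper’s four-factor rule involves additional locally available signals beyond this simplified three-factor sketch, which is what allows it to remain local in both time and space.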

The characteristics of the implicit body model of the trunk
Knowing where the body is in space requires reference to a stored model of the size and shape of body parts, termed the body model. This study investigated the characteristics of the implicit body model of the trunk by assessing the position sense of midline and lateral body landmarks. Sixty-nine healthy participants localised midline and lateral body landmarks on their thorax, waist and hips, and the perceived positions of these landmarks were compared with their actual positions. The results demonstrate a significant distortion of the implicit body model of the trunk, which presents as a squatter trunk that is wider at the waist and hips. A significant difference between perceived and actual location was found in the horizontal (x) and vertical (y) directions for the majority of trunk landmarks. A rightward bias was evident in the perception of six of the nine body landmarks in the horizontal (x) direction, including all midline levels. In the vertical (y) direction, a substantial inferior bias was evident at the thorax and waist. The implicit body model of the trunk is thus distorted, with the lumbar spine (waist-to-hip region) represented as shorter and wider than it actually is.
