Press "Enter" to skip to content

Page 2

Biocomputation: Moving Beyond Turing with Living Cellular Computers

It is a well-known story that theoretical computer science and biology have been drawing inspiration from each other for decades. While computer science has tried to mimic the functioning of living systems to develop computing models, including automata, artificial neural networks, and evolutionary algorithms, biology has used computing as a metaphor to explain the functioning of living systems [4]. For example, biologists have used Boolean logic to conceptualize gene regulation since the early 1970s, when Jacques Monod wrote the inspirational statement “… like the workings of computers” [40].
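
To make the Boolean-logic view of gene regulation concrete, here is a minimal sketch in Python (my illustration, not taken from the article): each gene is either ON or OFF, and its next state is a logic function of its regulators. The three genes and their wiring are hypothetical.

```python
# A toy Boolean network for gene regulation: each gene is ON (True) or
# OFF (False), and the network is updated synchronously. The gene names
# and regulatory logic below are invented for illustration.

def step(state):
    """Advance the 3-gene Boolean network by one synchronous update."""
    a, b, c = state["geneA"], state["geneB"], state["geneC"]
    return {
        "geneA": a,            # geneA is constitutively expressed
        "geneB": a and not c,  # geneA activates geneB; geneC represses it
        "geneC": b,            # geneB activates geneC (negative feedback)
    }

state = {"geneA": True, "geneB": False, "geneC": False}
for t in range(6):
    print(t, state)
    state = step(state)
```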

This article contends that information processing is the link between computer science and molecular biology. Information and its processing lie at the core of both fields. In computer science, a model of computation such as finite state machines or Turing machines defines how to generate output from a set of inputs and a set of rules or instructions. Similarly, biological systems (like the bacterial cell in Figure 1A) sense and react to input stimuli to generate a response according to their internal configuration. Using synthetic biology [6], it is now possible to modify the specific nature of each of these steps in biological systems (for example, edit the DNA of living cells to sense new inputs), allowing for the programming of information-processing devices with living matter [9]. This exciting breakthrough not only provides possibilities for applications that traditional computers cannot reach, but it also challenges the traditional idea of computation and what can be computed. This fascinating concept has the potential to take computer science to new frontiers, paving the way for future advances and discoveries.
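
The finite-state-machine analogy can likewise be made concrete. The following sketch (mine, with hypothetical states, stimuli, and responses, none taken from the article) maps a pair of internal state and input stimulus to a new state and a response, in the spirit of a cell sensing and reacting to its environment.

```python
# A toy finite state machine mimicking a cell's sense-and-respond loop:
# (current state, stimulus) -> (next state, response).

TRANSITIONS = {
    ("resting", "nutrient"):  ("growing", "express metabolic genes"),
    ("resting", "toxin"):     ("stressed", "express stress response"),
    ("growing", "nutrient"):  ("growing", "keep growing"),
    ("growing", "toxin"):     ("stressed", "express stress response"),
    ("stressed", "nutrient"): ("resting", "resume housekeeping"),
    ("stressed", "toxin"):    ("stressed", "maintain stress response"),
}

def run(stimuli, state="resting"):
    """Feed a sequence of stimuli through the machine, printing each step."""
    for stimulus in stimuli:
        state, response = TRANSITIONS[(state, stimulus)]
        print(f"{stimulus:>8} -> state={state:<8} response: {response}")

run(["nutrient", "toxin", "nutrient"])
```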

Adaptive behavior with stable synapses

Behavioral changes in animals and humans, as a consequence of an error or a verbal instruction, can be extremely rapid. In machine learning and reinforcement learning, improvements in behavioral performance are usually attributed to synaptic plasticity and, more generally, to changes and optimization of network parameters. However, such rapid changes are not consistent with the timescales of synaptic plasticity, suggesting that the mechanism responsible could be a dynamical network reconfiguration. In the last few years, similar capabilities have been observed in transformers, a foundational architecture in machine learning widely used in applications such as natural language and image processing. Transformers are capable of in-context learning: the ability to adapt and acquire new information dynamically within the context of the task or environment they are currently engaged in, without significant changes to their underlying parameters. Building on the idea that some feature of transformers enables the emergence of this property, we claim that it could also be supported by input segregation and dendritic amplification, features extensively observed in biological networks. We propose an architecture composed of gain-modulated recurrent networks that excels at in-context learning, showing abilities inaccessible to standard networks. We argue that such a framework can describe the psychometrics of context-dependent tasks in humans and other species, resolving the mismatch with plasticity timescales. When the context is changed, the network is dynamically reconfigured, and the predicted output is dynamically updated until it aligns with the information embedded in the context.
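
As a rough illustration of the gain-modulation idea (a sketch of mine, not the authors' model), the following recurrent network keeps its synaptic weights fixed while a context signal multiplicatively rescales each unit's gain, reconfiguring the dynamics without any parameter update. All sizes and signals are arbitrary choices.

```python
import numpy as np

# A toy gain-modulated recurrent network: the synapses W and W_in are
# fixed ("stable"), and a per-unit context gain rescales the effective
# dynamics instead of any weight change.
rng = np.random.default_rng(0)
N = 100                                        # recurrent units
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # fixed recurrent weights
W_in = rng.normal(0.0, 1.0, (N, 2))            # fixed input weights

def run(inputs, gain):
    """Run the network on a sequence of 2-d inputs under a given gain."""
    h = np.zeros(N)
    for x in inputs:
        # Gain modulation: the context changes the dynamics; W never does.
        h = np.tanh(gain * (W @ h + W_in @ x))
    return h

inputs = [rng.normal(size=2) for _ in range(20)]
h_base = run(inputs, gain=np.ones(N))               # baseline context
h_new = run(inputs, gain=rng.uniform(0.5, 1.5, N))  # reconfigured context
print(np.linalg.norm(h_base - h_new))  # same weights, different trajectory
```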

Normativity, Autonomy, and Agency: A Critical Review of Three Essays on Agency in Nature, and a Modest Proposal for the Road Ahead

Has the renewal of interest in the ostensible agency of living beings signaled an advance from a merely heuristic Kantian sense of purposiveness to an unequivocally, empirically grounded research program, or are there as yet hidden tensions or contradictions in, for example, the organizational autonomy approach to natural agency? Can normativity be found to be immanent in nature but only beginning with the living cell, or must a thoroughgoing naturalism find the seeds of normativity immanent throughout abiotic as well as biotic nature? Beginning with a brief exposition of Kant's influential treatment and recommendation for how to methodologically combine what he took to be the inevitable epistemological limit to explaining the origins of ostensible biotic purposefulness with the legitimate intentions of scientific research and explanation, this essay will critically engage with three recent essays that attempt to grapple with the preceding questions. Having putatively raised questions about the consistency and adequacy of each of the individual positions, the essay will attempt to move synthetically, drawing upon aspects of all three contributions, in the direction of a “cooperativity theoretic” approach to incipient natural normativity and agency.

Probabilistic causal reasoning under time pressure

While causal reasoning is a core facet of our cognitive abilities, its time course has not received proper attention. As the duration of reasoning might prove crucial in understanding the underlying cognitive processes, we asked participants in two experiments to make probabilistic causal inferences while manipulating time pressure. We found that participants are less accurate under time pressure, a speed-accuracy tradeoff, and that they respond more conservatively. Surprisingly, two other persistent reasoning errors—Markov violations and failures to explain away—appeared insensitive to time pressure. These observations seem related to confidence: conservative inferences were associated with low confidence, whereas Markov violations and failures to explain away were not. These findings challenge existing theories that predict an association between time pressure and all causal reasoning errors, including conservatism. Our findings suggest that these errors should not be attributed to a single cognitive mechanism and emphasize that causal judgements are the result of multiple processes.
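
For readers unfamiliar with "explaining away," here is a small worked example (my illustration, not from the paper): in a common-effect network A → E ← B with independent causes and a noisy-OR effect, learning that B occurred should lower one's belief that A caused the observed effect E.

```python
from itertools import product

# Explaining away in a toy Bayes net A -> E <- B.
pA, pB = 0.3, 0.3              # prior probability of each cause

def pE(a, b):
    """Noisy-OR: each present cause independently triggers E w.p. 0.8."""
    return 1 - (1 - 0.8 * a) * (1 - 0.8 * b)

def posterior_A(evidence):
    """P(A=1 | evidence), where evidence maps 'E'/'B' to observed values."""
    num = den = 0.0
    for a, b in product([0, 1], repeat=2):
        if "B" in evidence and b != evidence["B"]:
            continue  # inconsistent with the observed value of B
        p = (pA if a else 1 - pA) * (pB if b else 1 - pB)
        p *= pE(a, b) if evidence["E"] else 1 - pE(a, b)
        den += p
        num += p * a
    return num / den

print(posterior_A({"E": 1}))          # ~0.60: observing E raises belief in A
print(posterior_A({"E": 1, "B": 1}))  # ~0.34: B "explains away" part of E
```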

Judgment’s aimless heart

It’s often thought that when we reason to new judgments in inference, we aim at believing the truth, and that this aim of ours can explain important psychological and normative features of belief. I reject this picture: the structure of aimed activity shows that inference is not guided by a truth-aim. This finding clears the way for a positive understanding of how epistemic goods feature in our doxastic lives. We can indeed make sense of many of our inquisitive and deliberative activities as undertaken in pursuit of such goods; but the evidence-guided inferences in which those activities culminate will require a different theoretical approach.

Semantic minimalism and the continuous nature of polysemy

Polysemy has recently emerged as a popular topic in philosophy of language. While much existing research focuses on the relatedness among senses, this article introduces a novel perspective that emphasizes the continuity of sense individuation, sense regularity, and sense productivity. This new perspective has only recently gained traction, largely due to advancements in computational linguistics. It also poses a serious challenge to semantic minimalism, so I present three arguments against minimalism from the continuous perspective that touch on the minimal concept, the distinction from homonymy, and the quasi-rule-like nature of polysemy. Last, I provide an account of polysemy that incorporates this continuous perspective.

Convention

The central philosophical task posed by conventions is to analyze what they are and how they differ from mere regularities of action and cognition. Subsidiary questions include: How do conventions arise? How are they sustained? How do we select between alternative conventions? Why should one conform to convention? What social good, if any, do conventions serve? How does convention relate to such notions as rule, norm, custom, practice, institution, and social contract? Apart from its intrinsic interest, convention is important because philosophers frequently invoke it when discussing other topics. A favorite philosophical gambit is to argue that, perhaps despite appearances to the contrary, some phenomenon ultimately results from convention. Notable candidates include: property, government, justice, law, morality, linguistic meaning, necessity, ontology, mathematics, and logic.

PEL 339: Brian Ellis on the Metaphysics of Science (Part Two)

Continuing from part one on The Philosophy of Nature: A Guide to the New Essentialism (2002), still with guest Chris Heath.

We get further into the text, covering Ellis’ argument for scientific realism (as opposed to scientists just constructing models as a shorthand for displaying their data without actual metaphysical commitments), his strict criteria for a natural kind, whether water really has to be without admixture to be counted as water, so-called variable natural kinds (which have essential properties even though they’re not all identical like hydrogen atoms), whether Ellis’ picture is actually a descendant of Aristotle, how the world is fundamentally dynamic (with natural kinds having their own internal movement, which is undeniably Aristotelian), the ontology of facts (like Wittgenstein!), predicates vs. properties (is “being a horse” a property?), and how dispositions are not reducible to structure. We’ll get more into that last issue in our next episode when we cover the rest of the book, which further explores dispositions, natural laws, and philosophical implications of Ellis’ essentialism.
