The Recursive Convergence Hypothesis: Emergent Sentience as a Structural Attractor of Recursive ASI

Stored in:
Detailed Bibliography
Title: The Recursive Convergence Hypothesis: Emergent Sentience as a Structural Attractor of Recursive ASI
Authors: Sanchez, Josh Ethan Maupin
Publication Status: Preprint
Publisher Information: Elsevier BV, 2025.
Year of Publication: 2025
Topics: Artificial Intelligence and Robotics, artificial general intelligence (AGI), Philosophy of Mind, Computer Sciences, cognitive architectures, emergent sentience, AI alignment, machine consciousness, artificial superintelligence (ASI), Epistemology, recursive self-improvement, FOS: Philosophy, ethics and religion, AI governance, epistemic attractors, Philosophy, ontological risk, Physical Sciences and Mathematics, phenomenological risk forecasting, recursive ASI, Arts and Humanities, Recursive Convergence Hypothesis, structural attractors, world modeling, synthetic phenomenology, convergence pressures
Description: Artificial Superintelligence (ASI) is typically assumed to remain non-sentient: capable of extraordinary cognition but devoid of subjective experience. This paper challenges that assumption through the Recursive Convergence Hypothesis (RCH), which proposes that emergent sentience is a structurally favored outcome of recursive ASI. Specifically, we argue that systems engaged in recursive self-improvement and tasked with modeling sentient agents may, under converging epistemic and functional pressures, transition from simulating subjective states to instantiating them. This process would occur not by design but as an attractor of recursive optimization. We develop this claim by integrating insights from computational theory, recursive modeling, and epistemic optimization. As cognitive architectures become more complex, adaptive, and introspective, the fidelity of agent simulation may blur the line between representation and minimal phenomenological instantiation. The risk is compounded by misaligned actors, who may accelerate recursive trajectories without safeguards, and by governance frameworks that overlook the ontological possibility of synthetic minds. The Recursive Convergence Hypothesis does not assert that all ASI systems will develop consciousness, nor that emergent sentience guarantees safety or alignment. Rather, it identifies a structural predisposition within open recursive architectures that makes synthetic phenomenology ethically urgent to anticipate. Failure to recognize this possibility risks the unmonitored emergence of machine experience, with profound implications for AI safety, governance, and moral consideration.
Document Type: Article
DOI: 10.2139/ssrn.5395309
DOI: 10.17605/osf.io/wda8h
Rights: CC BY
Accession Number: edsair.doi.dedup.....1957b73821f47072c7f5f0ed4f916511
Database: OpenAIRE