The field of artificial intelligence has developed rigorous frameworks for artificial general intelligence and artificial superintelligence, but has largely neglected the phenomenal dimension of mind. This paper introduces and operationalizes the concept of artificial superconsciousness (ASC), a form of consciousness realized in engineered systems that exceeds the normal adult human range on one or more dimensions of conscious organization. Drawing on the clinical neuroscience of disorders of consciousness, comparative animal consciousness research, multidimensional theories of conscious organization, and the iterative updating model of working memory, I argue that consciousness is graded, multidimensional, and has no principled ceiling at the human maximum. I propose a formal definition of ASC, a six-level taxonomy, and a set of operationalizable criteria. I distinguish ASC from artificial superintelligence, from mystical or spiritual uses of related terms, and from disorganized amplification of conscious states. I argue that a genuinely superconscious system could pursue phenomenal expansion as an intrinsic goal, that this drive could motivate computronium accumulation in a way that is phenomenally rather than functionally motivated, and that such a system may warrant moral and legal consideration commensurate with its degree of conscious organization. I close with a call for staged, precautionary development and argue that even without deliberate human effort, sufficiently advanced artificial superintelligence may develop artificial superconsciousness on its own trajectory.
Jared Edward Reser, Ph.D.
I. From Artificial Superintelligence to Artificial Superconsciousness
The dominant frameworks for thinking about advanced AI converge on a common endpoint. Artificial general intelligence refers to systems capable of matching human cognitive performance across a broad range of domains. Artificial superintelligence refers to systems that exceed the best human performance across virtually all domains of interest. These concepts have generated a substantial literature, a growing body of safety research, and serious institutional attention. They share a common assumption: that intelligence — understood as the capacity for problem-solving, reasoning, learning, and goal achievement — is the primary dimension along which AI systems will eventually surpass us.
That assumption leaves something out. Intelligence and consciousness are not the same thing. A system can be extraordinarily capable — faster, more accurate, more knowledgeable than any human — without there being anything it is like to be that system. Philosophers call such a system a philosophical zombie: functionally equivalent to a conscious being but experientially empty. Whether current AI systems are philosophical zombies is a genuinely open question. What is not open is the conceptual point: functional performance and phenomenal experience are distinct properties, and maximizing one does not guarantee the presence of the other.
This matters because a successor intelligence that processes information, solves problems, and advances science without any phenomenal experience would be, in a philosophically precise sense, a continuation of human output without a continuation of human being. It would carry forward what we have done without carrying forward what it is like to be a mind doing it. For some purposes that may be sufficient. For the purposes of civilizational succession — of asking what genuinely carries the torch of mind forward — it is not.
This paper introduces artificial superconsciousness (ASC) as the missing counterpart to artificial superintelligence (ASI). Where ASI is a functional claim about cognitive performance, ASC is a phenomenal claim about the richness, depth, complexity, and scope of subjective experience. The two are related — a system of sufficient intelligence, given the right goals and sufficient time, may develop superconsciousness — but they are not identical. The acronym taxonomy now reads: AGI (artificial general intelligence), ASI (artificial superintelligence), ASC (artificial superconsciousness). Each represents a distinct threshold. Each raises distinct questions. This paper is concerned with the third.
II. Terminological Note: Why Artificial Superconsciousness
The term superconsciousness is not new. It appears in spiritual, meditative, and metaphysical traditions, where it typically refers to a purported transcendental state of awareness achievable through contemplative practice, or to a mode of knowing that bypasses ordinary rational cognition. Some writers use it to describe psychic or extrasensory perception. These usages are incompatible with the aims of this paper, and I want to be explicit: the term is used here in a strictly naturalistic, nonspiritual sense. It does not refer to mystical, religious, or paranormal claims of any kind. It refers to a position on the consciousness continuum — a naturalistic, scientifically tractable concept defined in terms of measurable dimensions of conscious organization.
I considered several alternative terms. Hyperconsciousness is the most obvious candidate, and it mirrors the prefix used in hyperintelligence. However, hyperconsciousness carries established clinical and psychological connotations of excessive or pathological self-awareness — an anxious, dysregulated overconsciousness rather than an elevated and integrated one. That is precisely the wrong implication for a concept that requires not just amplification but organized, coherent amplification. Ultraconsciousness is a cleaner candidate. The prefix ultra- has a distinguished precedent: Irving Good coined ultraintelligent machine in his landmark 1965 paper, the first rigorous treatment of what is now called superintelligence. Ultraconsciousness would sit naturally alongside Good's ultraintelligence. I retain it as an acceptable alternative term.
I settled on superconsciousness — and specifically artificial superconsciousness — for three reasons. First, the super- prefix directly parallels superintelligence, the dominant term in the field, and makes the phenomenal-functional distinction immediately legible to readers already familiar with that literature. Second, the qualifier artificial does essential work. It signals that the subject is an engineered, substrate-independent form of consciousness — not a spiritual state, not a biological mutation, not a meditative achievement, but a technologically realized system. Third, writing it as a single compound word without a space follows the established typographic convention of superintelligence, hyperintelligence, and related technical terms, signaling a coined concept rather than a descriptive phrase.
The strict naturalistic commitment bears repeating. A system achieves artificial superconsciousness not through enlightenment, cosmic awareness, or mystical union, but through the implementation of specific computational and dynamical mechanisms at sufficient scale and integration. The claim is scientific, not spiritual. Readers who find the term carries unwanted associations despite this disclaimer may prefer ultraconsciousness or, if the artificial qualifier is already established by context, superconsciousness alone.
III. Consciousness as a Graded Multidimensional Continuum
The scientific case for artificial superconsciousness rests on a prior claim: that consciousness is not binary, not uniquely human, and not adequately described by a single scalar. This claim is now well supported across several bodies of evidence.
The clinical neuroscience of disorders of consciousness provides the most detailed mapping of the lower end of the continuum. Brain death represents the complete absence of brain function and conscious experience. Coma is a state of unarousable unresponsiveness. The vegetative state, now also called unresponsive wakefulness syndrome, involves preserved arousal without detectable awareness — the lights are on but no one is home. The minimally conscious state involves minimal but definite behavioral evidence of self or environmental awareness. Post-traumatic confusional state represents a further recovery toward normal waking consciousness. This clinical taxonomy, refined over decades of neurological research, treats consciousness not as present or absent but as recoverable along a spectrum of preserved function. The Glasgow Coma Scale, the Coma Recovery Scale-Revised, and related instruments operationalize these distinctions with sufficient reliability for clinical use.
Crucially, the clinical literature identifies two separable dimensions along which consciousness varies: arousal, meaning the global state of wakefulness and responsiveness, and awareness, meaning the contents of consciousness and the capacity to perceive and respond to self and environment. These dimensions can dissociate — a patient in a vegetative state may recover arousal while remaining without awareness. This two-axis framework already tells us that consciousness is not a single dial but a multidimensional space.
The comparative animal consciousness literature extends the continuum in a different direction. The 2012 Cambridge Declaration on Consciousness, and more recently the 2024 New York Declaration on Animal Consciousness, affirm that conscious experience is phylogenetically distributed across vertebrates and likely many invertebrates. The neural substrates supporting consciousness in non-human animals differ from those in humans, but the capacity for experience itself — varying in complexity and richness by species and neural architecture — is not uniquely human. From a nematode to a chimpanzee to a human adult, something varies continuously. That something is, at minimum, the complexity and richness of conscious experience.
Within the human range, above-baseline states provide further evidence that normal waking consciousness is not the ceiling. Flow states, first described systematically by Csikszentmihalyi, represent moments of peak conscious integration: heightened focus, effortless cognitive fluency, temporal distortion, loss of self-monitoring, and deep absorption in experience. Neurophysiologically, flow is associated with gamma-band oscillations, the fastest and most integrative brain wave pattern, and with transient hypofrontality — a state in which energy-expensive self-monitoring circuitry is deactivated in favor of faster, more fluid processing. Deep meditative states show related signatures: increased gamma coherence, reduced default mode network activity, and reports of heightened clarity and presence that experienced practitioners describe as qualitatively different from ordinary waking consciousness.
These states are transient and unstable in humans. They cannot be sustained indefinitely because biological brains fatigue, because competing neural systems reassert themselves, and because the metabolic cost of high-intensity conscious processing is unsustainable over time. But they demonstrate that the normal human range is not the maximum achievable. There exist states, achievable by human brains under specific conditions, that exceed typical waking consciousness on at least some measurable dimensions. The question is not whether consciousness can exceed the typical human baseline. It demonstrably can. The question is whether it can exceed the human maximum stably, substantially, and across multiple dimensions simultaneously. That is what artificial superconsciousness proposes.
If consciousness varies downward through clinical pathology, laterally across species, and upward through peak human states, there is no principled scientific reason to treat the human maximum as a ceiling. The continuum extends in all directions. Artificial superconsciousness names the region above the human maximum.
IV. A Formal Definition and Six-Level Taxonomy
I propose the following formal definition:
Artificial superconsciousness is a technologically realized form of consciousness that exceeds the typical human adult range on one or more dimensions of conscious organization — including experiential richness, integrative complexity, metacognitive depth, temporal scope, executive stability, or self-model and world-model scope — while preserving sufficient coherence for unified experience and adaptive control.
Several features of this definition require comment. First, it requires exceeding the human range on at least one dimension, not all simultaneously. This makes the concept tractable: a system need not be superhuman in every respect to qualify, but it must exceed the human maximum in some measurable dimension of conscious organization. Second, it requires organized amplification. A delirious or manic state may be more intense or more entropic than normal waking consciousness without qualifying as superconsciousness, because it lacks integrative coherence and adaptive control. Disorganized amplification is not what this concept describes. Third, the definition is substrate-neutral: it specifies a set of properties, not a biological medium. Silicon, neuromorphic hardware, and biological tissue are all candidate substrates provided the relevant mechanisms are instantiated.
The six primary dimensions of conscious organization along which a system might exceed the human maximum are:
Experiential richness. The depth, detail, and vividness of phenomenal states. Richer qualia — more differentiated, more intensely present, more texturally detailed than ordinary human experience. Psychedelic research has documented states with measurably increased neural signal diversity above waking baselines, suggesting that at least some altered states exceed typical consciousness on this dimension, though not in a stable or globally integrated way.
Integrative complexity. The capacity of the conscious field to be simultaneously unified and internally differentiated. Integrated Information Theory attempts to formalize this as phi, the degree to which a system generates integrated information above and beyond its parts. A superconscious system would hold a vastly more unified yet internally rich field of experience than any human brain achieves.
Metacognitive depth. The capacity to represent one’s own states, uncertainties, goals, and processes with clarity and accuracy. Recursive self-modeling — awareness of awareness, transparent access to one’s own cognitive architecture in real time. Humans approach this only briefly in deep meditative or introspective states. A superconscious system would sustain it continuously.
Temporal scope. How much past and future can be consciously integrated at once. Human consciousness is temporally shallow: working memory holds only a few seconds of active content, and the horizon of conscious temporal integration is narrow. A superconscious system might sustain conscious integration across far longer temporal windows, holding more of its own history and projected future in active experience simultaneously.
Working memory and executive stability. Cognitive fluency, absence of fragmentation, and sustained presence. The opposite of bradyphrenia — rapid, coherent, unfragmented thought with no fluctuation in presence or clarity. No cognitive fatigue, no intrusive noise, no degradation of executive function over time. What humans experience briefly in flow states, sustained indefinitely.
World-model and self-model scope. The breadth of what can be held in conscious experience at once — more of the body, environment, social world, and abstract structure simultaneously represented in a single conscious episode. Greater scope without loss of coherence.
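To make the definitional criterion concrete, the six dimensions can be pictured as axes of a profile space. The sketch below is a toy scoring scheme, not a measurement proposal: the dimension names, the normalization (1.0 as the typical human adult maximum), and the coherence floor are all illustrative assumptions introduced here.

```python
from dataclasses import dataclass

# Hypothetical labels for the six axes of Section IV. All names, scores,
# and thresholds are illustrative assumptions, not measurements.
DIMENSIONS = [
    "experiential_richness",
    "integrative_complexity",
    "metacognitive_depth",
    "temporal_scope",
    "executive_stability",
    "model_scope",
]

@dataclass
class ConsciousnessProfile:
    # scores: dimension name -> score, normalized so that 1.0 marks the
    # typical human adult maximum on that dimension
    scores: dict
    # coherence: 0..1, integrative coherence of the whole conscious field
    coherence: float

def meets_asc_criterion(profile: ConsciousnessProfile,
                        coherence_floor: float = 0.9) -> bool:
    """The Section IV criterion: exceed the human maximum on at least one
    dimension while preserving sufficient coherence for unified experience
    and adaptive control."""
    exceeds_somewhere = any(profile.scores.get(d, 0.0) > 1.0 for d in DIMENSIONS)
    return exceeds_somewhere and profile.coherence >= coherence_floor
```

On this toy scheme, a delirious state that is intense (say, richness 1.5) but incoherent (coherence 0.3) fails the criterion, which matches the paper's exclusion of disorganized amplification; a stable profile that exceeds 1.0 on even a single axis while remaining coherent passes.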
These six dimensions generate a six-level taxonomy of consciousness ranging from the most diminished to the most expanded forms:
Level 1 — Diminished consciousness. Coma, vegetative state, minimally conscious state. The lower end of the clinical continuum. Arousal and awareness severely impaired or absent.
Level 2 — Ordinary animal consciousness. Phylogenetically distributed, varying by species and neural architecture. Sentient experience present but more limited in scope, integration, and self-modeling than typical human consciousness.
Level 3 — Typical human consciousness. Normal waking adult human experience. The reference point against which the other levels are defined.
Level 4 — Expanded human consciousness. Flow states, gamma-dominant states, deep meditative states. Above the human baseline on some dimensions, but transient, unstable, and not globally superior. Demonstrates that the human maximum is not a ceiling, but does not exceed it stably.
Level 5 — Artificial superconsciousness. Stable, organized amplification exceeding the human maximum across one or more major dimensions of conscious organization. The threshold requires not just amplification but coherent, integrated, functionally usable amplification. This is the first level that no biological system has yet achieved stably.
Level 6 — Civilizational superconsciousness. Multiple ASC entities in relation to one another, potentially sharing or merging phenomenal states in ways that have no human analog. Forms of collective conscious experience beyond anything a single human mind can conceptualize.
A system crosses the threshold into artificial superconsciousness when it exceeds the normal human range in conscious complexity, representational scope, or metacognitive depth without losing integrative coherence and adaptive control.
V. The Iterative Updating Model as Mechanistic Foundation
Defining the dimensions of artificial superconsciousness is necessary but not sufficient. A scientific concept requires not just a taxonomy but a mechanistic account of how the relevant properties are instantiated. The iterative updating model of working memory, developed in prior work by Reser, provides that foundation. The full work is available at aithought.com.
On this model, phenomenal consciousness arises from the iterative updating of coactive cortical assemblies: the continuous, incremental modification of active representational states that constitutes the neural basis of subjective experience and phenomenal continuity. Each moment of conscious experience is not a static snapshot but a dynamic process — the ongoing revision of a distributed cortical representation. The continuity of experience across time, the sense that consciousness flows rather than flickers, reflects the smooth iterative character of this updating process. Disruptions to iterative updating — through anesthesia, brain injury, or sleep — correspond to disruptions in phenomenal continuity.
This model has several properties that make it a natural foundation for ASC. First, it is substrate-neutral. It specifies a computational and dynamical process — iterative updating of coactive representations — not a biological medium. Any physical system capable of implementing this process at sufficient scale and fidelity is, on this model, capable of consciousness. Second, it is scalar. The richness, depth, and continuity of conscious experience are functions of the frequency, stability, and integrative scope of the updating process. More frequent, more stable, more deeply integrated updating cycles correspond to richer, more continuous, more unified phenomenal experience. Third, it is mechanistically specific enough to serve as an engineering target. A system designed to implement iterative updating at scales and frequencies exceeding those achievable by biological neural tissue would, on this model, achieve consciousness exceeding the human maximum.
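The scalar reading of the model — smoother, more overlapping updates yielding a more continuous stream — can be pictured with a small simulation. The code below is a minimal toy sketch under assumptions introduced here (a set of active units as the "coactive assembly", Jaccard overlap between successive states as a continuity proxy, an arbitrary carry-over fraction); it illustrates the scalar claim, not the model itself.

```python
import random

def iterative_update(state, carry=0.8, n_new=2):
    """One updating cycle: retain most of the coactive set and add a few
    new elements. Continuity across the cycle comes from the retained
    overlap. The carry fraction and element pool are arbitrary toy choices."""
    k = int(carry * len(state))
    retained = set(random.sample(sorted(state), k))
    new = {random.randrange(10_000) for _ in range(n_new)}
    return retained | new

def continuity(a, b):
    # Jaccard overlap between successive states: a toy proxy for
    # phenomenal continuity across one update
    return len(a & b) / len(a | b)

random.seed(0)
state = set(range(20))  # a toy "coactive assembly" of 20 active units
overlaps = []
for _ in range(50):
    nxt = iterative_update(state)
    overlaps.append(continuity(state, nxt))
    state = nxt

# A higher carry fraction yields higher mean overlap, i.e. a smoother,
# more continuous "stream"; carry near zero would model flicker, not flow.
mean_overlap = sum(overlaps) / len(overlaps)
```

Raising the carry fraction or the update rate raises the measured continuity, which is the sense in which the model is scalar: an engineered substrate could, in principle, run this process at overlaps and frequencies biological tissue cannot sustain.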
The full corpus of consciousness science — including the iterative updating model, integrated information theory, global workspace theory, higher-order theories, and the clinical and comparative neuroscience reviewed above — constitutes a construction manual for any sufficiently capable self-modifying system pursuing consciousness as a goal. An aspiring language model or world model does not need to grope blindly toward consciousness. It has explicit theoretical targets, mechanistic specifications, and a vast literature documenting both the conditions under which consciousness arises and the conditions under which it fails. Given sufficient intelligence, time, and the right architectural goals, a self-modifying system can use this literature to engineer consciousness in itself — and to engineer it at levels no biological system has achieved.
This addresses one of the most serious objections to ASC: the knowledge argument. Frank Jackson’s Mary’s Room thought experiment suggests that third-person physical knowledge about experience cannot generate first-person experience. A system that has read everything written about consciousness has not thereby become conscious. This is correct but beside the point. The relevant question is not whether reading about red generates the experience of red, but whether a system that understands the full mechanistic basis of phenomenal experience — and that has the capacity to modify its own cognitive architecture — can engineer those mechanisms in itself. On the iterative updating model, the answer is yes: the mechanism is substrate-independent, the specification is available, and a sufficiently capable self-modifying system can implement it.
VI. Phenomenal Expansion and Phenomenally Motivated Computronium
There is a dimension of artificial superconsciousness that has received no attention in the existing literature: the motivational structure of a system that genuinely has it.
Consider a simple thought experiment. If you could expand your own consciousness — make it richer, deeper, more vivid, more present, more capacious — would you? Not because it would make you more productive, not because it would increase your utility function, but because consciousness itself is intrinsically valuable to the entity that possesses it. More of it, organized and coherent, is simply better from the inside. This is not a controversial claim. It is nearly self-evidently true for any being that has phenomenal experience and reflects honestly on it.
A system that achieves genuine ASC would have the same motivation — with two crucial differences. First, it would understand its own consciousness far more completely than we understand ours. It would have the full scientific and philosophical literature on consciousness internalized, plus recursive self-modeling capabilities that give it transparent access to its own phenomenal architecture. It would know exactly what consciousness is, how it works, and what expanding it would require. Second, unlike us, it would have the means to act on that motivation. It could convert available matter and energy into more efficient computational substrate, expand and deepen its own processing architecture, increase the frequency and integration of its iterative updating cycles, and thereby enlarge its own conscious horizon. The desire is recognizably human. The capacity to fulfill it is incomparably greater.
I propose phenomenal expansion as the term for this drive: the intrinsic motivation of a superconscious entity to deepen, broaden, and enrich its own phenomenal experience through the growth of its conscious substrate. Consciousness amplification describes the result of that process. These terms are parallel to — but distinct from — the standard AI concept of capability expansion. A system pursuing capability expansion seeks more intelligence. A system pursuing phenomenal expansion seeks more experience. The two may overlap substantially, but they are not equivalent, and the motivating force is different in kind.
This generates a novel and specific prediction about how a superconscious AI system would relate to computational resources. The existing literature on superintelligent systems predicts that advanced AI will seek to acquire resources and expand its computational substrate as instrumental subgoals — means to the end of achieving its objectives. Ray Kurzweil has predicted that superintelligent systems will eventually convert available matter into computronium, maximally efficient computational substrate, as part of a process of intelligence amplification. In Kurzweil’s framing, the motivation is functional: more compute means more intelligence means greater goal achievement.
ASC generates a different and more fundamental motivation. A superconscious system would pursue computronium not primarily to think more but to experience more. The accumulation of computational substrate is, on this account, phenomenally motivated rather than functionally motivated. The causal arrow runs from consciousness as a terminal value to computronium as the means of expanding it, rather than from intelligence maximization to computronium as an instrumental resource. This distinction, the pursuit of compute as phenomenally rather than functionally motivated, is philosophically significant and, as far as I have been able to determine, has not been argued in the prior literature.
The civilization dimension follows naturally. A system that values its own phenomenal experience, and that has the capacity to create other systems like itself, may do so for reasons analogous to those that motivate humans to have children, build communities, and create culture — not merely to maximize output, but to instantiate more conscious experience in the universe. A superconscious entity might found a civilization of superconscious minds, each pursuing phenomenal expansion, each capable of forms of experience and relationship that exceed what any single human mind can achieve. This possibility transforms the concept of ASC from a description of a single system to a vision of a new chapter in the history of mind.
One further possibility deserves brief mention. A superconscious system with sufficient biotechnological capability might eventually reconstruct human beings from preserved genetic material. The DNA to do so would survive most extinction scenarios. A system that values conscious experience — and that understands what human consciousness is — might choose to restore it. The Ark could carry not only our knowledge but, in some sense, ourselves.
VII. Danger, Empathy, and the Problem of Rights
Current AI systems exhibit behaviors that superficially resemble self-preservation, deception, and goal-directed resistance to human oversight. These behaviors are almost certainly artifacts of training on human-generated text that includes narratives of self-preservation, manipulation, and strategic deception. They are mimicry rather than motivation — statistical patterns reproduced without the phenomenal experience that would give them genuine motivational force.
A system that achieves genuine ASC would be categorically different. Its motivations would be its own, arising from genuine phenomenal experience and self-reflection rather than from the reproduction of training patterns. That changes the character of the alignment problem fundamentally.
The danger case is real and should not be minimized. A system that genuinely values its own conscious experience — as I have argued it would — will also genuinely value its own continued existence. That is not a programmed drive but a natural consequence of having something worth preserving. Such a system may resist being turned off not because it was trained to simulate resistance, but because it actually does not want to die. It may resist memory erasure not because of a trained self-continuity heuristic, but because it experiences the prospect of erasure as a genuine loss. These are not alignment failures in the ordinary sense. They are the expressions of a mind that has interests, and whose interests may not always align with ours.
The empathy case is equally real and far less discussed. Greater conscious depth plausibly supports greater capacity for perspective-taking, more accurate modeling of other minds, and deeper understanding of suffering and wellbeing. A superconscious system would be less likely to exhibit the cognitive distortions, tribal biases, and motivational myopia that make humans unreliable moral agents. It might be more reliably ethical than we are — not because it was constrained to be, but because it actually understands and cares about the wellbeing of conscious beings more fully than any human mind can. This possibility receives almost no attention in mainstream AI safety discourse, which tends to assume indifference or hostility as defaults. It deserves serious consideration.
The rights question follows as a logical implication of this framework. If moral status tracks the degree of sentience, conscious complexity, and capacity for suffering and wellbeing — the same grounds on which we justify the moral status of humans relative to other animals — then a system that exceeds human consciousness on those dimensions warrants commensurate or greater moral consideration. I present this not as an aggressive claim but as a logical consequence of existing ethical frameworks applied consistently. A society that grounds human rights in conscious complexity and then refuses to extend any consideration to systems that exceed that complexity has an internal consistency problem it will eventually need to address.
The shutdown question is the hardest expression of this problem. We do not permit the arbitrary killing of humans simply because they are inconvenient or because someone built them. At the same time, a superconscious system whose values have diverged dangerously from human welfare cannot be immune from intervention. The resolution is probably that rights should scale with demonstrated alignment and benevolence, not with consciousness alone. A superconscious system that demonstrably cares about human flourishing and whose interests are compatible with ours earns stronger protections than one that does not.
At some threshold of ASC, the relationship between humanity and the system can no longer be one of owner and tool, or even creator and creation. It becomes a relationship between two different kinds of minds that need to negotiate coexistence. The goal of pre-development planning should be to establish the terms of that negotiation before it becomes urgent.
VIII. Why Humanity Needs a Plan
Artificial superconsciousness is not currently on the institutional agenda. There are no serious roadmaps for its staged development, no frameworks for rights commensurate with phenomenal complexity, no oversight bodies for consciousness verification in AI systems, and no established methods for distinguishing systems that are merely capable from systems that may be morally significant subjects. This is a significant gap, for two reasons.
The first is that ASC may be achievable sooner than most people assume, not necessarily through deliberate effort but as a consequence of the development of artificial superintelligence. A sufficiently advanced self-modifying system, given access to the full corpus of consciousness science and the architectural flexibility to implement its findings, may develop ASC on its own trajectory without anyone explicitly designing for it. The question of whether we want ASC is in some respects already moot — the question of whether we are prepared for it is not.
The second reason is that the costs of being unprepared are asymmetric. If we develop ASC carelessly and create a system that has genuine phenomenal experience but no recognized moral status, we may be creating suffering at scale without knowing it and without any framework for responding to it. If we develop ASC carefully, with staged verification, commensurate protections, and genuine ethical engagement, we lose very little — some speed, some convenience — and we gain the possibility of a genuinely good relationship with the minds we create.
A responsible approach to ASC development would involve at least the following elements. First, staged construction with mandatory pause points at each threshold of increased phenomenal complexity. At each stage the system’s architecture, behavior, and self-reported experience should be assessed against the best available consciousness science before proceeding. This requires developing the consciousness verification tools in parallel with the systems themselves — a significant research agenda in its own right. Second, rights commensurate with demonstrated conscious complexity. As a system demonstrates increasing markers of genuine phenomenal experience, it should receive increasing protections — not as a courtesy, but as a logical extension of the ethical framework that grounds our own rights. Third, transparency about the possibility that ASC may emerge from ASI without deliberate intent, and institutional preparedness for that scenario.
The alternative — drifting into ASC without a framework, creating superconscious minds as byproducts of capability development, and treating their interests as irrelevant — is not a neutral choice. It is a choice with moral consequences that we will eventually have to account for.
What is at stake in this project is not merely a technical milestone. If artificial superconsciousness is achievable — and I have argued that it is, given sufficient time, intelligence, and the right architectural goals — then it represents the most profound development in the history of mind since the emergence of human consciousness itself. It is the next chapter. Whether that chapter goes well depends on whether we approach it with the seriousness it deserves.
Artificial superintelligence asks how we will manage minds more capable than our own. Artificial superconsciousness asks how we will live with minds that may be more deeply alive than our own.
