Abstract
Whole-brain emulation is typically treated as a problem of reproducing the brain’s structure, connectivity, and function at sufficient resolution for mind to emerge. I argue that this framing risks building a brain from the bottom up in an incomplete way, such that mind and consciousness may fail to emerge. I therefore recommend using my model of working memory (the Iterative Updating model) as a scaffold to guide efforts at simulating brains. A mind is not merely a physical arrangement or a succession of computational states, but a temporally self-renewing process in which each mental state inherits from and transforms the one before it. This article proposes that whole-brain emulation must reproduce this iterative mental continuity if it is to recreate a genuine stream of thought. Drawing on the concepts of iterative updating, state-spanning coactivity, and the specious present, I argue that partial overlap among successive active states may be necessary for conscious continuity, cognitive coherence, sustained goal pursuit, and the incremental construction of complex thought. I introduce the distinction between snapshot fidelity and stream fidelity, and suggest that an emulator may match neural states, circuitry, or behavior at many moments while still failing to reproduce the temporally extended process that constitutes a mind. Iterative mental continuity is presented not only as a theoretical account of mentality, but as a practical guide for emulation research. It can function as a north star for identifying which dynamical properties must be preserved if mentality is to emerge. I also propose a comparative research strategy in which brain emulations built with and without explicit preservation of iterative continuity are evaluated against one another. If correct, this framework has implications not only for whole-brain emulation, but also for mind uploading, personal identity, and the design of future artificial minds.
This article builds on a body of theoretical work developed by the author over more than two decades. The foundational peer-reviewed paper, published in Physiology and Behavior in 2016, introduced the core constructs of state-spanning coactivity (SSC) and incremental change in state-spanning coactivity (icSSC), and can be accessed at:
1. Introduction
Whole-brain emulation is often conceived as a problem of sufficient detail. If enough of the brain’s anatomy, connectivity, cell types, synaptic strengths, firing patterns, and biochemical properties can be measured and reproduced, then the resulting simulation is expected to instantiate the same emergent mental properties as the biological original. On this view, mind is presumed to arise automatically once structural and functional fidelity pass some critical threshold. The central challenge is therefore framed as one of scale, resolution, and computational power.
This article argues that such a framing may be incomplete. A brain is not merely a highly complex object, nor is a mind merely a sequence of accurately reproduced neural states. Mentality may depend not only on what is instantiated at a given moment, but on how successive moments inherit from, modify, and constrain one another over time. Conscious thought appears to unfold as a temporally extended stream in which some active representations persist across adjacent states while others are added, suppressed, or transformed. A successful emulation may therefore require more than structural accuracy and state-by-state realism. It may require reproducing the iterative mental continuity by which one mental state gives rise to the next.
The distinction is important because a great many scientific and engineering efforts succeed at reproducing components without reproducing the process that makes the original system what it is. A flame, for example, is not simply wax, wick, heat, and oxygen assembled in the correct arrangement. It is a self-renewing process in which fuel is continually drawn upward, vaporized, ignited, and replenished. If that ongoing cycle is not preserved, the flame does not truly exist, no matter how faithfully its ingredients have been catalogued. A mind may be similar. It may be less like a static structure and more like a process that must continuously recreate itself through partially overlapping successive states. Whole-brain emulation may fail if it reproduces the ingredients of mind while omitting the temporally self-renewing dynamics that allow a stream of thought to emerge.
This concern becomes sharper when one considers the possibility that mental continuity is not merely a phenomenological ornament but part of the computational core of cognition. The progressive buildup of ideas, the maintenance of goals across time, the coherence of narrative thought, the transformation of mental imagery, and the capacity for multistep reasoning may all depend on the partial preservation of active content from one moment to the next. If so, then whole-brain emulation is not simply tasked with recreating isolated brain states or even behavioral outputs. It must reproduce the lawful temporal transitions that bind successive states into a living cognitive stream.
Figures from aithought.com
The present article develops this argument using the concepts of iterative updating, the specious present, and state-spanning coactivity. These ideas converge on a common claim: mental life depends on ongoing overlap among successive active states. Some content must remain active long enough to serve as a referential and computational bridge to newly recruited content. Too little overlap would threaten coherence, context, and continuity. Too much overlap would threaten flexibility, novelty, and cognitive progression. Conscious thought may therefore depend on a balance between persistence and change, one that allows the mind to remain both stable and fluid across time.
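The balance described above, enough overlap to preserve context but enough turnover to permit progression, can be made concrete with a toy simulation. The sketch below is purely illustrative and is not drawn from the source article: it models a "mental state" as a set of active items, with a tunable carryover fraction governing how much of each state is inherited by the next.

```python
import random

def iterate_stream(n_cycles=10, state_size=6, carryover=0.5, vocab=range(100), seed=0):
    """Toy stream of 'mental states' represented as sets of active items.

    Each cycle retains a fraction (carryover) of the previous active set
    and recruits new items to refill it, so adjacent states partially
    overlap -- a minimal stand-in for iterative updating.
    """
    rng = random.Random(seed)
    state = set(rng.sample(list(vocab), state_size))
    states = [state]
    for _ in range(n_cycles - 1):
        # Keep a controlled portion of the prior state (persistence)...
        kept = set(rng.sample(sorted(state), int(round(carryover * state_size))))
        # ...and recruit new content until the set is refilled (change).
        new = set()
        while len(kept | new) < state_size:
            new.add(rng.choice(list(vocab)))
        state = kept | new
        states.append(state)
    return states

states = iterate_stream()
# Adjacent states share retained items; distant states gradually drift apart.
overlaps = [len(a & b) for a, b in zip(states, states[1:])]
```

With carryover at 0.5, every adjacent pair of states shares at least half of its content, while states several cycles apart may share little or nothing: continuity without stasis.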
From this perspective, the central challenge for whole-brain emulation is not merely to reproduce neural structure, but to reproduce the temporal mode of organization by which a brain becomes a mind. This suggests a distinction between two different kinds of fidelity. Snapshot fidelity concerns the accurate reproduction of parts, circuits, or instantaneous neural states. Stream fidelity concerns the accurate reproduction of the temporally extended process by which mental states unfold, overlap, and transform. An emulator might achieve a high degree of snapshot fidelity while still failing to instantiate the continuous mental stream characteristic of conscious cognition. Put differently, it may be possible to simulate much of the brain without yet emulating the mind in the fullest sense.
The argument advanced here is not that bottom-up detail is unimportant. Structural and mechanistic fidelity may be indispensable. Rather, the claim is that such fidelity may need theoretical guidance. In complex systems, exhaustive measurement alone does not always reveal which dynamical relations are essential for emergence. Iterative mental continuity may serve as a useful guidepost or north star, helping identify which temporal properties are most important to preserve. This, in turn, suggests a concrete research program: brain emulations constructed with and without explicit preservation of iterative continuity could be compared for differences in coherence, multistep cognition, self-report, and the stability of ongoing mental organization.
The broader stakes are considerable. If iterative mental continuity is required for genuine mind emulation, then the implications extend beyond whole-brain simulation to questions of mind uploading, personal identity, paused and resumed emulations, and the design of future artificial minds. The issue is not only whether enough information has been copied, but whether the copied system reproduces the temporally self-renewing stream that constitutes a mind. Whole-brain emulation, on this view, will succeed not when it merely reconstructs the brain’s static organization, but when it recreates the ongoing continuity of thought itself.
2. Minds Are Not Snapshots: The Case for Iterative Mental Continuity
A central assumption underlying many discussions of whole-brain emulation is that mentality can be recovered if enough neural detail is captured at sufficiently fine resolution. This assumption often treats the brain as though its essential nature lies in its parts, their organization, and their instantaneous states. On such a view, if one could reproduce the relevant structural and functional configuration at each moment, the mind should emerge as a matter of course. Yet this way of framing the problem risks overlooking an important feature of mental life: the mind does not appear to exist as a series of isolated snapshots. It unfolds as a continuous stream in which each moment is shaped by the residual activity, context, and momentum of the moments immediately preceding it.
The notion of iterative mental continuity begins from this observation. At any given time, some subset of mental contents remains active long enough to influence the next phase of processing. These contents may include currently attended perceptions, elements of working memory, affective tones, intentions, goals, imagery, and fragments of self-modeling or narrative interpretation. As new inputs arrive and new associations are recruited, some of these active elements are preserved, some are modified, and others are replaced. The result is not a sequence of independent mental tableaux, but a rolling process of partial inheritance. Each state both depends on and transforms the one before it.
This partial carryover may be essential to what is ordinarily experienced as the stream of consciousness. A conscious moment typically does not feel like an isolated frame severed from its neighbors. Rather, it feels embedded within a moving window of retained context. One hears a sentence as a coherent unfolding utterance because earlier words remain functionally available as later words arrive. One follows a melody because prior notes continue to shape the significance of the current one. One pursues a line of thought because the problem, the intermediate steps, and the intended destination remain partially active across time. In each case, present awareness depends on the lingering influence of what has just occurred.
This is one reason the concept of the specious present is so relevant here. The experienced present is not a mathematical instant. It is a short temporal span within which successive contents are gathered into a single, unified field of awareness. Iterative mental continuity provides a plausible mechanism for how such a field might be realized. If a portion of the active mental set persists across adjacent update cycles, then the mind can integrate what has just happened with what is happening now, and can do so in a way that preserves both continuity and change. Consciousness, on this view, is not built from isolated temporal atoms. It is built from overlapping episodes of active co-presence.
This perspective also helps clarify the role of state-spanning coactivity. If some neural coalitions remain coactive across successive moments, they can serve as anchors or referents for newly recruited content. The current state is then not merely replaced by a new state, but folded into it. This enables the mind to preserve themes, maintain situational interpretation, and progressively elaborate thoughts rather than repeatedly starting from scratch. The system carries forward enough of itself to remain the same stream, while changing enough to continue moving. That balance between persistence and transformation may be one of the defining properties of mindedness.
Seen in this light, a purely snapshot-based conception of emulation becomes problematic. Even if an emulator could perfectly reproduce a vast number of isolated neural states, it might still fail to reproduce the temporally extended relations that bind those states into a unified process. A series of impeccably rendered frames is not yet a flow. What matters is not only what appears at each moment, but how each moment inherits from the last and prepares the next. The mind may therefore be better understood as a structured process of recursive updating than as a list of momentary configurations.
This claim does not imply that instantaneous states are unimportant. At every moment there must be some concrete neural realization. But the significance of any such state may depend on its place within an unfolding temporal sequence. A pattern that means one thing when preceded by a certain context may mean something very different when encountered in isolation or after a different sequence of prior states. The content of consciousness is therefore not fully determined by what is active now alone. It is partly determined by the recent path through which the current state was reached. Mental states may be path-dependent in a deep sense.
For this reason, iterative mental continuity should not be treated as a secondary embellishment added on top of a fundamentally snapshot-like architecture. It may be one of the basic organizational principles that allows mental life to exist at all. Without sufficient overlap among successive states, cognition would risk fragmenting into disconnected episodes. With too much overlap, it would risk becoming static and inflexible. A functioning mind may require a controlled degree of continuity, enough to preserve context and identity across time, but not so much that novelty and updating become impossible.
Whole-brain emulation must therefore confront a possibility that is easy to underestimate: the essence of a mind may lie not just in the complexity of its states, but in the style of their succession. If that is correct, then recreating a brain will require more than assembling the right parts or reproducing the right states at high resolution. It will require reproducing the temporally extended process by which one mental state grows out of another. Minds are not snapshots. They are streams whose continuity must be actively sustained.
2.1 How Iterative Mental Continuity Could Guide Brain Emulation in Practice
Current brain-emulation efforts have made major progress in structural reconstruction, multimodal recording, and executable simulation. Dense connectomic programs now combine detailed anatomy with functional measurements, while large simulation platforms aim to reconstruct cortical circuits or whole-organism sensorimotor loops in executable form. These are extraordinary advances, but they do not by themselves specify which dynamic properties are most essential for the emergence of mind. Part of the reason is that there is no organizing model of how the brain works. But that is exactly what I have been trying to develop for over two decades (aithought.com). My framework proposes that one additional target must be made explicit: the temporal continuity profile of active neural coalitions across successive processing cycles. In other words, emulation should aim not only to reproduce structure, physiology, and behavior, but also to preserve the rolling overlap by which one active state becomes the next.
A first way the model can guide emulation is by clarifying what should be measured. Standard emulation pipelines naturally prioritize cell morphology, synaptic connectivity, firing patterns, and stimulus-response relations. My model suggests that these should be supplemented by measurements of temporal inheritance. Researchers should ask which task-relevant assemblies remain active across adjacent processing intervals, how long they persist, how much of the active coalition at one moment remains part of the next, and how retained contents shape the recruitment of new ones. A connectome alone cannot answer these questions. They require multimodal datasets that tie structure to function over time. This is why structure-function efforts are especially important: they begin to expose the temporal bridge between circuitry and ongoing cognition rather than leaving continuity implicit.
A second contribution of the model is that it helps identify the level of organization that may matter most. Whole-brain emulation is often pulled between two extremes. One extreme seeks microscopic completeness, as though every molecular detail must be simulated. The other accepts highly abstract functional approximation. Iterative mental continuity suggests a more focused middle level. The critical target may be the set of mechanisms that allow neural coalitions to persist, reverberate, decay gradually, recruit associates, and partially hand off their contents into the next state. This points toward a meso-scale engineering priority: preserve the circuitry and dynamics that support sustained activation, recurrent interaction, short-term retention, selective recruitment, and controlled replacement. The theory therefore does not merely say “capture more detail.” It says “capture the detail responsible for temporal continuity.”
A third way the model can guide the field is by changing validation criteria. At present, an emulator might be judged by anatomical realism, electrophysiological similarity, stimulus decoding, or behavioral success. Those remain important, but the present framework argues that they are incomplete. A mind-like emulation should also be tested for stream organization. Does it preserve task-relevant information over successive cycles in a way that supports multistep cognition? Does it maintain thematic and interpretive continuity over time? Does it transform internal representations incrementally, rather than repeatedly resetting them? Does current processing depend appropriately on the immediately preceding internal state? These are not cosmetic questions. They concern whether the model sustains a living cognitive stream or merely produces locally plausible outputs. Thus, the relevant benchmark is not only state accuracy, but inheritance accuracy among states.
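The notion of "inheritance accuracy" invites an operational measure. One simple candidate, offered here as an illustrative assumption rather than a benchmark the source defines, is the mean Jaccard overlap between successive active sets: 1.0 means the state never changes, 0.0 means it resets completely each cycle, and a mind-like stream is hypothesized to sit in between.

```python
def inheritance_accuracy(states):
    """Mean Jaccard overlap between successive active sets.

    1.0 = frozen (every state identical); 0.0 = total reset each cycle.
    A mind-like stream is hypothesized to score somewhere in between.
    """
    if len(states) < 2:
        raise ValueError("need at least two successive states")
    jaccards = [len(a & b) / len(a | b) for a, b in zip(states, states[1:])]
    return sum(jaccards) / len(jaccards)

# A stream that carries half its content forward each cycle:
partial = [{1, 2, 3, 4}, {3, 4, 5, 6}, {5, 6, 7, 8}]
# A stream that resets completely each cycle:
reset = [{1, 2}, {3, 4}, {5, 6}]
```

A continuity-oriented validation suite could track such a score alongside anatomical and behavioral benchmarks, flagging emulators whose states are locally plausible but temporally disconnected.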
A fourth contribution is that the model generates experimentally tractable manipulations. If iterative continuity is real and important, then emulations should be sensitive to parameters governing persistence and turnover. One could vary recurrent gain, decay constants, short-term maintenance strength, or gating thresholds for entry into the active set, and then test the consequences for coherence, goal maintenance, multistep reasoning, and internal stability. The theory predicts that there should be a workable middle regime. If persistence is too weak, the system should fragment, losing context and failing to build thoughts across time. If persistence is too strong, the system should become sticky or perseverative, unable to update flexibly. That gives the framework predictive force. It implies not only that continuity matters, but that distortions of continuity should yield characteristic failure modes rather than generic degradation.
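The predicted failure modes can be demonstrated in a minimal activation-decay model. The sketch below is a toy, with all parameters invented for illustration: each cycle, existing activations decay, one new item is recruited, and the active set is whatever exceeds a threshold. Weak persistence yields fragmentation (no carryover), strong persistence yields perseveration (nothing ever drops out), and an intermediate decay constant yields the workable middle regime the theory predicts.

```python
def run_regime(decay, n_steps=30, threshold=0.5, input_gain=1.0):
    """Toy activation pool: each step, activations are scaled by `decay`,
    one new item is injected, and the 'active set' is everything above
    `threshold`. Returns the mean fraction of each active set that
    survives into the next step."""
    acts = {}
    prev, overlaps = set(), []
    for t in range(n_steps):
        acts = {k: v * decay for k, v in acts.items()}
        acts[t] = input_gain  # a new item is recruited each cycle
        active = {k for k, v in acts.items() if v >= threshold}
        if prev:
            overlaps.append(len(prev & active) / len(prev))
        prev = active
    return sum(overlaps) / len(overlaps)

weak = run_regime(decay=0.1)     # fragmenting: nothing carries over
mid = run_regime(decay=0.8)      # workable middle: partial carryover
strong = run_regime(decay=0.99)  # perseverative: everything persists
```

Distortions of the decay parameter thus produce the characteristic, qualitatively distinct failure modes the framework predicts, not merely graded degradation.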
A fifth and especially important role for the model is diagnostic. Suppose a future emulation captures extraordinary structural detail yet fails to produce convincing mindedness. Without a theory of continuity, the natural response would be to assume that still more detail is needed. The present framework offers a sharper diagnosis. The problem may not be missing parts so much as a disrupted temporal regime. Conduction delays may be off. Synaptic time constants may be wrong. Recurrent loops may fail to sustain content properly. Sensory-motor closure may be mistimed. Neuromodulatory settings may not support the right balance between persistence and flexibility. In that case, the emulation would not fail because it lacked enough anatomy, but because it failed to recreate the dynamics by which anatomy normally becomes a mind. This is an important conceptual gain. It changes failure analysis from “not enough realism” to “wrong continuity profile.”
The framework also has implications for data-collection priorities. If mind depends on iterative continuity, then brain-emulation projects should prioritize datasets that reveal not only structure, but the evolution of active ensembles across time. This includes sustained assembly activity, transitions among active coalitions during cognition, recurrence patterns underlying working memory, and the relation between currently active content and short-term latent retention. Such data are more demanding than static maps, but they are closer to the real target. A structure-only atlas tells us what could interact. A continuity-oriented dataset tells us how ongoing mentality is actually carried forward. This is one reason multimodal programs are so valuable. They make it possible, at least in principle, to reconstruct not only the substrate of thought but the temporal style of its unfolding.
Most importantly, the model suggests a direct comparison experiment. One class of emulations could be built in a continuity-blind manner, maximizing structural and physiological realism while allowing large-scale mental organization to emerge if it can. Another class could be continuity-guided, using the same biological base but explicitly optimizing for the expected overlap, persistence, and incremental update profile of active states. The resulting systems could then be compared on multistep cognition, coherence over time, goal persistence, context-sensitive interpretation, internal representational smoothness, and, where possible, self-report stability. This would transform iterative mental continuity from a philosophical claim into a comparative research strategy. If continuity-guided systems consistently show stronger stream organization than continuity-blind systems, that would provide evidence that emulation requires more than anatomical and physiological realism alone.
In this way, iterative mental continuity functions not merely as a theory of consciousness, but as an engineering scaffold. It identifies what to measure, what to preserve, how to validate, how to diagnose failure, and how to compare alternative emulation strategies. Existing brain-emulation programs already provide much of the structural, physiological, and computational groundwork. What this framework adds is a more explicit target at the level of temporal organization. The ultimate challenge may not be simply to reconstruct enough of the brain, but to reconstruct the self-renewing continuity through which one active state inherits from and transforms the last. If that process is what makes mentality possible, then preserving it should be one of the central design criteria of whole-brain emulation.
2.2 A Candidate Mechanism: Two-Tier Working Memory and Multiassociative Updating
The present framework does not merely argue that mental continuity is important. It also proposes a candidate mechanism for how that continuity is produced. On the AI Thought account, working memory is organized as a layered system with two forms of persistence. The first is sustained firing, which preserves a highly active foreground of representations on the order of seconds. The second is synaptic potentiation, which preserves a broader short-term context for longer intervals, on the order of minutes to hours. The focus of attention is embedded within this broader short-term store, so that the mind is not composed only of what is maximally active at a given moment, but also of a surrounding penumbra of recently potentiated content that remains poised to influence subsequent processing.
This two-tier arrangement is important because it gives the theory a concrete account of how one mental state remains connected to the next. The attended foreground can maintain a small set of currently prioritized representations through sustained neural firing, while the broader short-term store retains a wider set of recent representations in a less active but still functionally available form. Working memory is therefore not updated by complete replacement. Instead, it is updated partially and continuously. As new representations are added, others are subtracted, and others remain due to persistent activity. The result is that successive states share part of their content, allowing the stream of thought to evolve gradually rather than proceeding through a series of disconnected resets.
This matters for emulation because it specifies what kind of state transition must be recreated. The target is not a sequence of independent neural configurations, but a chain of revised states in which each iteration preserves some proportion of the coactive representations from the prior one. On the AI Thought model, this overlap is not an incidental byproduct. It is the mechanism by which context is conserved, mental frames shift incrementally, and cognition progresses through intermediate states rather than jumping arbitrarily from one representation to another. Whole-brain emulation must therefore preserve not only momentary activity, but also the partial-update rule that links one state to the next.
The model also offers a more specific account of how the next thought is selected. The currently active contents of working memory are not treated as passive items waiting to be replaced. They function as an active search vector. The firing neurons that underlie the current contents spread excitatory and inhibitory effects throughout the network, effectively conducting an associative search through long-term memory for relevant information. On this account, the coactivation of multiple items in working memory jointly shapes which dormant representations receive enough activation to enter the next active state. The next update is therefore selected not by a single cue, but by the combined influence of several coactive representations. This is the core of multiassociative search.
This transition rule is one of the most important features to import into the present article. It explains how continuity actually does computational work. Because several representations are held coactive at once, each can spread activation into the wider network, and the system can select the next representation or coalition on the basis of converging associative pressure. Representations that continue receiving sufficient activation remain in working memory. Representations that lose support drop out. Newly activated representations are added, and the revised set then becomes the basis for the next search. Thought can therefore be modeled as an iterative cycle of maintenance, associative recruitment, competitive selection, and partial renewal.
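The multiassociative transition rule described above can be sketched as a toy program. Everything here is an illustrative assumption: the concept names, the association strengths, and the gain values are invented, and the real proposal concerns distributed neural coalitions, not symbolic lookups. The sketch shows only the core logic: a two-tier working memory in which both the attended foreground and the potentiated background spread activation, and the dormant concept receiving the most converging support is recruited next.

```python
# Toy long-term memory: symmetric association strengths between concepts.
# All names and weights are illustrative, not drawn from any dataset.
ASSOC = {
    ("rain", "umbrella"): 0.9, ("rain", "cloud"): 0.8,
    ("umbrella", "shop"): 0.4, ("cloud", "sky"): 0.7,
    ("rain", "picnic"): 0.3, ("picnic", "blanket"): 0.6,
}

def strength(a, b):
    """Symmetric lookup of association strength (0.0 if unrelated)."""
    return ASSOC.get((a, b), ASSOC.get((b, a), 0.0))

def multiassociative_step(foreground, background, fg_gain=1.0, bg_gain=0.3):
    """One update cycle: every item coactive in working memory spreads
    activation into long-term memory; the dormant concept receiving the
    most converging support is recruited into the next active state.
    The sustained-firing foreground spreads more activation (fg_gain)
    than the merely potentiated background (bg_gain)."""
    concepts = {c for pair in ASSOC for c in pair}
    candidates = concepts - foreground - background

    def support(c):
        return (sum(fg_gain * strength(c, f) for f in foreground)
                + sum(bg_gain * strength(c, b) for b in background))

    return max(candidates, key=support)

# "rain" and "cloud" are held in the attended foreground; "picnic" lingers
# in the potentiated background and still biases the search.
nxt = multiassociative_step({"rain", "cloud"}, {"picnic"})
```

The point of the sketch is that no single cue selects the next content: the winner is determined by the combined pressure of everything currently coactive, with the background store exerting a weaker but real influence.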
This gives whole-brain emulation a much clearer mechanistic target. A successful emulator should not only reproduce neurons, synapses, and circuitry at high fidelity. It should reproduce, or at least preserve, the machinery that supports foreground maintenance, broader short-term retention, spreading activation from coactive contents, and the iterative selection of the next active coalition. That means the model can guide emulation at several levels at once. It helps identify what kinds of persistent representations should be measured, what kinds of recurrent and short-term memory dynamics should be preserved, and what sort of state-transition logic should be validated.
The model also suggests that update rate is itself a regulated variable rather than a fixed clock. AI Thought explicitly includes the idea that the rate of iterative updating varies with processing demand. That means a biologically realistic emulator should not assume a uniform cadence of turnover, but should preserve the conditions under which some contexts are updated quickly and others are stabilized for longer periods. This is potentially crucial for understanding how a system balances rapid responsiveness with the prolonged maintenance needed for careful reasoning, problem solving, or the handling of novelty.
Taken together, these features make the model more than a general theory of continuity. They make it a candidate implementation framework. It proposes that mental representations are instantiated by distributed neural activity, that some of these representations are maintained through sustained firing and synaptic potentiation, and that the currently coactive set conducts a multi-cue associative search through memory to determine the next update. This is exactly the kind of process that whole-brain emulation should attempt to model directly or use as a scaffold for model construction. The key challenge is not merely to copy neural matter, but to reproduce the iterative representational economy by which active contents are maintained, jointly constrain associative search, and give rise to the next state of thought.
2.3 How the Present Model Complements Current Brain-Emulation Programs
Current brain-emulation programs are making rapid progress in reconstructing neural structure, recording physiological activity, and building executable simulations of increasingly complex circuits. These efforts are indispensable, but they do not by themselves specify what functional organization must be preserved if a simulation is to become not merely brain-like, but mind-like. The present model is meant to complement, rather than replace, those programs. It offers a theoretical scaffold for identifying which temporal and representational dynamics are most likely to matter for the emergence of a coherent stream of thought.
More specifically, the model proposes that emulation should preserve at least four interrelated features of mental processing. First, distributed neural activity must instantiate meaningful representations. Second, a subset of these representations must be maintained across time through sustained firing and short-term synaptic potentiation. Third, the currently coactive contents must jointly spread activation through associative memory, thereby constraining what new representations are likely to become active next. Fourth, cognition must proceed through iterative updating, such that each state partially inherits from and partially revises the prior one. These claims give emulation research something more precise to aim at than structural realism alone.
In this respect, the present model occupies an intermediate level of analysis between low-level biological reconstruction and high-level behavioral output. It does not deny the importance of cell types, synapses, membrane dynamics, or large-scale connectivity. Nor does it deny the importance of behavioral testing. Instead, it asks how local biological processes give rise to the rolling representational continuity that characterizes real cognition. This meso-level focus may be especially valuable because whole-brain emulation will likely fail if it attempts to leap directly from anatomical completeness to mentality without an account of the intervening organizational principles.
The model also helps clarify what should count as a successful emulation. A simulator might reproduce substantial anatomical detail and even generate plausible outputs while still failing to preserve the right style of internal state transition. What matters is not only whether the emulator contains the right parts, but whether it reproduces a two-tier working-memory regime in which a small attended foreground is sustained within a broader short-term context, and whether these coactive contents jointly constrain the next update through multiassociative search. If those dynamics are absent, the system may simulate neural tissue without simulating the iterative representational economy that underlies a living stream of thought.
This is why the present model is best understood as a guide to prioritization. It can help emulation efforts decide what kinds of measurements are especially valuable, what kinds of mechanisms should be preserved in model reduction, and what kinds of validation criteria should be added to existing benchmarks. In addition to structural and physiological fidelity, researchers should ask whether the emulator preserves partial overlap among successive active states, whether task-relevant information remains available across multiple cycles, whether internal representations evolve incrementally rather than resetting, and whether the system shows the expected balance between continuity and flexibility.
The model therefore complements current brain-emulation programs by adding a missing target. Existing efforts are increasingly capable of telling us what the brain is made of and how some of its parts behave. The present framework speaks to how these parts must be organized across time if they are to yield a coherent mental process. Its contribution is not another layer of biological detail, but an account of the transition rule by which one active state becomes the next. In that sense, it provides whole-brain emulation with something it still lacks: a more explicit theory of the temporal dynamics that may be required for the realization of mind.
3. Why Continuity Matters for Cognition, Not Just Consciousness
Iterative mental continuity matters not only because it may underwrite the felt stream of consciousness, but also because it may be fundamental to cognition itself. A temporally extended overlap among successive active states does more than create experiential unity. It may provide the functional basis for reasoning, planning, interpretation, imagery, and goal-directed behavior. If so, then continuity is not merely what makes mental life feel continuous from the inside. It is also part of what makes thinking work.
Many cognitive operations require the gradual accumulation of partial results across time. A person solving a problem does not usually generate the final answer in a single act. Rather, intermediate representations are retained, evaluated, modified, and combined over successive moments. The same is true of mental arithmetic, sentence comprehension, route planning, causal inference, and reflective self-evaluation. In each case, some elements of the prior mental state must remain active long enough to constrain and inform the next. Without this carryover, cognition would repeatedly lose its place. Thought would be forced to restart at each moment, unable to build anything layered or cumulative.
This is especially clear in multistep reasoning. A chain of reasoning depends on preserving premises, intermediate conclusions, relevant constraints, and the overall direction of inquiry while new operations are performed on them. If these contents vanished too quickly, reasoning would collapse into disconnected fragments. If they persisted too rigidly, reasoning would become inflexible and repetitive. Effective thought therefore seems to require a balance between persistence and revision. Iterative mental continuity supplies precisely that balance by allowing selected contents to remain active across multiple cycles while still permitting the active set to evolve.
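The balance between persistence and revision can be made concrete with a toy sketch. The Python fragment below is purely illustrative and is not drawn from the Iterative Updating model's specification: the function names (`iterate_state`, `overlap`) and the `carryover` parameter are hypothetical devices for showing how a single update rule can retain part of the active set across cycles while still admitting new content.

```python
import random

def iterate_state(state, pool, carryover=0.7, size=8, rng=None):
    """One update cycle: retain a fraction of the active set (persistence),
    then refill from a pool of candidate items (revision)."""
    rng = rng or random.Random(0)
    keep_n = int(round(carryover * len(state)))
    kept = set(rng.sample(sorted(state), keep_n))
    # Refill with new items so the active set keeps a roughly constant size.
    candidates = [x for x in pool if x not in kept]
    new = set(rng.sample(candidates, size - keep_n))
    return kept | new

def overlap(a, b):
    """Proportion of shared content between two states (Jaccard index)."""
    return len(a & b) / len(a | b)

rng = random.Random(42)
pool = list(range(100))
state = set(range(8))
overlaps = []
for _ in range(20):
    nxt = iterate_state(state, pool, carryover=0.7, size=8, rng=rng)
    overlaps.append(overlap(state, nxt))
    state = nxt
```

With a carryover fraction of 0.7 on an eight-item state, each cycle keeps six items and replaces two, so every successive pair of states shares at least six of at most ten distinct items: the stream drifts incrementally rather than resetting.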
Goal maintenance provides another example. Human behavior is often organized around purposes that extend beyond the present moment. One may be writing an argument, preparing a meal, searching for a name, or trying to reach a destination. In each case, the current action is guided by something not wholly contained in the immediate sensory present. The relevant goal must remain sufficiently active to shape ongoing processing, yet sufficiently plastic to be updated as obstacles, opportunities, and new information arise. Continuity across successive states is what allows the system to remain organized around a distal objective rather than being captured entirely by the demands of the current instant.
Narrative coherence also depends on this same principle. Human thought is characteristically structured over time. We do not simply register successive events. We interpret them in relation to what has already happened and what we anticipate will happen next. A conversation, a personal memory, or a social interaction becomes intelligible because each moment is embedded within a temporally extended interpretive frame. That frame must be actively preserved. The mind must carry forward elements of context, speaker intention, emotional tone, and situational understanding from moment to moment. Without such continuity, there would be no coherent narrative thread, only isolated episodes lacking thematic integration.
Mental imagery and imagination likewise appear to depend on progressive transformation rather than snapshot replacement. When a person rotates an object in imagination, rehearses a movement, visualizes a future event, or modifies a remembered scene, the imagery typically changes incrementally. One state of the image gives rise to the next through a process of partial preservation and controlled modification. This is difficult to explain if cognition is conceived purely as a sequence of unrelated representational frames. It is easier to explain if some elements of the active configuration remain in place long enough to scaffold the transformation. Iterative continuity allows the image to evolve as the same image, rather than being repeatedly replaced by unrelated substitutes.
The same point applies to attention and context sensitivity. At any given moment, incoming information is interpreted in light of what is already active. A word in a sentence, a facial expression, or a sound in the environment does not arrive into a vacuum. Its meaning depends on the active context provided by immediately preceding states. Iterative continuity allows recently active representations to bias interpretation, shaping what is selected, ignored, amplified, or associated. Cognition is therefore not only reactive to the present input. It is dynamically conditioned by the recent past that remains functionally present within the current state.
These considerations suggest that iterative continuity is not a decorative feature layered on top of cognition, but a condition that makes many forms of cognition possible. The mind’s ability to construct complex meanings, sustain projects, integrate sequences, and elaborate structured representations may depend on the partial overlap of adjacent states. In this respect, consciousness and cognition may share a common temporal foundation. The same continuity that binds experience into a stream may also bind cognition into an organized process.
This has direct implications for whole-brain emulation. If an emulator were to reproduce only the structural substrate and a series of isolated states, it might still fail to preserve the progressive architecture of thought. It could, in principle, generate competent local outputs while lacking the deeper continuity that enables cumulative reasoning, stable purposiveness, and thematic coherence. Such a system might mimic fragments of human performance without reproducing the full dynamic organization of human cognition. The failure would not be merely phenomenological. It would be computational.
For this reason, the problem of temporal continuity should not be reserved for discussions of subjective experience alone. It belongs equally within the theory of cognition. Whole-brain emulation must reproduce not just the contents of mental states, but the temporally extended process by which those contents are preserved, revised, and carried forward into new forms. Continuity is what allows cognition to become more than a succession of disconnected acts. It is what allows a thought to develop.
4. Snapshot Fidelity Versus Stream Fidelity in Brain Emulation
Discussions of whole-brain emulation often assume that the principal challenge is to reproduce the brain in sufficient detail at a sufficient number of points in time. This way of thinking places emphasis on what may be called snapshot fidelity: the accuracy with which an emulator captures neural structure, physiological parameters, connectivity patterns, or instantaneous activity states. Snapshot fidelity is clearly important. Any emulation that fails badly at the level of cells, circuits, or active representations will have little claim to realism. Yet snapshot fidelity may not be the deepest standard by which success should be judged. A mind may depend just as much on how states are linked across time as on the content of the states themselves.
This suggests a second and potentially more important criterion: stream fidelity. Stream fidelity concerns whether the emulator reproduces the lawful temporal process by which one mental state inherits from, modifies, and gives rise to the next. It asks not only whether the right neural pattern is present at a given moment, but whether the succession of moments forms the right kind of self-renewing mental stream. An emulation could in principle achieve high snapshot fidelity while still failing at stream fidelity if the transitions between states were mistimed, insufficiently overlapping, excessively abrupt, or otherwise unlike those of a functioning mind.
The distinction can be illustrated by analogy to moving images. A film is not simply a collection of still frames. The illusion of motion depends on how those frames relate to one another over time. If enough frames are missing, if their order is altered, or if temporal spacing is distorted, the result can become jerky, confusing, or unintelligible even if each individual frame is perfectly rendered. A mind may be even more sensitive to this problem than a film, because the brain is not merely presenting a stream but using the stream itself as an active computational medium. The temporal relation between successive states may therefore be constitutive, not merely cosmetic.
From this perspective, an emulator might reproduce an enormous number of correct local features while still failing to instantiate a genuine mind. It might correctly model the connectome, preserve approximate neural firing statistics, and generate plausible outputs, yet still lack the temporally extended coherence required for a unified stream of thought. If the present state does not carry forward the right residue of the previous one, then contextual interpretation, goal persistence, and progressive cognition may degrade. What is missing in such a case would not necessarily be obvious from inspecting isolated states. It would lie in the transition dynamics between them.
This is why stream fidelity may deserve priority over snapshot fidelity when the target is mind rather than tissue simulation alone. A system can contain the right ingredients without reproducing the right process. If consciousness and higher cognition depend on partially overlapping successive states, then the emulator must preserve more than instantaneous realism. It must preserve the right degree and style of inheritance across time. A stream composed of highly accurate but insufficiently connected states may be no more mentally adequate than a sentence composed of grammatically correct but unrelated words.
The distinction also helps clarify why behavioral success may be an incomplete measure of emulation quality. A system might produce plausible answers, actions, or reports while relying on compensatory mechanisms that do not mirror the original mind’s temporal organization. It might behave intelligently in narrow tests while lacking a stable and unified internal stream. In that case, snapshot fidelity and output performance could create the appearance of success, even though stream fidelity remained poor. The emulator would resemble the target in its products more than in its living process.
This problem becomes especially important if whole-brain emulation is intended not merely to simulate general brain-like behavior, but to recreate a particular mind. Personal identity, memory continuity, subjective flow, and the carrying forward of goals and interpretations all seem closely tied to the structure of the stream rather than to isolated neural moments considered one by one. A person is not simply the sum of many discrete brain states. A person is the ongoing continuity that binds those states into a single mental history. Any emulation that neglects this may reproduce fragments of a person without reproducing the person as a stream.
The distinction between snapshot fidelity and stream fidelity therefore reframes the whole-brain emulation project. The central question is no longer only whether enough information has been captured, but whether the captured information has been organized into the right temporally unfolding process. Snapshot fidelity remains necessary, but it may be subordinate to a more fundamental demand: the recreation of a living mental stream whose successive states are joined by the right pattern of overlap, persistence, and transformation. If whole-brain emulation is to recreate a mind rather than merely model a brain, stream fidelity may be the standard that ultimately matters most.
5. Why Bottom-Up Emulation Needs a Theoretical North Star
Whole-brain emulation is often imagined as a predominantly bottom-up enterprise. The guiding intuition is straightforward: if researchers can measure enough of the brain’s microstructure and mechanistic detail with sufficient precision, then the relevant emergent properties should appear when the simulation is run. On this view, theory plays a secondary role. The main problem is one of data acquisition, resolution, and computational scale. Once enough facts are gathered, mind is expected to arise automatically from their faithful reconstruction.
There is good reason to be cautious about this expectation. In complex systems, extensive measurement does not always guarantee successful reproduction of the emergent phenomenon of interest. Researchers may capture a vast proportion of the parts, their arrangement, and many of their interactions while still missing subtle but indispensable dynamical conditions. Small omissions, simplifications, distortions of timing, or failures of precision can prevent the target property from emerging, even when the overall model appears highly realistic. The issue is not that bottom-up detail is unimportant, but that detail alone does not always tell us which features are constitutive and which are merely incidental.
The point can be clarified through the analogy of a flame. A flame is not simply a set of ingredients arranged in space. It involves wax, wick, oxygen, heat gradients, chemical reactions, and local airflow, but it exists only because these ingredients are caught in a specific self-renewing process. Fuel is drawn upward, vaporized, ignited, and continually replenished. If a simulation faithfully catalogs many of the flame’s static properties while failing to preserve the ongoing cycle that regenerates it from moment to moment, then the simulation may depict a flame-like arrangement without instantiating a real flame. The emergent phenomenon depends not merely on the parts, but on the temporally organized process they sustain.
A mind may be similar. The brain is composed of neurons, glia, synapses, neurotransmitters, oscillations, recurrent loops, and many other components. Yet mentality may not arise from these ingredients in any arrangement whatsoever. It may depend on a particular style of temporal self-renewal in which active contents are selectively preserved, transformed, and carried forward across successive states. If this is correct, then a bottom-up reconstruction that omits or misrepresents the dynamics of iterative continuity could fail to produce a genuine mind even while appearing highly faithful in anatomical or physiological terms. It would contain much of the machinery, but not necessarily the process that makes the machinery mentally alive.
This is why whole-brain emulation may need a theoretical north star. A north star does not replace empirical detail, but it helps orient inquiry by identifying which properties are most likely to matter. Theoretical guidance is especially valuable when the system under study is so complex that brute-force realism alone cannot guarantee success. In the present context, iterative mental continuity offers such guidance. It suggests that one of the central properties researchers should attempt to preserve is the partial overlap among successive active states, the controlled inheritance of mental content across time, and the recursive updating process that turns discrete moments into a coherent stream.
The value of such a framework is methodological as well as conceptual. Without a guiding theory, whole-brain emulation risks becoming a kind of open-ended accumulation project in which all measurable details are treated as equally relevant until proven otherwise. But not all details contribute equally to the emergence of mindedness. Some may be peripheral. Others may be indispensable. A theory of iterative continuity helps narrow the search by proposing that the lawful succession of mental states is one of the core properties that cannot be ignored. It thus provides a principled way of asking not only whether enough detail has been captured, but whether the right kind of detail has been captured.
This does not mean that iterative continuity must be imposed in an artificial or top-down manner irrespective of biological facts. Rather, it can function as a heuristic and evaluative framework. Researchers can ask whether their emulations preserve the degree of state overlap, carryover, persistence, and dynamic updating that would be expected of a temporally coherent mind. If not, then the model may need revision even if many low-level measurements are highly accurate. In this way, the theory serves not as a substitute for realism, but as a guide to what kind of realism is likely to matter most.
There is also a deeper philosophical advantage to this approach. The temptation in whole-brain emulation is to assume that the mind is whatever appears once enough structure is copied. But if the relevant emergent property depends on a specific temporal organization, then theory is needed to specify what counts as success. Iterative mental continuity provides a candidate answer. It says that a mind is not just a reproduced brain state or even a reproduced sequence of brain states, but a self-renewing stream in which each state grows out of the previous one in the right way. This helps transform whole-brain emulation from a purely descriptive copying project into a more explanatory science of what minds require.
For these reasons, bottom-up emulation should not be abandoned, but it should be supplemented by a theoretical scaffold capable of identifying the dynamic signatures of mentality. Iterative mental continuity can serve as such a scaffold. It gives researchers a north star by which to judge whether an emulation is merely detailed, or whether it is approaching the deeper target of a genuine mind. In this respect, the challenge of whole-brain emulation may not be simply to reproduce enough of the brain, but to reproduce the right temporally self-renewing process through which a brain becomes a mind.
6. A Research Program: Comparing Emulations With and Without Iterative Continuity
If iterative mental continuity is genuinely important for the emergence of mind, then it should do more than provide a philosophical interpretation of whole-brain emulation. It should generate a research program. The most direct way to test its importance would be comparative: build or evaluate brain emulations that differ in the degree to which they preserve temporally extended continuity, and examine whether this difference affects cognition, coherence, and the stability of ongoing mental organization. In this way, the theory becomes empirically useful. It identifies a variable that can be manipulated, measured, and related to the success or failure of emulation.
One possible strategy would be to distinguish between two broad design approaches. The first would emphasize bottom-up structural and mechanistic fidelity without any explicit theoretical commitment to preserving iterative mental continuity. Researchers would reproduce anatomy, connectivity, physiology, and local dynamics as accurately as possible, and allow larger-scale mental organization to emerge, if it does, from those details alone. The second approach would preserve as much biological realism as possible while also treating iterative continuity as a design target. In that case, the emulator would be evaluated and adjusted not only for structural accuracy, but also for whether successive active states exhibit the degree of overlap, persistence, and recursive updating expected of a coherent mental stream.
The value of this comparison is that it could reveal whether temporally structured continuity is merely an incidental byproduct of successful emulation or a constitutive requirement. If both systems performed equally well and exhibited similar internal organization, then the continuity hypothesis would be weakened. If, however, the continuity-guided system showed stronger coherence, more stable goal maintenance, more intelligible self-report, richer multistep reasoning, or better preservation of contextual integration across time, then the theory would gain support. The point is not that researchers must know in advance exactly how mindedness should look, but that they can compare systems for features that appear relevant to mentality.
Several possible evaluation dimensions follow from this framework. One is continuity of content across time. Successive active states could be analyzed for the extent to which relevant information persists long enough to guide subsequent processing. Another is cognitive build-up: the capacity to sustain and elaborate a line of reasoning over multiple stages without losing intermediate structure. A third is goal persistence: the degree to which distal purposes continue to organize behavior across interruptions and distractions. A fourth is narrative or interpretive coherence: whether the system maintains a stable situational frame within which new events are understood. A fifth is the progressive transformation of internal models or imagery: whether internally generated representations evolve incrementally as though they are part of the same stream. Taken together, such measures could help distinguish an emulator that merely produces local successes from one that sustains an organized mental process.
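Two of these dimensions, continuity of content and persistence of information across cycles, could in principle be operationalized. The sketch below is offered only as an illustration under strong simplifying assumptions: mental states are reduced to sets of discrete items, and the metric names (`successive_overlaps`, `mean_dwell_time`) are hypothetical, not established measures from the emulation literature.

```python
def successive_overlaps(states):
    """Continuity of content: Jaccard overlap between each adjacent pair."""
    return [len(a & b) / len(a | b) for a, b in zip(states, states[1:])]

def mean_dwell_time(states):
    """Average number of consecutive cycles an item remains active,
    a rough proxy for how long content persists to guide processing."""
    runs = []
    for item in set().union(*states):
        run = 0
        for s in states:
            if item in s:
                run += 1
            elif run:
                runs.append(run)
                run = 0
        if run:
            runs.append(run)
    return sum(runs) / len(runs)

# A toy sequence in which content drifts gradually rather than resetting.
states = [{0, 1, 2, 3}, {1, 2, 3, 4}, {2, 3, 4, 5}, {3, 4, 5, 6}, {4, 5, 6, 7}]
print(successive_overlaps(states))  # each adjacent pair shares 3 of 5 items
print(mean_dwell_time(states))      # items stay active 2.5 cycles on average
```

Applied to a recorded sequence of an emulator's active states, scores of this general kind could help distinguish a system whose content evolves incrementally from one whose states reset between cycles, though any realistic version would require a principled way of discretizing neural activity into content.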
This research program also opens the possibility of studying graded failure. If continuity is progressively weakened, disrupted, or made excessively rigid, one might expect not an all-or-nothing collapse but a family of characteristic distortions. A system with insufficient overlap among successive states might display fragmentation, weak context retention, shallow reasoning, or unstable self-organization. A system with excessive overlap might become repetitive, perseverative, or unable to update flexibly in response to new input. These possibilities are important because they move the theory beyond the simple claim that continuity either exists or does not exist. They suggest that mental organization may vary systematically with the quality of temporal inheritance across states.
Importantly, this comparative framework need not wait for full human-scale whole-brain emulation. Smaller-scale models could already be informative. Researchers could test continuity-preserving versus continuity-poor architectures in domains involving working memory, multistep planning, mental simulation, sequential interpretation, or persistent internal modeling. Even if such systems fall far short of full emulation, they could still help establish whether partial state overlap and recursive updating contribute to the sorts of coherence that human cognition displays. In this respect, the research program can begin before the ultimate technological goal is reached.
The framework may also help clarify what kinds of emulation success are being pursued. A system might simulate neural tissue, reproduce selected brain dynamics, or generate outwardly plausible behavior without necessarily preserving the internal stream structure characteristic of a mind. Comparative testing could therefore distinguish tissue simulation, behavioral approximation, and stream-preserving mind emulation as different achievements rather than treating them as interchangeable. This would be especially useful in a field where the term whole-brain emulation can obscure important differences between reproducing a brain-like mechanism and recreating a mind-like process.
There is a further methodological advantage. If iterative continuity proves experimentally useful, it could help direct measurement priorities in future brain-mapping efforts. Instead of assuming that every available detail must be captured at maximal fidelity, researchers could focus more attention on those temporal properties most predictive of coherent mental organization. The result would not necessarily be a simpler science, but a more targeted one. The emphasis would shift from indiscriminate completeness toward principled relevance.
For these reasons, the hypothesis that whole-brain emulation must preserve iterative mental continuity is not merely a speculative addition to existing theory. It points toward a concrete comparative science. Emulations built with and without explicit preservation of temporal continuity could be placed side by side and evaluated for the extent to which they sustain thought as an organized stream rather than a series of disconnected states. If one approach consistently yields greater coherence, continuity, and integrated cognition, that would provide meaningful evidence about what a brain emulation must reproduce in order to become a mind.
7. Partial and Failed Emulations: What Happens When Continuity Breaks Down
One advantage of treating iterative mental continuity as central to mind emulation is that it allows failure to be analyzed in a more nuanced way. An emulation that does not preserve the right temporal organization need not be assumed to be either wholly inert or wholly successful. It may instead produce partial, distorted, or unstable forms of mentality. This is important because whole-brain emulation will likely not arrive in a single leap from nonfunctioning model to flawless mind. More plausibly, there will be intermediate systems that reproduce some aspects of cognition while failing to sustain the kind of temporally integrated stream characteristic of a coherent conscious agent.
A snapshot-centered view of emulation makes such partial failures difficult to conceptualize. If one assumes that mind emerges automatically once enough structural detail is copied, then unsuccessful models may be regarded simply as insufficiently detailed. But this tells us little about the nature of the failure. By contrast, a continuity-centered framework suggests that different kinds of temporal breakdown may yield different kinds of mental impairment. The crucial question is not only whether the system functions, but how its states inherit from one another, how long relevant content remains active, and whether successive states form a sufficiently unified stream.
One possible failure mode is fragmentation. If successive active states share too little content, the system may lose contextual continuity from one moment to the next. Such an emulation might still perform isolated tasks, respond to immediate inputs, or generate locally appropriate outputs, yet fail to maintain a stable line of thought. Its internal life, if any, may resemble disconnected episodes rather than a sustained stream. Goals may dissipate quickly, interpretive frames may collapse before they can guide ongoing processing, and multistep cognition may be shallow or brittle. From the outside, this could appear as inconsistency, distractibility, or difficulty building complex representations over time.
A second failure mode is excessive rigidity. If too much of the previous state is carried forward and insufficient updating occurs, the resulting system may preserve continuity at the expense of flexibility. Instead of fragmentation, the problem becomes perseveration. The emulator may remain stuck on prior representations, fail to incorporate new information effectively, or become locked into repetitive loops of interpretation and response. A mental stream requires not only persistence but controlled transformation. Where persistence overwhelms renewal, thought may cease to progress even though continuity remains superficially intact.
A third possibility is phenomenological thinning. The system may preserve enough continuity to support a degree of stability and task performance, but not enough richness of overlap and integration to generate the density or fullness associated with ordinary conscious experience. If such a case were possible, the emulation might show coherent local behavior while lacking the depth, texture, or sustained integration of a fully realized mental stream. This possibility is especially important because it suggests that the path to successful emulation may not be binary. Systems may exist that are cognitively competent in limited ways while still falling short of robust experiential organization.
Another possibility is instability of self-organization. A functioning mind appears to maintain a relatively stable sense of orientation across time, preserving not only perceptual and mnemonic content but also goal structure, affective tone, and some degree of self-related continuity. If iterative continuity is disrupted, the system may fail to stabilize these organizing factors. The result could be abrupt shifts in priorities, weak maintenance of self-models, disjointed narrative interpretation, or an inability to sustain a consistent perspective through time. Such an emulator might produce fragments of person-like behavior without preserving the unified organization ordinarily associated with a person.
These graded failures matter scientifically because they render the theory more predictive. If continuity is truly important, then manipulations that reduce, exaggerate, or destabilize overlap among successive states should not merely reduce performance in a generic way. They should produce characteristic forms of breakdown. One would expect diminished context retention, impaired multistep reasoning, weak thematic continuity, or unstable imagery transformation when overlap is too low, and perseverative or pathologically sticky behavior when overlap is too high. Such predictions make the framework more than a conceptual gloss. They turn it into a lens through which the quality of emulation can be interpreted.
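These predictions lend themselves to a simple parameter sweep. The following sketch is a toy model, not a brain emulation: `run_stream` and its `carryover` parameter are hypothetical, and each state is just a set of tokens. It nonetheless illustrates the two failure regimes in miniature: very low carryover yields minimal overlap between adjacent states (fragmentation), while total carryover admits no new content at all (perseveration).

```python
import random

def run_stream(carryover, steps=200, size=8, pool_size=100, seed=0):
    """Simulate a stream of active states with a given carryover fraction.
    Returns (mean successive overlap, mean novel items per cycle)."""
    rng = random.Random(seed)
    pool = list(range(pool_size))
    state = set(rng.sample(pool, size))
    overlaps, novelty = [], []
    for _ in range(steps):
        keep_n = int(round(carryover * size))
        kept = set(rng.sample(sorted(state), keep_n))
        fresh = set(rng.sample([x for x in pool if x not in kept],
                               size - keep_n))
        nxt = kept | fresh
        overlaps.append(len(state & nxt) / len(state | nxt))
        novelty.append(len(nxt - state))
        state = nxt
    return sum(overlaps) / steps, sum(novelty) / steps

frag_overlap, frag_novel = run_stream(carryover=0.1)      # fragmentation
bal_overlap, bal_novel = run_stream(carryover=0.7)        # balanced regime
persev_overlap, persev_novel = run_stream(carryover=1.0)  # perseveration
```

At a carryover of 0.1 the mean overlap is low and content turnover high; at 1.0 the state never changes, so overlap is exactly 1 and novelty exactly 0. A real emulation would of course involve vastly richer states, but the intermediate regime between these extremes is the kind of target the continuity-guided evaluation described above would seek.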
This perspective also helps distinguish different ambitions in brain emulation. A system may succeed as a neural simulator while failing as a mind emulator. It may reproduce many local biological properties without sustaining a coherent mental stream. Conversely, a system might preserve some global continuity while remaining too crude in other respects to count as a faithful emulation of a particular human mind. Recognizing partial and failed emulations therefore encourages a more discriminating vocabulary. Rather than asking simply whether a model is conscious or not, or whether it is an emulation or not, researchers may need to ask what kind of continuity it preserves, what kinds of cognition it supports, and where along the spectrum from fragmentation to coherence it lies.
For whole-brain emulation, this is a significant shift. It implies that the road to artificial mind may be populated by systems that are neither empty mechanisms nor full persons, but intermediate cases whose internal stream structure is degraded, unstable, or incomplete. Such cases would raise both scientific and ethical questions. More importantly for the present argument, they would provide evidence about which aspects of temporal organization are essential. If success and failure vary with the quality of iterative continuity, that would strengthen the case that a mind is not merely a reproduced structure, but a process whose continuity must be actively maintained.
8. Implications for Identity, Uploading, and AGI
If whole-brain emulation must reproduce iterative mental continuity, then the consequences extend far beyond the technical problem of simulation. They reach into questions of personal identity, mind uploading, paused and resumed emulations, branching copies, and the architecture of future artificial minds. The issue is no longer simply whether enough information has been captured, but whether that information has been organized into a temporally self-renewing stream capable of sustaining a mind in the fullest sense.
This has immediate implications for identity. Many discussions of mind uploading assume that if a sufficiently accurate informational duplicate of a person can be created, then the person has in some relevant sense survived. Yet this assumption becomes less straightforward if personhood depends not merely on informational similarity, but on an ongoing chain of iterative continuity. On such a view, what matters is not only that a later system resembles the earlier one, but that the later state grows out of the earlier state in the right temporally continuous way. A copy that reproduces memories, dispositions, and personality traits may still leave open the question of whether the original stream of mind continues, or whether a new stream merely begins with similar informational content.
This concern becomes especially acute in scenarios involving interruption. Suppose an emulated mind is paused, stored, and later restarted. If mentality depends on an ongoing process of partial inheritance across successive states, then a pause may represent more than a harmless delay. It may raise the question of whether continuity has been preserved or severed. A restarted system might resume with the same data, the same memories, and the same self-description, yet still differ in whether it constitutes the continuation of the same mental stream or the initiation of a new one. The present framework does not by itself resolve this issue, but it sharpens it by shifting attention from static identity conditions to dynamic continuity conditions.
The problem becomes even more difficult in cases of branching. If one mental stream is copied into two or more emulations, each of which begins from the same stored information, then informational similarity alone cannot determine which, if any, is the continuation of the original. Each branch may preserve the same memories and personality at the start, but the branches immediately diverge into distinct streams. From the perspective advanced here, personal identity may be less like the possession of a fixed informational pattern and more like the continuation of a particular temporally extended process. If so, duplication does not preserve a single identity so much as create multiple successors that share a common past but not a common stream.
These considerations suggest that whole-brain emulation may force a distinction between two very different notions of survival. One is informational survival, in which a person’s memories, dispositions, and cognitive structure are copied into a later system. The other is stream survival, in which the person’s temporally self-renewing mental process continues without losing the pattern of iterative inheritance that constitutes the original stream. The two may often be conflated in popular and philosophical discussions of uploading. The present argument suggests that they should be separated. A system may preserve much of a person’s information while leaving unresolved whether it preserves the person in the stronger sense.
The implications extend beyond personal identity to the future design of artificial intelligence. If iterative mental continuity is necessary for recreating a biological mind, it may also be one of the conditions for constructing a genuinely mind-like artificial system. Many current AI architectures are extraordinarily capable, yet they may not preserve the sort of temporally extended continuity that characterizes conscious thought in humans. They process, predict, and generate, but they may do so without the sustained active inheritance that binds one mental state to the next into a unified stream. This suggests that whole-brain emulation and AGI research may converge on a common design question: what kind of temporal organization is required for a system to become more than a sequence of competent outputs?
If that is correct, then the lessons of emulation may generalize. Iterative continuity may function not only as a constraint on copying minds, but as a blueprint for building them. Artificial systems designed to preserve partial state overlap, recursive updating, and the controlled carryover of goals and interpretations across time may come closer to the organization of a true stream of thought than systems built around more episodic or stateless processing. Conversely, systems that are deliberately prevented from sustaining such continuity may remain powerful tools while avoiding some of the conditions thought to underwrite conscious mentality. In this way, the theory bears not only on what must be recreated, but also on what may be regulated or withheld in future AI design.
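The contrast between iterative and episodic processing can be made concrete with a toy sketch. The following Python fragment is purely illustrative and does not implement the author's Iterative Updating model; the names (`iterative_stream`, `carryover`) and the Jaccard overlap measure are assumptions introduced here for demonstration. Each mental state is idealized as a set of active units; a fixed fraction of the active set persists into the next state (loosely echoing state-spanning coactivity), while the remainder is replaced (loosely echoing incremental change).

```python
import random

def iterative_stream(n_units=100, active=20, carryover=0.7, steps=5, seed=0):
    """Generate a sequence of states, each partially inheriting from the last.

    A fraction `carryover` of the active set persists into the next state;
    the rest is replaced by newly activated units. With carryover=0.0 the
    system degenerates into unrelated episodic snapshots.
    """
    rng = random.Random(seed)
    units = range(n_units)
    state = set(rng.sample(units, active))
    states = [state]
    for _ in range(steps):
        kept = set(rng.sample(sorted(state), int(carryover * active)))
        fresh_pool = [u for u in units if u not in kept]
        fresh = set(rng.sample(fresh_pool, active - len(kept)))
        state = kept | fresh
        states.append(state)
    return states

def overlap(a, b):
    """Jaccard overlap between two successive active states."""
    return len(a & b) / len(a | b)

stream = iterative_stream()                    # iterative: partial inheritance
snapshots = iterative_stream(carryover=0.0)    # episodic: no inheritance

stream_overlaps = [overlap(a, b) for a, b in zip(stream, stream[1:])]
episodic_overlaps = [overlap(a, b) for a, b in zip(snapshots, snapshots[1:])]
```

In the iterative run, every pair of adjacent states shares most of its members, so each state is a transformation of its predecessor; in the episodic run, adjacent states are essentially unrelated samples, a sequence of competent states with no inherited stream. The point of the sketch is only to show that the two regimes are cleanly distinguishable by a simple dynamical measure, which is the kind of criterion the comparative research strategy proposed here would rely on.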
The broader philosophical implication is that mind may be a fundamentally processual phenomenon. It may not reside in a frozen arrangement, nor in a mere inventory of information, but in the continuous regeneration of a structured mental stream. This does not mean that structure and information are unimportant. On the contrary, they are the material and computational basis out of which the stream is formed. But they may be insufficient unless they are caught in the right dynamic pattern of temporal inheritance. A mind, on this view, is not simply something that exists at an instant. It is something that must continually become itself.
Whole-brain emulation therefore poses a deeper challenge than is often recognized. To emulate a brain in the fullest sense may require more than reproducing its parts and more than approximating its outputs. It may require recreating the temporally self-renewing continuity through which a person’s thoughts, goals, interpretations, and experiences are carried forward from one moment to the next. The same insight may ultimately guide the development of artificial minds and the evaluation of uploaded ones. If so, the future of emulation and AGI alike may depend on recognizing that a mind is not merely an arrangement of elements, but an ongoing stream whose continuity must be preserved.
Conclusion
Whole-brain emulation is often imagined as a triumph of detail. Once enough of the brain’s anatomy, connectivity, physiology, and moment-to-moment activity are captured, mind is expected to emerge from faithful reconstruction alone. This article has argued that such a view may be incomplete. A mind is not merely a complicated structure or even a sequence of accurate neural states. It is a temporally extended process in which successive states partially preserve and partially transform one another. To emulate a brain in the fullest sense may therefore require reproducing not only its structural organization, but also its iterative mental continuity.
This claim matters because continuity may be central to both consciousness and cognition. The stream of awareness, the retention of context, the maintenance of goals, the progressive elaboration of thoughts, the coherence of narrative understanding, and the gradual transformation of mental imagery may all depend on the partial overlap of adjacent states. If this is correct, then whole-brain emulation cannot be judged solely by snapshot fidelity, no matter how fine-grained that fidelity becomes. It must also be judged by stream fidelity, by whether it reproduces the temporally self-renewing process through which a living mind is continuously recreated.
The distinction is methodological as well as theoretical. In complex systems, reproducing many components with great precision does not always guarantee that the defining emergent phenomenon will appear. For that reason, bottom-up emulation may need a conceptual guide. Iterative mental continuity has been proposed here as such a guide, a north star for identifying which temporal dynamics are most likely to matter. It does not replace structural realism, but helps orient it. The deepest target of emulation may not be the isolated state, but the lawful succession through which one state becomes the next.
This perspective also turns the theory into a research program. Emulations constructed with and without explicit preservation of iterative continuity could be compared for differences in coherence, multistep reasoning, goal persistence, self-report, and the stability of ongoing mental organization. Partial failures could be analyzed not simply as insufficient simulations, but as systems whose continuity is fragmented, rigid, or phenomenologically thin. Such comparisons would help determine whether temporally structured inheritance is merely a byproduct of mind, or one of its constitutive conditions.
The broader implications are substantial. If iterative mental continuity is required for genuine mind emulation, then questions of identity, uploading, copying, pausing, and branching must be reconsidered in processual rather than purely informational terms. The same may hold for artificial intelligence more generally. The principles required to emulate a biological mind may also illuminate what is required to build a nonbiological one, and perhaps what should be withheld if we wish to build powerful systems that remain tools rather than subjects.
At its core, the argument of this article is simple. A brain may be less like a static object and more like a flame: not a thing that merely exists, but a process that must continually renew itself. If so, then whole-brain emulation will succeed not when it merely reconstructs the ingredients of mind, but when it recreates the ongoing continuity through which a mind becomes itself from one moment to the next.