Zombies of the kind encountered in
philosophical studies of the mind, such as those popularized by David
Chalmers, would not be physically possible without an incredible amount of
compensatory programming. I believe that the processes that allow us to have
consciousness are key to our functioning and to intelligent behavior, so removing
them while preserving every other function would be implausible. Every way that
I can conceive of to design such a zombie involves creating an agent that is
in fact much more complicated than a human. By this I mean that it would have
to have more processing resources and a larger memory. Taking the conscious entity out of
the agent requires the agent to be capable of performing all of the same
functions without the shortcuts made possible by sentience and self-awareness.
I think that philosophers who discuss these hypothetical zombies have not
considered this contention, namely that consciousness acts as a shortcut to
devising behavior, and that a zombie without consciousness (without Gazzaniga’s
“interpreter” and without “mental continuity”) would require a tremendous
number of if-then rules to allow it to make the same inferences and
decisions that consciousness allows us to make. Programming the computational
architecture for a zombie-like, artificially intelligent agent capable of
performing human behaviors in human-like ways, without any conscious insight,
would necessitate a battery of rules and subsystems to instantiate simulacra of
wanting, liking and feeling. To identify all of the various components of
desire and sensation, and to coldly mimic them without actually replicating
them, would be incredibly complicated. In fact, I believe that going this route,
which in my opinion some A.I. scientists are currently attempting, would be far more
difficult than creating an A.I. agent that does use intentionality, mental
continuity and consciousness.
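
To make the contrast concrete, here is a minimal, purely illustrative sketch in Python. Every name, rule, and number in it is my own hypothetical invention rather than anyone’s actual architecture: a “zombie” that mimics wanting through an enumerated table of if-then rules, placed beside an agent whose choices fall out of a compact drive signal. The only point is that the rule table has to anticipate every situation in advance, while the drive-based valuation generalizes from a few lines of computation.

```python
# Toy illustration (not a model of real A.I. research): a "zombie" agent that
# mimics wanting with hand-coded if-then rules, versus an agent that derives
# the same choices from a single internal drive state.

# Hypothetical rule table for the zombie: every situation must be enumerated
# in advance, and every new situation demands yet another rule.
ZOMBIE_RULES = {
    ("hungry", "food_present"): "eat",
    ("hungry", "no_food"): "search_for_food",
    ("tired", "safe_place"): "sleep",
    ("tired", "unsafe_place"): "find_shelter",
    # ...thousands more rules would be needed to cover human-like behavior
}

def zombie_act(state: str, situation: str) -> str:
    """Pick an action by brute lookup; fails on anything unanticipated."""
    return ZOMBIE_RULES.get((state, situation), "freeze")  # no rule -> no behavior


def drive_based_act(drives: dict, affordances: dict) -> str:
    """Pick the action that best reduces the most pressing drive.

    The 'wanting' here is a compact valuation signal, so novel situations
    are handled by the same short computation instead of a new rule.
    """
    best_action, best_value = "do_nothing", 0.0
    for action, relief in affordances.items():  # relief: {drive: amount reduced}
        value = sum(drives.get(d, 0.0) * amt for d, amt in relief.items())
        if value > best_value:
            best_action, best_value = action, value
    return best_action


if __name__ == "__main__":
    # The zombie has no rule for this pairing, so it produces nothing useful.
    print(zombie_act("hungry", "vending_machine_nearby"))  # -> "freeze"

    # The drive-based agent handles the same novel situation with no new code.
    drives = {"hunger": 0.9, "fatigue": 0.2}
    affordances = {"buy_snack": {"hunger": 0.7}, "nap_on_bench": {"fatigue": 0.5}}
    print(drive_based_act(drives, affordances))  # -> "buy_snack"
```

The sketch is deliberately crude, but it captures the asymmetry I am pointing to: the zombie’s competence grows only as fast as its rule table, whereas the drive-based agent needs no new programming to cope with a situation its designers never listed.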