"Bats, Brains and Implanted Thoughts": The Perpetual Life of Philip K. Dick

Richard Wirick

Most people know Philip K. Dick for stories that inspired movies such as Blade
Runner and The Matrix, films that defined the language of latter-day
science fiction cinema and its extraordinary advances in special effects.
After writing hundreds of stories, numerous novels, essays and screenplays, Dick
joined those writers who turned aside from their work and made an eccentric public
life their final art, though his work as a serious writer was never that far from
serious readers’ attentions.

He died in 1982, broke, decrepit, most probably mentally ill. He had been manhandled
by publishers, agents, studios. He loved to quote William Burroughs’ definition
of the paranoid, applicable to himself then as well as to reams of his characters:
“He’s the one with all the information.”

The
premises of many of Dick’s stories entailed problems that I and my fellow
philosophy students and teachers at Berkeley had wrestled with the decade prior
to his death. Two of these questions have occupied the philosophy of mind since
at least the Second World War, and have galvanized the attention of philosophers
and neuroscientists right in tandem with their hold on the imaginations of filmmaking
acolytes throughout the 80s and 90s.

The first question is the sole vibrating note of Blade Runner’s
entire narrative tension: do robots, computers or zombies [viz. “Replicants”]
in any sense “think” or possess something akin to human consciousness?
If so, how could that be tested and what would count as proof or lack of proof
for that hypothesis? Lovers of the film will recall Harrison Ford’s final
decision to scrap his mission and explore the artificial romantic mental states
beamed to him by Sean Young. They were compelling enough to make this writer fall
in love with her at the time, and to leave it to professionals of one ilk or another
to distinguish them from the real thing.

The
second question, explored more in Dick inspirations like the Matrix series and
“Minority Report,” is more subtle: even if one grants that we
humans have consciousness, how do we know it was not something implanted
in our brains like a computer program, complete with a false but plausible “memory”
and reasonable expectations for a similar future?

The
first question is easier to answer than the second. It is fairly clear now that
machines — fear not — can in no sense have conscious states similar
to those of humans or other sentient animals. The difference between thinking
and consciousness is important, and the fact that computers or robots can perform
the former but not the latter is not merely academic or semantic. It is valuable
to our understanding of the world and our concept of rights and duties —
what we feel is ethically owed by one person to another.

Don’t
get me wrong: machines, including robotic machines performing computations, can
think thousands of times as fast as human brains. They can calculate and retrieve
stored data (“memory”) far more rapidly than our processes would allow.
The ability of machines to perform these mind-like functions has resulted in a
great leap forward for human intelligence, and has made life immeasurably better
and easier for all of us.

But
consciousness has certain features that make it something else, something that
can only be experienced in a human or animal brain. The object experienced
is not what is unique; rather, it is the experiencing process itself. It stands
alone in the singularity of its features, and thus has an irreducible quality
that cannot be assembled from something else or reproduced. In fact, philosophers
of mind have come to call consciousness’s objects qualia, borrowing
the medieval church philosophers’ term for the aspects or characteristics
of a thing.

Consciousness
cannot be accounted for with most, or perhaps any, descriptions of it. One can
explain what causes the brain processes of a bat as it flies along sending out
and feeling the return of its sonar, but that (in an example given by one philosopher)
would not give us the qualitative experience of bat-ness. Our own experience provides
material that enables us to imagine the consciousness of a bat: we can, however
limited our experience, think of what it would be like to have fur and webbed
feet, extremely poor vision, to sleep upside down and eat insects. But this would
tell us only what it is like for us to behave like a bat, which is not what we
want. We wish to know what it is like for a bat to be a bat. But the limits
of human experience make this almost impossible.

So
the experience of bat consciousness, like the experience of human consciousness,
is irreducible, unique and incapable of reproduction. Its most important feature,
what it would be like to be that way, is exactly what is least accessible
about it. (We could imagine a computer program that approximates the feel
of consciousness as closely as software designers can get it, and some have come
pretty close. But there would be no way of telling whether the program is actually conscious
unless we were the computer running the program.) So all the descriptions in the
world come up against the bare wall of consciousness’s irreducibility. The particular
feel of bat thoughts and bat sensations? Forget about it. All we can do is pretend.
We can never partake.

Now
back to Dick and his “Android Sheep” story, and its adaptation in
“Blade Runner.” As Ford and Young fly off together at the
end of the film, she as slim as a whippet in her lovely red dress and Miss Marple
Replicant hair bun, she can have every conceivable problem-solving and computational
ability her creators want to bestow on her. Maybe she can man the aircraft better
than Ford. She may be capable of calculating the velocity, altitude and clocked
nautical miles of the ship ten times — a hundred times — as well as
her human passenger. But without a brain inside her, she in no sense possesses
consciousness. Her mental events have every feature of conscious events except
the feel of them. She lacks the qualia of human-ness.

Ultimately,
it may not make that much difference, even if their relationship is meant by its
creator to flourish. Young-the-Replicant could perform any conceivable acts of
outwardly recognizable love, loyalty and cruelty that mark any relationship. They
would simply be performed without the first-order level of awareness that we have
characterized as consciousness — the textural vividness that Robert Nozick
called consciousness’s felt quality. Ford would never have to know,
and he would have no evidence — short of mangled descriptions of her own
thoughts — to suspect she was anything other than human.

None
of the above is meant to say that conscious agents cannot act in non-conscious,
automatic ways in much of their experience. The drudgeries of daily life, our
programmed manner of conducting it, are things that make humans a most fertile
field in which to plant the race of Replicants. The Replicants in the movie have
sensory motor systems that carry out forms of behavior in a non-conscious way,
but only because the Replicants have no consciousness. We, on the other hand,
can possess the magic stuff and still operate without it. Many mental processes
going on in conscious subjects are entirely non-conscious. Both human and Replicant
reach for their keys in the same way, affect certain body postures or run after
an object that might get away, all in the same manner. The reason conscious agents
like humans don’t think these procedures through is that it would
be inefficient to bring the behavior to the level of consciousness. We perform
them without being conscious of them, though we could be if we wanted to.
* * *

So
machines can’t think. Not even computers. But forget for a minute the fact
that when machines compute and predict, they do it in a way that doesn’t
involve the dimensionality and particularity — the consciousness
— that makes up our thoughts. Dick’s stories were more interested
in having us try to disprove the possibility raised by the second question we asked. What if all of our experience,
the totality of our consciousness, were something inserted into our minds like
an implant? Such a “consciousness chip” would contain an entire false
memory system as rich or bare of experience as could be designed. We would possess
reams of experiences we never had but which we accept as our proper history of
consciousness, precisely because they have the vividness we just described
and because we have nothing else to compare them to. In The Three Stigmata
of Palmer Eldritch, Dick posited earthmen on Mars who ingest a drug that makes
them hallucinate an entire life back on earth, “Perky Pat Layouts”
containing surfers and Barbie dolls as real to them as the “felt life”
they currently experience trying to colonize Mars.

If
drugs can induce a state of consciousness, why can’t software designers
do drugs one better, producing vast false histories and personalities built out
of them, slide by slide and flash by flash, from the whole cloth of binary instructions?

Philosophers
have come up with scenarios that are even chillier. Hilary Putnam and Robert Nozick
offer the notion of brains bubbling in vats, capable of selecting this or that
experience as though it had been lived by a human containing that brain. We would
like to think that we’d rather actually write a great novel or actually
save some tsunami victims, as opposed to plugging into a canned virtual presentation
of our performing these tasks. The moral high ground, the morally attractive choice,
is to actually perform the experience we desire rather than just, well, experience
it by downloading it into ourselves.

So
what saves us, really, from the “false” true experiences as opposed
to the “true” true experiences? Certainly partaking of either of the
two, without standing back with any kind of detachment, makes them seem identical,
and identically attractive. Hilary Putnam’s Howison Lectures, one of which
I attended as a barely conscious undergraduate, offered the essential scenario
of the “Matrix” films nearly thirty years ago: brains in a vat hooked
up to a program that “gives [them] a collective hallucination, rather than
a number of separate hallucinations.” What happens is that, since semantics
derive their significance from a community, the vat-brains’ reference to, say,
climbing the side of a building like Spiderman means simply the image of such
Spiderman-like climbing. Their reference is not to the actual behavior of going
up a building wall with suction cups or climbing pitons. The vat-brains are speaking
with a vat-language entirely different from ours, and who are we to say what they
perceive is not as “real” as what we perceive?

The
way out of this trap comes in several steps. The first is to grant that real
consciousness (consciousness with its higher-order reality) comes with several
aspects, several indicia of reliability that its creations do not possess. First,
we can get a certain common-sense assuredness, however weak, that the imagined
matter flows from the imagining entity. The thought or vision or creation
will seem to spring from or emerge out of the creating thing,
and not the other way around. Again, we have a conviction, an intuition, that
the created world issues from an effort we expend, a natural impetus that seems
impossible to implant or reproduce. The connection between the two always has
this cause and effect relationship, one which never seems to run in the opposite
direction even in the most extreme states of intoxication. (There are moments
when Ford’s instincts show themselves, and appear to be something Replicants
do not have, or have only weakly or woodenly.)

Another
thing about true conscious states as opposed to non-conscious ones is the feature
of durability and continuity. A dream, almost by definition, comes to
an eventual state of evaporation. The qualia of an imagined event seem
to peter out, to run down. It cannot run the endurance lap we require of almost
all of our experiences in life. We might say, again with Robert Nozick, that the
“real” thing is the thing that stays constant through innumerable
disruptions, that remains steady through a wave of “Lorentz transformations.”
Consciousness is invariant through all the variations that make up its perceptions
and creations.

So
the real thing, the creating thing, has the overall character of invariance that
lasts through each of its extrusions. It also has the ability to link each one
of its creations to the previous one, not in terms of theme, but certainly in
terms of origin. Fireworks, however much more beautiful than the ground they rise up
from and illuminate, still have to be fired by someone. We never have the slightest
notion that the ground rises up out of the pyrotechnics. And this is not just
a matter of faith or repetition. It is the most common and reliable feature of
experience: it simply happens that way and we have a well-grounded assuredness
that it will continue to happen that way.

The
ultimate advantage of grounding ourselves in the “real” real thing,
the creating thing, is the concept of freedom it brings to us. If consciousness
is a creating instrument, one over which we (usually) have control, then we really
are free agents with all the responsibility that entails. The mind’s products
are often erratic, amorphous. Our mental creations run in all directions and seem
in danger of taking on a life of their own. But the creative force itself keeps hold
of them and reels them back. The ability to go after our creations, to rein them
in and control them, assures us that we have far greater power over our destinies
than almost anything else in the world, not just non-conscious objects but non-self-conscious
sentients like our brother animals. Whatever vats are hooked to one another and
whatever drugs ingested, we can always reverse course with a true effort of will.
Consciousness is the border guard against the self’s enslavement. If the
creating agent can never itself be created, then it can never be constrained.

Consciousness
does not need to struggle to free itself, but its objects can seem maddeningly
fleeting. Everything it attaches itself to can be ephemera. Pan again to Ford
and Young in the cockpit of their hovercraft. Ford is not sure how long he’ll
have the Replicant, as they all have pre-designated, programmed extinction dates.
He looks into her eyes, knowing she’d be a lot, perhaps nearly everything,
to lose. He shakes away the thought and accelerates. Richard
Wirick is the author of the novel One Hundred Siberian Postcards (Telegram
Books). He has been published in Paris Review and The Nation.
He practices law in Los Angeles.