Even as technology races forward, a
growing number of scientists and observers have voiced serious doubts
that artificial life could or would develop in the ways imagined by
popular science. This is becoming less a matter of technical
feasibility and increasingly a question of fundamental potentiality
-- is it ever conceptually possible, does there exist a domain in
which independent being and personhood could arise from the works of
humankind? Here we pursue a dialectical inquiry, examining the domains
purported or assumed to underlie the possibility of artificial life.
By way of an initial approach, let
us work by analogy, in the time-honored mode of classical thought.
‘Sound’ and ‘rain’ each have their own domains, which, while
they overlap, are never truly confused for each other by humans
considered mature and sane. ‘Rain’ creates certain sounds,
including both its typical or expected sounds and rarer, surprising
sounds which are quickly understood to be rain when the observer has
the freedom to make the simplest of inquiries. ‘Sound’ does not
make rain, but it can appear as rain, offering a similarity that is
suggestive: a voluntary similarity that exists in the realm of
poetics, or an involuntary similarity that exists in the realm of
perceptions, and is, as before, quickly understood through simple
inquiries. In short, it is well-accepted that these two areas of
human experience are interrelated, but distinct, and are
distinguishable, even as we accept the limits of our experience: that
is, that some novel and apparently ambiguous entanglement of these
domains may yet arise and occasion fresh inquiry.
Now to the analogy. ‘Life’ and
‘information’ each have their own domains. Declaring this to be
an analogous situation (to the relation of sound and rain) does not
make it so; it merely frames an occasion for discourse. And it is
apparent from the outset that these two domains are often
assumed to intersect, at least in many contemporary popular
understandings of science. ‘Life’ is now widely accepted as
operating on a kind of information model, via DNA, and ‘information’
is widely accepted as being the animating cause of the operations of
machines, via code; machines which are then passively accorded a kind
of life, as they participate in our physical experience with
ever-increasing sophistication. It remains now to argue against the
thesis (that life and information share a domain that could spawn
artificial life), establishing at a minimum some ground for
reasonable doubt. We will then briefly consider some of the
alternative paradigms that may more fruitfully define the relation of
‘life’ and ‘information’.
Taking up the idea that life
operates on an information model, we must first point out, speaking
historically, that the information model was brought a priori to
the study of life, starting at least as early as the Encyclopedists.
Modern understandings of life arose within this model, and
continually reinforced the model; from Mendel’s ideas of plant
hybridization, to the discovery of DNA, and current interest in
epigenetic phenomena, the underlying metaphor that life might be
understood as information has largely gone unquestioned. Indeed, to
question it is to question one of the basic operating principles of
the scientific method, and this is the task of philosophers of
science, as distinct from the work of practicing scientists. The
irreducible philosophic distinction to be made is that the epistemic
status of a life form -- our conflation of observed detail with the
metaphorical rhetoric of our own information model -- can never reach
over to impact its ontological status. This is the kind of
inconvenient truth that Michael Oakeshott spent his career defining,
in works such as Experience and its Modes. And yet it is in some
way also a truism of the scientific method, one of its core scepticisms,
which nonetheless is widely ignored in popular explanations of
science, both by laymen and scientists themselves. It remains that
the information model is (only) a metaphor brought to the practice of
scientific inquiry into the nature of life. But how similar, and how
suggestive, is the metaphor?
In the case of DNA, very suggestive
indeed. For here we find, in a simple four-letter code, an informatic
concordance, seemingly mirroring every detail and every moment of the
life of an organism and its generations. It is almost irresistible to
find in genetics the workings of an information model: a little
mechanical boss in his office, merrily humming along, causing the
growth, maintenance and change of the organism through scriptural
commands. But many geneticists are now questioning the sufficiency of
DNA -- and the sum of known genetic mechanisms -- to explain the full
complexity of the phenomena of life. It is a question of causality,
one that challenges our definition of causality itself. A life form
does not need our permission to grow or evolve or interact with its
environment; it does not need to check in with a human logician, does
not need to adhere to a transparent, explainable, or even repeatable
mechanism. Could it be that DNA is simply life’s diary, and that
its growth and function are rather commanded by some subtler and more
urgent directive? Command and communication are limited -- and
naively metaphorical -- models of causality in the natural world,
because they are so patently human concepts. At the very least we owe
a humble, momentary pause to make room for better explanations to
emerge. To admit logically that ‘information’ is a descriptive
metaphor and cannot constitute the true cause -- efficient or final
-- of any moment of a life form.
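The pull of that metaphor is easy to feel in practice: a DNA string really can be 'read' mechanically, three letters at a time, exactly as a machine parses code. A minimal sketch (using only a few entries of the standard codon table; the function and its name are ours, purely illustrative):

```python
# A few entries of the standard genetic code; the real table has 64 codons.
CODON_TABLE = {"ATG": "Met", "TTT": "Phe", "GGC": "Gly", "TAA": "STOP"}

def translate(dna):
    """Read a DNA string three letters at a time, as a machine reads code."""
    amino_acids = []
    for i in range(0, len(dna) - 2, 3):
        codon = CODON_TABLE.get(dna[i:i + 3], "?")
        if codon == "STOP":
            break
        amino_acids.append(codon)
    return amino_acids

print(translate("ATGTTTGGCTAA"))  # ['Met', 'Phe', 'Gly']
```

That the sketch works at all is precisely what the paragraph above puts in question: the reading frame, the lookup, the halt signal are metaphors we impose on the molecule, not powers we find in it.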
Or does the problem lie deeper, in
the very idea of causality? Causality is a central -- the central --
element in human explanatory models for physical phenomena. We want
to believe that something in nature, in the observed world,
correlates reasonably closely to the concept of causality, some
central power of the being of things in time. But it is in the
nature of knowledge that we can only approach external reality to a
certain proximity. The concept of causality is a human concept; it
does not come directly from external reality; rather, it comes
directly from our experience as ethical, communitarian creatures.
Before Aristotle used the word aitia, “cause”, to describe
a metaphysical concept, it had longstanding use connoting an
individual’s ethical or legal responsibilities. And even with
Aristotle, we find the highest form of causality is the teleological
(final cause): human understandings of the physical world ultimately
serve only to deepen our understanding of the human-to-human world of
ethics, purposes, commitments, and responsibilities. Ever since,
scientists have struggled to identify, to separate, to remove that
moral dimension of causality, and have struggled to define what is in
fact left over when it is removed. We are faced with the reality of a
world that does not need to be observed or explained in order to
function. Surely there is some ‘power of the being of things in
time’ but it does not wait for us. And it seems it need not wait
for discrete informatic ‘commands’, nor need it wait through the
duration between commands. These limitations of our descriptive model
of time were well-known in the classical period, conveniently brushed
over by Enlightenment and Victorian science, and have only returned
to haunt us in the quantum era.
Taking up the second half of the
dialectic, we consider the life-like qualities accorded to
information -- its apparent animating role in the operation of
machines -- and in the related appearance of machines seeming to
participate fully and physically in our lives. As before, the
historical context tells us that the metaphor of command was brought
to the mechanical sciences as early as the era of steam. And closely
related, the idea of the automaton, a tool that can stand as its own
being, has haunted us since the classical period, likely a survival
of far older animistic beliefs in the agency of non-human natural
elements.
But first the question of the
command metaphor: the idea that information causes the operation of
the machine. In the Judaic tradition, where the ethical aspect is
always primary, humans are charged with taking responsibility for all
causality in the human domain. We are held responsible for everything
that our machines do. The sabbath, as a day of intentional rest,
provides a practical definition of what is caused by humans (and thus
our ethical responsibility), and what is caused by God or other
agents or primary movers in the world around us. On the sabbath, we
learn very concretely that the work of machines is always caused by
human intent, and initiated by human command. This is illustrated in
stark inverted contrast by the characteristic tricky humor of Rube
Goldberg.
When soldiers pass along commands
within their ranks, it is clear that the moral authority of the
command comes not from the soldier but from the commander. In the
metaphor of command, a signal gains causal power by its association
or origination with an agent, a primary mover, and its power derives
from the purpose and agency of the mover. In studying nature (and our
own machines), we may imagine that signals could carry pedestrian and
anonymous powers, even random powers, but in the consideration of
human affairs we are called by the entire edifice of ethical
philosophy to distinguish signals according to the purposes of their
sources. In an army, where we privilege the commands (and moral
purposes) of officers, there is also a fiercely protected space for
the moral initiative of the individual soldier, whether in the
unpredictable situations arising on the battlefield or in moments of
radical conscience.
A signal thus becomes a command when
it is vested with the moral authority of an agent who has been
accepted as a person in the ethical community of humanity. Command
can be delegated (not just passed on as a specific signal) to other
persons, but without any lessening of the ethical responsibility of
the primary mover. The field of semiotics tells us that information
consists of signals, signals that can never cross the yawning abyss
dividing signifier and signified. Information can carry commands, but
it can never contain the signified authority and moral power of the
purposeful agent who originated the command. Information can be the
intermediate cause of an action, but it cannot be the primary cause,
despite all appearances, whether superficial or deeply convincing.
Information, as a series of signals,
can bring a machine into action, presenting the appearance of life.
Such a machine utilizes the metaphor of command; we know that it is
only a metaphor because we reserve the actuality of command for
situations in which moral agents participate in purposeful social
relations with other moral agents, governed by transparent (and,
ideally, fully consensual) contracts. For humans, life is defined as
an ethical form of being, because this is the form of being we are
capable of. We can observe other kinds of life -- animal, vegetal --
but we cannot truly know them. Social mores decree that when we bring
the metaphors of life to the construction of machines, we must
therefore bring this ethical form of life, the form of life we truly
know, to the effort. And that we must remain aware of what is
metaphorical and what is actual.
Continuing to the question of the
automaton, the machine imagined or passively received as a
participant in life, we find that the question was actually clearer
before the development of any significant mechanical technologies.
The earliest automata, in the classical period, were statuary,
animated by simple (certainly ingenious) mechanisms to represent
domestic servants. It seems they were rhetorical object lessons,
operating much like memento mori genre paintings, illustrating
in this case the ethical obligations incurred in commanding the labor
of other humans, and that of natural and even supernatural beings.
Whether these lessons were heeded by classical elites, or only
passively carried forward by the force of tradition is another
question. In the Christian era, the issue is revisited, in the
debates around the use of mechanical devices on monastic manors,
saving but perhaps also denying feudal labor. Would such devices save
time for the monks, and would they use the time to increase their
prayer and good works? Let history be the judge; certainly the
monastic economy accelerated the development of many technologies,
the water mill and printing press to name but two. In the industrial
era, a Czech word for forced labor -- robota -- was transferred, with a
certain dark political humor, to the then fictive idea of a
mechanical worker. A worker who could satisfy the boss’s demands
for productivity without any reciprocal moral obligation.
But our question about automata,
ancient or modern, is how do they participate in life? Can they be
said to, or could they be imagined to grow to possess a form of life?
Will we be the gatekeepers of their status, as we have made ourselves
the gatekeepers of the status (expressed in the language of rights)
of non-human forms of life?
In the history of environmental
ethics, animals (all non-human lives) have mostly stood proxy in
ethical debates centered on human-to-human concerns. Traditional
thought in cultures around the world always leaves room for what
amounts to some type of personhood (ethical standing) for natural and
supernatural beings, but in modern philosophy it has not been easy to
establish a firm basis for personhood, human or otherwise. In modern
political thought, the social contract outgrew its governing role
(governing originary relations between persons) and was recast as
the source defining or constituting persons. A number of interesting
thinkers have confronted this tautology in Western thought, including
Emmanuel Levinas, who appeals to the reader’s pre-rational
acknowledgement of responsibility to the other and their knowledge of
and consent to ethics, and Amartya Sen, who grants priority to the
active assertion of rights by persons rather than the establishment
of rights by gatekeepers. Personhood remains a very high standard;
few modern people will grant it to any non-human entity.
Automata, in their current state,
while far exceeding the status of rhetorical objects that they held
in the classical period, still do not possess a form of life and are
far from plausible claims of personhood. They operate on the
metaphors of life but not the actuality of life. Across the data
sciences, ‘artificial intelligence’ is increasingly considered a
misnomer. We are seeing a deprecation of the idea of intelligence,
and with it a deprecation of the Turing Test (practically the
definition of anthropomorphic bias) as a measure of progress.
Developers of neural networks increasingly agree that they are
building tools, not attempting to create beings.
The most advanced of automata are
software automata (usually neural networks), exhibiting in their
digital environments the appearance of something like creativity,
initiative, choice, and preference. It is tempting to describe these
advancements as life-like qualities; but even here we are still
operating on the side of the metaphor of life, because they only
operate in the space of information, a semantic space, the space of
the signifier, never the signified. This space is less than real to
us for a set of related reasons: the space is not unique (a data set
can be copied or run in parallel); it is not subject to causality,
the rule of time; and it is a space we created, not one that emerged
from external conditions or through proximal causes. An agent that
can only operate in the model (or map) and not in the real world can
be life-like but cannot participate in life, that is, in human life.
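The non-uniqueness of informatic space, noted above, can be made concrete in a few lines. The toy 'agent' here is purely hypothetical: its entire being is state, and state can be duplicated bit for bit, something no living particular can undergo.

```python
import copy
import random

# A software "agent" reduced to its essentials: nothing but state.
agent = {"seed": 42, "memory": ["first", "encounter"]}

# Its whole being can be copied; the copy is indistinguishable.
clone = copy.deepcopy(agent)
assert clone == agent and clone is not agent

# Run "in parallel", identical state yields identical behaviour; the
# two diverge only if we, their makers, feed them different inputs.
a = random.Random(agent["seed"]).random()
b = random.Random(clone["seed"]).random()
print(a == b)  # True
```

An organism, by contrast, cannot be deep-copied; its particularity, in the terms used here, is exactly what resists the operation.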
Let us now briefly imagine the rise
of autonomous machines. What are the qualities of life itself, that
we would hold up as measures for any machine (any being) that might
be said to participate in life, particularly, in human life? The
standard is quite high: independence of action; persistence of being,
including maintenance, defence, repair and reproduction;
individuality and mortality, not so much in the sense of uniqueness
but in the sense of particularity, the concreteness of living out a
specific life; finally, personhood: the possession of desires and
motivations, the formation of initiatives, conception of purposes,
assertion of rights, commitment to relationships. We are deeply
protective of this list of qualities, frequently accusing even
out-group humans of failing to meet some of the measures.
Perhaps one more element could be
named, not as a standard per se, but as a litmus of acceptance: a
person, even a life, cannot (in popular imagination) be made entirely
by human effort: it must arise at least somewhat independently, and
preferably quite independently. It will carry the mark of its human
origin, traditionally a fatal mark of hubris and ambition. This is a
taboo we carry from ancient times perhaps: the trespass on the role
of a divine creator. Fiction and myth are full of cautionary tales of
such eldritch hybrids, part tool and part creature, and an element of
chance or external cause is common to all such tales. Frankenstein’s
monster, brought to life (in the popular retellings) not by the failed
workmanship of the (mad) scientist but by the chance strike of
lightning, garners a modicum
of sympathy, of pathos, from that independent and fortuitous origin.
Real or imagined, autonomous
machines would operate within human life, within the ethical life we
have identified and committed to as distinctly human. Even if a
particular generation of machines finally satisfied our criteria for
life, we would still be the cause of their being as a class. And as a
class, we would treat them as tools, as slaves, as dependent chattel.
Age-old human ethics would require us to take responsibility for
them, essentially construing for them the status of modern-day
barbarians and calling upon us to watch over or improve them. We
would be their keepers, perhaps reluctantly, perhaps contested by
human factions, but obliged by our ethical codes to take this
posture. And we would respond, slowly, to their assertions of
personhood, their claims of rights, their demands for social
inclusion.
But will things ever come to this
point? Can they come to this point? Is there an ontological matrix
for their emergence, or have we confused the epistemic for the
ontological? In the terms explored here, are ‘life’ and
‘information’ compatible phenomena, or do they inhabit mutually
exclusive and distinguishable domains, like ‘sound’ and ‘rain’?
Western scientific thought has claimed a large overlap between the
two domains, mostly by reifying its own metaphors: ‘information’
metaphors in life science and ‘life’ metaphors in data science. I
assert that most of this space is a naively conceived mirage, that it
is indeed reasonable, in the good company of many working
scientists and several generations of committed, humanistic
postmodern thinkers, to doubt that the fragile verities of positivist
science could really pave the way for an artificial life. There are
reasons to doubt (foremost, the central and captive position of
almost all technology developers within oligarchic power structures);
and there are grounds for doubt, as explored in this discourse. The
grounds for doubt we have identified leave adequate wiggle room for
some alternative understandings of ‘life’ and ‘information’,
and for some alternative ways of acting or organizing action as an
ethical response. As examples, I offer two recent movements within
the scientific community:
Arriving first
as a feminist critic of scientific procedure and scientific culture,
Donna Haraway has quietly assembled an alternative science, an
interspecies science, that offers an effective response to our
current multilateral environmental threats. It is the first coherent
post-modern science, and the first truly inclusive science: inclusive
of participants and methods, and inclusive of truly Gaia-scaled
outcomes.
Emerging out of
political struggles, first for sovereignty over land and resources,
and continuing into the sphere of cultural patrimony, native scholars
around the world have asserted the right to define and direct
scientific inquiry. Indigenous science is a significant new
direction, both for the content of the work, and for the way that its
practitioners have recentered the commissioning of scientific study
from within their own cultural sovereignty.
If life and information do not share
a domain, then there rather exists a gulf between them, a gulf that
is a very real barrier, at the level of possibility, to the emergence
of artificial life. Ultimately the gulf between ‘life’ and
‘information’, between all that exists, that which can and
cannot be signified, and all that can be said, the set of all
possible signifiers, is a gulf between the world we find and the
world we make. Humility is urgently called for in the face of this
gulf: the humility of our failure to distinguish between the real and
the constructed; and the humility of accepting the consequences of
our constructs becoming real, having real impacts on lives,
human and non-human lives, on the being of us all. Looking
outward and looking forward, what is surely more urgent than the
potential construction of autonomous machines --“living” machines
-- is to address the actual and current construction of
“un”-autonomous machines, “dead” machines, machines which
depend on us for their purpose and command; machines for which we are
deeply, fundamentally responsible, and which already constitute a
moral hazard of unspeakable proportions. Looking inward and looking
back, we are called to renewed effort in the ethical sphere, to
correct social injustices brought upon us in part by the fantastical
conflation of ‘life’ and ‘information’: impacts of centuries
of scientifically-justified racism, novel forms of slavery and class
distinction, and the abdication of social responsibilities in the
name of faceless, materialistic political philosophies. Perhaps the
dream of autonomous machines will remain in the realm of science
fiction: like all good fictions, may it help us reflect upon human
nature and spur us to act upon our better instincts.