Gary Marcus has predicted that within the next two to three decades we will enter an era “in which it will no longer be optional for machines to have ethical systems.” Marcus invites us to imagine the following driverless car scenario: “Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk?”
In this scenario, a variation on the trolley problem, the computer operating the car would need to make a decision (although I suspect putting it that way is an anthropomorphism). Were a human being called upon to make such a decision, it would be considered a choice of moral consequence. Writing about Marcus’s piece, Nicholas Carr accordingly concluded, “We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.”
Of course, there is a sense in which autonomous machines of this sort are not really ethical agents. To speak of their needing a conscience strikes me as a metaphorical use of language. The operation of their “conscience” or “ethical system” will not really resemble what has counted as moral reasoning or even moral intuition among human beings. They will do as they are programmed to do. The question is, What will they be programmed to do in such circumstances? What ethical system will animate the programming decisions? Will driverless cars be Kantians, invariably obeying one rule, or will they be Benthamites, calculating the greatest good for the greatest number?
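To make the contrast concrete, here is a deliberately crude sketch in Python of the two policies the question imagines. Everything in it (the Outcome class, the risk numbers, the rule itself) is invented for illustration and has nothing to do with how any real vehicle is actually programmed.

```python
# Hypothetical illustration only: two toy "ethical systems" for Marcus's
# bus scenario. All names and numbers are invented for the sake of argument.

from dataclasses import dataclass


@dataclass
class Outcome:
    action: str         # e.g. "swerve" or "keep_going"
    lives_at_risk: int  # people endangered by choosing this action


def kantian_choice(outcomes: list[Outcome]) -> Outcome:
    """A rule-based (deontological) policy: never deliberately swerve into harm.

    It ignores the arithmetic of outcomes and simply forbids certain actions.
    """
    permitted = [o for o in outcomes if o.action != "swerve"]
    return permitted[0] if permitted else outcomes[0]


def benthamite_choice(outcomes: list[Outcome]) -> Outcome:
    """A utilitarian policy: the greatest good for the greatest number,
    here reduced to picking whichever action puts the fewest lives at risk."""
    return min(outcomes, key=lambda o: o.lives_at_risk)


if __name__ == "__main__":
    # Marcus's scenario, crudely quantified: keep going and risk the forty
    # children, or swerve and risk the one occupant.
    scenario = [Outcome("keep_going", 40), Outcome("swerve", 1)]
    print("Kantian policy chooses:   ", kantian_choice(scenario).action)
    print("Benthamite policy chooses:", benthamite_choice(scenario).action)
```

Even this toy example shows how much moral philosophy gets smuggled into a few lines of code: the rule-based policy keeps going, the utilitarian one swerves, and someone had to decide which function to call.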
There is an interesting sense, though, in which an autonomous machine of the sort envisioned in these scenarios is an agent, even if we might hesitate to call it an ethical agent. What’s interesting is not that a machine may cause harm or even death. We’ve been accustomed to this for generations. But in such cases, a machine has ordinarily malfunctioned, or else some human action was at fault. In the scenarios proposed by Marcus, an action that causes harm would be the result of a properly functioning machine, not of direct human action. The machine would have decided to take an action that resulted in harm, even if it was in some sense the lesser harm. In fact, such machines might rightly be called the first true machine agents.
It is all but certain that our world will one day be widely populated by autonomous machines of the sort that will require a “conscience” or an “ethical system.” Determining what moral calculus should inform such “moral machines” is problematic enough. But there is another, more subtle danger that should concern us.
Such machines seem to enter into the world of morally consequential action that until now has been occupied exclusively by human beings, but they do so without the capacity to be burdened by the weight of the tragic, to be troubled by guilt, or to be held to account in any meaningful and satisfying way. They will, in other words, lose no sleep over their decisions, whatever those may be.
We have an unfortunate tendency to adapt, under the spell of metaphor, our understanding of human experience to the characteristics of our machines. Take memory, for example. Having first decided, by analogy, to call a computer’s capacity to store information “memory,” we then reversed the direction of the metaphor and came to understand human memory by analogy to computer “memory,” i.e., as mere storage. So now we casually talk of offloading the work of memory, or of Google being a better substitute for human memory, without any thought for how human memory is related to perception, understanding, creativity, identity, and more.
I can too easily imagine a similar scenario wherein we get into the habit of calling the algorithms by which machines are programmed to make ethically significant decisions the machine’s “conscience,” and then turn around, reverse the direction of the metaphor, and come to understand human conscience by analogy to what the machine does. This would result in an impoverishment of the moral life.
Will we then begin to think of the tragic sense, guilt, pity, and the necessity of wrestling with moral decisions as bugs in our “ethical systems”? Will we envy the literally ruth-less efficiency of “moral machines”? Will we prefer the Huxleyan comfort of a diminished moral sense, or will we claim the right to be unhappy, to be troubled by a fully realized human conscience?
This is, of course, not merely a matter of making the “right” decisions. Part of what makes programming “ethical systems” troublesome is precisely our inability to arrive at a consensus about what the right decision is in such cases. But even if we could arrive at some sort of consensus, the risk I’m envisioning would remain. The moral weightiness of human existence does not reside solely in the moment of decision; it extends beyond that moment to a life burdened by the consequences of the action taken. It is precisely this “living with” our decisions that a machine conscience cannot know.
In his Tragic Sense of Life, Miguel de Unamuno relates the following anecdote: “A pedant who beheld Solon weeping for the death of a son said to him, ‘Why do you weep thus, if weeping avails nothing?’ And the sage answered him, ‘Precisely for that reason–because it does not avail.’”
Were we to conform our conscience to the “conscience” of our future machines, we would cease to shed such tears; and in Solon’s tears lies our humanity.