
Mark Kingwell is a professor of philosophy at the University of Toronto

Garry Kasparov, former chess world champion and current intellectual-at-large, was in Toronto this week to promote his latest work, Deep Thinking. The book's narrative is driven by Mr. Kasparov's era-changing 1996 and 1997 chess matches with the IBM computer Deep Blue. The carbon/non-carbon opponents split their two six-game contests, with Mr. Kasparov winning the first, but his 1997 defeat is all people remember: an anxiety-inducing moment of machine-over-human superiority.

Since then, Mr. Kasparov has retired from competitive chess, become an outspoken critic of Russian President Vladimir Putin, and written extensively on games, culture, politics and technology. He is a gifted speaker, charismatic and engaging, who reliably makes ironic Terminator and Matrix references when discussing the rise of the machines, a subject on which he is sensible and level-headed.

Nevertheless, fear remains the dominant emotion when humans talk about technological change. Are self-driving cars better described as self-crashing? Is the Internet of Things, where we eagerly allow information-stealing algorithms into our rec rooms and kitchens, the end of privacy? Is the Singularity imminent?

But fright is closely seconded by wonder. Your smartphone makes Deep Blue look, as Mr. Kasparov has said, like an alarm clock. In your pocket lies computing power orders of magnitude greater than that of a 1970s Cray supercomputer, which occupied an entire room and required an elaborate cooling system. Look at all the things I can do, not to mention dates I can make, while walking heedlessly down the sidewalk!

This is familiar terrain. The debate about artificial intelligence is remarkable for not being a debate at all but rather, as with Trump-era politics or the cultural-appropriation issue, a series of conceptual standoffs. Can we get past the typical stalemates and break some new ground on artificial intelligence?

I think we can, and Mr. Kasparov himself makes the first part of the argument. We can program non-human systems, he notes, to do what we already know how to do. Deep Blue won against him using brute-force surveys of possible future moves, something human players also do, only far more slowly and selectively. But when it comes to things we humans don't understand about ourselves, and so can't translate into code, the stakes are different. Intuition, creativity, empathy – these are qualities of the human mind that the mind itself cannot map. To use Julian Jaynes's memorable image, we are like flashlights, illuminating the external world but not the mechanisms by which we perceive it.
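To make "brute-force surveys of possible future moves" concrete, here is a minimal sketch of minimax search, the family of algorithm on which chess engines such as Deep Blue were built, applied to a toy take-away game rather than chess. The real machine added alpha-beta pruning, a handcrafted evaluation function and purpose-built hardware; everything below is illustrative, not Deep Blue's actual code.

```python
# Toy illustration of brute-force game-tree search (minimax).
# Game: a pile of 21 objects; players alternately take 1-3; whoever
# takes the last object wins. The search exhaustively surveys every
# possible future line of play, exactly as described above.

def minimax(pile, maximizing):
    """Return (score, best_move) for the player to move.

    score is +1 if the maximizing player can force a win, -1 if not.
    """
    if pile == 0:
        # The previous player took the last object and won.
        return (-1, None) if maximizing else (1, None)
    best_score, best_move = (-2, None) if maximizing else (2, None)
    for take in (1, 2, 3):
        if take > pile:
            break
        score, _ = minimax(pile - take, not maximizing)
        if maximizing and score > best_score:
            best_score, best_move = score, take
        if not maximizing and score < best_score:
            best_score, best_move = score, take
    return best_score, best_move

print(minimax(21, True))  # (1, 1): take one, leaving a multiple of 4
```

The point of the sketch is the shape of the idea: because the rules are fully known, the machine can win by sheer enumeration, with no understanding of the game at all.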

Two things now become relevant. The first is that we are getting better at solving this age-old philosophical conundrum. If, for example, neuroscience and MRI scans are not the complete answer, they do begin to illuminate the brain-consciousness relationship. Contrary to what past philosophers argued, consciousness may be explicable.

At the same time, computers are getting smarter. They can self-correct, using neural networks and reinforcement learning to master things outside their original programming. Another system, DeepMind's AlphaGo, managed to defeat leading human players of the ancient Chinese game of Go, which rewards insight and boldness. If Deep Blue is a bulldozer, AlphaGo is a Formula One racer.
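As a companion to the minimax sketch above, here is an equally toy illustration of reinforcement learning on the same take-away game: the program is given only the rules and a win/loss signal, and discovers the winning strategy purely from the outcomes of repeated self-play. This is a tabular toy, nothing like AlphaGo's deep neural networks; every name and parameter is illustrative.

```python
import random
from collections import defaultdict

# Reinforcement learning in miniature: no strategy is programmed in.
# The table Q maps (pile, take) to a learned value, updated from outcomes.
Q = defaultdict(float)
ALPHA, EPSILON = 0.1, 0.2   # learning rate, exploration rate

def choose(pile, explore=True):
    moves = [t for t in (1, 2, 3) if t <= pile]
    if explore and random.random() < EPSILON:
        return random.choice(moves)                # try something new
    return max(moves, key=lambda t: Q[(pile, t)])  # use what was learned

for _ in range(50_000):                            # self-play episodes
    pile, history = 21, []
    while pile > 0:
        take = choose(pile)
        history.append((pile, take))
        pile -= take
    reward = 1.0                                   # the last mover won
    for state, take in reversed(history):
        Q[(state, take)] += ALPHA * (reward - Q[(state, take)])
        reward = -reward                           # players alternate

print(choose(21, explore=False))  # typically 1: it found the winning move
```

Where the minimax program was told how to enumerate the future, this one was told nothing but the rules; the strategy it ends up with lies, in a modest sense, outside its original programming.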

This might sound like the cue for another round of machine-fear versus machine-cheer. But the best voices in the critical literature about technology – Martin Heidegger, Jacques Ellul, Marshall McLuhan – know that the point here is self-understanding, not denunciation. Computers are something that we humans enact and enable. They are more like unruly children than alien visitors. And so we must reflect on what we have wrought, and the ethics of our own complicity.

The reason to be wary of so-called smart appliances is human perfidy and weakness rather than non-human malice. If we surrender our solitude, we impair our ability to maintain a robust public sphere of individual integrity. A camera does not, by itself, watch you; it is the corporation, or the state, or the marketing firm that does that. We have met the enemy and, as so often, he is us.

But the ethics of artificial intelligence do not end there. Knowing right from wrong and weighing rights against responsibilities are among the things we currently do not know how to program. Conscious non-human entities, supposing we ever encounter them, created by us or not, will demand the same respect we accord the human kind.

In turn, we will be justified in demanding from them the same duties of respect and care. Human-machine encounters will be, maybe unexpectedly, tutorials in how we ought to treat each other – whoever is other.
