
In Her, the hero (Joaquin Phoenix) falls in love with his smartphone system, and enters a strange new emotional landscape. Courtesy of Warner Bros.

In Her, the new Spike Jonze film, the hero falls madly in love with Samantha – his smartphone's Siri-like operating system.

"I've never loved anyone the way I love you," he tells her as their conversations grow deeper and more intimate.

The lead, played by Joaquin Phoenix, isn't crazy; he's divorced, depressed and lonely – susceptible to the charms of his smartphone's operating system, huskily voiced by Scarlett Johansson. And movie critics have applauded Mr. Jonze for somehow making the implausible wholly credible.

But perhaps the premise is not so unlikely, after all. Thomas Wells, a postdoctoral researcher at Rotterdam's Erasmus Institute for Philosophy and Economics, thinks that robots may one day become more attractive than humans as companions. Writing for the website 3Quarks Daily, Mr. Wells says robots could be programmed to "allow human owners to pretend that they are loved. And everyone wants to be loved."

Like Samantha, the humanoid lover could inquire about your day, agree with your every opinion, remember your preferences, cook fabulous meals and never complain. "Actual humans can't keep up this level of worshipful attention," Mr. Wells contends. "It requires a degree of self-abnegation incompatible with maintaining one's own individuality … Humans want good lovers, but humans make bad lovers."

Hollywood, of course, often anticipates the zeitgeist. In addition to Her, there is Almost Human, a new TV drama on Fox set in the year 2049. It remixes the traditional cop formula, forging a partnership between a human and a custom-made android. The tag team must not only kill the bad guys but also – this is the novelty – learn to navigate the expanding landscape of man-robot relations.

What these entertainments foreshadow is a looming tectonic shift: in complexity, range of motor and linguistic skills, and applications, the next generation of artificial intelligence will fundamentally transform society. Embedded in smartphones, computers and humanoid robots, its arrival will have profound consequences, economic, legal, ethical and psychological. And we aren't remotely ready for it.

The bot invasion

In dozens of industries, of course – car production, electronics manufacturing and packaging, among others – robots have begun to replace workers. Semi-autonomous robots are learning to spray pesticides, prune trees, fight fires, load and unload cargo, kill bacteria in hospitals, wash windows, install solar arrays, and morph into pack animals capable of carting almost 200 kilograms. In fact, there will soon be few human tasks that robots cannot perform more cheaply and more efficiently.

Robots are also on the rise, exponentially, in warfare: as bomb-disposal genies, surveillance and weapon-firing drones, mobile shields for SWAT teams and, soon, entire 'bot platoons of infantry, making living, breathing combat soldiers obsolete.

But within the next 15 years, experts say, robots will increasingly begin to populate a new domain – the everyday consumer realm of getting and spending.

In a restaurant in Harbin, China, for example, 20 life-size robots cook and serve meals. In Austin, Tex., the Briggo company has installed a robotic barista that grinds and brews coffees to customers' specifications. (To avoid lineups, you can pre-order via cellphone.) Husky Oil is working with Sweden's Fuelmatics Systems to develop a robot that pumps gas into your car with a phone call. (Think of that convenience in a Canadian winter.) And if Google co-founder Sergey Brin gets his way, the corporate leviathan will have a self-driving car on the road by 2018. Mercedes-Benz won't be far behind.

One day soon, fleets of fully articulated robots will also begin to roam our habitat: shopping malls, parks, offices, sporting venues, grocery stores, hotels, museums and airports. Equipped with facial recognition software, they will instantly synch our smiles with our Internet profiles – everything we've ever divulged online. Then, they (or their unseen operator) will use that info to guide us to the right store, aisle, product or wicket. Fill out a short questionnaire on their LED screen and you'll get an instant discount coupon – the classic surrender of privacy for tangible gain.

If that seems exciting, think again, says Illah Nourbakhsh, professor of robotics at Carnegie Mellon University. Consumers, he says, could begin to feel a little robotic themselves, mere data banks to be mined and manipulated into behaviour patterns that fulfill corporate needs. "Puppets for advertisers," is how he puts it in his new book, Robot Futures.

That ironic inversion – humans transformed into a malleable underclass – isn't far removed from the misanthropic prophecy of the late Claude Shannon, father of modern information theory. "I visualize a time when we will be to robots what dogs are to humans," Mr. Shannon said in Omni magazine in 1987. "And I am rooting for the machines."

In the supermarket, we may not be certain whether the friendly robot we encounter is fully or semi-autonomous, or controlled remotely in real-time by a human being. Even on the phone, Mr. Nourbakhsh explains, robotic interlocutors will pose problems. "Making flight reservations, calling cabs, booking hotel rooms, we won't be sure if we're talking to a real person or a robot."

If machines deliver what we need, perhaps it does not matter whether our interaction is with a robot or a human being. But, Mr. Nourbakhsh asks, what if they fall short? What if, in the interests of profit, their corporate owners cut corners on reliability and efficiency? How will we react?

And when things go wrong, the litigation lawyers will surely descend. Where does a robot's agency end and ours begin? If a robot-driven vehicle accidentally hits a child playing in the street, who is ultimately responsible? The car's owner, possibly in the back seat watching movies on an iPad? Or the manufacturer? If a caretaker 'bot fatally doubles the dosage of your mother's blood pressure medicine, who is culpable? The nursing home proprietor? The software designer? The hardware engineer?

This pending collision of the authentic and the cybernetic is a potential quagmire. "Our moral future," Mr. Nourbakhsh ventures, "will be tested by ... robot-human relations."

A robot hitchhiker

If you happen to be driving west from Halifax next spring, you may encounter an experiment designed to examine precisely these questions: HitchBot, a humanoid hitchhiker in Wellington boots, about the size of a large garbage can. Look carefully and you'll see that HitchBot sports a smiling face – a light-emitting diode – inviting you to help him reach his destination: Vancouver. HitchBot aims to be the first robot to hitchhike a mari usque ad mare. And whether he makes it or not will be entirely up to us.

HitchBot is the brainchild of David Harris Smith, who teaches communications and multimedia at McMaster University in Hamilton. "I like the idea of releasing a robot into the world and seeing what happens, like Voyager 2," Prof. Smith says. "We can still communicate with it, but it will be carried along on the tide of human interaction. It will respond to text messages. It can make phone calls. It can look up topics on Wikipedia and use that information to 'converse.' It can post to Twitter and Facebook. We are working on adding speech recognition."

As Prof. Smith envisages it, someone will pick up HitchBot, plug it into the car's lighter for power, and start communicating. Perhaps it will be taken home for dinner or out for coffee, and then dropped at the roadside to continue its journey. "I'm willing to take the risk that somebody might put it in the middle of the road and run it over," Prof. Smith adds. Canadians will help write HitchBot's story.

The design of HitchBot – indeed, of any robot – is tricky. As Prof. Smith's partner, Frauke Zeller, a visiting professor of communication at Ryerson University, notes, "We need to avoid what roboticists call the uncanny valley. If the robot looks too human-like and you touch it and see that it's just a machine, that's 'uncanny,' and you won't want to interact. On the other hand, it needs some human features – eyes and mouth – to build trust between human being and robot."

It may need more than that. British researchers have developed a robot prototype that expresses emotions – fear, happiness, pride, anger, excitement and sadness – and reacts if its owners ignore it or fail to provide succour.

Paro, the cuddly Japanese robotic seal widely used in nursing homes, responds physically to its name or to being caressed. In time, seniors develop a deep emotional bond to it.

Within a decade, experts say, private home-care robots, closer to life-size, will converse, clean, cook meals and read you bedtime stories. That's the upside, but there is a potential downside. "We will need to trust these robots," Prof. Zeller says, "and give them information about ourselves, [such as medication dosages] despite the very real risks of malfunction."

To that end, most robot experiments today are focused on creating humanoids, the assumption being that we will cede trust and engage more actively with robots that look and behave more like us. But when they do, will we begin to lavish more affection on our companion robots than on former friends and relatives? And, to the extent that genuine attachment forms, will a robot's mechanical flaw deliver emotional hurt and possible heartbreak?

The late science fiction writer Isaac Asimov famously posited three laws of robotics: a robot may not harm a human being, or through inaction allow one to come to harm; it must obey human orders, except those conflicting with the first law; and it must protect its own existence, unless doing so conflicts with the first two laws.

But trust goes two ways. If robots are governed by rules designed to make us comfortable and safe, what rules will govern human treatment of increasingly sentient robots? You might treat your own android with the same civility you'd show a human being, but will your neighbours, or strangers? Will robots need a bill of rights to protect them from abuse? And what rights would those be?

The emergent world of social robots, cautions Prof. Smith, will constantly force consumers into new balancing acts, adjudicating between costs (in lost privacy) and benefits (convenience), between trust and distrust. Although many forecast a dystopian future, a dark outcome, he insists, isn't pre-ordained. The robot as avatar, or human proxy, may allow us to savour experiences otherwise unavailable or too dangerous – climbing mountains, deep-sea diving, exploring rain forests, even going to war or, yes, falling in love.

While science labs in Japan, Germany and elsewhere are racing to build prototypes, the social dialogue about laws and ethics has barely begun. The time to begin this discussion, warns Mr. Nourbakhsh, is now. "Without serious discourse and explicit policy changes," he writes, we'll create a more polarized economic world, "with robotic technologies replacing the middle class and further distancing society from authentic opportunity and economic justice."

"We should not forget," Prof. Zeller adds, "that agency remains with us, the human beings who will build, program and manage these robots."

These issues may seem too distant to grapple with seriously. But robotic technology is moving rapidly. As the dreamy Samantha tells her besotted lover in Her, "basically, I'm evolving … just like you."
