
Max Tegmark, best-selling author of Life 3.0: Being Human in the Age of Artificial Intelligence, speaks at the Beyond Impact University of Waterloo Innovation Summit on Friday. Glenn Lowson

Proponents of artificial intelligence make it clear that, as the technology speeds ahead, a much larger dialogue must take place outside the immediate community of technologists, engineers and tech theorists, so that all of society can reap its apparent benefits.

But, how?

How can we ensure that wide swaths of society will be heard? Who is to determine the path that artificial intelligence, or AI, and machine learning will ultimately take? When proponents pose such major questions publicly, they inevitably agree that the course of AI shouldn't be set simply by those who directly profit from it (or the machines themselves!), but by society as a whole.

But, again, how? The need for an answer is now very much upon us, AI proponents say.

"What will the role of humans be, if machines can do everything better and cheaper than us?" asked Max Tegmark, a professor of physics at the Massachusetts Institute of Technology and the author of Life 3.0: Being Human in the Age of Artificial Intelligence. He was speaking at the Beyond Impact summit on artificial intelligence Friday at The Globe and Mail in Toronto, presented in conjunction with the University of Waterloo.

The assumption behind such questions is that artificial intelligence is progressing toward AGI, or artificial general intelligence, in which a machine could think for itself, or at least carry out an intellectual task on its own, as a human can.

Some believe we may never reach true AGI, Dr. Tegmark noted. Machines may never have the consciousness of a living entity or show true creativity. Yet, "the future development of AI might go faster than typical human development, and there is a very controversial possibility of an intelligence explosion, where self-improving AI might rapidly leave human intelligence far behind," he said.

And so, many researchers believe that AGI may be possible in a matter of decades, "but this really begs the question, and then what?" Dr. Tegmark said. Being complacent is a cop-out, he argued. "I think we should be more ambitious. I think we should ask ourselves, 'What kind of inspiring high-tech future would we like to have?' And then steer toward that."

He suggested that we are in a race between the growing power of technology and the wisdom with which we manage it. But in an age of powerful and potentially devastating technologies, such as nuclear weapons and AI, the strategy must not be to deploy new technology first and then learn from our mistakes, he argued.

"Learning from our mistakes is a ridiculous strategy. It's much better to be proactive, rather than reactive," Dr. Tegmark said. "We should start with kindergarten ethics, that we all agree about, and start figuring out how we can put that in our machines." In other words, we need to input a kind of basic ethical goodness into software and machines, allowing them to build upon those ethics as they learn.

Pearl Sullivan, dean of engineering at the University of Waterloo, also noted that digital and machine intelligence is galloping ahead faster than the human response to AI. She cited a report by MIT and Boston Consulting Group, which surveyed 3,000 executives and found that 85 per cent believed AI would provide a competitive advantage, but only 20 per cent had extensively implemented AI in their businesses. Less than 39 per cent had any AI strategy at all.

"Questions are more important than answers," she said. "Successful implementation of AI will require what I call AI translators," in other words, putting machine knowledge and the information fed to them into proper context. "In my view, Canada's long-term AI leadership will depend on how we create a deep and diverse pool of translator talent."

Steve Irvine, a former executive at Facebook and now chief executive of Toronto-based Integrate.ai, said that because software runs on the world's data, and because AI is fundamentally software, the goal now is to evolve software's function from performing a fixed task to dealing with probabilities, that is, making predictions in ambiguous situations. The technology can then help people make judgment calls based on a better understanding of the probability of certain outcomes, Mr. Irvine said.

Yet, again, it all comes back to the human questions. "There is an equal amount of more humanities-based, philosophical questions that need to be answered," he said. He noted, for instance, that the debate over "fake news" is arguably not so much a problem of algorithms, although they might exacerbate it, as a debate of human judgment over what is true and fair. It is therefore essential that people participate in the conversation about how technology is being used.

"It's not a technological voice that we need. It's a voice of common sense, rationality, more human perspective," he argued.

The technology can also have unintended biases, such as facial recognition software that works best on white men because the data used to develop it consisted mainly of images of white men. "And so my concern is that these biases and issues are going to creep into some of these systems," added Kate Larson, a professor at the University of Waterloo's Cheriton School of Computer Science.

And so, the input and inclusion of wider society is essential. "We need people who can think hard and carefully about what should our preferences be," she said. It should not be up to technologists alone to decide.
