Joel Ramsey is a partner at Torys LLP and is part of the firm's technology contracting practice.

Silicon Valley's product-development ethos is best encapsulated by Facebook's unofficial motto: "Move fast and break things."

It works for tech companies that need to move a product to market quickly, testing and adjusting as necessary along the way. But this approach can be at odds with the interests of regulators and governments, who prefer to move slowly, test extensively and ensure the products used by Canadians are safe and reliable.

Now, as software developers and thinkers focus on artificial intelligence (AI), and as technologies increasingly move from cloud-based and handheld apps into the physical world, the interests of developers and regulators are coming further into conflict.

Nowhere is this better exemplified than in the area of self-driving cars. With development moving rapidly, what are some of the questions regulators must answer before letting them operate on our streets?

The present model of automobile liability is individual-focused, because cars remain largely under the control of individual drivers. As self-driving technology improves and cars become increasingly autonomous, liability is likely to shift from drivers toward manufacturers under a product-liability model.

This obviously poses risks to manufacturers, but there are other legal implications.

Manufacturers of consumer products have a duty to the end user of their products – a duty of care – to ensure that the products are safe to use and that risks are clearly indicated. But for a self-driving car, which is both a manufactured product and a platform for third-party software, who owes that duty of care, and to what extent?

There is also a question of whether governments are presently well equipped to gauge whether manufacturers and developers are meeting this duty of care. Governments must invest today in the technologically savvy talent necessary to assess the safety of the machine-learning algorithms and data-crunching techniques used to guide self-driving cars safely to their destinations.

A self-driving car is a software platform in constant communication with central servers as well as the other self-driving cars around it. In order to ensure safe passage to its destination, the car is equipped with multiple sensors, and this sensory data is sent back to central servers. Combined with data from other cars, this allows machine-learning algorithms to safely plot a course to the rider's destination.

This requires a tremendous amount of data, much of which may be sensitive or may qualify as personal information under existing privacy laws. The totality of the data paints a detailed picture of a passenger's daily activities.

Much like web-traffic data are used today, self-driving car usage history could be used to develop highly detailed user profiles, which would be very useful to firms looking to target advertisements or sell individualized products.

Collection of web-traffic data is legal under the principle of implied consent: By using a website, a user implicitly gives permission for their data to be collected and monetized for reasonable purposes disclosed by the site.

But when self-driving cars become common, courts and regulators will have to answer the question: How can a user meaningfully understand and consent to the collection and use of the extensive data generated by self-driving cars?

Silicon Valley's habit of sending "minimum viable products" (or MVPs) to market allows its companies to innovate and create at a speed unmatched by any other industry.

But innovation must be matched with a regulatory apparatus that enables this technology to be applied safely. Not only will this protect consumers, it will also enhance Canada's growing reputation as an incubator of new technology.

This technology is coming. Regulators must act to get ahead of it, anticipating what structures must be in place to ensure we are able to take advantage of the opportunities AI affords while minimizing the risks.

To "move fast and break things" may work in cyberspace, but not on Canadian streets.
