Leadership Lab

Managing director of AI at Accenture in Canada, host of The AI Effect podcast with Amber Mac, which launches Season 2 on Oct. 23

Artificial intelligence (AI) is bringing amazing changes to the workplace, and it’s raising a perplexing question: Are those robots sexist?

While it may sound strange that AI could be gender-biased, there’s evidence that it’s happening when organizations aren’t taking the right steps.

In the age of #MeToo and the drive to achieve gender parity in the workplace, it’s critical to understand how and why this occurs and to continue to take steps to address the imbalance. At Accenture, a global professional services company, we have set a goal to have a gender-balanced work force by 2025. There is no shortage of examples that demonstrate how a diverse mindset leads to better results, from reports of crash test dummies that are modelled only on male bodies, to extensive academic studies on the performance improvements at firms with higher female representation. We know that diversity makes our business stronger and more innovative – and it is quite simply the right thing to do.

To make sure that AI is working to support this goal, it’s imperative to know how thought leaders, programmers and developers can use AI to fix the problem.

The issue matters because Canadian workplaces still suffer from gender inequality. Analysis by the Canadian Press earlier this year found that none of Canada’s TSX 60 companies listed a woman as its chief executive officer, and two-thirds did not include even one woman among their top earners in their latest fiscal year.

Add to this the reports about behaviour in the workplace that undermines the principles of diversity and inclusion. Of course, AI isn’t the cause, but it can perpetuate the problem unless we focus on solutions. AI can contribute to biased behaviour because the knowledge that goes into its algorithm-based technology came from humans. AI “learns” to make decisions and solve complex problems, but the roots of its knowledge come from whatever we teach it.

There are lots of examples showing that what we put into AI can lead to bias:

  • A team of researchers at the University of Washington studied the top 100 Google image search results for 45 professions. Women were generally under-represented in the searches, as compared with representation data from the Bureau of Labor Statistics. The images of women were also frequently more risqué than how a female worker would actually dress for some jobs, such as construction. Finally, at the time, 27 per cent of American CEOs were women, but only 11 per cent of the Google image results for “CEO” were women (not including Barbie).
  • In a study by Microsoft’s Ece Kamar and Stanford University’s Himabindu Lakkaraju, the researchers acknowledged that the Google images system relies on training data, which can lead to blind spots. For instance, an AI algorithm trained only on photos of black dogs and of white and brown cats may mistake a white dog for a cat.
  • An AI research scientist named Margaret Mitchell trained computers to have human-like reactions to sequences of images. Shown a house burning to bits, a machine described the scene as “an amazing view” and “spectacular” – seeing only the contrast and bright colours, not the destruction. This came after the computer had been shown a sequence of solely positive images, reflecting a limited viewpoint.
  • Late last year, media reported on Google Translate converting the names of occupations from Turkish, a gender-neutral language, to English. The translator-bots decided, among other things, that a doctor must be a “he,” while any nurse had to be “she.”

These examples come from biased training data, where one or more groups may be under-represented or not represented at all. It’s a problem that can exacerbate gender bias when AI is used for hiring and human resources. Statistical biases can also exist in areas including forecasting, reporting and selection.

The bias can come from inadequate labelling of the populations within the data; for example, there were too few white dogs represented in the database of the machine looking at dogs and cats. Or it can come from machines leaning too heavily on variables that are highly correlated with gender; for example, weeding out job candidates because their address is a women’s dorm on campus, without realizing this was screening out female applicants.
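
To make the proxy problem concrete, here is a minimal sketch in Python – with made-up column names and toy data, not any real hiring system – of how an analyst might check whether a seemingly neutral field such as a campus address is standing in for gender:

    import pandas as pd

    # Toy applicant data: "campus_address" is not a gender field, but in this
    # sample it tracks gender almost perfectly -- a classic proxy variable.
    applicants = pd.DataFrame({
        "gender":         ["F", "F", "F", "F", "M", "M", "M", "M"],
        "campus_address": ["west_dorm", "west_dorm", "west_dorm", "north_hall",
                           "north_hall", "north_hall", "north_hall", "north_hall"],
        "advanced":       [0, 0, 1, 1, 1, 1, 0, 1],
    })

    # Cross-tabulate the candidate feature against the protected attribute.
    # A lopsided table means a model can "see" gender through the proxy.
    print(pd.crosstab(applicants["campus_address"], applicants["gender"],
                      normalize="index"))

    # Compare how often each gender advances, to see whether the proxy does harm.
    print(applicants.groupby("gender")["advanced"].mean())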

Gender bias can also come from poor human judgment in what information goes into AI and its algorithms. For example, a job search algorithm may be told by its programmers to concentrate on graduates from certain programs in particular geographic locations, which happen to have few women enrolled.

Ironically, one of the best ways to fix AI gender bias involves deploying AI.

The first step is to use analytics to identify gender bias in AI. A Boston-based firm called Palatine Analytics ran an AI-based study looking at performance reviews at five companies. At first, the study found that men and women were equally likely to meet their work goals. A deeper, AI-based analysis found that when men were reviewing other men, they gave them higher scores than they gave to women – which led to women being promoted less frequently than men. Traditional analytics looked only at the scores, while the AI-based research helped analyze who was giving out the marks.
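
As a rough illustration of that kind of analysis, the sketch below – in Python, with invented numbers rather than Palatine’s data – breaks review scores down by who reviewed whom, the step that simple averaging misses:

    import pandas as pd

    # Made-up performance reviews: each row records the reviewer's gender,
    # the reviewee's gender and the score that was awarded.
    reviews = pd.DataFrame({
        "reviewer": ["M", "M", "M", "M", "F", "F", "F", "F"],
        "reviewee": ["M", "F", "M", "F", "M", "F", "M", "F"],
        "score":    [4.6, 3.8, 4.5, 3.9, 4.2, 4.3, 4.1, 4.2],
    })

    # The overall average hides the pattern entirely...
    print(reviews["score"].mean())

    # ...but grouping by reviewer/reviewee pair exposes it: in this toy data,
    # male reviewers score male colleagues higher than they score women.
    print(reviews.groupby(["reviewer", "reviewee"])["score"].mean())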

A second method to weed out gender bias is to develop algorithms that can hunt it down. Scientists at Boston University have been working with Microsoft on a concept called word embeddings – sets of data that serve as a kind of computer dictionary used by AI programs. They’ve combed through hundreds of billions of words from public data, keeping legitimate correlations (man is to king as woman is to queen) and altering ones that are biased (man is to computer programmer as woman is to homemaker), to create an unbiased public data set.
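
The core trick can be shown in a few lines. Here is a minimal sketch, using toy three-dimensional vectors rather than real embeddings, of “neutralizing” an occupation word by removing its component along the he–she direction:

    import numpy as np

    # Toy word vectors (real embeddings have hundreds of dimensions).
    he         = np.array([ 1.0, 0.2, 0.1])
    she        = np.array([-1.0, 0.2, 0.1])
    programmer = np.array([ 0.6, 0.5, 0.3])   # leans toward "he" in this toy space

    # The gender direction is the normalized difference between the pair.
    g = he - she
    g = g / np.linalg.norm(g)

    # Neutralize: subtract the occupation vector's component along that direction,
    # so "programmer" ends up equally distant from "he" and "she".
    programmer_neutral = programmer - np.dot(programmer, g) * g

    print(np.dot(programmer, g))           # before: a clear gender lean
    print(np.dot(programmer_neutral, g))   # after: effectively zero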

The third step is to design software that can root out bias in AI decision-making. Accenture has created an AI Fairness Tool, which looks for patterns in the data that feed into its machines, and then tests and retests the algorithms to weed out bias – including the subtle forms that humans might not easily spot – to ensure people are being assessed fairly. For example, one startup called Knockri uses video analytics and AI to screen job candidates; another, Textio, has a database of some 240 million job posts, to which it applies AI to root out biased terms.
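
One simple test of this kind – sketched below with hypothetical screening results, not Accenture’s actual tool – compares the rate at which an algorithm passes candidates from each group and flags it when the gap is large:

    # Each record is (group, passed_screening) for one hypothetical candidate.
    decisions = [("F", True), ("F", False), ("F", False), ("F", True),
                 ("M", True), ("M", True),  ("M", False), ("M", True)]

    def selection_rates(records):
        totals, passed = {}, {}
        for group, ok in records:
            totals[group] = totals.get(group, 0) + 1
            passed[group] = passed.get(group, 0) + int(ok)
        return {g: passed[g] / totals[g] for g in totals}

    rates = selection_rates(decisions)
    ratio = min(rates.values()) / max(rates.values())

    # The "four-fifths" rule of thumb flags a ratio below 0.8 for review.
    print(rates, "ratio:", round(ratio, 2), "needs review:", ratio < 0.8)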

Gender bias in AI may seem like a daunting problem, but it comes with its own solution. It’s our future – developing and deploying the technology properly can take us from #MeToo to a better hashtag: #GettingToEqual.

