
Facebook is increasingly relying on a mix of humans, artificial intelligence (AI) and automated computer programs to recognize certain words and block them, in an attempt to stop the spread of hate speech, among other things, on its site. (Photo illustration: The Globe and Mail)

Last month, Kate Hansen woke up in her home on Vancouver Island and started going about her routine, which included a check-in with Facebook. She was stumped to find that she could no longer post anything or read any messages.

Hansen, 40, had a pretty good idea that her account suspension was because of one of two posts. The first was a comment expressing her opposition to Facebook's recent suspension of other lesbians' accounts for using the word "dyke" on the social network. The second was a comment she made in the closed, members-only Facebook group Radical Lesbian Feminists about trying to gather a group of women to travel from Vancouver Island to Vancouver for that city's annual Pride event. "Our own dyke march," she wrote.

She hadn't meant anything negative or derogatory by either comment, yet it would take another 10 days before Hansen learned that the second instance had caught the attention of someone – or some computer program – at the social-media giant: "We removed the post … because it doesn't follow the Facebook Community Standards," the notification said.

Facebook, now with about two billion users worldwide, is under increasing pressure not only to prevent killings, suicides and criminal and terrorist activity from happening on its platform, but also to police millions of posts each day. The company is increasingly relying on a mix of humans, artificial intelligence (AI) and automated computer programs to recognize certain words and block them, in an attempt to stop the spread of hate speech, among other things, on its site. In Hansen's case – and for plenty of others – the social-media giant got it wrong.

In recent weeks, the word "dyke," in particular, has become a symbol of what happens when the protections in place to filter hate speech go awry. There have been multiple instances of Facebook users posting the word in a non-derogatory context but nonetheless being flagged and subjected to restrictions on their accounts. A lesbian motorcycle club in Queensland, Australia, called Dykes on Bikes, uses the title as its trademark, but has had to settle for spelling it "Dy kes on Bikes." In the United States, partners Liz Waterhouse and Lisa Mallett sent Facebook a 51-page document on July 5 that included multiple testimonies and screenshots of accounts where women have been banned for using "dyke" in an empowering, self-referential way. The pair, who are documenting their fight with Facebook through their blog listening2lesbians.com, also began an online petition that garnered more than 7,200 signatures. Part of the petition called on Facebook to investigate the "dyke"-related suspensions to determine whether a Facebook employee was responsible for possible discrimination.

"We demand that Facebook determine if any of their employees responsible for judging user content are showing a bias against women and lesbians," Waterhouse and Mallett said in the document they presented Facebook. "We call on Facebook to terminate the employment of any individual that has intentionally targeted women and lesbians for their beliefs." (On their blog, Waterhouse and Mallett said a Facebook spokesperson told them the company is investigating the incidents and whether any content reviewers need retraining. They described their conversation with the Facebook spokesperson as "productive.")

When Hansen realized she was suspended, she reported the error to Facebook through its appeals process, but didn't hear back from the company. It wasn't until The Globe and Mail contacted Facebook that the company looked into Hansen's complaint. After 18 days, it lifted her suspension.

A Facebook spokesperson in Toronto acknowledged that the word "dyke," in particular, has been an issue in recent weeks.

When asked directly about Hansen's case, the spokesperson said it was an error caused by a mix of human and AI judgment.

"The removal of content involving the word 'dyke' is due to a misunderstanding of context by both our automated system and community operations team," the spokesperson said, adding that it is sometimes difficult for content reviewers to understand when a term is being used in a self-referential way.

Of the millions of reports each week flagging potentially offensive content, many are sifted through by automation. Some of that content can end up being reviewed by one of thousands of Facebook community operations team members located around the world.

These employees review millions of reports 24 hours a day, seven days a week, in 40 languages, the spokesperson said. As a result, mistakes happen: content is sometimes removed in error, resulting in wrongful account suspensions.

Sarah Roberts, assistant professor of information studies at the University of California, Los Angeles, says getting content moderation right has been the biggest problem facing Facebook since its launch in 2004.

Roberts, who specializes in digital ethics, governance and content moderation, says Facebook employees are struggling to keep up with the volume of user content.

"Workers are often under extreme productivity metrics that quite literally only allow them 10 seconds to deal with a particular case," Roberts said. "So are they going to have time to thoughtfully review and look and see if a given person is using a word like [dyke] self-referentially or if they are using it as a slur?"

But cultural nuance also plays a role, she says, adding that "local meaning of words becomes difficult when somebody in another part of the world may be adjudicating content and may not have that sensitivity."

"Facebook has struggled with, and I feel continues to struggle with, how to accommodate individuals who have reason to want to self-identify in ways that challenge social norms and other kinds of norms," she says.

One way the company is trying to improve its ability to police hate speech is through technology. At the end of 2016, Reuters reported that Facebook's director of applied machine learning, Joaquin Candela, said the company would increasingly be using "an algorithm that detects nudity, violence or any of the things that are not according to [Facebook] policies." The company is also employing image-recognition technology that scans pictures and videos to help it block content that could be considered revenge porn or exploitative of children.

Graeme Hirst, a professor in the University of Toronto's department of computer science, agrees that AI still lags far behind a human's ability to detect context.

People who do the screening "don't look at everything that is posted on Facebook, that would be impossible," said Hirst, whose expertise is in natural language processing and computational linguistics.

"One easy way to do this is to simply flag all the bad words, and [words such as dyke] and then send them off to a human to resolve the issue and maybe a human made a mistake. If we hypothesize that, the computer isn't expected to take context into account, that's the human's job," he says.

Whether this was a slip-up by AI or by a Facebook employee – or both – is beside the point for Hansen. This is the second time she has had her Facebook account privileges wrongfully suspended (the first was in 2010, after she posted paintings of women breastfeeding). She's now considering moving her personal content and conversations to other platforms to avoid getting improperly banned a third time.

"Anything I want to keep I want to move somewhere else because I can't guarantee or rely on [Facebook] to keep my stuff safe."
