
Social-media platforms appear to be having an amorality contest, and this week it was YouTube’s turn to shrug at the harm that it’s caused.

On Monday, the New York Times reported that the platform failed to protect children from people who sexualize them, even though it has known about the problem for months. When prompted with a search for erotic videos, YouTube’s recommendation algorithm is still serving up images of increasingly young children doing things that should be innocuous, such as playing in swimsuits or doing gymnastics.

The next day, Vox journalist Carlos Maza received a reply to his complaints about being targeted by a YouTube vlogger who he said had spent years aiming homophobic, racist and hateful insults at him. The vlogger has almost 4 million subscribers, some of whom allegedly targeted Mr. Maza across multiple platforms and in his personal inbox with death threats and threats to release his personal information online.

Even so, replied YouTube, “while we found language that was clearly hurtful, the videos as posted don’t violate our policies.” Which is confusing, since those policies advise users not to post content that “makes hurtful and negative personal comments/videos about another person” or that “incites others to harass or threaten individuals on or off YouTube.”

Every major social-media platform – Twitter, Facebook, Reddit – has played a part in creating this age of disinformation and extremism. But unlike the other platforms, YouTube shares the ad money it makes with content creators: Tech journalist Julia Carrie Wong argues that it’s effectively their employer, whether it accepts that title or not. That means the platform is directly delivering rewards to its creators, including those who propagate prejudice, creepiness and lies. In fact, it even helps them spread their message.

Some inside the company have tried to solve the issue. In April, Bloomberg published a story for which it interviewed “scores of employees” who said they had long known that the site’s recommendation algorithm was leading people toward “false, incendiary and toxic content.”

But senior executives, including chief executive officer Susan Wojcicki, seem to be so focused on the advertising money that YouTube’s audience brings in that they ignore the well-being of those same users. They dismissed these warnings, along with suggestions of how to counter the problem. The site’s growth depends on “engagement,” after all – the raw amount of time people stare at the screen. And what keeps them there is a recommendation engine that pushes out increasingly extreme or explicit content.

At the 2018 South by Southwest conference, Bloomberg reported, Ms. Wojcicki defended the problematic content YouTube hosts by comparing the platform to a library. “There have always been controversies, if you look back at libraries,” she said.

But YouTube isn’t a bookshelf. It’s a billion-dollar bookseller, promoting some of the hundreds of millions of stories in its possession over others. Its algorithm doesn’t ignore, or even bury, the factless ramblings of vaccine-science deniers (including at least one in Montreal, a city now seeing an uptick in measles cases). No, it lifts them out of its infinite catalogue and thrusts them out into the world, with the book cover facing out and an “Audience Favourite” sticker slapped on the front.

Revelations of this kind of social-media irresponsibility now lead, reliably, to a certain kind of reaction: the patchwork, flip-flopping, half-measure responses that platforms think will fool us into believing they care. After learning that pedophiles were using comment sections to try to goad children into exploiting themselves, YouTube took comments off some, but not all, videos featuring children. When Mr. Maza’s situation led to a huge outcry, YouTube “demonetized” the vlogger in question, cutting off his access to ad revenue without a clear explanation of why it had changed its decision, or when and how the revenue might be reinstated. The criticism continues, as do the company’s inadequate solutions; now YouTube is demonetizing or removing outright creators it deems extremist, sweeping up documentary makers and researchers in the process and putting itself at risk of criticism for interfering with free speech.

Free speech is a political issue. Free amplification, though, is a business decision that YouTube is actively making. Which is why the one response that insiders, observers and experts have long advocated continues to be ignored: designing a new, more ethical recommendation algorithm that doesn’t reward repugnant behaviour.

Doing so would reduce traffic, and therefore revenue, for creators, a spokesperson told the Times this week. Somehow, though, she didn’t get around to pointing out that the bulk of that money ends up with YouTube.
