
The Globe and Mail

China using AI to censor sensitive topics in online group chats

This photo illustration shows the logo of the app WeChat. PETER PARKS/AFP/Getty Images

China appears to have massively upgraded its powerful online censorship apparatus, using it to more severely block sensitive topics in group conversations while allowing freer rein in private chats – a sign, one expert says, of a dramatic leap in the use of artificial intelligence to silence speech that falls afoul of the Communist Party.

The growing sophistication of China's Internet blocking, uncovered by researchers at the University of Toronto, offers a window into how the country's authoritarian regulators are growing savvier at choking out what they see as undesirable speech – while limiting the anger their deletions stir both domestically and abroad.

Researchers with U of T's Citizen Lab at the Munk School of Global Affairs studied the app WeChat, the principal artery of digital communication in China and owned by one of the country's largest Internet firms, Tencent Inc. They found evidence of a complex WeChat management system that alters degrees of censorship according to the situation.



Virtually no censorship exists for users who chat directly with each other or are registered to use the app with non-Chinese phone numbers.

But group chats, a common form of communication in China among companies, friends, vendors and activists alike, face stricter management. Enter a WeChat group, and messages related to the Tiananmen Square Massacre, some government officials, leaving the Communist Party or the Falun Gong spiritual movement simply disappear.

The blocking can respond quickly to current events, and the system has grown so complex that it will allow standalone mentions of "June 4" – the date of the Tiananmen massacre – but not a message that combines the terms "June 4," "student" and "democracy movement."
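The selective behaviour the researchers observed can be sketched in a few lines. This is a hypothetical illustration only, not Tencent's actual system: the rule set, the function name and the region/chat-type parameters are all assumptions made for the example.

```python
# Hypothetical sketch of the filtering behaviour Citizen Lab observed:
# keyword rules apply only to group chats on China-registered accounts,
# and a message is blocked when it contains every term of some banned
# combination -- a lone "June 4" passes, but "June 4" + "student" +
# "democracy movement" together does not. Illustrative, not WeChat's code.

BLOCKED_COMBINATIONS = [
    # Each rule is a set of terms that must ALL appear to trigger blocking.
    {"June 4", "student", "democracy movement"},
]

def is_censored(message: str, account_region: str, chat_type: str) -> bool:
    """Return True only for group-chat messages from CN-registered accounts
    that contain every term of any banned combination."""
    if account_region != "CN" or chat_type != "group":
        return False  # private chats and foreign-registered accounts pass freely
    return any(all(term in message for term in combo)
               for combo in BLOCKED_COMBINATIONS)
```

Note that the report suggests the real system is driven by machine learning rather than a static rule list; the sketch only reproduces the observed outward behaviour, not the mechanism behind it.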

For Jack Qiu, a professor at the Chinese University of Hong Kong who has studied mainland communication technology, it's confirmation that Beijing has moved away from simple blacklists of banned terms and is instead using algorithms that can, at lightning speed, censor different things in different places for different people at different times.

"This is a new generation of censorship technologies. And the backbone is an intelligent system – what is commonly known as machine learning," Prof. Qiu said. "So the censorship system can learn by itself."

WeChat users were once warned that messages touching on sensitive topics were being blocked. Now, they simply vanish, without a notice to either the sender or the receiver.


The heavier restrictions on group conversations suggest one of the censors' primary goals is to keep people from organizing, rather than merely discussing events with friends.

"In a group chat a message has the potential ability to reach 500 users – and so that could also have higher potential for encouraging mobilization," said Masashi Crete-Nishihata, the research manager at Citizen Lab and one of the authors of the new report, "One App, Two Systems: How WeChat uses one censorship policy in China and another internationally."

The researchers tested 26,821 keywords related to sensitive topics such as riots, political disputes, natural and human disasters, human rights, the Falun Gong and the wealth of top officials.

Managing WeChat is among the highest priorities for the Chinese government, since no other app or website matches its importance in communication. WeChat has become the hub for China's digital life, mixing chat with banking, shopping, bill payments, gaming, gift-giving and commerce. It counts more than 800 million users.

Authorities have for years appeared to possess direct access to whatever is communicated through WeChat. Users have cited occasions in which their messages were investigated by police in near real-time, suggesting the system is almost perfectly transparent to the state. When protests break out, those on the streets often find their accounts quickly deactivated or their access to functions like group chats blocked.

China also acts rapidly to delete articles, blog posts and other content that spreads through WeChat, in addition to its iron-fisted control of the country's press.


And China's technological upgrades extend beyond managing speech. It has also begun to use big data techniques to develop a "social credit" system to measure a person's trustworthiness in the eyes of the state, creating ratings that could profoundly affect the lives of those seen as challenging the party line.

But the new Citizen Lab report also suggests that China, for all its willingness to jail activists and silence critics, is using the system to ratchet back censorship for most users. Of the 26,821 terms tested, only two were censored in private conversation – the words "Falun Gong" in simplified and traditional Chinese characters – and even those were not consistently blocked. During some testing periods, no keywords were censored at all.

In a separate test of the web browser built into WeChat, the researchers pulled up the one million most popular sites on the Internet. Only 41 were censored for Chinese users. International accounts encountered no blocks. So it seems Chinese companies are building dual systems to conquer the outside world, even as Western firms such as Facebook ready new censorship tools in hopes of entering the Chinese market.

Tencent is "building a bifurcated social media platform that appears differently to different audiences and extends its controls over mainland Chinese users beyond its borders to follow them around wherever they go," said Citizen Lab director Ronald Deibert. "The sophistication, such as it is, is less technological than it is strategic."

If part of that includes adopting a machine-learning system, Prof. Qiu said, one benefit would lie in its ability to decrease false positives – avoiding a situation where a discussion of breast cancer, for example, is blocked in an attempt to halt pornographic messages about breasts.

"When the system is dumb, you kill too many things," he said.

But in the broader picture, he said, this kind of machine-driven censorship "is more horrifying because the system is more effective. It is more precise, but at the same time it can concentrate. And it's targeted at preventing citizen activism or the formation of a meaningful dissident voice."

About the Author
Asia Bureau Chief

Nathan VanderKlippe is the Asia correspondent for The Globe and Mail. He was previously a print and television correspondent in Western Canada based in Calgary, Vancouver and Yellowknife, where he covered the energy industry, aboriginal issues and Canada's north. He is the recipient of a National Magazine Award and a Best in Business award from the Society of American Business Editors and Writers.
