
Thomas W. Malone is a professor at the MIT Sloan School of Management and the founding director of the MIT Center for Collective Intelligence. His new book is Superminds: The Surprising Power of People and Computers Thinking Together.

What can we do about the rise of fake news? Of course, no single individual can solve a problem like this alone; it requires groups of people – and often computers. One name for these groups is superminds – groups of individuals and computers acting together in ways that often seem intelligent.

There are five types: hierarchies, markets, communities, democracies and ecosystems. Perhaps the most obvious way superminds can deal with fake news is by using the hierarchical organizations of the social-media companies themselves – such as Facebook and Twitter – to police the content on their sites. Most social-media companies today use a combination of computer algorithms and human judgment to eliminate objectionable content from their sites. Interestingly, some companies may have overestimated the capabilities of artificial-intelligence algorithms for this purpose and underestimated the need for human judgment. Facebook, for instance, recently announced that it was planning to add 10,000 people to its content-review team by the end of the year.

But there is a problem with having these sites filter out fake news themselves: These companies are strongly motivated to think about what news items will increase their own long-term profit from advertising, not about what news items are most accurate or most helpful for the broader society. In other words, markets don’t necessarily provide the right incentives to deal with fake news.

A possible way of dealing with this conflict-of-interest problem is to use laws enforced by hierarchical governments. But it’s not obvious how to actually do this in a way that would be acceptable in most modern democratic societies. There are certain specific situations (such as libel and perjury) in which it is illegal to lie, but in general, the law does not require you to tell the truth. And in societies in which freedom of speech and freedom of the media are bedrock principles, it is a slippery slope to try to legislate what people are allowed to say and what they aren’t.

Another possible way to deal with fake news is with a mechanism that is critical in communities: reputations. If you have a good reputation in a community, your views are more credible and influential. When most of us lived in small towns or other similar communities, we knew the reputations of most of the people we dealt with, and that helped us know who was credible.

In the world of social media, we are all like people from small towns visiting a big city for the first time. It’s hard to know who to trust. Over time, however, people who live in big cities usually learn pretty well how to tell who is trustworthy and who isn’t. And we now need to develop effective ways of doing that online.

One possibility is to have sites such as Facebook and Twitter display credibility ratings (that is, reputations) for different journalists and publications. Elon Musk recently proposed an intriguing way of determining such ratings using a kind of online democracy. His basic idea was to let online users vote on (that is, give ratings to) the different news sources to gauge their credibility. There are at least two potential problems with this. For starters, readers’ and viewers’ opinions about the credibility of media sources are likely to depend, in part, on their pre-existing biases. For instance, if a country is so polarized that liberals only find liberal journalists credible, and conservatives only find conservative journalists credible, then the ratings on such a site wouldn’t be a good gauge of what is true – they would just be an opinion poll of participants’ political preferences. And it would be possible to game the system by encouraging lots of people (or even automated bots) who agree with you to register their opinions on the site.

The second – and much deeper – difficulty concerns how we decide what is true. Just because many people believe something is true doesn’t necessarily mean that it is. History is full of examples of things that many people once believed, but which most people now think are false: The Earth is flat; the sun moves around the Earth; evil spirits cause disease.

Philosophers know that questions of what is true and how we know it are extremely subtle and complex. But for practical purposes, different communities have developed their own methods of determining what is true that go beyond just trusting majority opinion. Scientists, for instance, have developed methods for doing carefully controlled experiments, and mathematicians have honed the art of making rigorous logical arguments.

Fortunately, there are analogous standards for responsible journalism. Journalists, for example, are expected to verify “facts” they hear from one source by corroborating them with other sources. They are expected to identify the sources of information they report and to give subjects of unfavourable news coverage an opportunity to respond. It’s reasonable to assume that these standards lead to journalism that is more accurate and objective. But using these standards to judge journalistic credibility requires knowledge and skill – not to mention time and effort.

Crowdsourcing of the sort Mr. Musk proposes can work well when the knowledge, skill and motivations needed to do a task are widely distributed in the crowd. But it’s probably not reasonable to assume that most members of the general public would be willing and able to apply these journalistic standards effectively.

A promising alternative, however, would be to create independent hierarchical organizations, perhaps non-profits, that make it their business to rate the credibility of news sources based on these criteria. There are already independent fact-checking organizations, such as snopes.com, politifact.com, factcheck.org and hoax-slayer.com, and if some groups such as these can manage to be credible to most people of both political persuasions, then they could have a very substantial and positive effect on media credibility.

The online media companies could then pay these ratings agencies for the right to display their ratings. And I suspect that if broadly trusted ratings of media credibility were actually available, many users would want to see these ratings with their news. Providing such ratings might then become necessary for the online media companies’ own profitability – not to mention the fact that it could bolster their reputations as responsible corporations.

There is also, however, a pessimistic possibility. If a national dialogue becomes so polarized that no single source of information or ratings is credible across political lines – even with ostensibly independent organizations involved – then none of the methods I discussed here would work very well. Biases, real or imagined, would breed the perception that no media outlet can be trusted. And then there would not really be a single national community any more. In that case, decisions would likely be made primarily by the last kind of supermind: an ecosystem, in which the law of the jungle prevails and decisions are made purely on the basis of who has the most raw power.

That would be a bad omen, indeed, for the future.