Can AI Fix Fake News?

What's the issue?

People's reasoning capacity is limited by the quality of the inputs they consume. If the inputs are bad, people's thinking and decision-making will be bad too.

In the era following the invention of the printing press, there was a temporary decline in the average quality of published content. The result was 150 years of witch hunts and religious wars.

Today, the civilized world is beginning to devolve into quasi-religious conflicts. The primary reason for this devolution is that the quality of information people consume has steadily declined from the late 90s onward. Unsurprisingly, people's capacity to reason and make informed choices has followed suit.

Why is this happening?

The internet has taken over as the primary medium of information delivery, with even traditional media (television, press) gradually moving online. The vast majority of information published online is monetized in one of two ways: pay-per-click or pay-per-view. There is no pay-per-quality or pay-per-truth. Only clicks and views affect the bottom line.

The inevitable result of this incentive structure is that, over time, content evolves to become more optimized for clicks and views. Selective pressure drives evolution. In its evolution, content becomes less optimized for quality, truth, utility, or any other factor that does not contribute to a publisher's survival. 

As content evolves in the direction of clicks and views, it creates selective pressure on internet companies to evolve in the same direction. The companies that emerge victorious are those whose innovation focuses mainly on ways to grab people's attention and make content more addictive. 

This is why many of the companies that benefited from the internet boom track user engagement as a primary metric of their success. This is true of Google, YouTube, Facebook, Twitter, TikTok, Pinterest, Snapchat, Instagram, and many others. User engagement measures a company's ability to grab people's attention repeatedly. And clearly, these companies excel at it.

As these attention-grabbing companies succeed, they accumulate disproportionate control over the flow of information. And since the quality of information is already low, these companies feel responsible for misinforming the public (or fear a backlash for doing so).

To minimize the potential blowback, they develop strategies for filtering content that (in their minds) the public might find objectionable. This results in arbitrary censorship of some viewpoints and not others. This arbitrary censorship lowers the quality of information people consume even further. 

And so, while most of us assume there is a tradeoff between information quality on the one hand and free speech on the other, the reality is much worse.

We are stuck with the worst of both worlds - the information ecosystem is simultaneously full of junk and censorship.

Can we reverse these effects?

Before considering which methods can successfully reverse this trend, we should pause and think about the methods that are being tried at present and why they cannot possibly work at scale.

One method that is often brought up in these discussions is fact-checking. The idea behind this approach is that if some people introduce false claims into the ecosystem, then others can review them and conclude that they are false. 

This idea appears to make sense at first. Bad people do bad things; good people fix them.

Alas, not so fast. There are several obvious problems with this approach:

Who watches the watchers? 

Or, to be even more cynical: if fact-checker status allows people to influence the information ecosystem, wouldn't the fact-checking profession attract precisely the actors you would not want to trust with that power?

Is truth the right criterion? 

This is a subtle but acute problem. Think of any iconic invention or paradigm-shifting idea in history. Pick your favorite. Now ask yourself: would a fact-checker have considered it true when it was first proposed?

Unfortunately, while most contrarian ideas are bad, all truly great ideas are contrarian at first. This is the structure of scientific revolutions. 

Filtering things that seem untrue out of the public square would be an excellent way to end all innovation.

Are there enough fact-checkers, and do enough people pay attention to them?

The amount of information we produce is growing at an exponential rate. Every social media user can author a post that might go viral. No number of professional fact-checkers could verify all the claims published on the internet.

Moreover, suppose that we had an unlimited number of fact-checkers.

Who would see their fact-checks? Would those fact-checks get as much distribution as the articles whose claims they verify, or orders of magnitude less?

Another method often brought up is moderating (or censoring) content to remove misinformation or disinformation. In some countries, the government takes this mission upon itself; in others (like the US), the government typically pressures private companies, e.g., Facebook and Twitter, to moderate content on their platforms.

The problems with this method are even more obvious:

Junk is not misinformation or disinformation. It's just junk.

If the universal incentive across the web is to produce and distribute junk - what would be left in the ecosystem if all the misinformation and disinformation were removed? 

The answer is junk, of course. Just slightly less of it.

Misinformation includes all innovations, too

As discussed above, most contrarian ideas are bad, but all truly great ideas are contrarian at first. Since all contrarian ideas fall under the standard definition of misinformation, censoring misinformation (whether done by governments or private companies) would likely end all innovation.

So, now that we've eliminated the approaches that obviously won't work, let's consider some that might. We need something that can be applied to all content in real time and that incentivizes publishers to produce something other than junk.

In thinking about how to fix information, let's consider the closest parallel: if information is nourishment for the brain, our approach to it should resemble our approach to other forms of nourishment. So ask yourself: how do you reduce the amount of unhealthy food in your diet?

In attempting to answer this question, most people employ some (or all) of the following methods:

  1. Shop in grocery stores that carry higher-quality products.
  2. Shop primarily in the perimeter aisles where most unprocessed foods tend to be.
  3. Look at the nutrition label on each product before deciding whether to buy it.

Indeed, these methods can also be applied to brain food. Unrolling the analogy, one is left with the following three approaches:

  1. Consume information on platforms that carry higher-quality content.
  2. Focus primarily on the more informative sections of these platforms.
  3. Look at the nutrition label on each article.

Let’s start with platforms. Outside of the stale world of large social networks that try to compete on user engagement and addictiveness, several new entrants use artificial intelligence to select better content:

Ground.news does a tremendous job of grouping articles on a particular topic and providing a visual representation of the political bias of the sources included in the group. (1)

Allsides.org performs the arduous manual process of aggregating the most important articles of the day and finding three variants of each - one from a left-leaning source, one from a right-leaning source, and one from a neutral source. They do not use artificial intelligence, so their methods do not scale to evaluate all content. Still, selecting a single platform where all content has above-average quality certainly fits the bill. (2)

Otherweb.com (full disclosure: the author is a founder) uses AI models to collect, filter, and rank articles. All of its models and datasets are open for public inspection, so skeptical readers can verify for themselves that there is no hidden bias. (3)

Many measurements and dimensions can be included in such labels as the community of users develops a consensus on what counts as high-quality content.
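To make the nutrition-label idea concrete, here is a minimal sketch of how an article label and a reader-set filter might interact. The dimensions, scores, and thresholds below are illustrative assumptions invented for this sketch, not the actual metrics of any platform mentioned above.

```python
# A toy "nutrition label" for articles: a few hypothetical quality
# dimensions, plus a filter driven by reader-set thresholds.
from dataclasses import dataclass

@dataclass
class NutritionLabel:
    clickbait: float       # 0 = neutral headline, 1 = pure clickbait
    subjectivity: float    # 0 = factual reporting, 1 = pure opinion
    citations: int         # number of external sources cited

def passes_filter(label: NutritionLabel,
                  max_clickbait: float = 0.3,
                  max_subjectivity: float = 0.5,
                  min_citations: int = 2) -> bool:
    """Return True if an article meets the reader's long-term preferences."""
    return (label.clickbait <= max_clickbait
            and label.subjectivity <= max_subjectivity
            and label.citations >= min_citations)

# Example: a clickbait piece is filtered out; a sourced report passes.
articles = {
    "You won't BELIEVE what happened next": NutritionLabel(0.9, 0.7, 0),
    "City council approves new budget": NutritionLabel(0.1, 0.2, 4),
}
for headline, label in articles.items():
    print(passes_filter(label), "-", headline)
```

The key design choice is that the thresholds are set once, deliberately, by the reader, rather than inferred from whatever they happen to click on in the moment.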

Given such a consensus, and assuming that, just as with food, a large segment of the population will be willing to invest in the healthier option, we can create a countervailing force to the one that has been degrading the ecosystem thus far.

The result will be happier users, of course, but more importantly, there will be a universal incentive to produce the kind of content readers actually want. By collectively setting their filters to positions that reflect their long-term preferences (and not the things they happen to click on in the heat of the moment), users can gradually turn their preference for content that is not junk into something with an actual dollar value.

And if they do - AI will have fixed fake news.
