
Can Misinformation Be Prevented?

Especially over the past couple of years, we have all noticed a rise in misinformation and in the formation of echo chambers. As the US continues to divide into more extreme groups, it has become increasingly important to find effective ways to combat this problem, especially on social media. Generally, the blame is placed on social media companies and their opaque algorithms, but is there anything they can actually do about it?

In a recent research study at MIT, researchers attempted to integrate misinformation filters into a social media platform. They created a Facebook-like platform that allowed users to customize their algorithm by selecting people they trusted to share accurate information and by rating the content they saw for accuracy. They found that the test participants were able to effectively rate posts and customize their feeds. Most interestingly, though, some participants still wanted to see posts even when they knew those posts contained misinformation: the researchers gave the example of wanting to know what a relative has been reading, so as to be better prepared when talking to them.
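To make that design concrete, here is a minimal sketch, in Python, of the two controls the study describes: choosing trusted sources and rating individual posts for accuracy. The names here (Post, Feed, trusted, my_ratings) are illustrative assumptions, not the researchers' actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    author: str
    text: str

@dataclass
class Feed:
    trusted: set[str] = field(default_factory=set)            # authors this user trusts
    my_ratings: dict[str, int] = field(default_factory=dict)  # post_id -> accuracy (1-5)

    def trust(self, author: str) -> None:
        self.trusted.add(author)

    def rate(self, post: Post, accuracy: int) -> None:
        self.my_ratings[post.post_id] = accuracy

    def timeline(self, posts: list[Post]) -> list[Post]:
        # Put posts from trusted authors first rather than hiding everything else,
        # since some participants still wanted to see posts they knew were dubious.
        return sorted(posts, key=lambda p: p.author not in self.trusted)
```

Note that this sketch only deprioritizes untrusted content instead of removing it, which fits the participants who still wanted to see what a relative had been reading.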

This research explores a much more active form of misinformation deterrent, giving users the agency to help decide what is and isn't misinformation. Integrating something similar into existing platforms could help curb the spread of misinformation. One example might be letting people rate a post's accuracy and then view the average rating other users have given it, though even this would not completely prevent all forms of misinformation.
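As a rough illustration of that crowd-rating idea, the following sketch records individual accuracy ratings and computes the average that would be displayed alongside a post. The function names (rate_post, average_rating) and the in-memory store are hypothetical, not part of any existing platform's API.

```python
from collections import defaultdict

# Hypothetical in-memory store of accuracy ratings, keyed by post id.
ratings: dict[str, list[int]] = defaultdict(list)

def rate_post(post_id: str, score: int) -> None:
    """Record one user's accuracy rating (1 = inaccurate, 5 = accurate)."""
    if not 1 <= score <= 5:
        raise ValueError("score must be between 1 and 5")
    ratings[post_id].append(score)

def average_rating(post_id: str) -> float | None:
    """Average rating to show next to a post; None if nobody has rated it yet."""
    scores = ratings[post_id]
    return sum(scores) / len(scores) if scores else None
```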

But these participants were also not actively trying to seek out and promote misinformation. Part of what makes misinformation effective is that, even if some group knows it to be fake, it is presented in a way that is believable. That makes it easier to spread: if someone a user trusts shares that information, the user will assume it to be true. So by 'infecting' larger influencers with misinformation, bad actors could spread it even faster through this kind of trust network.

Misinformation may be an undefeatable beast. Especially when exposed to completely new information, it's hard to judge what is true and what isn't. Bad actors can easily create stories or fake facts that seem real enough to pass a common-sense test, yet have serious consequences (bleach as a vaccine against the coronavirus, for instance). So how can people tell what's accurate and real?

While the researchers' platform goes much further toward preventing misinformation, it is not a complete prevention. Current social media platforms push for interaction (usually by suggesting more and more extreme material); the researchers' platform instead sought agreement. Agreement as the algorithm's target does keep misinformation out to some extent, but only if the person the algorithm is targeting does not already consume misinformation. If they do, it becomes even harder to present counter-examples, since they will not see anything that disagrees with them.
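One way to see the difference in objectives is as two scoring functions plugged into the same ranking step. The sketch below is an assumption about how such a pipeline might look; the dictionaries of predicted scores stand in for whatever models a real platform would use.

```python
from typing import Callable

def rank_feed(posts: list[str], score: Callable[[str], float]) -> list[str]:
    """Rank a feed by whatever objective the platform chooses to optimize."""
    return sorted(posts, key=score, reverse=True)

# Hypothetical predicted scores: an engagement-first platform optimizes the first,
# an agreement-first platform (in the spirit of the study) optimizes the second.
predicted_engagement = {"post_a": 0.9, "post_b": 0.4}
predicted_agreement = {"post_a": 0.2, "post_b": 0.8}

engagement_feed = rank_feed(list(predicted_engagement), predicted_engagement.__getitem__)
agreement_feed = rank_feed(list(predicted_agreement), predicted_agreement.__getitem__)
```

The same ranking code produces very different feeds depending on which score it is handed, which is exactly why the choice of objective matters so much.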

But this points to an even deeper issue: people don't want to change what they think. Given the chance, people will build echo chambers around themselves and only listen to those who agree with them. So, for a social media platform to prevent the creation of echo chambers, it must not let the user control everything they see: only by suggesting new things can a user's opinions eventually be shifted.

However, this is like balancing on a tightrope: too much stability, and users risk never having the misinformation they have already internalized challenged; too much variation, and users risk being exposed to new misinformation. Moving forward, the challenge will be to reduce the risk of misinformation within that variation. The MIT research is a first step in this direction, but there is still a long way to go before we have a fully robust social media platform.