Social Media: Deciphering Misinformation and Disinformation

By Prof. Steven A. Levine
Campus News

If there is no such thing as bad press, Facebook must be thriving more than ever amid the ongoing debate over whether, and how, to regulate what can and cannot be shared on the popular social media site. There is a very fine line between regulating a privately owned social media company and government intervention in the (oh so loved and fiercely protected) freedom of speech, especially in the United States. Still, doesn't the giant conglomerate "Meta" have an obligation to keep its platforms clean and, at least to some extent, truthful?
Under former President Trump, we saw a huge uptick in the use of phrases such as "fake news" and "alternative facts," but we must ask ourselves: is there even such a thing as multiple versions of the truth?

According to Merriam-Webster, misinformation is "false information that is spread, regardless of intent to mislead." Disinformation, a subset of misinformation, is "deliberately misleading or biased information; manipulated narrative or facts; propaganda." In other words, disinformation is meant to deceive.
One of the best examples of misinformation is the (what feels like) age-old claim by non-scientists on social media that 5G radio waves cause autism and cancer. The claim has been thoroughly debunked by many scientists since it first surfaced, and yet people believe it to this day. There is no intent to deceive anyone here; it is simply incorrect information. (Business Insider)


A perfect example of disinformation is the propaganda spread on Facebook during the 2016 US election. With the help of Facebook ads, groups, and pages, the American voting public was fed outright lies about Hillary Clinton, designed to sow division and interfere with the election. The propaganda has since been traced back largely to Russian actors. Rational people, and voters on the left, may to this day wonder who believed in "Pizzagate" anyway; yet this piece of disinformation may well have cost Secretary Clinton the election, and it gave Stephen Colbert occasion to bless the whole world with three seasons of "Our Cartoon President."

Circling back to the question of whether Facebook should be regulated regarding the spread of mis- and/or disinformation, it is safe to say the company cannot be trusted to impose regulations on itself that would at least minimize the impact social media has on society. A 2020 Forbes article put this very fittingly: "[…] they are platforms for free speech and assume no responsibility for what their users communicate." From a business point of view, this is understandable, because "censoring" certain users is expensive to implement and makes it harder for the company to expand its user base, but that does not mean we should not demand that Facebook be regulated regardless. Time magazine calls such a demand for legal action to make the algorithm safer "legally challenging" because of the First Amendment, and suggests that the only ways to regulate large social media conglomerates such as Facebook at this point are to break up the company or to change the algorithm that keeps it running.

Rupa Mahanti, an author and information management consultant, once wrote "Information walks. Misinformation flies" in her book "Data Humour," and we have seen proof of this statement on numerous occasions since social media became a normal part of everyday life for millions of people. While mis- and disinformation have been around since the dark ages (cue: comment about WWII), it is easy to show that Facebook and other social media platforms are largely at fault for making falsehoods and propaganda spread far faster, and to far more people, than ever before. So yes, Facebook should be regulated; to what extent remains a question that is hard to answer.

Obviously, we want people to have the opportunity to develop their own thoughts and opinions, but not at the cost of the safety and security of their very neighbors (metaphorically speaking). Facebook has by now implemented AI that can fact-check controversial topics and place a disclaimer under a post to point out that the information shared may be untrue, but is that enough? The most effective way to regulate Facebook/Meta would be to revamp Section 230 of the United States Communications Decency Act of 1996, which currently shields social media networks from legal responsibility for the content shared on their platforms. Only with strong legal action and the threat of corporate liability can regulation happen for a company as big as Facebook.

In the end, maybe all of this goes much further than regulating "Meta"; maybe society should focus on teaching children at an early age not to believe everything they read on the internet. As Margaret Mead, the American cultural anthropologist, said so fittingly, "Children must be taught how to think, not what to think." Responsible children become responsible adults, after all, or so we can hope.

Steven A. Levine is an Assistant Professor in the Accounting and Business Department at Nassau Community College in Garden City, New York.
