
Social Media Claims Increased Monitoring Of Election Content


With the 2020 presidential election just months away, attention has turned to the role of social media in the spreading of misinformation and propaganda. How far should Facebook, YouTube, and Twitter go to put a halt to false information?

Facebook has given approval for political campaigns to hire social media “influencers” to promote their candidates. This is called “branded content”: an ad that doesn’t look exactly like an ad. Michael Bloomberg’s campaign had influencers on Instagram, which is owned by Facebook, post endorsements. Before Bloomberg’s influencer posts, Facebook had no rule regulating branded content. And with Bloomberg spending $7 million a day on his campaign, the amount of branded content he could commission is staggering.

These branded content posts won’t be found in Facebook’s listing of political ads, which allows journalists and investigative watchdog groups to review political campaign advertising. As a result, branded content may be posted without being monitored.

The more lenient the rules are for posting political content, the more social media is open to propaganda from foreign governments and other sources. In 2016, Facebook was infiltrated by Russian accounts, sowing discord amongst members in an attempt to sway the presidential election. Facebook’s response was to require campaigns to give their US mailing address and state how much they spent on each ad.

YouTube announced a ban on content that has been manipulated in “a way that misleads users...and may pose a risk of egregious harm.” However, it will still allow video clips taken out of context. YouTube is also banning false information about the voting process, citing the example of viewers being given an incorrect voting date. In addition, it is banning videos that claim a candidate is ineligible for office based on false information about his or her citizenship, so-called “birther” content. YouTube states that its Intelligence Desk, formed in 2018 to pinpoint inappropriate and harmful content, also works with Google’s Threat Analysis Group. YouTube is owned by Google.

YouTube is not only banning inappropriate and false content but also promoting vetted news sources. It is also providing “information panels” on news videos whose source is funded by a government or by the public; the panel links to a Wikipedia page for that source.

Twitter is considering a labeling system for tweets containing lies by politicians and other public figures. The proposed system places an orange box under a tweet deemed “harmfully misleading.” Twitter views this process as “community reporting,” since verified journalists and fact-checkers would be the ones flagging the mistruths. Note that Twitter is not proposing a ban on these misleading tweets; the only tweets it has proposed to take down are those that “impact public safety or cause serious harm.” Yet it’s the information that deals in half-truths, or “truthiness,” to quote Stephen Colbert, that can cause the most damage. Such content walks a purposely constructed fine line between the believable and the unbelievable. Those tweets would stay up, carrying a warning only if enough people report them as misleading. Twitter has not said if or when it would put this warning system into use.

One possible contender for this warning label is Bloomberg’s tweet of an edited video of the Democratic debate. Through editing, the video misleadingly shows the candidates pausing for an extraordinarily long 23 seconds after Bloomberg asked them, “I think I’m the only one here that’s ever started a business, is that fair?” Some of the candidates were edited to show their mouths hanging open. While some may argue that you can tell the video was edited, others may not be able to tell — and that is the point of posting it.

Another tweet that may qualify for the label is a video of Speaker Nancy Pelosi that was purposely slowed down and edited to depict her slurring and stumbling over her words. Rudy Giuliani, President Trump’s personal attorney, tweeted the slowed-down video with the caption, “What is wrong with Nancy Pelosi?” and later deleted it. Trump then tweeted the video with the caption, written all in caps, “Pelosi stammers through news conference.”

Then there is the question of what to do with parody and satire posts. Where is the line drawn between propaganda and obvious satirical content? Some might view Bloomberg’s tweet as a parody. Others would view it as misleading. Others might view it as truth.

Then there is “deep fake” technology, which makes it virtually impossible for the general public to tell whether a video has been doctored. That is all the more reason for social media companies to catch up with the volume of misinformation posted online. And given the amount of money they have made selling advertising and users’ information, they need to be held responsible for their content.
