Social media algorithms are still failing to counter misleading content

As the Afghanistan crisis continues to unfold, it’s clear that social media algorithms are struggling to counter the flood of misleading and fake content.

While it’s unreasonable to expect that no disingenuous content will slip through the net, the sheer amount that continues to plague social networks shows that platform-holders still have little grip on the issue.

When content is removed, it should either be prevented from being reuploaded or at least flagged as potentially misleading when displayed to other users. Too often, another account – whether real or fake – simply reposts the removed content so that it can continue spreading without limitation.
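One common way platforms match reposts against previously removed content is perceptual hashing: a removed image or frame is reduced to a compact fingerprint that identical reuploads will reproduce. The sketch below is purely illustrative and does not reflect any platform’s actual system; it uses a simple “average hash” over a tiny grayscale grid standing in for a downscaled frame.

```python
def average_hash(pixels):
    """Simple perceptual hash: one bit per pixel, set when the
    pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Tiny 4x4 grayscale "thumbnail" standing in for a removed frame.
removed = [
    [200,  10,  10, 200],
    [ 10, 200, 200,  10],
    [ 10, 200, 200,  10],
    [200,  10,  10, 200],
]

blocklist = [average_hash(removed)]

# A byte-identical repost produces the same fingerprint, so it can
# be matched against the blocklist before it spreads again.
repost = [row[:] for row in removed]
match = min(hamming(average_hash(repost), h) for h in blocklist)
print(match)  # 0 -> exact perceptual match, block the upload
```

In practice, platforms would compare fingerprints of every new upload against a database of removed content and either block the match outright or attach a warning label.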

The damage is only stopped when the vast amount of content that makes it past AI-powered moderation efforts, such as object detection and scene recognition, is flagged by users and eventually reviewed by an actual person, often long after it’s been widely viewed. It’s not unheard of for moderators to require therapy after being exposed to so much of the worst of humankind, which defeats the purpose of automation: taking on the tasks that are dangerous and/or labour-intensive for humans to do alone.

Deepfakes currently pose the biggest challenge for social media platforms. Over time, algorithms can be trained to detect the markers that indicate content has been altered. Microsoft is developing one such system, Video Authenticator, which was created using the public FaceForensics++ dataset and tested on the DeepFake Detection Challenge Dataset.
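Microsoft describes Video Authenticator as producing a per-frame confidence score that the media has been artificially manipulated. The actual model is not public, so the following is only a hedged sketch of how hypothetical per-frame scores might be aggregated into a video-level decision; the thresholds and scores are invented for illustration.

```python
def flag_video(frame_scores, frame_threshold=0.8, ratio_threshold=0.2):
    """Flag a video as likely manipulated when enough individual
    frames score above a per-frame confidence threshold.

    frame_scores: list of floats in [0, 1], one per frame.
    Returns (flagged, ratio_of_suspicious_frames)."""
    suspicious = [s for s in frame_scores if s >= frame_threshold]
    ratio = len(suspicious) / len(frame_scores)
    return ratio >= ratio_threshold, ratio

# Hypothetical detector output: most frames look clean,
# but a burst of frames scores highly for manipulation.
scores = [0.05, 0.10, 0.07, 0.92, 0.95, 0.88, 0.12, 0.09, 0.11, 0.90]
flagged, ratio = flag_video(scores)
print(flagged, ratio)  # True 0.4
```

Aggregating over many frames is what gives such systems resilience: a single odd frame isn’t enough to flag a video, but a sustained run of suspicious frames is.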

However, it’s also true that increasingly advanced deepfakes are making the markers ever more subtle. Back in February, researchers from the University of California, San Diego found that current systems designed to counter the increasing prevalence of deepfakes can be deceived.

Another challenge with deepfakes is how difficult they are to keep from being reuploaded. Increasing processing power means it doesn’t take long to make small changes so that the “new” content evades algorithmic blocking.
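The evasion problem is easy to demonstrate with exact-match blocking: a cryptographic hash changes completely when even one byte of a re-encoded file differs, so a trivially tweaked reupload looks like an entirely unrelated file. This toy illustration uses short byte strings as stand-ins for video files.

```python
import hashlib

original = b"frame-data-of-a-removed-deepfake-video"
tweaked  = b"frame-data-of-a-removed-deepfake-videO"  # one byte changed

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(tweaked).hexdigest()

# To an exact-hash blocklist these are two unrelated files,
# so the tweaked reupload sails straight through.
print(h1 == h2)  # False
```

This is why platforms rely on perceptual, near-duplicate matching rather than exact hashes, and why adversaries in turn keep probing for the smallest edit that pushes content outside the matcher’s tolerance.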

In a report from the NYU Stern Center for Business and Human Rights, the researchers highlighted the various ways disinformation could be used to influence democratic processes. One method is for deepfake videos to be used during elections to “portray candidates saying and doing things they never said or did”.

The report also predicts that Iran and China will join Russia as major sources of disinformation in Western democracies and that for-profit firms based in the US and abroad will be hired to generate disinformation. It transpired in May that French and German YouTubers, bloggers, and influencers were offered cash by a supposedly UK-based PR agency with Russian connections to falsely tell their followers the Pfizer/BioNTech vaccine had a high death rate. Influencers were asked to tell their subscribers that “the mainstream media ignores this theme”, which I’m sure you’ve since heard from other people.

While recognising the challenges, the likes of Facebook, YouTube, and Twitter should have the resources at their disposal to be doing a much better job at countering misleading content than they are. Some leniency can be given for deepfakes as a relatively emerging threat but some things are unforgivable at this point.

Take this video that is making the rounds, for example:

Sickeningly, it is a real and unmanipulated video. However, it’s also from ~2001. Despite many removals, the social networks continue to allow it to be reposted with claims that it’s new footage, without any warning that it’s old and has previously been flagged as misleading.

While it’s difficult to put much faith in the Taliban’s claims that they’ll treat women and children much better than their barbaric history suggests, it’s always important for facts and genuine material to be separated from known fiction and misrepresented content no matter the issue or personal views. The networks are clearly aware of the problematic content and continue to allow it to be spread—often entirely unhindered.

An image of CNN correspondent Omar Jimenez standing in front of a helicopter taking off in Afghanistan alongside the news caption “Violent but mostly peaceful transfer of power” was posted to various social networks over the weekend. Reuters and Politifact both fact-checked the image and concluded that it had been digitally altered.

The image of Jimenez was taken from his 2020 coverage of protests in Kenosha, Wisconsin, following a police shooting. The original caption, “Fiery but mostly peaceful protests after police shooting”, was criticised by some conservatives. The doctored image is clearly intended as satire, but the comments suggest many people believed it to be true.

On Facebook, to its credit, the image has now been labeled as an “Altered photo” and clearly states that “Independent fact-checkers say this information could mislead people”. On Twitter, as of writing, the image is still circulating without any label. The caption is also being used as the title of a YouTube video containing different footage, but that platform hasn’t labeled it either and claims that it doesn’t violate its rules.

Social media platforms can’t become thought police, but where algorithms have detected manipulated content – and/or there is clear evidence of even real material being used for misleading purposes – it should be indisputable that action needs to be taken to support fair discussion and debate around genuine information.

Not enough is currently being done, and unless that changes we appear doomed to the same socially damaging failings during every pivotal event for the foreseeable future.

(Photo by Adem AY on Unsplash)


