When 75-year-old protester Martin Gugino approached a police officer during a Black Lives Matter protest in Buffalo, US, he was met with a shove. He tumbled backwards, cracking his head on the concrete ground. Blood trickled from his ear as he lay motionless. Police marched past, barely breaking their stride.
The incident, caught on camera and widely circulated on social media, was definitive. Contrary to an initial Buffalo police statement on 4 June, Gugino was not “injured when he tripped & fell”. He was pushed by a police officer.
Case closed? Not if you’re the president of the United States.
On 9 June, Donald Trump tweeted that Gugino “could be an ANTIFA provocateur” and suggested that he was attempting to “scan police communications in order to black out the equipment”.
“Could be a set up?” he added, while Gugino was recovering in an intensive care unit in hospital.
Trump made this potentially defamatory claim to his 82 million followers without providing any evidence. Experts have cast doubt on whether it is technically possible to use a mobile phone to block police equipment.
“Even if you were attempting to scan police radios in order to jam them using a mobile phone, which is what [Gugino] appeared to be holding, this is not the equipment you would use,” said Professor Alan Woodward of the University of Surrey, speaking to the BBC. “You would need much more sophisticated scanning equipment.”
Buffalo protester shoved by Police could be an ANTIFA provocateur. 75 year old Martin Gugino was pushed away after appearing to scan police communications in order to black out the equipment. @OANN I watched, he fell harder than was pushed. Was aiming scanner. Could be a set up?
— Donald J. Trump (@realDonaldTrump) June 9, 2020
Gugino, only recently released from hospital, said through his lawyer that he didn’t know why Trump had made “a dark, dangerous and untrue accusation” against him.
A search of Gugino’s name on Twitter shows right-wing accounts repeating the Buffalo conspiracy theory narrative. This has since morphed into a more outlandish conspiracy: that Gugino used a device to spurt fake blood from his protective face mask – a claim that would require the medical staff who treated him and confirmed his injuries to be part of the “set up”. Google Trends data shows a spike in searches for ‘Martin Gugino fake’ on the same day as Trump’s Buffalo protester conspiracy theory tweet.
Beyond the controversy that Trump is no stranger to courting, his peddling of the Buffalo protester conspiracy has had a clear impact: it has given his supporter base fuel to question what they have seen with their own eyes.
Social media companies have provided a platform to share such conspiracies with little done to combat misinformation. And with deepfakes firmly on the horizon, this is a challenge that is only going to become more severe.
Trump and deepfakes: Heading towards “digital dystopia”
Deepfake technology is a synthetic video creation technique that uses artificial intelligence to replicate a person’s likeness, superimposing a generated face that matches the movements and speech of the original footage.
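For readers curious about the mechanics, the classic face-swap approach behind many early deepfakes trains one shared encoder on aligned face crops of two people, plus a separate decoder per person; swapping decoders at inference time transplants one person’s face onto the other’s expressions. The sketch below is a minimal, illustrative PyTorch example of that idea only: the layer sizes, 64x64 crops, placeholder data and single training step are arbitrary assumptions, not any production deepfake system.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder idea behind
# classic face-swap deepfakes. Illustrative only; sizes are arbitrary choices.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses an aligned 64x64 face crop into a small latent code."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

# One shared encoder learns pose and expression; two decoders learn the two faces.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training step (sketched): each decoder learns to reconstruct its own person.
faces_a = torch.rand(8, 3, 64, 64)  # placeholder batch standing in for real face crops
recon_a = decoder_a(encoder(faces_a))
loss = nn.functional.mse_loss(recon_a, faces_a)
loss.backward()

# The "swap": encode person A's expression, then decode with person B's decoder.
swapped = decoder_b(encoder(faces_a))
print(swapped.shape)  # torch.Size([8, 3, 64, 64])
```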
The Buffalo footage was not doctored. But Trump’s conspiracy tweet shows how easy it is to erode trust in unambiguous footage – and how much worse it will be when deepfakes become widespread.
Hany Farid, a professor at the University of California, Berkeley, has spent decades studying digital manipulation. He told Verdict that not being able to trust what we see and hear online has grave consequences for democracy – and adding deepfakes to the mix will push us towards “digital dystopia”.
“Although the internet was supposed to democratise access to information and decentralise knowledge, it has also created dangerous and ugly echo chambers and filter bubbles where lies and conspiracies spread at light speed and become more outrageous and dangerous each day,” he said.
“If we cannot trust what we see, hear, and read online, where are we as a society and democracy? If we do not share a common factual system, how do we move forward with reasoned and thoughtful debate? The injection of deepfakes into an already dysfunctional online ecosystem will only plunge us further into this digital dystopia.”
From Nicolas Cage to threatening national security
Deepfakes today are largely used for academic and comedic purposes, such as superimposing the face of actor Nicolas Cage onto popular footage. Their most common use – superimposing celebrities into pornographic videos – has darker undertones.
Many of the deepfakes online today are crude and easy to spot. But technologists fear that once it becomes almost impossible to tell the difference between real and computer-generated footage, and the barriers to accessing that technology are lowered, deepfakes will pose a major threat.
Republican senator Marco Rubio is among those who have voiced concerns about deepfakes, describing them as a danger to national security.
“The vast majority of people watching [a deepfake image of a politician] on television are going to believe it, and if that happens two days before an election, or a night before an election, it could influence the outcome of your race,” he told the New York Times in 2018.
For all the risk posed by such a video, Farid believes the real danger is that authoritarian leaders, along with their supporters, will have greater scope to question the legitimacy of facts they do not like.
“We are already seeing this so-called liar’s dividend play out in cases like Trump’s promotion of lies and conspiracies, and this problem promises to only get worse as deepfake technology improves and spreads,” said Farid.
Buffalo protester conspiracy shows social media unprepared for deepfakes
Trump’s unfounded retelling of the Buffalo video, and the resulting conspiracy surrounding the protester, shows that the ground is ripe for trust in video footage to be eroded by deepfakes. But it also demonstrates that social media platforms are woefully underprepared.
In recent weeks Twitter has ramped up its tagging of misinformation, clashing with Trump twice: first when the social media firm fact-checked a misleading statement about postal voting, and again when it placed a warning label over a tweet in which, it said, Trump glorified violence. Meanwhile Facebook chose to leave Trump’s “when the looting starts, the shooting starts” post up without a warning, leading to unprecedented internal strife at the tech giant.
However, Trump’s Gugino tweet remains up without a warning tag, with Twitter saying it does not violate any of its policies.
The disparity in regulation underscores the minefield that social media firms have to navigate already, without deepfakes flooding their platforms. Earlier this year, both Facebook and Twitter banned deepfakes from their platforms, in a likely move to prevent deepfake scandals during the 2020 presidential election. Even this is problematic, though, with social media platforms having to cast a net that doesn’t catch harmless deepfakes, such as those created for comedy.
But for Farid, deepfakes make up just a small part of a broader problem. He believes that the conversation needs to move beyond deepfakes to other online harms, including “child sexual abuse, extremism and terrorism, the sale of deadly drugs and weapons, non-consensual pornography, and mis- and dis-information”.
Holding social media firms accountable for harmful content is the key, he said.
“For too long, Silicon Valley has hidden behind the mantra of ‘we are just a platform and we are not responsible for what users do on our platform’. This is inexcusable and fundamentally misses the point that not only do these companies host illegal and dangerous content, they actively promote it,” he said, pointing to the engagement algorithms favoured by social media firms.
“All of these services have figured out that divisive content sells and so their algorithms push the divisive, hateful, conspiratorial, and outrageous.”
By this measure, Trump’s tweet was successful, garnering more than 158,000 retweets and 190,000 likes. If Trump’s aim was to sow seeds of doubt around the Black Lives Matter protests, it appears to have worked among pockets of his supporters. In the bowels of social media, bots, sock puppet accounts and tin-foil hat enthusiasts question facts established in pixels in other videos of police brutality.
Unless social media companies up their game, the problem will be significantly worse when deepfakes become pervasive.
Read more: Deepfakes – a bit of fun or politically weaponized content?