A video of Gabon’s President, Ali Bongo, appeared on Jan 1, 2019, sparking a wave of confused reactions inside the country.
By Ty Joplin
May 8th, 2019
In the customary New Year's address to the nation, Bongo looked unsettling. In an uncharacteristically short, three-minute speech, his eyes barely move or blink, his body appears bolted in place in his chair, and his hands look as though they were glued together in front of him.
After seeing the video, many in Gabon thought it was faked or manipulated. Ali Bongo had been absent from the country for months, and many feared he was dead or incapacitated. The video address was meant to quell those fears, but its strange quality only fed the rumor mill. A few days later, members of Gabon's military decided the video was evidence enough that Bongo was unfit to be president, and they launched a coup.
The coup failed, but the video and its political fallout are a canary in the coal mine. Faked videos are getting more sophisticated, and making a convincing one is getting easier. To understand this technology and its impact, Al Bawaba spoke with Ali Breland, a disinformation reporter and tech analyst who has written for The Guardian, Politico, NPR and The Hill, among others. Breland covered the Gabon story and has closely followed the emergence of a new technique for manipulating videos, known as the deepfake.
Although the technology is relatively new, Breland thinks it could have a major impact on everything from politics and the economy to people's trust in the information they receive. Deepfakes can be used by autocratic regimes to discredit dissidents or to convince a population that a dead leader is still alive. They can be used by people looking to defraud a company. They can put false statements in a person's mouth, and their mere existence casts doubt on real videos suspected of being faked.
They create fake news while discrediting real information, infusing an already troubled media landscape with a new dimension of chaos.
And like many things digital and tech-related, the technology's development was spurred by the porn industry, where users superimposed celebrities' faces onto porn actors' bodies.
To deepfake a video, an individual feeds thousands of pictures or video frames of a target person into a database that a deep-learning algorithm trains on. From there, the software can collate those disparate images into a fluid video of the target, lip-synching along with someone else's speech and movements.
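A common face-swap design behind this pipeline pairs one shared encoder with a separate decoder per identity: encode a frame of person A, then decode it with person B's decoder, so A's pose and expression come out rendered with B's face. The sketch below illustrates only that data flow; the "models" are random linear maps standing in for trained neural networks, and all names and dimensions are illustrative, not taken from any real deepfake tool.

```python
# Conceptual sketch of the shared-encoder / per-identity-decoder
# face-swap pipeline. Random matrices stand in for trained networks.
import numpy as np

rng = np.random.default_rng(0)

FRAME_DIM = 64 * 64   # a flattened 64x64 grayscale "face" frame
LATENT_DIM = 32       # compact shared representation of pose/expression

# One shared encoder compresses any face into a latent code...
encoder = rng.normal(size=(LATENT_DIM, FRAME_DIM)) / FRAME_DIM

# ...while each identity gets its own decoder, trained (in a real
# system) to reconstruct that person's face from the latent code.
decoder_a = rng.normal(size=(FRAME_DIM, LATENT_DIM))
decoder_b = rng.normal(size=(FRAME_DIM, LATENT_DIM))

def encode(frame):
    """Compress a frame into the shared latent space."""
    return encoder @ frame

def swap_identity(source_frames, target_decoder):
    """Encode the source person's frames, then decode them with the
    target person's decoder -- the core of the face-swap trick."""
    return [target_decoder @ encode(f) for f in source_frames]

# A handful of stand-in frames of person A (a real system needs thousands).
frames_a = [rng.normal(size=FRAME_DIM) for _ in range(5)]

# Each output frame carries A's motion via the latent code,
# but is rendered through B's decoder, i.e. with B's face.
fake_frames = swap_identity(frames_a, decoder_b)
```

In an actual deepfake system the encoder and decoders are deep networks trained jointly on the collected footage, which is what makes the swapped faces photorealistic rather than noise.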
Most famously, deepfaked videos of former U.S. president Barack Obama and current president Donald Trump show how convincing the technology can be. There are also videos of Nicolas Cage appearing as Frodo in The Lord of the Rings, as a CIA agent in Batman, and as Tommy Wiseau in The Room. All thanks to deepfake tech.
“It can do so much damage in so many different ways,” Breland says.
“You can have leaders say things they’ve never said, and that can be particularly damaging in fragile government, in nascent democracies like in Gabon where you can push for regime change.” Breland continued, saying the technology can create false realities, allowing both states and rebels to deploy deepfakes in disinformation campaigns.
Individuals seeking to commit fraud can use deepfaked videos to send a company's stock tanking, allowing them to manipulate the market.
On top of all this, the simple fact that a video can be deepfaked calls into question the veracity of real videos, making it even harder to trust the information circulated on the internet. These problems, Breland contends, are all compounded in fragile states where trust in official institutions and the press is already low, and conspiracy theories regarding the people in charge run rampant.
A well-placed deepfake can, in this context, catalyze political revolts that were brewing under the surface.
Deepfaking videos and manipulating audio of public figures, Breland speculates, may incentivize some to begin constantly filming themselves in order to create an online alibi for their activities.
“What it would be basically is that you log every bit of your life on video and audio so that [at] any point if someone creates a fake video of you doing something, you can point to the direct timestamp and say you were doing X at this time.”
This “lifelogging,” born of the fear of being targeted by a deepfake video, may protect those who practice it in an immediate sense, but Breland warns it could simultaneously create “mini-panopticons” of self-surveillance.
The information gathered from these individually lifelogged databases would also inadvertently log anything that individual comes into contact with, generating a slew of privacy concerns that could steadily erase the separation between public and private life.
To listen to the whole conversation, click here.