How Baby Shark is Endangering the US Midterm Elections



Nope, Joe Biden did not sing the “Baby Shark” song.

But a viral deepfake video showing the US president launching into the children's tune convinced many people that he had.

Similarly, a clip appearing to show a child shouting “shut the shit up” at First Lady Jill Biden at a reception celebrating Diwali depicted an exchange that never happened.

The clip was shared on Twitter by a pro-MAGA account, where it had received over 600,000 views by Thursday.

A succession of deepfake videos targeting political figures went viral ahead of the November midterms, stoking fears they could play a role in determining the outcome of close races.

Deepfakes are created using artificial intelligence programs that manipulate genuine videos to distort a person’s actions and words.

Once an expensive and complex process, creating hyper-realistic content is now cheap and easily accessible to developers on sites like GitHub.

At a time when trust in the media is close to a historic low, according to Gallup polling, and the “Big Lie” that Donald Trump won in 2020 has created deep distrust of elections, deepfakes are the latest weapon in an ongoing information war.

Experts tell The Independent that as the technology improves, deepfakes will become harder to spot and will further erode public trust.

But they believe the real danger lies in the clips’ ability to circulate quickly online, and that social media companies are doing little to stem the threat.

“A deepfake video means nothing. Video becomes dangerous when it goes viral,” Wael Abd-Almageed, director of the Visual Intelligence and Multimedia Analytics Laboratory at the University of Southern California (USC), told The Independent.

As deepfake videos become “extremely widespread”, social platforms are shrugging off the problem, Dr Abd-Almageed said.

He and his students at USC have developed tools that would allow social media companies to flag deepfakes, or even prevent fake content from being posted altogether.

He said he offered the tools to social media companies for free and implored them to use them to stop the spread of misinformation, only to be told explicitly: “We don’t care.”

“There’s so much they can do, but they’re not interested in doing anything to stop misinformation. They just want to maximize user engagement: the longer people stay on the platform, the more money they make from advertising.”

Dr Abd-Almageed finds it “shocking and frightening” how little interest major social media platforms have shown in preventing the spread of deepfakes.

“No tool will be bulletproof, but if I could stop 50, 60, or 70 percent of these deepfakes, that’s way better than nothing. What they’re doing now is nothing,” he said.

“We keep eroding the line between what’s real and what’s not, and at some point everything becomes fake and we won’t believe anything. It is extremely important to keep trying to protect this line, but they are not interested,” Dr Abd-Almageed told The Independent.

“It literally keeps me awake at night”

A report published by the Congressional Research Service in June warned that the proliferation of photo, audio and video forgeries generated by artificial intelligence could present “national security challenges in the years to come”.

“Adversaries of the state or politically motivated individuals could release doctored videos of elected officials or other public figures making inflammatory comments or behaving inappropriately,” the researchers found.

“This could, in turn, erode public trust, negatively affect public discourse, and even influence an election.”

Dr Abd-Almageed agrees that a fake video could easily be used to manipulate the outcome of an election.

Imagine, for example, that a fake video claiming the president is extremely ill and dying is posted on social media three hours before polls close on Election Day.

The video goes viral; people start believing it and stop going to the polls. In the time it would take the White House to refute the story, voters may have already missed their window to vote.

“It literally keeps me awake at night,” Dr Abd-Almageed tells The Independent.

“We have all grown up believing that seeing is believing. Even after someone has debunked a video, it becomes very difficult to reverse that belief.”

He said the solution could be for lawmakers to force social platforms to curb the flow of deepfakes.

“All of these platforms hide behind the idea that ‘we are just a newsstand, we are not responsible for the content on our platforms’. They must accept the moral and ethical responsibility to fight misinformation.”

Separating the true from the false

Days after Russia invaded Ukraine, a video of Volodymyr Zelensky calling on his soldiers to surrender spread quickly online. Hackers managed to broadcast it briefly on Ukrainian television before it was debunked by Ukrainian officials.

The doctored clip of Mr Biden singing “Baby Shark” was taken from a speech he gave at Irvine Valley Community College in California on October 14.

In the clip, the president announces the national anthem, but instead of singing “The Star-Spangled Banner,” he sings the opening lyrics to the annoyingly familiar “Baby Shark.”

The initial clip posted on TikTok received half a million views before it was taken down, but it quickly spread to other social media sites.

Hundreds of commenters believed the clip was genuine.

“He lost his mind,” one person commented on TikTok.

“Ladies and gentlemen, this is who runs this country! How?” another wrote.

Fact-checkers at the Associated Press debunked the clip, but such efforts to douse the flames of misinformation invariably reach a much smaller audience.

Deepfake creator Ali Al-Rikabi, a UK-based civil servant, told the Associated Press that he used voice-cloning software to make Mr Biden sing the lyrics to “Baby Shark” and lip-syncing software to match his lips to the words.

In a separate deepfake, footage of Jill Biden cheering and chanting before a Philadelphia Eagles NFL game was digitally altered to swap out the authentic audio with crude anti-Biden chanting.

Reuters reported that a single tweet from the fake clip received hundreds of thousands of views before it was taken down.

An AI voice clone is set to replace James Earl Jones as the voice of Darth Vader (Wikimedia Commons)

An analysis by artificial intelligence company Deeptrace in 2020 found almost 15,000 deepfakes online, more than double the number nine months earlier. Of these, 96% were pornographic, with the vast majority superimposing celebrity faces onto graphic sexual content.

Exact numbers are hard to pin down, but experts say the number of deepfakes has skyrocketed in the years since.

A popular subset of deepfakes features mashups of well-known actors in unfamiliar roles, such as Jim Carrey in The Shining or Jerry Seinfeld in Pulp Fiction.

After actor Paul Walker was killed while filming Fast and Furious 7 in 2013, Weta Digital used deepfake technology to map his face onto the bodies of his brothers Cody and Caleb to complete the film, at great expense.

Artificial intelligence is also being used to give new life to James Earl Jones’s portrayal of Darth Vader, after the 91-year-old actor gave permission for his voice to be cloned and new lines recorded for future Disney productions.

This same technology is readily available on sites like GitHub for anyone to use.

Siwei Lyu, director of the University at Buffalo’s Media Forensic Lab, has studied user behavior around deepfakes and told The Independent that more needs to be done to educate social media users.

“The world of social media is very biased, very polarized. People live in their echo chambers. Everyone has confirmation bias, and we like to see evidence to support our beliefs,” Professor Lyu said.

“If everyone is more aware of this type of media manipulation, and knows that it is fake, it will prevent these clips from having such an impact.”

He shares concerns about the spread of deepfakes on election day and believes the dangers of tipping the scales in a close election are very real.

Professor Lyu told The Independent that there are algorithms that can expose most fake clips, but the savviest deepfake creators can get around them.

He said it was the responsibility of social media users to verify the source and authenticate clips before sharing them with friends.

A spokesperson for Meta, the parent company of Facebook and Instagram, provided The Independent with its policy on manipulated media.

Meta says it cannot define what constitutes misinformation because “the world is constantly changing, and what’s true one minute may not be true the next”.

It adds that it removes posts that incite violence or are “likely to contribute directly to interference in the workings of political processes”.

TikTok and Twitter did not respond to a request for comment.

In comments posted to Twitter on Thursday, Elon Musk said he did not want to see the platform become a “free-for-all hellscape”, after stating that it would allow any content that does not violate local laws.
