Welcome To DailyEducation

DailyEducation is an open-source platform for educational updates and sharing knowledge with the world of everyday students.

ASU Researchers Analyze Impact of Deepfakes on Society

Plato feared the artist.

The ancient Greek philosopher, the original source of the notion that art imitates life, found imagery, at best, an entertaining illusion — at worst, a dangerous deception.

In her book, Ajit Maan writes, “Neither reality, nor reason holds the power that artists do, because artists don’t just reproduce reality; artists provide a new way to view reality.”

Maan is a professor of practice at Arizona State University. As an expert on defense and security strategy focused on narrative warfare, especially in large-scale conflicts, she provides a distinct lens through which to view the rising threat of deepfakes.

Thanks to technology powered by artificial intelligence, or AI, harnessing the power of art to imitate real people, real settings and real life is easier and cheaper than ever.

Deepfakes are a new power, the latest form of what Maan describes as a representational force, a communication mechanism that can be wielded and weaponized.

Deepfakes and deep deception

Deepfakes are fake images, videos or voice recordings. These have become so convincing that an international financial worker was tricked into making a $25 million payment to a cybercriminal posing as a company executive during a falsified, live video call.

While the ability to create fake images and video has been around for a while, Kambhampati, a professor of computer science and engineering at ASU, explains that advancements in AI have made this technology cheap and fast, placing it within easy grasp of scammers. Using tools that harness a type of AI called machine learning, in which banks of computers take in images and recordings of an intended target and combine them with datasets about human behavior, bad actors can quickly create realistic mimics of a person’s likeness.

As a thought leader on the principled use and ethical development of AI, Kambhampati has long been working to raise awareness of this technology. Media outlets often ask for his perspective on news involving AI; he frequently speaks at conferences and events; and he has been asked to advise the Arizona Supreme Court on AI-related issues.

“The world is in a period of great change,” Kambhampati says.

Kambhampati notes that society has absorbed these kinds of changes in the past and says the need to secure the digital world mirrors the emergence of efforts to protect property in the real world.

“If I traveled to the 1940s in a time machine and tried to install a burglar alarm in a home, everyone in that era would have regarded it as ridiculous,” he explains. “At some point, people became comfortable with the idea that houses needed more protection and systems arose to meet those needs.”

More concerning than the use of deepfakes for financial scams is the ongoing threat that this new technology poses to 2024 global elections. In April, The Washington Post reported an uptick in the number of electoral deepfakes, noting that efforts had already been made to use fake audio and video recordings to disrupt elections in Taiwan, South Africa and Moldova.

The U.S. government has issued guidance, saying, “For the 2024 election cycle, generative AI capabilities will likely not introduce new risks, but they may amplify existing risks to election infrastructure.”

Both reports speculate that such deepfakes originate with state-sponsored actors.

“Undermining public trust in government is high on the to-do list of those forces seeking to destabilize communities, nation-states, even global order,” Maan says.

When seeing is not believing

Experts like Kambhampati endeavor to inform the public about the risks to upcoming elections. The professor has worked to raise public awareness of deepfakes, speaking to local Arizona and national media outlets.

Thanks to efforts such as these, knowledge is increasing. In a survey conducted by the Pew Research Center last year, 42% of Americans demonstrated recognition of deepfakes.

This work is paying off — up to a point.

Kambhampati notes that even Katy Perry’s own mother was fooled by a fake image of the singer supposedly attending the 2024 Met Gala.

And experts worry about studies and research suggesting that people might prefer the version of reality provided by deepfakes. They say that tools designed to detect deepfakes are coming online, but that might not be enough to stop the spread of misinformation.

Though almost immediately identified and discredited, the image was still shared via social media more than 2 million times. A recent study reported on the use of AI to generate fake nude images of teenagers — suggesting that even when people were made aware the images were fake, they still had negative feelings about the victims.

Barlev, an assistant research professor at ASU, has been studying misinformation and irrational beliefs.

“We can all be fooled by a deepfake if it’s high quality enough,” Barlev says. “But there’s a perhaps less obvious reason deepfakes spread, and it’s one I’ve been especially interested in. Individuals might sometimes be socially motivated to believe the deepfake.”

Barlev says believing and spreading deepfakes can sometimes align with an individual’s existing beliefs or internal goals. This phenomenon is known as confirmation bias.

“Our minds are equipped with lots of psychological tricks — confirmation bias is one of them — which allow us to fulfill social motivations like signaling our group affiliation and commitment, rising in prestige, or derogating disliked individuals and groups,” Barlev says.

The future of the truth

Kambhampati believes we will see increasing use of technological solutions to identify deepfakes, and he plans to continue efforts to educate the public.

“The biggest solution is education,” he says. “We’re going to need to learn not to trust our eyes and ears.”

But confirmation bias might be the toughest obstacle to overcome to stop the spread of misinformation.

“I think deepfakes will remain a serious problem,” Barlev says. “Lots of people are worried about how realistic deepfakes are getting, and I think that’s a real concern. But we should be equally concerned about how deepfakes — realistic or not — are used in socially motivated ways.”

Maan writes that disinformation, like deepfakes, can do something powerful for its intended audience.

“Why does disinformation stick even when it has been proven false? The answer is because the disinformation is more meaningful to the audience than the truth,” she says.

Kambhampati agrees, saying, “In the end, if you want to choose to believe things that aren’t real, computer science can’t help you.”

Perhaps Plato was right to worry.
 