
Explained: What Is Deepfake? Why Has US Cracked Down After the Shocking Scandal Involving K-Pop Star Wonyoung?

Deepfake refers to images, videos, or audio depicting real or imaginary people, created with Artificial Intelligence, Machine Learning, and facial recognition algorithms, a technology now widely misused for illegal content online.

Edited By : Pramode Mallik | Updated: Sep 2, 2024 16:50 IST

K-Pop sensation Jang Won-young, better known as Wonyoung, was shocked to find her images morphed into adult content and sold online, part of a wave of Deepfake material targeting fifth-generation female idols in particular. The Korean government has asked Telegram and other online platforms to help it combat Deepfake content, but the damage was already done. The 20-year-old singing sensation, who debuted with the girl pop group Izone, is trending on social media for the wrong reasons. The episode has raised concern about Deepfake and the growing impact it can have on society.

What Is Deepfake?

Deepfake refers to images, videos, or audio depicting real or imaginary people, created with Artificial Intelligence, Machine Learning, and facial recognition algorithms. The technology has been misused to produce child abuse material, non-consensual adult content, revenge videos, and fake news, and to carry out financial fraud and bullying. It is also used across many other types of cybercrime.
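To make the idea concrete, below is a minimal, illustrative sketch in Python (PyTorch) of the shared-encoder, dual-decoder autoencoder design commonly associated with face-swap Deepfakes. Every class and variable name here is an assumption for illustration, not any specific tool's API; a real system would also need face detection, large training datasets, and many hours of training, whereas this untrained model only demonstrates the architecture.

```python
# Conceptual sketch of a face-swap autoencoder: one shared encoder learns
# general facial structure, while each person gets their own decoder.
# Swapping = encode person A's face, then decode it with person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the latent vector (one decoder per person)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # hypothetical persons A and B

face_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a detected face crop of person A
swapped = decoder_b(encoder(face_of_a))  # A's pose and expression, rendered as person B
print(swapped.shape)                     # torch.Size([1, 3, 64, 64])
```

The key point the sketch illustrates is that the encoder captures pose and expression while each decoder captures identity, which is why the same clip can be re-rendered as a different person once the decoders are trained.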

How Did Deepfake Begin?

A watershed moment came when the Video Rewrite program was published in 1997. It could modify existing footage of a person speaking so that the person appeared to mouth the words of a different audio track. It was the first system to combine facial reanimation with Machine Learning, connecting the sounds in an audio track to the shapes a speaker's face makes.

Why Is US Divided Over Deepfake?

The US has been the focal point of the research behind Deepfake technology, but it has also been on the receiving end of its misuse. After many instances of the technology being used to spread hatred, child abuse material, and non-consensual adult content, lawmakers have come out openly against it.

In a bid to combat Deepfake, California lawmakers made a bipartisan move to stop it and passed a bill that has been forwarded to Governor Gavin Newsom, who has until September 30 to sign it. Newsom hinted in July that he would sign the bill.

Will California Governor Sign Bill On Deepfake?

California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat Deepfakes, and protect workers from exploitation by the rapidly evolving technology.

However, the US IT sector is divided over the action being taken against Deepfake. The bipartisan legislation drafted by the lawmakers requires developers to disclose the data they use to train their models. Supporters say this could shed more light on how AI models work and help prevent catastrophic misuse in the future.

If the California governor signs the bill, state and local agencies will be banned from using AI to replace workers at call centers. The bill may also impose penalties for creating AI clones of deceased people without the consent of their relatives.

States Crack Whip On Deepfake

Some other states have also swung into action against Deepfake. Political advertisements that use AI must carry disclosures in New York, Florida, and Wisconsin. States such as Minnesota, Arizona, and Washington require AI disclaimers within a certain window before an election. Besides, Alabama and Texas have already imposed broader bans on deceptive political messages.

The Secretary of State’s office in Washington has set up a team to scan social media platforms for misinformation. The state has also launched a major marketing campaign to educate people on how elections work and where to find trustworthy information.

Social Media Companies Take Action

Social media companies have responded to the crisis triggered by developments in Korea. Telegram has said it will moderate harmful content on its platform, including illegal material such as child abuse and adult content.

YouTube too took action, demonetizing a channel with more than one million subscribers owned by a right-wing South Korean YouTuber and removing one of his videos after he downplayed the seriousness of Deepfake crimes and mocked women for expressing concern.

 


Written By

Pramode Mallik

First published on: Sep 02, 2024 03:59 PM IST
