It was around November 2020. At the top of the hour, MBN anchor Joo-Ha Kim started going through the day's headlines. On screen, however, was a deepfake version of herself. MBN had created an AI copy of the anchor by feeding ten hours of Kim's video into a deep learning algorithm. The AI news anchor could replicate the nuances of Kim's voice and facial expressions. MBN announced plans to use the system for future breaking news reports, drawing mixed responses from the public.
Deepfake technology creates images or videos of fake events or people. Usually, the term refers to a video in which a person's face is replaced with someone else's likeness. Making a deepfake video is a tedious process [1]: two AI components, an encoder and a decoder, process a person's images in order to replicate them. High graphics performance and processing power shorten the time a computer needs to create deepfakes. Initially used in the adult industry and for social media pranks, the technology has since climbed out of the uncanny valley* into photorealism. In other words, while mediocre AI copies of human appearance make us uneasy, deepfakes have become indistinguishable from authentic videos to the naked eye.
*On the spectrum of human likeness, the uncanny valley refers to the range wherein an image resembles a human but looks fake. Examples are mannequins and humanoid robots in sci-fi movies from the ‘90s.
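The encoder/decoder idea mentioned above can be sketched in a few lines. This is a minimal illustration, not a real deepfake pipeline: actual systems train deep convolutional autoencoders on thousands of frames, whereas here the "networks" are just random linear maps and the dimensions are arbitrary assumptions. The key structural point it shows is the common face-swap setup: one encoder shared between two people, plus a separate decoder per person.

```python
# Sketch of the shared-encoder / per-person-decoder structure behind
# face-swap deepfakes. All names and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

DIM_IMAGE = 64   # flattened "face image" size (assumption)
DIM_LATENT = 8   # compressed latent representation size (assumption)

# One shared encoder learns a common representation of faces...
encoder = rng.normal(size=(DIM_LATENT, DIM_IMAGE)) / DIM_IMAGE
# ...while each person gets their own decoder.
decoder_a = rng.normal(size=(DIM_IMAGE, DIM_LATENT))
decoder_b = rng.normal(size=(DIM_IMAGE, DIM_LATENT))

def encode(image):
    """Compress a face image into the shared latent space."""
    return encoder @ image

def swap_face(image_of_a):
    """Encode person A's face, then reconstruct it with B's decoder.

    After real training, this step renders person B's face with
    person A's pose and expression -- the core face-swap trick.
    """
    return decoder_b @ encode(image_of_a)

face_a = rng.normal(size=DIM_IMAGE)
swapped = swap_face(face_a)
print(swapped.shape)  # same shape as the input image: (64,)
```

In a trained system, the shared encoder is forced to capture only pose and expression, because both decoders must reconstruct their own person's face from the same latent code; swapping decoders at inference time is what produces the fake.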
This technology has had positive impacts on our society: recreating the voices of people who have lost theirs to disease; restoring the quality of older films; reviving deceased actors and actresses for new movies; and presenting artworks in a more engaging manner by digitally placing the creator in the exhibition.
But despite these benefits, no one has a monopoly over deepfakes: the same technology can be misused with alarming consequences, especially now that it is widely accessible on the internet. For example, some have created fake pornography by placing celebrities' faces onto existing videos, which can seriously damage those celebrities' public image. Another concern is fabricated videos of political authorities spreading false information; such videos may incite [2] panic or public backlash [3] against these figures. In our private lives, scammers might use deepfakes to impersonate a trusted individual and steal our money.
Deepfake technology deserves appreciation for how advanced it is. However, despite its harmless roots, it can harm public figures and blur the line between what is real and what is fake, especially since the average person has no tools to detect deepfakes. Given these issues, many argue that one way forward is to develop higher-quality AI trained to distinguish authentic videos from fabricated ones.