Imagine watching a video of a world leader announcing war, then discovering afterwards that it never actually happened. That's the frightening potential of deepfake technology.
In essence, deepfake technology uses artificial intelligence to generate hyper-realistic synthetic videos, audio clips, and images of people doing or saying things they never did. The name "deepfake" is a blend of "deep learning" and "fake."
The technology emerged in AI research labs in the early 2010s, but it went viral in 2017 when Reddit users began circulating celebrity face-swap videos. Since then, deepfake quality has improved so much that even trained eyes struggle to tell them apart from real footage.
Why does this matter? Because deepfakes blur the boundary between what is real and what is not. In a world already swamped with information, they pour rocket fuel on the fire, affecting everything from news credibility to international politics.
Artificial intelligence (AI) drives deepfakes, and one technique in particular plays the central role: the Generative Adversarial Network (GAN).
Imagine a GAN as a duel between two AI models: one generates artificial images (the generator) and the other tries to spot the fakes (the discriminator). The duel continues until the fakes are virtually undetectable.
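To make that duel concrete, here is a minimal, illustrative sketch of one GAN training step in PyTorch. Everything in it is a simplifying assumption: the tiny network sizes, the learning rates, and the random tensors standing in for real images. Production deepfake systems use far larger networks trained on enormous face datasets.

```python
# A minimal sketch of the GAN "duel" in PyTorch (assumed available).
# Both networks are deliberately tiny; real deepfake models are far larger.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # hypothetical sizes for illustration

# The generator turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# The discriminator scores how "real" an image looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)
    fakes = generator(noise)

    # 1) Train the discriminator to tell real images from fakes.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fakes.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fakes), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()

# Example: one step on random tensors standing in for a batch of real images.
training_step(torch.randn(16, image_dim))
```

Repeated over millions of such steps, the generator's fakes become harder and harder for the discriminator, and eventually for humans, to catch.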
Machine learning models are trained on thousands of examples of a person's facial expressions, emotions, and voice patterns so they can mimic that person's behavior almost perfectly.
Audio deepfakes clone voices and speech patterns, which makes them effective for impersonating people on phone calls or through voice assistants.
Video deepfakes typically swap faces or alter expressions in existing footage.
1) DeepFaceLab
2) FaceSwap
3) Reface
4) Zao
Some of these tools are harmless amusement; others are unsettlingly powerful.
Hollywood uses deepfake technology to de-age actors, bring deceased actors back to the screen, and create dazzling visual effects.
Deepfakes can let students interact with AI-created historical figures or recreate famous speeches from the past.
Brands use it for personalized, AI-generated advertisements. Think of an ad where the actor calls out your name and mentions your city. Spooky, isn't it?
Those humorous face filters? Yes, that's a basic application of deepfake technology at work.
Deepfakes are being employed to manipulate political speeches, incite riots, or distribute fake propaganda — particularly during elections.
Most early deepfakes targeted celebrities, placing them in fabricated explicit material. It's unethical and dangerous, yet chillingly prevalent.
Scammers now use deepfake audio to impersonate CEOs and authorize fraudulent transactions. Yes, it has already happened in real life.
Deepfake victims experience anxiety, humiliation, loss of employment, and even PTSD. The psychological harm is quite real.
It depends. Producing deepfakes for satire or cinema is usually legal. But using them for harm, fraud, or defamation is unlawful in most countries.
Countries such as China and the U.S. have started enacting legislation to govern how deepfakes are created and shared. But the legal system is lagging behind the technology.
Where do we draw the line? Is a deepfake parody acceptable? What about bringing a deceased family member "back to life" in a video? The ethics are murky.
1) Flickering or inconsistent shadows
2) Lip movement that doesn't match the audio
3) Unnatural blinking patterns (see the sketch after this list)
4) Inconsistent lighting
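Abnormal blinking, in particular, lends itself to a simple automated check. The sketch below assumes that per-frame eye landmarks have already been extracted by some other tool (for example, a face-landmark library); it computes the widely used eye aspect ratio (EAR) and a rough blinks-per-minute figure. It illustrates the heuristic rather than a real detector, and the 0.2 threshold is an assumed value.

```python
# A minimal sketch of one blink-based heuristic: the eye aspect ratio (EAR).
# Landmark extraction is assumed to happen elsewhere; the six (x, y) points
# per eye passed in below are hypothetical inputs.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2), landmarks ordered around the eye contour.
    EAR is small when the eye is closed and larger when it is open."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def blink_rate(ear_per_frame: list[float], fps: float,
               threshold: float = 0.2) -> float:
    """Count eye closures (EAR dropping below the threshold) and return
    blinks per minute; a rate far from the typical ~15-20 is a red flag."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    minutes = len(ear_per_frame) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0
```

No single cue is conclusive on its own; checks like this are usually combined with the other signs above.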
Tools such as Microsoft's Video Authenticator and Deepware Scanner are designed to detect fabricated videos.
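Neither tool publishes its internals, but many frame-level detectors follow the same broad recipe: sample frames from the video, run each through a binary "real vs. fake" classifier, and aggregate the scores. The sketch below illustrates that recipe with a deliberately tiny placeholder network; it is not either product's actual method, and a real system would use a large pretrained model plus face cropping.

```python
# Illustrative sketch of frame-level video screening: sample frames, score
# each with a binary classifier, and average. The tiny CNN is a stand-in.
import cv2
import torch
import torch.nn as nn

# Placeholder classifier; real detectors use large pretrained networks.
classifier = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1), nn.Sigmoid(),
)
classifier.eval()

def fake_score(video_path: str, every_n_frames: int = 30) -> float:
    """Return the average per-frame 'fake' probability for a video file."""
    cap = cv2.VideoCapture(video_path)
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            frame = cv2.resize(frame, (224, 224))
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
            with torch.no_grad():
                scores.append(classifier(tensor.unsqueeze(0)).item())
        index += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0
```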
Google, Facebook, and Adobe are developing "content authenticity" standards and labeling systems to help verify what's genuine.
Imagine helping stroke survivors regain their voice through AI-driven avatars. Or letting someone join a video call as a stronger, more confident version of themselves.
As deepfakes get better, so do detection systems. It's an ongoing game of cat-and-mouse between creators and defenders.
Perhaps. As Photoshop did before them, deepfakes might eventually become the norm, as long as people use them responsibly.
Deepfake technology is a double-edged sword. On one side it offers potential for creativity, learning, and innovation; on the other it poses risks to truth, privacy, and democracy itself. As users, creators, and citizens of the world, we need to stay informed, question what we watch, and demand responsible use of AI.