Tech CEO Shows Shocking Deepfake Of Kari Lake At Hearing On AI Impact On Elections

Forbes Breaking News
18 Apr 2024 · 08:40

TLDR: Rijul Gupta, a tech entrepreneur and founder of Deep Media, addresses a hearing on the impact of AI on elections, focusing on the threat of deep fakes. He explains that deep fakes are AI-manipulated media that can mislead or harm, and emphasizes how rapidly such content is improving and how cheap it is becoming to produce. Gupta outlines the generative AI technologies legislators should understand, including transformers, generative adversarial networks (GANs), and diffusion models. He highlights the societal risks posed by deep fakes, such as political manipulation and the erosion of trust in genuine content, and advocates a collaborative approach involving government, AI companies, platforms, journalists, and deep fake detection companies. He showcases Deep Media's efforts in detecting deep fakes and its commitment to setting industry standards for AI ethics and safety.

Takeaways

  • 🧑‍💼 The speaker, Rijul Gupta, is a tech entrepreneur with a background in machine learning and the founder of a company called Deep Media, focused on addressing the deepfake problem.
  • 📈 Deepfakes are becoming more sophisticated and cheaper to produce, with costs dropping from 10 cents to potentially 1 cent per minute.
  • 🚨 Deepfakes pose a significant threat to society, potentially disrupting trust in media and leading to plausible deniability for any content's authenticity.
  • 🤖 Three key technologies behind generative AI are the Transformer, Generative Adversarial Network (GAN), and diffusion models.
  • 🌐 The scale of the problem is vast: by 2030, up to 90% of online content could be synthetic.
  • 👥 Solutions to the deepfake problem require collaboration among government stakeholders, generative AI companies, platforms, investigative journalists, and deepfake detection companies.
  • 🔍 Deep Media has been assisting media professionals and is part of various initiatives aimed at detecting and combating deepfakes.
  • 🛡️ The company is involved in efforts to label real and fake content, collaborating with other tech companies and participating in research programs to improve detection methods.
  • 📉 Deepfakes represent a market failure and a tragedy of the commons, which can be mitigated through proper legislation and a focus on solutions.
  • 📈 The speaker emphasizes the importance of staying ahead in the 'cat and mouse game' between deepfake creation and detection technology.
  • 🎓 A deep understanding of AI and its capabilities is crucial for legislators to effectively address the issue of deepfakes in society.

Q & A

  • What is the definition of a deep fake according to the speaker?

    -A deep fake is a synthetic, AI-manipulated image, audio clip, or video that can be used to harm or mislead; the definition does not cover text.

  • What are the three fundamental technologies the speaker wants the legislators to keep in mind when discussing generative AI?

    -The three fundamental technologies are the Transformer (a neural network architecture), the Generative Adversarial Network (GAN), and the diffusion model.

  • Why is the speaker concerned about the scale of deep fakes?

    -The speaker is concerned because deep fakes are improving in quality very rapidly and becoming cheaper to produce, which could lead to a significant percentage of content online being fake by 2030.

  • What are some of the political implications mentioned regarding the use of deep fakes?

    -The speaker mentions deep fakes being used for political assassination, such as fake videos of political figures in compromising situations, and to create false support by making politicians seem more relatable.

  • How does the speaker describe the potential societal impact of deep fakes?

    -The speaker suggests that the prevalence of deep fakes could lead to a society similar to George Orwell's '1984', where trust in media is eroded, and plausible deniability becomes a tool for manipulation.

  • What solution approach does the speaker propose to combat the deep fake problem?

    -The speaker proposes a collaborative approach involving government stakeholders, generative AI companies, platforms, investigative journalists, and deep fake detection companies to work together to solve the problem.

  • What role does the speaker's company, Deep Media, play in addressing the deep fake issue?

    -Deep Media works on deep fake detection technology, collaborates with journalists and media outlets, and takes part in initiatives such as DARPA's SemaFor and AI FORCE programs to advance solutions.

  • How does the speaker's platform aim to deliver solutions?

    -The platform aims to deliver solutions at scale across image, audio, and video by using advanced AI to detect deep fakes without misclassifying real content as fake.

  • What is the significance of the speaker's mention of the cost to society if the deep fake problem is not solved?

    -The speaker is highlighting the potential economic and social costs associated with fraud, misinformation, and other crimes that could arise from the unchecked proliferation of deep fakes.

  • What is the speaker's view on the role of the free market and AI in addressing the deep fake issue?

    -The speaker is a believer in the free market and thinks AI can be used for good. They see deep fakes as a market failure and believe that proper legislation can help internalize the negative externalities associated with them.

  • How does the speaker describe the process of how an AI sees and learns from audio?

    -The AI sees audio as a visual representation, such as a graph, and learns from it by identifying key points and patterns in the voice, which helps in the detection of deep fakes (a short sketch of such a representation follows this Q&A list).

  • What is the final example the speaker uses to demonstrate the capabilities of their deep fake detection system?

    -The final example is a high-quality deep fake video of Kari Lake, which the speaker's system was able to detect as fake, demonstrating the system's ability to stay ahead in the cat-and-mouse game of deep fake detection.
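
Referring back to the question above about how an AI "sees" audio: the sketch below turns a waveform into a log-magnitude spectrogram, the image-like representation that audio deep fake detectors commonly analyze. This is a generic NumPy illustration, not Deep Media's actual pipeline; the synthetic sine-wave input and the frame/hop parameters are assumptions made for the example.

```python
import numpy as np

def spectrogram(signal, frame_len=1024, hop=256):
    """Turn a 1-D audio signal into a 2-D log-magnitude spectrogram.

    Each column is the spectrum of one short windowed frame, so the result
    can be treated like an image by a downstream detector.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    spectrum = np.abs(np.fft.rfft(frames, axis=1))   # frequency bins per frame
    return np.log1p(spectrum).T                      # shape: (freq_bins, time_frames)

# Example input: one second of a 440 Hz tone at 16 kHz (a stand-in for real speech).
sr = 16_000
t = np.arange(sr) / sr
audio = 0.5 * np.sin(2 * np.pi * 440.0 * t)
spec = spectrogram(audio)
print(spec.shape)   # (513, 59): an "image" a vision-style detector can inspect
```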

Outlines

00:00

🚀 Introduction to Deep Fakes and Their Impact

Rijul Gupta, the founder of Deep Media, introduces himself as a tech entrepreneur with a background in machine learning and generative AI. He explains the concept of deep fakes, which are AI-manipulated images, audio, or video with the potential to cause harm or mislead. Gupta emphasizes the rapid advancement and falling cost of creating deep fakes, which poses a significant threat to society. He outlines the importance of understanding three fundamental technologies behind generative AI: Transformers, Generative Adversarial Networks (GANs), and diffusion models. Gupta also highlights the need for collaboration among various stakeholders, including government, media, and technology companies, to address the deep fake problem. He concludes by stating that solutions exist but require collective action.

05:01

🛠️ Solutions to the Deep Fake Problem

Gupta discusses the market failure represented by deep fakes and views them as a tragedy of the commons. He believes that proper legislation can help internalize the negative externalities associated with deep fakes, thus fostering a healthy AI ecosystem. Gupta presents a slide-based overview of how AI perceives media and the importance of accurately detecting deep fakes without misclassifying real content. He demonstrates the capabilities of his platform in detecting deep fakes across various media types and emphasizes the low false positive and false negative rates of its detection system. Gupta also shows an example of a high-quality deep fake video, illustrating the system's ability to identify even the most sophisticated fakes. He positions Deep Media as a leader in setting the gold standard for deep fake detection, using its own generative AI technology to improve its detection methods without releasing that technology to the public.

Keywords

💡Deepfake

A deepfake refers to a synthetically manipulated image, audio, or video created using AI that can be used to deceive or mislead. In the context of the video, deepfakes pose a threat to society by potentially dismantling trust in media and causing political turmoil. The speaker mentions deepfakes of political figures like President Biden, Trump, and Clinton to illustrate their impact on elections and public perception.

💡Generative AI

Generative AI is a branch of artificial intelligence that involves creating content, such as images, audio, or video, that did not exist before. The video discusses generative AI in the context of deepfakes, emphasizing its role in creating convincing synthetic media. It is also associated with the technologies that enable the creation of deepfakes, such as Transformer models, Generative Adversarial Networks (GANs), and diffusion models.

💡Transformer

A Transformer is a type of AI architecture that is particularly effective at handling sequential data, such as language or audio. In the video, the speaker mentions Transformers as one of the fundamental technologies behind generative AI, which is critical for creating deepfakes. They are part of the complex systems that require significant computational resources and data to function.
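
To make the Transformer concept above more concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation of the architecture. The sequence length, embedding size, and random weight matrices are illustrative assumptions, not details from the hearing.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # project tokens to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise relevance between tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: attention weights per token
    return weights @ V                               # each output is a weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 16                             # e.g. 5 tokens with 16-dim embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = [rng.normal(size=(d_model, d_model)) for _ in range(3)]
print(self_attention(X, Wq, Wk, Wv).shape)           # (5, 16): one context-aware vector per token
```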

💡Generative Adversarial Network (GAN)

A GAN is a type of AI system consisting of two parts: a generator that creates synthetic data and a discriminator that evaluates it. In the context of the video, GANs are a core technology in the creation of deepfakes, where the generator produces fake content and the discriminator learns to distinguish it from real content, thus improving the quality of the fakes over time.
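
The generator/discriminator dynamic described above can be sketched in a few lines. Below is a toy PyTorch GAN in which the generator learns to mimic a simple 1-D Gaussian; the network sizes, learning rates, and target distribution are arbitrary choices for illustration only.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator maps noise to 1-D samples; the discriminator scores
# samples as real (drawn from a Gaussian) or fake (produced by the generator).
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2.0 + 3.0            # "real" data: N(3, 2)
    fake = G(torch.randn(64, 8))                     # generated data

    # Discriminator step: push real samples toward label 1, fakes toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(samples.mean().item(), samples.std().item())   # should drift toward ~3 and ~2
```

In audio and video deep fake generation the same adversarial loop runs with far larger networks and datasets, which is one reason quality keeps improving as compute gets cheaper.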

💡Diffusion Model

A diffusion model is another foundational technology in generative AI, which is used to create high-quality synthetic media. The video script does not provide specific details about diffusion models, but they are mentioned as a key component of the generative AI technologies that enable the creation of deepfakes.
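
Since the script gives no detail on diffusion models, here is a hedged sketch of the idea they rest on: clean data is progressively mixed with Gaussian noise, and a model is trained to undo that corruption step by step. The noise schedule and the tiny random "image" below are assumptions for illustration.

```python
import numpy as np

# Forward (noising) half of a diffusion model: at step t a clean sample x0 is
# mixed with Gaussian noise according to a variance schedule. Generation works
# by training a network to predict that noise and then removing it in reverse.
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # noise added at each step
alphas_bar = np.cumprod(1.0 - betas)      # fraction of the signal surviving by step t

def noisy_sample(x0, t, rng):
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps                        # eps is what the denoising network learns to predict

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 8))              # stand-in for a tiny "image"
for t in (0, 500, 999):
    xt, _ = noisy_sample(x0, t, rng)
    print(t, round(float(np.corrcoef(x0.ravel(), xt.ravel())[0, 1]), 3))
# Correlation with the original falls toward 0 as t grows: by t = 999 it is near-pure noise.
```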

💡Political Assassination

In the context of the video, 'political assassination' refers to the use of deepfakes to damage the reputation or credibility of political figures. The speaker cites examples of deepfakes involving President Biden, Trump, and Clinton, which were created to mislead the public and potentially influence political outcomes.

💡Plausible Deniability

Plausible deniability is a situation where someone can claim that they are not responsible for an action because they cannot be directly linked to it. In the video, the speaker warns that deepfakes can lead to a scenario where politicians or other individuals can falsely claim that real content is a deepfake, thus undermining trust in genuine information.

💡Negative Externality

A negative externality is an unintended negative consequence that affects a third party who is not directly involved in an economic transaction. The speaker discusses deepfakes as a market failure and a tragedy of the commons, implying that the harm caused by deepfakes is a negative externality that should be addressed through proper legislation.

💡Content Authenticity Initiative

The Content Authenticity Initiative is a collaborative effort mentioned in the video that aims to label real and fake content. The speaker's company is part of this initiative, which includes companies like Adobe, working together to establish a system that can help distinguish between genuine and synthetic media.

💡DARPA SemaFor

DARPA SemaFor (Semantic Forensics) is a program referenced in the video that pools researchers, corporations, and government resources to address the issue of deepfakes. It is part of a broader effort to bring together various stakeholders to find solutions to the problem of synthetic media manipulation.

💡False Positive/Negative Rate

In the context of AI and deepfake detection, a false positive occurs when the system incorrectly identifies real content as fake, while a false negative is when it fails to detect fake content. The speaker emphasizes the importance of having a very low false positive/negative rate for effective deepfake detection systems, which is critical for maintaining trust in media and avoiding unnecessary alarm or misinformation.
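
To pin down the two error rates described above, the snippet below computes them from ground-truth labels and detector verdicts; the ten example clips and verdicts are made up for illustration.

```python
def error_rates(y_true, y_pred):
    """False positive rate (real content flagged as fake) and false negative rate (fakes missed).

    Labels: 1 = fake, 0 = real, for both ground truth and the detector's verdict.
    """
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    n_real = y_true.count(0)
    n_fake = y_true.count(1)
    return (fp / n_real if n_real else 0.0, fn / n_fake if n_fake else 0.0)

# Hypothetical verdicts from a detector on 10 clips (6 real, 4 fake).
truth   = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
verdict = [0, 0, 0, 1, 0, 0, 1, 1, 1, 0]
fpr, fnr = error_rates(truth, verdict)
print(f"false positive rate: {fpr:.0%}, false negative rate: {fnr:.0%}")   # 17%, 25%
```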

Highlights

Rijul Gupta, a tech entrepreneur and hacker, founded Deep Media to combat the rise of deep fakes.

Deep fakes are synthetic, AI-manipulated images, audio, or video that can mislead or harm.

The human mind is particularly susceptible to manipulation by image, audio, and video content.

Three fundamental technologies behind generative AI are the Transformer, the Generative Adversarial Network (GAN), and the diffusion model.

Deep fakes are becoming increasingly realistic and affordable, with the cost per minute dropping rapidly.

By 2030, it's predicted that up to 90% of online content could be synthetic.

Deep fakes have already impacted elections, with manipulated videos of political figures being used for political assassination.

The real threat of deep fakes lies in their potential to undermine trust in genuine content.

Rijul Gupta warns that deep fakes could lead to a society resembling George Orwell's 1984.

Solutions to the deep fake problem require collaboration between government, generative AI companies, platforms, journalists, and detection companies.

Deep Media has worked with major news outlets to detect and report on deep fakes.

The company is part of various initiatives aimed at addressing the deep fake issue, including DARPA's SemaFor program and the AI FORCE challenge.

Deep Media is also involved in the Content Authenticity Initiative to label real and fake content.

Rijul Gupta believes in the free market and the potential for AI to be used for good.

Deep fakes represent a market failure and a tragedy of the commons, which can be mitigated through proper legislation.

Deep Media uses its own generative AI technology to train detectors and set the gold standard for deep fake detection.

An example of a high-quality deep fake featuring Kari Lake was presented, demonstrating the capabilities of current technology.

Deep Media aims to stay ahead in the cat-and-mouse game of deep fake detection and generation.