As deepfakes continue to surge globally, OpenAI, a generative AI company, is actively combating misleading content created with its popular image generator, DALL-E.
Just last week, the company announced its plans to release a “deepfake detector” tool capable of identifying images generated by its latest text-to-image model, DALL-E 3.
OpenAI’s Deepfake Detector: 98% Accuracy in Tests
OpenAI reports that internal tests show the deepfake detector can accurately identify 98 percent of DALL-E 3 images with a low false positive rate of less than 0.5 percent. However, before releasing it publicly, OpenAI will first share the tool with a select group of disinformation researchers to test its effectiveness in real-world scenarios.
The deepfake detector returns a binary verdict, either “true” or “false,” indicating whether an image was likely generated by DALL-E 3. Additionally, it offers a simple content summary stating that “this content was generated with an AI tool,” including fields to flag the app or device and the AI tool used.
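The response described above can be pictured as a small structured payload. The function and field names below are illustrative assumptions, not OpenAI’s actual API:

```python
# Hypothetical shape of a detector response; field names are illustrative
# assumptions, not OpenAI's actual API.
def summarize_detection(is_ai_generated: bool, app: str, tool: str) -> dict:
    """Build a provenance summary like the one the article describes."""
    return {
        "ai_generated": is_ai_generated,  # the binary "true"/"false" verdict
        "summary": ("this content was generated with an AI tool"
                    if is_ai_generated else "no AI generation detected"),
        "app_or_device": app,             # field flagging the app or device
        "ai_tool": tool,                  # field flagging the AI tool used
    }

result = summarize_detection(True, "example-app", "DALL-E 3")
print(result["summary"])  # this content was generated with an AI tool
```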
To support the tool, OpenAI has added metadata to all images created and edited by DALL-E 3, allowing the content’s source to be verified and helping prevent the spread of disinformation online.
OpenAI Partners with Google and Microsoft on Industry Standards
Despite its importance, the current version of the deepfake detector has limitations, as it was only tested on DALL-E 3 images. To address this, OpenAI is partnering with Google and Microsoft in the Coalition for Content Provenance and Authenticity (C2PA) to develop ethical standards across the industry.
The C2PA aims to display when and how digital content was created with AI, akin to a “nutrition label” for digital content, filling a crucial gap in digital content authenticity practices.
In addition to the deepfake detector, OpenAI is working on a “tamper-resistant watermarking” tool for its digital content: an embedded mark that is difficult to remove without noticeable degradation and that can be used to detect whether the content was produced by generative models.
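OpenAI has not published how its watermarking works, but the general embed-and-detect pattern can be shown with a classroom-style least-significant-bit watermark. This toy is deliberately *not* tamper-resistant; it only illustrates the mechanics of writing a hidden, keyed pattern into pixel data and later testing for it:

```python
# Toy least-significant-bit (LSB) watermark over a flat list of pixel values.
# A classroom illustration of embed/detect, NOT the tamper-resistant scheme
# OpenAI describes, whose design is not public.
import random

def embed(pixels, seed=42):
    """Overwrite each pixel's LSB with a keyed pseudorandom bit."""
    rng = random.Random(seed)  # the seed acts as the watermark key
    return [(p & ~1) | rng.getrandbits(1) for p in pixels]

def detect(pixels, seed=42, threshold=0.9):
    """Regenerate the keyed bits and measure how often the LSBs agree."""
    rng = random.Random(seed)
    matches = sum((p & 1) == rng.getrandbits(1) for p in pixels)
    return matches / len(pixels) >= threshold  # high agreement => watermarked

source = random.Random(0)  # fixed seed so the demo is reproducible
plain = [source.randrange(256) for _ in range(1000)]
marked = embed(plain)
print(detect(marked))  # True: LSBs agree with the keyed pattern exactly
print(detect(plain))   # False: unmarked LSBs agree only ~50% by chance
```

A real tamper-resistant scheme would spread the signal redundantly across frequency components so that cropping, compression, or re-encoding degrades the image before it destroys the mark; the LSB version above is broken by almost any edit.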
Despite these efforts, the surge in deepfakes poses a significant challenge to the AI industry. Recent estimates indicate a tenfold increase in video and voice deepfakes shared on social media platforms, with particularly sharp rises in regions like the Middle East and Africa, North America, and Europe.
While OpenAI’s new deepfake detector may help address the issue, experts believe more action is needed as the fight against deepfakes continues.