Vogon Today

Selected News from the Galaxy

StartMag

OpenAI’s deepfake detector won’t be enough against misinformation. NYT report

OpenAI has announced a detector for identifying content made with artificial intelligence; it can help stem the problem of deepfakes, but it will not solve it. The New York Times article

As experts warn that AI-generated images, audio and video could influence the fall elections, OpenAI is releasing a tool designed to detect content created by its popular image generator, DALL-E. But the prominent artificial intelligence startup acknowledges that this tool is only a small part of what will be needed to combat so-called deepfakes in the months and years to come, the New York Times writes.

STRENGTHS AND WEAKNESSES OF THE DEEPFAKE DETECTOR

On Tuesday, OpenAI said it will share its new deepfake detector with a small group of misinformation researchers so they can test the tool in real-world situations and help identify ways to improve it.

“This is to spark new research,” said Sandhini Agarwal, an OpenAI security and policy researcher. “It's really necessary.”

OpenAI said its new detector was able to correctly identify 98.8% of images created by DALL-E 3, the latest version of its image generator. But the company said the tool was not designed to detect images produced by other popular generators such as Midjourney and Stability.

Because this type of deepfake detector is probability-based, it can never be perfect. So, like many other companies, nonprofits, and academic labs, OpenAI is working to combat the problem in other ways.
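The point about probability-based detection can be illustrated with a toy sketch. Everything below is invented for illustration (the scores, the threshold, the samples); it is not OpenAI's implementation, only a generic picture of why a classifier that outputs a confidence score must pick a threshold, and why any threshold trades false positives against false negatives:

```python
# Toy sketch of a probability-based detector (all values invented).
# The detector returns a confidence score in [0, 1]; a threshold turns
# that score into a yes/no verdict, and no threshold is error-free.

def classify(score: float, threshold: float) -> bool:
    """Return True if the score marks the item as AI-generated."""
    return score >= threshold

# Invented samples: (detector_score, actually_ai_generated)
samples = [(0.99, True), (0.93, True), (0.55, False), (0.48, True), (0.10, False)]

def error_counts(threshold: float) -> tuple[int, int]:
    """Count (false positives, false negatives) at a given threshold."""
    false_pos = sum(1 for s, ai in samples if classify(s, threshold) and not ai)
    false_neg = sum(1 for s, ai in samples if not classify(s, threshold) and ai)
    return false_pos, false_neg

# A strict threshold misses the 0.48 deepfake; a lenient one
# wrongly flags the 0.55 genuine image. Neither setting is perfect.
print(error_counts(0.6))  # (0, 1): no false alarms, one deepfake missed
print(error_counts(0.4))  # (1, 0): every deepfake caught, one real image flagged
```

Raising the threshold buys fewer false alarms at the cost of missed deepfakes, and vice versa, which is why a score-based detector "can never be perfect" in the article's phrasing.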

AUTHENTICITY STAMPS

Like tech giants Google and Meta, the company has joined the steering committee of the Coalition for Content Provenance and Authenticity, or C2PA, an initiative aimed at developing credentials for digital content. The C2PA standard is a kind of “food label” for images, videos, audio clips and other files that indicates when and how they were produced or altered, including with AI.

OpenAI also said it is developing ways to “watermark” AI-generated sounds so they can be easily identified in the moment. The company hopes to make these watermarks difficult to remove.

AN (ALMOST) IMPOSSIBLE MISSION

Backed by companies like OpenAI, Google and Meta, the AI industry is facing growing pressure to account for the content its products produce. Experts are calling on the industry to prevent users from generating misleading and harmful material and to offer ways to trace its origin and distribution.

In a year marked by important elections around the world, calls for methods to track the provenance of AI content are becoming increasingly desperate. In recent months, audio and images have already influenced political campaigning and voting in places like Slovakia, Taiwan and India.

OpenAI's new deepfake detector can help stem the problem, but it won't solve it. As Agarwal put it, in the fight against deepfakes "there is no silver bullet."

(Excerpt from the foreign press review edited by eprcomunicazione)


This is a machine translation from the Italian of a post published on Start Magazine at the URL https://www.startmag.it/innovazione/lo-smaschera-deepfake-di-openai-non-bastera-contro-la-disinformazione/ on Sat, 18 May 2024 05:43:44 +0000.