Will the US presidential elections be deepfake-proof? A WSJ report

The proliferation of deepfakes comes as social media companies seek to avoid having to adjudicate thorny questions about US political content. Here is how they are responding, according to an article in The Wall Street Journal.

The explosion of artificial intelligence technology is making it easier than ever to deceive people on the Internet and is turning the 2024 U.S. presidential election into an unprecedented test of how to control deceptive content.

BIDEN'S FAKE PHONE CALL

A first salvo was fired last month in New Hampshire. Days before the state's presidential primary, an estimated 5,000 to 25,000 phone calls went out urging recipients not to bother voting.
“Your vote will make a difference in November, not this Tuesday,” the voice said. It sounded like President Biden but was created by artificial intelligence, according to an analysis by the security firm Pindrop. The message also discouraged independent voters from participating in the Republican primary.

On social media, however, the origin of the call became the subject of debate. On Meta Platforms' Threads application, some users saw an attempt to suppress voter turnout. “This is election interference,” one of them wrote. On former President Donald Trump's platform, Truth Social, some users blamed Democrats for the call. “Probably not a fake,” one of them wrote.

When Pindrop analyzed the audio, it found telltale signs that the call was fake: the voice pronounced the noisy fricative sounds of letters such as S and F, for example, in a less than human way, the WSJ writes.
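
Pindrop has not published its detection pipeline, but the fricative observation hints at the general idea: synthetic voices often reproduce the high-frequency noise in sounds like S and F less faithfully than a human speaker does. The Python sketch below illustrates that kind of check on a single segment; the file name, segment times and thresholds are hypothetical, and a real detector would rely on far richer features and trained models.

# Illustrative only: the recording name, segment boundaries and thresholds
# below are hypothetical; Pindrop's real pipeline is proprietary.
import numpy as np
import librosa

AUDIO_FILE = "robocall.wav"          # hypothetical copy of the call
FRICATIVE_SEGMENT = (3.20, 3.35)     # hypothetical start/end of an "s" sound, in seconds

y, sr = librosa.load(AUDIO_FILE, sr=16000)
start, end = (int(t * sr) for t in FRICATIVE_SEGMENT)
frag = y[start:end]

# Natural /s/ and /f/ sounds concentrate energy in the upper spectrum;
# some synthetic voices smear or attenuate that band.
centroid_hz = librosa.feature.spectral_centroid(y=frag, sr=sr).mean()

spectrum = np.abs(np.fft.rfft(frag)) ** 2
freqs = np.fft.rfftfreq(len(frag), d=1.0 / sr)
high_band_ratio = spectrum[freqs > 4000].sum() / spectrum.sum()

print(f"spectral centroid: {centroid_hz:.0f} Hz, energy above 4 kHz: {high_band_ratio:.1%}")
if centroid_hz < 4000 or high_band_ratio < 0.3:   # illustrative thresholds only
    print("fricative is duller than typical natural speech")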

Two weeks later, the New Hampshire attorney general's office said it had identified a Texas-based company, Life Corp., as the source of the calls and ordered them to stop, citing the state's law against voter suppression. Life Corp. representatives did not respond to emails seeking comment.

THE USE OF AI ON SOCIAL MEDIA AND ITS INFLUENCE ON ELECTIONS

Thanks to recent advances in generative artificial intelligence, anyone can create increasingly convincing but fake images, audio, and video, as well as fictitious social media users and bots that appear human. In 2024, a year of elections around the world, voters are already encountering AI-powered fakes that risk confusing them, according to US researchers and officials.

The proliferation of AI fakes also comes as social media companies are trying to avoid having to adjudicate thorny issues surrounding the content of U.S. politics. The platforms also say they want to respect free speech considerations.

According to the International Foundation for Electoral Systems, national elections will be held this year in about 70 countries, covering about half the world's population (about 4 billion people).

While AI developers and social media platforms often have policies against using AI deceptively or to mislead people about how to vote, it is unclear how well these companies can enforce those rules.

WHAT OPENAI AND META WILL (NOT) DO AHEAD OF THE ELECTIONS

OpenAI CEO Sam Altman said at a Bloomberg event in January, at the World Economic Forum's annual meeting in Davos, Switzerland, that although OpenAI is preparing safeguards, he remains wary of how the company's technology could be used in elections. “This year we will have to follow the situation closely,” Altman said.

OpenAI said it has taken a number of measures to prepare for the election, including banning its tools from being used for political campaigns, encoding provenance details of images generated by its Dall-E tool, and answering questions about how and where to vote in the United States with a link to CanIVote.org, operated by the National Association of Secretaries of State.
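
OpenAI's provenance scheme for Dall-E images follows the C2PA standard, which embeds a signed manifest in the image file. Proper verification requires a C2PA-aware tool that validates the signature; the byte-level probe below is only a crude, hypothetical illustration of checking whether such a manifest appears to be present (the file name is assumed).

# Crude, illustrative probe only: real verification must validate the
# C2PA signature with a dedicated tool, not just look for marker bytes.
from pathlib import Path

IMAGE = Path("dalle_output.png")   # hypothetical Dall-E image

data = IMAGE.read_bytes()
# C2PA manifests live in JUMBF boxes whose labels contain "c2pa".
has_manifest_marker = b"c2pa" in data

print("provenance marker found" if has_manifest_marker else "no provenance marker found")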

In early February, the oversight board of Facebook's parent company, Meta Platforms, called the platform's rules on altered content inconsistent, after reviewing an incident last year in which Facebook declined to remove an altered video of Biden.

The board, an external body created by the company, found that Facebook had complied with its existing policy, but said the platform should move quickly to clarify its rules on manipulated content before the next election. A Meta spokesperson said the company was reviewing the board's recommendations and would respond in the coming months.

Meta says its plan for the 2024 elections is largely consistent with previous years. For example, it will ban new political ads in the last week before the November elections in the United States. Meta also labels photorealistic images created with its AI feature.

DEEPFAKES AND POLITICS

Those who have studied elections debate how much an AI deepfake can actually influence someone's vote, especially in America, where most people say they have probably already decided who to support for president. However, the very possibility of AI-generated fakes could also muddy the waters in another way, causing people to question even real images and recordings.

Claims of AI involvement are being used to “discredit things that people don't want to believe,” such as legitimate videos surrounding the Oct. 7 Hamas attacks on Israel, said Renée DiResta, head of research at the Stanford Internet Observatory.

Social media giants have been grappling with issues surrounding political content for years. In 2020, they made aggressive efforts to control political discourse, partly in response to reports of Russian interference in the US election four years earlier.

Now they are loosening their grip in some respects, especially at Elon Musk's X.

SOCIAL MEDIA HAS PULLED BACK IN THE NAME OF FREE SPEECH

After acquiring Twitter in 2022, Musk rebranded the site and eliminated many of its previous restrictions in the name of free speech. X reinstated many previously suspended accounts and began selling the verification checkmarks once reserved for prominent people. X also cut more than 1,200 trust and safety employees, according to data it reported last year to Australia's online safety regulator, as part of broader layoffs that Musk said were necessary to stabilize the company's finances.

More recently, X said it plans to hire more safety staff, including about 100 content moderators who will work in Austin, Texas, as well as other positions globally.

YouTube said it has stopped removing videos that allege widespread fraud in the 2020 U.S. election and other past elections, citing concerns about limiting political speech. Meta took a similar position when it decided to allow political ads that questioned the legitimacy of Biden's 2020 victory.
Meta also laid off many employees working on election policies during broader layoffs that began in late 2022, though the company says its overall trust and safety efforts have increased.

X, Meta and YouTube reinstated Trump after banning him following the January 6, 2021, attack on the US Capitol, saying the public should be able to hear what candidates have to say. Trump has repeatedly claimed that he won the 2020 election or that it was “rigged.”

Katie Harbath, Facebook's former director of public policy, said platforms have exhausted themselves trying to adjudicate disputes over political content. “There is no clear agreement on exactly what the rules and sanctions should be,” she added. “Many of them said, 'It's probably better for us not to intervene.'”

X AT THE TIME OF THE 2020 US ELECTIONS

The companies say they are committed to combating misleading content and helping users get reliable information about how and where to vote. X says its efforts include strengthening the Community Notes fact-checking feature, which relies on volunteers to add context to posts.

Critics, including Musk and many conservatives, have faulted the measures social media giants took to manage political content around 2020, particularly at Twitter. They point, for example, to an episode shortly before the November 2020 vote, when Twitter temporarily blocked links to New York Post articles about Hunter Biden, son of President Biden. (The Post and The Wall Street Journal are both owned by News Corp.)

Twitter executives later admitted they had overstepped the mark, but said they acted out of concern about potentially hacked material, not out of political motivation.

DANGER OF INTERFERENCE FROM ABROAD

Other changes this election cycle stem from a lawsuit led by the Republican attorneys general of Missouri and Louisiana, who argue that Biden administration officials have policed social media posts in ways that amount to unconstitutional censorship. Lower courts issued rulings limiting how the federal government can communicate with social media platforms, but the Supreme Court stayed those decisions and is now hearing the case. Congressional Republicans have also investigated anti-disinformation efforts.

“We are interacting with social media companies, but all of those interactions have changed dramatically following the Court's ruling,” Federal Bureau of Investigation Director Christopher Wray said during a Senate hearing in October. He said the agency was acting “out of an abundance of caution.”

Democratic officials and disinformation researchers say such communications are critical to combating nefarious activities online, including foreign influence efforts.

Federal authorities say they are on alert. So far, the United States has not identified a major foreign-backed interference operation targeting the 2024 election, according to senior intelligence officials.

General Paul Nakasone, the recently retired head of Cyber Command and the National Security Agency, vowed before stepping down that the 2024 US elections would be “the safest we've had yet” from foreign interference. “If it isn't necessarily going to work with the same methodology as '22 or '20,” he added, “then we need to find new ways to do it.”

(Excerpt from the foreign press review edited by eprcomunicazione)


This is a machine translation from the Italian of a post published on Start Magazine at the URL https://www.startmag.it/mondo/le-presidenziali-usa-saranno-a-prova-di-deepfake/ on Sun, 18 Feb 2024 06:14:39 +0000.