This year promises to be a phenomenal one for democracy: billions of people, more than 40% of the world's population, are eligible to vote in elections. But nearly five months into 2024, some government officials are quietly wondering why the immediate risks of AI haven't materialized.
Voters in Indonesia and Pakistan have already gone to the polls, and there is little evidence that viral deepfakes distorted the results, according to a recent Politico article citing government officials, tech executives and outside watchdog groups. AI, they said, has not had the "massive impact" they had anticipated. That's a painfully short-sighted view. The reason? AI may be disrupting elections right now; we just don't know it.
The problem is that authorities are looking for a Machiavellian version of the Balenciaga Pope.
Remember last year, when an image of Pope Francis in an AI-generated puffer jacket went viral online? That is what many people now expect from generative AI tools: because they can conjure humanlike text, images and video en masse, the assumption is that AI persuasion campaigns will be just as easy to spot as earlier ones, such as the Macedonian troll farms that backed Donald Trump or the Russian accounts that spread divisive political content on Twitter and Facebook. So-called astroturfing was easy to spot when a horde of bots said the same thing thousands of times.
It is much harder to catch thousands of posts that say the same thing with slight variations. In a nutshell, that is what makes AI-generated disinformation so difficult to detect, and it is why tech companies need to shift their focus from virality to variety, says Josh Lawson, who once ran election risk at Meta Platforms and now advises social media companies as a director at the Aspen Institute, a think tank.
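To see why variety, rather than virality, is the hard problem, consider a minimal sketch in Python. This is my illustration, not a description of Meta's actual systems, and the sample posts are invented: exact-match counting instantly flags a botnet that copy-pastes one message, but paraphrased versions of the same message produce no verbatim repeats, and even a simple fuzzy comparison (word-shingle Jaccard similarity) finds only faint overlap between them.

```python
# Illustrative sketch only (not any platform's real pipeline):
# exact-match counting catches copy-paste astroturfing, but
# LLM-style paraphrases of the same message slip past it.
import string
from collections import Counter
from itertools import combinations

def exact_duplicates(posts):
    """Return posts that appear verbatim more than once."""
    return {p for p, n in Counter(posts).items() if n > 1}

def shingles(text, k=2):
    """Set of k-word shingles, lowercased, punctuation stripped."""
    words = text.lower().translate(
        str.maketrans("", "", string.punctuation)).split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Old-style botnet: one message copy-pasted -- trivially flagged.
bot_posts = ["The election is rigged, stay home!"] * 1000
print(exact_duplicates(bot_posts))  # the repeated post is caught

# AI-varied campaign: same message, zero verbatim repeats.
varied_posts = [
    "The election is rigged, so there's no point voting.",
    "Voting is pointless: the whole election is rigged.",
    "Why bother showing up? The result is already decided.",
]
print(exact_duplicates(varied_posts))  # set() -- nothing flagged

# Fuzzy matching finds only faint overlap between the variants,
# which is why variety, not virality, is the hard problem.
for (i, a), (j, b) in combinations(enumerate(varied_posts), 2):
    print(i, j, round(jaccard(shingles(a), shingles(b)), 2))
```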
Don't forget the subtle power of words, he says. While much of the public debate about AI has centered on images and deepfakes, "we knew that a large portion of persuasion campaigns could be based on text. That's how you can operate without getting caught and actually scale up."
Meta's WhatsApp offers a way to do exactly that, thanks to its Channels feature, which lets a user broadcast to thousands of people. A bad actor could, for instance, use an open-source language model to generate thousands of slightly different text posts aimed at Arabic speakers in Michigan, falsely telling them that their local polling place, a school, had flooded and that voting would mean a six-hour wait. "Now an operation like that is within reach of actors as unsophisticated as the Proud Boys," he said.
Another issue is that AI tools are now widely used, with more than half of Americans and a quarter of Britons having tried them. That means ordinary people can create and share misinformation, whether intentionally or not. In March, for example, a fan of Donald Trump posted a fake, AI-generated photo of the former president surrounded by Black supporters, portraying him as a hero to the Black community.
"Ordinary people are creating fan content," said Renee DiResta, a researcher at the Stanford Internet Observatory who specializes in election interference. "Are they trying to deceive? Do they even know?" Her point is that the cost of distribution fell to zero long ago, and now the cost of production has fallen for everyone too. (A March research paper co-authored by DiResta found that Facebook actively promoted AI-generated images; one bizarre picture of Jesus fused with a giant shrimp garnered hundreds of millions of engagements.)
What makes Meta's job particularly difficult is that tackling the problem requires more than keeping certain images from racking up clicks and likes. AI spam doesn't need engagement to be effective; flooding the zone is enough.
This month, Meta began addressing the problem by applying a "Made with AI" label to videos, images and audio on Facebook and Instagram. But the approach could backfire if people start assuming that anything without the label is authentic.
A better approach would be for Meta to pay closer attention to WhatsApp, a platform where text dominates. It has been weaponized before: in Brazil's 2018 presidential race, a flood of disinformation targeting the Workers' Party candidate Fernando Haddad spread through the messaging app, and supporters of Jair Bolsonaro, who went on to win the election, reportedly financed the mass-messaging campaigns.
Meta could better counter this AI-turbocharged repetition by aligning WhatsApp's policies with those of Instagram and Facebook, which specifically ban content that discourages voting. WhatsApp's rules only vaguely prohibit "intentionally deceptive content" and "illegal activities."
A Meta spokesperson said this means the company does enforce against "voter and election suppression."
Even so, clearer content policies would give Meta the authority it needs "for aggressive enforcement" against AI spam on WhatsApp Channels, Lawson said. If the company didn't believe specificity mattered, it wouldn't have written more detailed voter-interference policies for Facebook and Instagram.
AI's effect on elections is likely to be diffuse and subtle rather than singular and decisive, and as more synthetic content floods the internet, we should brace for far more noise than signal. That means technology companies and government officials shouldn't be reassured by the absence of a "massive impact" from AI on elections so far. Quite the opposite.