Senate pursues action against AI deepfakes in election campaigns

Politicians, like us all, often engage in hyperbole to make a point.

But don’t doubt the alliterative precision of Sen. Richard Blumenthal (D-Conn.) when he warns about “a deluge of deception, disinformation and deepfakes … about to descend on the American public.”

“There is a clear and present danger to our democracy,” he added for emphasis during the Senate Judiciary subcommittee hearing on election deepfakes that he chaired last week.

One thing we don’t need after the Jan. 6, 2021, Capitol insurrection is another danger to democracy. But unlike the televised violence that day, Blumenthal’s hearing showed how artificial intelligence can be used to subvert elections much more covertly than MAGA rioters supporting President Donald Trump attempted to do three years ago.

It’s a bipartisan threat that has generated bipartisan determination.

Two Democrats, Sens. Amy Klobuchar (Minn.) and Chris Coons (Del.), and two Republicans, Sens. Josh Hawley (Mo.) and Susan Collins (Maine), are pushing legislation that would ban deceptive AI materials in political ads. The bill, introduced in September, also would allow federal office seekers to ask U.S. courts to order removal of bogus information and to award compensation to the candidates.

But legislation doesn’t move swiftly, and the urgency is clear, as the hearing emphasized.

“Are we going to have to have an electoral disaster before Congress realizes, ‘Gee, we really should do something to give the public some sense of safety, some sense of certainty that what they’re seeing and hearing is actually real or is it in fact manufactured?’” asked Hawley, the top Republican on the privacy, technology and the law subcommittee, which held last week’s session.

Deepfakes of Trump and President Biden have already been used to fool the public.

Hearing witness David Scanlan, secretary of state in New Hampshire, recalled getting “ready to conduct a really good” presidential primary there the weekend before the vote in January. Then things changed. He started hearing about “a robocall using AI with President Biden’s voice on it, asking individuals not to vote in the election.” The call appeared to come from a phone number associated with a former Democratic Party official.

“It’s important that you save your vote for the November election,” the voice said. “Your vote makes a difference in November, not this Tuesday.”

The message was fake, as was the association with the party official.

“That’s what suppression of voter turnout looks like,” Blumenthal said after playing the audio during the hearing. The Associated Press said it “may be the first known attempt to use artificial intelligence to interfere with a U.S. election.”

Last month, the BBC reported on bogus pictures of Trump surrounded by African Americans, apparently circulated to give a false impression of his level of Black support. Last year, deepfake images related to Trump’s court appearances showed him scuffling with police and wearing prison garb.

What’s also disturbing is how little effort it takes to fool people: today’s technology makes producing convincing fakes simple. With free online programs, Blumenthal said, “voice cloning, deepfake images and videos are disturbingly easy for anyone to create.”

The one with Biden was done “by a street magician whose previous claim to fame was that he has world records in spoon bending and escaping straitjackets,” Blumenthal added. “And if a street magician can cause this much trouble, imagine what Vladimir Putin or China can do. In fact, they’re doing it.”

Five years ago, The Washington Post reported on a slick Russian campaign that used social media to discourage Black voters, according to documents released by the Senate Intelligence Committee. One poster showed a Black man’s face next to the words “I Won’t Vote.” That was primitive compared with today’s efforts.

While deepfakes involving Biden or Trump will get publicity if discovered, Blumenthal said that “local elections present an even bigger risk” because of the disturbing decline in local journalism, an issue the senator explored in a January hearing.

“When a local newspaper is closed or understaffed, there may be no one doing fact-checking, no one to issue those Pinocchio images and no one to correct the record,” he said last week. “That’s a recipe for toxic and destructive politics.”

Furthermore, a March Government Accountability Office report warned that “trust in real media may be undermined by false claims that real media is a deepfake.” In other words, AI makes it easier for fake news to trump real news.

The problem is growing fast. Deepfake content online increased by 900 percent between 2019 and 2020, according to the World Economic Forum.

But there are remedies for the toxins AI can generate.

At the hearing, Zohaib Ahmed, CEO and co-founder of Resemble AI in Santa Clara, Calif., urged the “creation of a public database where all generated election content is registered, allowing voters to easily access information about the origin and nature of the content that they encounter.” He and others also suggested using digital watermarking technology to verify content authenticity.

Whatever remedies are used, the time to act is now. Some action is already underway. In February, following a request from Klobuchar and Collins, the bipartisan U.S. Election Assistance Commission voted unanimously to allow federal funds to counter disinformation “amplified by AI technologies.”

But “by the time the deepfake widely spreads, any report calling it a fake is also too late,” said Ben Colman, CEO and co-founder of Reality Defender, a tool that can detect deepfakes. “This is not fearmongering, nor is it AI alarmism, doomerism, or conspiracy-minded hyperbole. It is simply the logical progression of the weaponization of deepfakes.”

He applauded the legislation, the Protect Elections from Deceptive AI Act, but urged more action “by imposing real penalties on bad actors who morph reality and on the platforms that fail to stop their spread.”

This is personal for Klobuchar, who spoke about a doctored Russian photo falsely suggesting that she funds Nazis in Ukraine. “This photo had a red circle around me in the background,” she said, “and then they put defund the police signs in the hands of the people at the rally that were never there.”

Klobuchar called for quick action on her bill and a strong approval vote in committee so “we can immediately get this thing heard …

“We really can’t wait.”
