With just under 100 days until November 5, tech experts are doubling down on warnings about artificial intelligence and the upcoming presidential election.
No other election cycle has been this vulnerable to cyberthreats, with the rise of artificial intelligence, deepfakes and other tools supercharging disinformation campaigns, Defending Digital Campaigns coalition president Michael Kaiser told Axios.
"In this critical election year, trust in the digital integrity of political campaigns both nationwide and at the state level is more important than ever," Kaiser said in a statement.
In an election cycle when more voters than ever before could head to the polls, fear over how AI could impact confidence in elections is high.
A survey by Telesign, a customer identity and engagement solutions company, shows 75% of U.S. respondents believe misinformation has made election results inherently less trustworthy. Along the same lines, roughly three-quarters (73%) of Americans fear AI-generated content will undermine upcoming elections.
In particular, 81% of Americans fear that misinformation from deepfakes and voice clones is negatively affecting the integrity of their elections. Fraud victims are more likely to believe they have been exposed to a deepfake or voice clone in the past year (21%).
Even Elon Musk, the owner of X, shared a deepfake campaign ad for Vice President Kamala Harris without labeling it as misleading. The video uses an altered voiceover to make it seem as if Harris calls President Biden senile and refers to herself as the "ultimate diversity hire." The Harris campaign shot back at Musk over the video, saying in a statement to The Hill, "We believe the American people want the real freedom, opportunity, and security Vice President Harris is offering; not the fake, manipulated lies of Elon Musk and Donald Trump."
Telesign's survey also shows 45% of Americans report seeing an AI-generated political ad or message in the past year, while 17% have seen one within the past week.
"Despite advances in detecting and removing deepfakes, distribution of AI-generated content via fake accounts remains a key challenge," the survey noted.
At the same time, 69% of Americans do not believe that they have been recently exposed to deepfake videos or voice clones, per Telesign's survey. Those aged 45+ are much more apt to be unsure (41%) of whether they have seen an AI-generated ad or message than those ages 18-44 (20%).
While voters can do their best to spot red flags, tech companies have a role to play as well, according to tech reporter Ina Fried.
"The creators of these powerful tools need to do everything they can to label AI-generated images to ensure their technology isn't being misused," Fried told WBBM. "And the images are one thing, but I also think text -- just sowing discord through just false stuff."