
AI-generated deepfakes spread in Israel-Iran-U.S. conflict
Bad actors have used AI technology for more than a decade to spread misinformation and disinformation. However, the tools are getting more sophisticated, and their output harder to detect.
AI technology is being used to spread misinformation and disinformation in the conflict between Israel, Iran and the U.S.
Most recently, after U.S. military forces struck three Iranian nuclear sites on June 22, an image appeared on social media purportedly showing the wreckage of a U.S. B-2 bomber inside Iranian territory. Social media users claimed the American aircraft never left Iran because Iranian forces had shot it down. The image turned out to be generated by AI.
The faked image is one of many photos shared and promoted on social media that turned out to be generated by AI technology.
Another instance happened nearly two weeks earlier, when Iran fired ballistic missiles into Israeli cities in response to Israel's strikes on June 13. Following Iran's response, an AI-generated clip claimed to show destruction in Tel Aviv. It was later reported that the clip was made with Google AI tools and was created before Iran's missile attacks.
The spread of AI-generated misinformation and disinformation is not new. AI-generated deepfakes, for example, were a key concern during the 2024 election and earlier elections, and the problem continues to grow as AI tools and technology become easier to access.
"This problem is not going away," said Chirag Shah, professor of information and computer science at the University of Washington. "The problem is here to stay."
Difficult to detect
Shah added that part of the problem is that it's becoming harder to detect whether something is real, and that detection tools and techniques are starting to fail.
"It's becoming increasingly difficult even to identify that things are fake until later," he said.
Part of the reason detection is getting harder is that generative AI imaging tools are becoming more sophisticated.
"Part of the technique is to fool the detectors," Shah said.
Those spreading fake images are using more sophisticated methods, targeting an already biased audience.
However, Emmanuelle Saliba, chief investigations officer at GetReal, a firm specializing in detecting and mitigating threats from malicious generative AI content, said part of the problem is that such detectors rely on AI technology to detect AI-generated content. GetReal offers technology to verify and authenticate digital content files.
"This is an arms race if you're only using AI," Saliba said. She added that there is an element of forensics analysis that GetReal uses to see what is AI-generated and what isn't.
Most recently, GetReal worked to determine whether a six-second video purportedly showing Israel striking Iran's Evin prison was authentic.
According to a post on LinkedIn by GetReal co-founder Hany Farid, the video is likely AI-generated. It could add to "a growing and troubling trend of fake content circulating online as major world events unfold," Farid wrote.
That even experts can only say such a video is likely AI-generated shows the complexity of detection.
"It's complex and changing every day," Saliba said. "The solution is going to have to be a mixture of awareness of these tools' capabilities and technology."
Another problem is that even after forensic analysis has been performed on an AI-generated deepfake image or video, perceptions surrounding that content do not change, said Joshua McKenty, founder and CEO of Polyguard, a cybersecurity company that works to stop AI-driven fraud.
“Whatever narrative was attached to that video is already out there," McKenty said. "If they're posted by Israel and they're like ‘this is what we saw last night over Tel Aviv,’ that's what the media says happened last night and the same over in Tehran."
He added that both Israel and Iran have a history of using deepfakes and have the capacity to use bots to amplify messages. Israel is ranked among the top countries in AI technology and cyber capabilities, while Iran is working to become a top 10 AI nation by 2032.
“We know to a certain extent these are also being used as ploys in diplomacy, in the sense of we are building the illusion of a consensus, or the illusion of a rebellion, or the illusion of whatever in the populace, to say people want this to happen,” McKenty added.
More accessibility means more awareness needed
While the spread of AI-generated content in the Iran-Israel conflict is no different from past AI-generated deception, the tools' accessibility has made it more potent; many of them are free.
"Everyone has access to these tools to make hyper-realistic fakes of anything you can imagine, and that's created kind of a flood of these images," Saliba said. Moreover, the release of the latest versions of AI tools and technologies also adds to the sophistication of these new images.
In the case of Iran and Israel, Google's Veo 3 model was released about a week before the conflict began.
Google did not immediately respond to a request for comment.
"We were seeing a large portion of the content being created using Google's Veo 3," Saliba said.
The possible use of Veo 3 speaks more to how it was publicized and less about the technology itself, McKenty said.
“We've had video that was good beforehand, and we had audio that was good beforehand, but we didn't have one tool that did synchronize audio and video,” he said. In other words, anyone in the deepfake industry has long been able to do what Veo 3 does; the new tool simply makes it more accessible.
Unfortunately for deepfake makers, Veo 3’s watermark made it easy to detect that the content was AI-generated. In other cases, it's not so easy.
However, Nemertes CEO Johna Till Johnson said the spread of AI misinformation and disinformation content can be combated if content consumers look at material with a critical eye.
"There isn't such a thing as a truth meter," Johnson said. "The real solution is teaching people skepticism and the real ability to source information ... something that somebody said on Facebook doesn't mean it's true."
Esther Shittu is an Informa TechTarget news writer and podcast host covering artificial intelligence software and systems.