siliconrise.in

Exploring the Impact of the OpenAI, Meta, and TikTok Crackdowns on Covert Influence Campaigns

OpenAI Uncovers and Disrupts Global Influence Operations Exploiting AI

OpenAI announced on Thursday that it had disrupted five covert influence operations originating from China, Iran, Israel, and Russia. These operations sought to manipulate public discourse and political outcomes online while concealing their true identities.

AI in Disinformation Tactics

Over the past three months, these malicious activities leveraged OpenAI’s models to generate comments and articles in various languages, create fake social media personas, conduct research, debug code, and translate and proofread texts.

Russia’s Influence Operations

Two of the disrupted networks were linked to Russian actors. One, dubbed “Bad Grammar,” operated via at least a dozen Telegram accounts targeting audiences in Ukraine, Moldova, the Baltic States, and the United States with poorly crafted content in Russian and English.

“The network used our models and Telegram accounts to establish a comment-spamming pipeline,” OpenAI stated. “They debugged code to automate Telegram postings and generated comments in both Russian and English.”

Another Russian-linked operation, the "Doppelganger" network, used OpenAI's models to produce multilingual content shared on platforms like X and 9GAG. This network focused on generating content that portrayed Ukraine, the U.S., NATO, and the EU negatively, while promoting Russia positively.

Other Notable Influence Operations

1. **China’s Spamouflage Network**: Utilized AI to research social media activity and generate posts in multiple languages targeting Chinese dissidents and Native Americans in the U.S.

2. **Iran’s International Union of Virtual Media (IUVM)**: Created and translated long-form articles and headlines for a website known as iuvmpress[.]co.

3. **Israel’s Zero Zeno Network**: Operated by the business intelligence firm STOIC, this network produced content critical of Hamas and Qatar, and supportive of Israel, targeting users in Canada, the U.S., India, and Ghana.

Ineffectiveness and Continued Vigilance

OpenAI emphasized that none of these campaigns achieved a meaningful increase in audience engagement or reach through its services. However, the potential for generative AI tools to facilitate more sophisticated misinformation remains a concern.

Ben Nimmo, principal investigator on OpenAI's Intelligence and Investigations team, commented, "So far, the situation is evolution, not revolution. That could change. It's important to keep watching and sharing."

Meta’s Insights and Actions

Meta’s latest Adversarial Threat Report highlighted similar findings, noting the removal of nearly 500 compromised accounts linked to STOIC. The report also detailed deceptive networks from Bangladesh, China, Croatia, Iran, and Russia, involved in coordinated inauthentic behavior (CIB).

Meta identified an AI-generated video news reader tactic and observed changes in the Doppelganger network’s strategies, such as using text obfuscation to evade detection.

Broader Implications

TikTok, facing scrutiny in the U.S., has also uncovered and disrupted several influence networks this year. A notable campaign, “Emerald Divide,” linked to Iran-aligned actors, targets Israeli society using AI-generated deepfakes and strategically operated social media accounts.

Conclusion

These developments underscore the evolving landscape of AI-powered disinformation campaigns. While current impacts may be limited, the potential for more sophisticated and widespread misuse of AI in influence operations calls for ongoing vigilance and collaboration across platforms and organizations.
