OpenAI Shuts Down ChatGPT Accounts Linked to Iranian Misinformation Campaign
OpenAI has shut down multiple ChatGPT accounts used to spread false information as part of an influence operation traced back to Iran, the company announced on Friday.
The covert operation, dubbed Storm-2035, generated content on a range of subjects, including the upcoming U.S. presidential election. The accounts were terminated before their content could attract a significant audience.
The operation also produced misleading posts about “the Gaza conflict, Israel’s participation in the Olympics,” as well as “politics in Venezuela, the rights of Latinx communities in the U.S. (in both Spanish and English), and Scottish independence.”
Additionally, the operation generated some fashion and beauty articles, likely to appear more authentic or to build a following, according to OpenAI.
“We take any attempts to misuse our services for foreign influence operations very seriously. To help disrupt such activities, we have not only removed the accounts but also shared threat intelligence with governmental and industry partners,” the organization explained.
No Genuine Engagement with False Information
OpenAI reported that the operation’s content failed to achieve meaningful engagement from real users.
According to the company, most social media posts from these accounts received minimal engagement, with few likes, shares, or comments, and there was no indication that the related web articles were circulated widely on social platforms. The operation received a Category 2 rating, at the low end of the Breakout Scale, which measures the impact of influence operations.
The company condemned efforts to “manipulate public opinion or influence political results while concealing the true identities and intentions of those involved,” and said it plans to use its own AI tools to better detect and understand such abuse.
“OpenAI is committed to identifying and addressing this kind of abuse on a large scale by collaborating with industry, civil society, and government, and by utilizing generative AI to amplify our efforts. We will keep releasing reports like this to encourage information sharing and best practices,” the company said.
Earlier this year, OpenAI reported similar foreign influence operations using its AI systems, originating from Russia, China, Iran, and Israel; those efforts also failed to reach a notable audience.