OpenAI says it has disrupted more than 20 operations and deceptive networks that attempted to misuse its models in a year of global elections. In an October report, the company behind ChatGPT said it is "particularly important to build robust, multi-layered defenses against state-linked cyber actors and covert influence operations that may attempt to use our models in furtherance of deceptive campaigns on social media and other internet platforms." Threat actors are people or groups who intentionally cause harm in the cyber sphere.

Since May, the company has continued to build new AI-powered tools that help it detect and dissect potentially harmful activity. While threat actors have been observed, the AI research firm hasn't seen "evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences." OpenAI has disrupted activity that generated social media content about the elections in the United States, Rwanda, India, and the European Union.

OpenAI remains on 'high alert' to detect and disrupt threat actors

Since the beginning of the year, four separate networks that included at least some degree of election-related content have been disrupted.