“Deepfakes” Take Center Stage: Microsoft, Google, Meta, and Other Major Tech Companies Sign Accord to Combat Election Misinformation as AI-Generated Content Threatens Global Elections

San Francisco, California – Several major tech companies and startups have recently signed an accord to combat AI-generated election misinformation. The signatories include Microsoft, Meta, Google, Amazon, IBM, Adobe, and chip designer Arm, as well as AI startups OpenAI, Anthropic, and Stability AI, and social media companies like Snap, TikTok, and X.

The tech industry is gearing up for a year of elections across more than 40 countries, affecting over 4 billion people. The rise of AI-generated content has heightened concerns about election-related misinformation. Data from Clarity, a machine learning firm, shows a 900% year-over-year increase in the number of deepfakes created, posing a serious challenge for the upcoming elections.

Detection and watermarking technologies used to identify deepfakes have struggled to keep up with the rapid advancement of AI-generated content. This has led to an urgent need for collaborative efforts to address the spread of deceptive content.

The recent accord follows the announcement of Sora, a new AI video-generation model developed by OpenAI. Sora works much like OpenAI’s image-generation tool, DALL-E: users describe a desired scene and receive a high-definition video clip in return. The release further underscores the urgency of addressing the risks posed by AI-generated content.

By signing the accord, participating companies have committed to assessing model risks, seeking to detect and address the distribution of deceptive content on their platforms, and providing transparency on these processes to the public.

Kent Walker, Google’s president of global affairs, emphasized the importance of the accord, stating that “democracy rests on safe and secure elections.” Similarly, Christina Montgomery, IBM’s chief privacy and trust officer, stressed the need for concrete measures to protect people and societies from the amplified risks of AI-generated deceptive content in this crucial election year.

The tech industry’s collaborative effort underscores the growing recognition of the risks that deepfakes and other forms of misinformation pose to elections. As AI tools continue to advance, the need for proactive measures to safeguard democratic processes remains paramount.