At the Munich Security Conference, major technology companies including Microsoft, Meta, Google, Amazon, Adobe, and IBM signed an accord committing them to combat election-related deepfakes, signaling their intention to adopt a common framework for responding to AI-generated content designed to mislead voters. Joining them are thirteen other companies, including AI startups OpenAI, Anthropic, Inflection AI, ElevenLabs, and Stability AI; social media platforms X (formerly Twitter), TikTok, and Snap; chipmaker Arm; and security firms McAfee and Trend Micro.
The signatories pledged to develop and deploy methods to detect and label misleading political deepfakes when they are created and when they spread across their platforms. They also committed to sharing best practices with one another and to responding swiftly and proportionately when deepfakes begin to circulate. The companies emphasized that context matters in those responses: the goal is to curb deceptive election content while safeguarding educational, documentary, artistic, satirical, and political expression, and while remaining transparent with users about their policies.
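The accord does not prescribe a specific technical mechanism for labeling. As a purely illustrative sketch of what metadata-based labeling can look like at its simplest, the Python snippet below embeds and reads a plain-text provenance tag in a PNG using Pillow. The key names here are hypothetical, and production schemes such as C2PA content credentials rely on cryptographically signed manifests rather than unsigned metadata like this, which can be trivially stripped.

```python
# Illustrative only: a toy metadata-based label for AI-generated images.
# The accord specifies no mechanism; the keys ("ai_generated", "generator")
# are hypothetical, not any signatory's actual scheme.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

AI_LABEL_KEY = "ai_generated"  # hypothetical metadata key

def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Embed a plain-text provenance label in a PNG's text metadata."""
    meta = PngInfo()
    meta.add_text(AI_LABEL_KEY, "true")
    meta.add_text("generator", generator)
    Image.open(src_path).save(dst_path, pnginfo=meta)

def is_labeled_ai_generated(path: str) -> bool:
    """Return True if the image carries the provenance label."""
    return Image.open(path).info.get(AI_LABEL_KEY) == "true"

if __name__ == "__main__":
    Image.new("RGB", (64, 64), "gray").save("original.png")  # stand-in image
    label_as_ai_generated("original.png", "labeled.png", generator="demo-model")
    print(is_labeled_ai_generated("labeled.png"))   # True
    print(is_labeled_ai_generated("original.png"))  # False
```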
The accord's measures are voluntary, and critics may dismiss them as virtue signaling, but they underscore the tech sector's wariness of regulatory scrutiny, especially where elections are concerned. With 49% of the world's population expected to vote in national elections this year, the industry recognizes the need for collective action against electoral abuses involving deepfakes.
There is no federal law in the U.S. banning deepfakes, but ten states have enacted statutes criminalizing them, particularly in political campaigning. Federal agencies are moving as well: the FTC is seeking to extend an existing rule that bans the impersonation of businesses and government agencies to cover all individuals, including politicians, and the FCC is moving to outlaw AI-voiced robocalls.
In the European Union, the AI Act would require that all AI-generated content be clearly labeled, while the Digital Services Act compels the tech industry to curb deepfakes in their various forms.
Despite these regulatory efforts, deepfakes continue to proliferate: according to Clarity, a deepfake detection firm, the number of deepfakes created has increased 900% year over year. Recent incidents, such as AI-generated robocalls mimicking U.S. President Joe Biden's voice ahead of New Hampshire's primary and fake audio recordings impersonating political candidates, highlight the growing threat deepfakes pose to elections worldwide.
Public concern is evident: in recent surveys, a majority of Americans said they worry that AI tools will amplify the spread of false information during this election cycle. As the threat persists, collaboration between tech companies and regulatory bodies remains crucial to addressing this evolving challenge.