OpenAI Moves To Block Disinformation In 2024 Elections


OpenAI has imposed restrictions on the use of its tools, including ChatGPT and DALL-E, during the upcoming 2024 election season.

The move responds to growing concerns that AI systems could be exploited to spread misinformation and influence voters in crucial races.

OpenAI said it prohibits using its tools for political campaigning, lobbying, and building chatbots that mimic candidates or impersonate local governments.

The company is also blocking applications that discourage voting by portraying it as futile, redirecting such queries to CanIVote.org, a site operated by the National Association of Secretaries of State.

To enhance transparency, OpenAI plans to embed provenance details in images generated by DALL-E starting early this year.

Additionally, the company aims to provide more context, including links and attribution to news reporting, so users can assess the reliability of generated text and images.

Other AI system producers globally, such as Google and Meta, have announced measures to mitigate potential risks associated with AI during elections. 

Google, for instance, recently announced restrictions on election-related queries to its AI chatbot, Bard.

Similarly, Facebook parent Meta barred political campaigns from using its AI advertising tools last year.

Still, some US politicians doubt tech companies' ability to regulate their own AI systems.

For his part, OpenAI CEO Sam Altman told a congressional hearing in May that the company's chatbot is a "tool, not a creature," noting that people retain substantial control over it.
