Anthropic has introduced new election policies for its AI-powered chatbot Claude ahead of the US election to prevent misinformation.

The policies include an update that prevents Claude from answering political questions posed by US users. 

Anthropic has already banned the use of Claude for political campaigning and lobbying in its acceptable use policy. The company has now introduced an automated system to detect when Claude may be being used in campaigns. 

Users flagged by this system risk a permanent ban from the chatbot. 

US-based users will also be directed towards TurboVote if they ask Claude for voting information.

TurboVote is a non-partisan website, run by the non-profit Democracy Works, that helps US citizens register to vote and offers advice on the different ways to cast a ballot. 

In its blog post announcing the changes to Claude, Anthropic stated that it intended to roll out similar features to users in other countries. 

AI chatbots generate responses to user prompts based on the information their underlying large language model (LLM) has learnt from data; this means their answers depend entirely on the training data they have ingested.

One major problem facing LLMs and AI chatbots is that they can generate answers that are not factually true, known as hallucinations, which can be hard for users to spot. 

“While generative AI systems have a broad range of positive uses, our own research has shown that they can still be prone to hallucinations,” Anthropic stated. “Our model is not trained frequently enough to provide real-time information about specific elections,” it added.

Anthropic also stated that it was unable to accurately determine the impact of AI on elections. 

The company said the history of AI deployment has been full of surprises and unexpected effects.

“We expect that 2024 will see surprising uses of AI systems – uses that were not anticipated by their own developers,” Anthropic said.