Google announced new features for its Bard generative artificial intelligence (AI) tool on Tuesday (19 September), including fact-checking capabilities and the ability to analyse users’ personal data.

The host of new features also includes a way to check for and flag ‘hallucinations’. An AI hallucination occurs when a large language model (LLM) fabricates information or asserts facts that aren’t grounded in real data.

Bard’s new hallucination-checking feature will identify parts of a response that do not match Google Search results.

ChatGPT’s overwhelming success following its release in November 2022 spawned a wave of LLMs as the tech sector raced to make AI available to customers.

ChatGPT has remained one of the fastest-growing AI tools, but its popularity has declined in recent months.

The tool, created by OpenAI, saw a monthly decline in website visits for the third month in a row – despite generative AI continuing to grow in popularity and investment this year. 

According to data supplied by SimilarWeb, ChatGPT’s website visits on desktop and mobile dropped by 3.2% in August. In June and July, the chatbot saw an average decline of around 10%.

With Google looking to gain more traction in the AI sphere, one of the new Bard features, Bard Extensions, lets users import data from other Google services, such as Gmail or Google Drive.