Generative AI can “hallucinate” when it doesn’t know the answer to a question; here’s how to spot it.

Researchers at the University of Oxford have devised a new method to help users work out when generative AI could be “hallucinating.” This happens when an AI system is posed a query it doesn’t know the answer to, causing it to make up an incorrect answer. Luckily, there are ways both to spot hallucinations as they happen and to prevent them altogether.

How to stop AI hallucinations

A new study by the team at the University of Oxford has produced a statistical model that can identify which questions put to generative AI chatbots are most likely to produce an incorrect answer. This is a real concern for generative AI models, because the fluent way they communicate means they can pass off false information as fact.
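The underlying idea of uncertainty-based detection can be illustrated with a toy sketch: sample the same question several times and measure how much the answers disagree. This is not the Oxford team's actual method; the `semantic_entropy` function and the normalized-string clustering below are simplified stand-ins for illustration only (the study clusters answers by meaning, not by string match).

```python
import math
from collections import Counter

def semantic_entropy(answers):
    """Estimate a model's uncertainty from repeated samples of its answer.

    Groups answers by a crude normalized string match (a stand-in for
    meaning-based clustering) and returns the entropy of the resulting
    cluster distribution: near zero means the model answers consistently,
    while higher values suggest it may be guessing.
    """
    clusters = Counter(a.strip().lower().rstrip(".") for a in answers)
    n = len(answers)
    return -sum((c / n) * math.log(c / n) for c in clusters.values())

# Consistent samples -> entropy near zero (the model likely "knows")
print(semantic_entropy(["Paris", "paris", "Paris."]))
# Scattered samples -> high entropy (a possible hallucination)
print(semantic_entropy(["Paris", "Lyon", "Marseille"]))
```

In this sketch, a user or tool would flag answers whose entropy exceeds some threshold as candidates for verification rather than trusting them outright.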