OpenAI Finds Why AI Hallucinates



Since its launch three years ago, ChatGPT has had a problem with stating incorrect facts. My favorite example: it claims I am the founder of a competitor's tech site, even though I have never worked for that publisher. Hallucinations like this occur in all artificial intelligence (AI) models, and why they have persisted for so long has been a mystery.


Researchers at OpenAI published a paper last weekend hypothesizing that hallucinations arise because pre-training rewards the model for correctly predicting the next word, regardless of whether the resulting statement is true. The model is trained to produce an answer of some kind and is never encouraged to say "I don't know."


During evaluation, the model earns full marks for correct answers and nothing otherwise. Faced with a question it cannot answer, the AI will therefore guess: a guess has at least some chance of being right, while declining to answer is guaranteed to score zero.
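
To make that incentive concrete, here is a minimal sketch with hypothetical numbers (the probabilities and point values are illustrative, not figures from the paper):

```python
# Under binary grading, a guess with even a small chance of being right
# has a higher expected score than abstaining, which always scores 0.
p_correct = 0.25          # assumed chance the guess happens to be right
score_correct = 1.0       # full credit for a correct answer
score_wrong = 0.0         # no penalty for a wrong answer under binary grading
score_abstain = 0.0       # "I don't know" also scores zero

expected_guess = p_correct * score_correct + (1 - p_correct) * score_wrong
print(expected_guess, ">", score_abstain)   # 0.25 > 0.0, so guessing always wins
```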


The researchers now propose changing the evaluation: incorrect answers receive negative marks, with confidently wrong answers produced by hallucination penalized more heavily, while half a point is awarded if the model admits it does not know. This gives the model being trained an incentive to answer only when it is likely to be right rather than to guess.
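
A second sketch, again with hypothetical point values, shows how the proposed scheme flips the incentive:

```python
# With negative marks for confident errors and partial credit for abstaining,
# guessing on an uncertain question now has a lower expected score than "I don't know."
p_correct = 0.25                 # same assumed 25% chance the guess is right
score_correct = 1.0
penalty_confident_wrong = -1.0   # negative marks for a confidently wrong answer
score_abstain = 0.5              # half a point for admitting uncertainty

expected_guess = p_correct * score_correct + (1 - p_correct) * penalty_confident_wrong
print(expected_guess, "<", score_abstain)   # -0.5 < 0.5, so abstaining now wins
```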


In the future, the problem of hallucinations may be reduced with a better system for training and evaluating AI models.
