The Weird World of AI Hallucinations
When someone sees something that isn't there, people often refer to the experience as a hallucination. Hallucinations occur when your sensory perception does not correspond to external stimuli. Technologies that rely on artificial intelligence can have hallucinations, too.
When an algorithmic system generates information that seems plausible but is actually inaccurate or misleading, computer scientists call it an AI hallucination.
Editor's Note:
Guest authors Anna Choi and Katelyn Xiaoying Mei are Information Science PhD students. Anna's work relates to the intersection of AI ethics and speech recognition. Katelyn's research relates to psychology and human-AI interaction. This article is republished from The Conversation under a Creative Commons license.
Researchers and users alike have found these behaviors in different types of AI systems, from chatbots such as ChatGPT to image generators such as Dall-E to autonomous vehicles. We are information science researchers who have studied hallucinations in AI speech recognition systems.
Wherever AI systems are used in daily life, their hallucinations can pose risks. Some may be minor – when a chatbot gives the wrong answer to a simple question, the user may end up ill-informed.
But in other cases, the stakes are much higher.
At this early stage of AI development, the issue isn't just with the machine's responses – it's also with how people tend to accept them as factual simply because they sound plausible, even when they're not.
We've already seen the stakes: from courtrooms, where AI software is used to make sentencing decisions, to health insurance companies that use algorithms to determine a patient's eligibility for coverage, AI hallucinations can have life-altering consequences. They can even be life-threatening: autonomous vehicles use AI to detect obstacles, other vehicles and pedestrians.
Making it up
Hallucinations and their effects depend on the type of AI system. With large language models, hallucinations are pieces of information that sound convincing but are incorrect, made up or irrelevant.
A chatbot might create a reference to a scientific article that doesn't exist or provide a historical fact that is simply wrong, yet make it sound believable.
In a 2023 court case, for example, a New York attorney submitted a legal brief that he had written with the help of ChatGPT. A discerning judge later noticed that the brief cited a case that ChatGPT had made up. If fabricated citations like this go undetected, they could change the outcomes of court cases.
With AI tools that can recognize objects in images, hallucinations occur when the AI generates captions that are not faithful to the provided image.
Imagine asking a system to list the objects in an image that shows only a woman from the chest up talking on a phone, and receiving a response that describes a woman talking on a phone while sitting on a bench. This inaccurate information could have serious consequences in contexts where accuracy is critical.
What causes hallucinations
Engineers build AI systems by gathering massive amounts of data and feeding it into a computational system that detects patterns in the data. The system develops methods for responding to questions or performing tasks based on those patterns.
Supply an AI system with 1,000 photos of different breeds of dogs, labeled accordingly, and the system will soon learn to detect the difference between a poodle and a golden retriever. But feed it a photo of a blueberry muffin and, as machine learning researchers have shown, it may tell you that the muffin is a chihuahua.
When a system doesn't understand the question or the information that it is presented with, it may hallucinate. Hallucinations often occur when the model fills in gaps based on similar contexts from its training data, or when it is built using biased or incomplete training data. This leads to incorrect guesses, as in the case of the mislabeled blueberry muffin.
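To make that failure mode concrete, here is a minimal sketch in Python of a pattern-matcher that is forced to choose among the classes it was trained on. The toy feature vectors, class names and nearest-neighbor rule are illustrative assumptions, not how any production image model actually works, but they show why an unfamiliar input still comes back with a confident-sounding label.

```python
# A minimal sketch (not any production system) of why a pattern-matcher
# "guesses" when it sees something outside its training data.
# The feature vectors and class names below are invented for illustration.

from math import dist

# Toy "training data": hand-picked 2-D features standing in for whatever
# an image model would actually learn from labeled dog photos.
training_examples = [
    ((0.9, 0.2), "poodle"),
    ((0.8, 0.3), "poodle"),
    ((0.2, 0.9), "golden retriever"),
    ((0.3, 0.8), "golden retriever"),
]

def classify(features):
    """Return the label of the nearest training example.

    The classifier has no way to say "none of the above": every input,
    however unfamiliar, is forced into one of the classes it has seen.
    """
    nearest = min(training_examples, key=lambda ex: dist(features, ex[0]))
    return nearest[1]

print(classify((0.85, 0.25)))  # "poodle" -- a reasonable match
print(classify((0.7, 0.6)))    # still a dog breed, even if the photo is a muffin
```

Real systems are vastly more complex, but the underlying issue is the same: the model maps every input onto the patterns it already knows, whether or not those patterns actually apply.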
It's important to distinguish between AI hallucinations and intentionally creative AI outputs. When an AI system is asked to be creative – like when writing a story or generating artistic images – its novel outputs are expected and desired.
Hallucinations, on the other hand, occur when an AI system is asked to provide factual information or perform specific tasks but instead generates incorrect or misleading content while presenting it as accurate.
The key difference lies in the context and purpose: Creativity is appropriate for artistic tasks, while hallucinations are problematic when accuracy and reliability are required. To address these issues, companies have suggested using high-quality training data and limiting AI responses to follow certain guidelines. Nevertheless, these issues may persist in popular AI tools.
What's at risk
The impact of an output such as calling a blueberry muffin a chihuahua may seem trivial, but consider the different kinds of technologies that use image recognition systems: an autonomous vehicle that fails to identify objects could lead to a fatal traffic accident. An autonomous military drone that misidentifies a target could put civilians' lives in danger.
For AI tools that provide automatic speech recognition, hallucinations are AI transcriptions that include words or phrases that were never actually spoken. This is more likely to occur in noisy environments, where an AI system may end up adding new or irrelevant words in an attempt to decipher background noise such as a passing truck or a crying infant.
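One partial safeguard, sketched below in Python, is to flag transcript words that the recognizer itself assigned low confidence so a human can review them. The word-and-confidence pairs and the fixed threshold are assumptions made for illustration; real speech recognition systems expose confidence scores in different ways, and thresholding alone will not catch every hallucinated word.

```python
# A minimal sketch of one common mitigation: flag transcript words the
# recognizer was unsure about. The (word, confidence) pairs are invented
# for illustration, and a fixed threshold is only a rough heuristic.

from typing import List, Tuple

def flag_uncertain_words(words: List[Tuple[str, float]],
                         threshold: float = 0.6) -> str:
    """Rebuild a transcript, marking low-confidence words for human review."""
    pieces = []
    for word, confidence in words:
        if confidence < threshold:
            pieces.append(f"[{word}?]")  # possible noise-induced insertion
        else:
            pieces.append(word)
    return " ".join(pieces)

# Hypothetical recognizer output for audio recorded next to a passing truck.
asr_output = [
    ("the", 0.97), ("patient", 0.95), ("reported", 0.92),
    ("mild", 0.88), ("pain", 0.91),
    ("crying", 0.31), ("truck", 0.22),  # words "heard" in the background noise
]

print(flag_uncertain_words(asr_output))
# the patient reported mild pain [crying?] [truck?]
```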
As these systems become more regularly integrated into health care, social service and legal settings, hallucinations in automatic speech recognition could lead to inaccurate clinical or legal outcomes that harm patients, criminal defendants or families in need of social support.
Check AI's work – don't trust, verify
Regardless of AI companies' efforts to mitigate hallucinations, users should stay vigilant and question AI outputs, especially when they are used in contexts that require precision and accuracy.
Double-checking AI-generated information with trusted sources, consulting experts when necessary, and recognizing the limitations of these tools are essential steps for minimizing their risks.