by Robyn Repko Waller

AI has a proclivity for exaggeration. This hallucination is integral to its success and its danger.
Much digital ink has been spilled, and much computational power consumed, of late over the rapidly, perhaps too rapidly, advancing capacities of AI.
Large language models like GPT-4 heralded as a welcome shortcut for email, writing, and coding. Worried discussion of the implications for pedagogical assessment: how to codify and detect AI plagiarism. OpenAI image generation to rival celebrated artists and photographers. And what of the convincing deep fakes?
The convenience of using AI to innovate and to make our social world and our health care more efficient, from TikTok to medical diagnosis and treatment. Continued calls, though, for fairness in the use of algorithmic decision-making in finance, government, health, security, and hiring.
Newfound friends, therapists, lovers, and enemies of an artificial nature. Both triumphant and terrified exclamations and warnings of sentient, genuinely intelligent AI. Serious, widespread calls for a pause in the development of these AI systems. And, in reply, reports that such exclamations and calls are overblown: Doesn't intelligence require experience? Embodiment?
These are fascinating and important matters. Still, I don't intend to add to the much-warranted shouting. Instead, I want to draw attention to a curious, yet serious, corollary of the use of such AI systems: the emergence of artificial, or machine, hallucinations. By such hallucinations, folks mean the phenomenon by which AI systems, especially those driven by machine learning, generate factual inaccuracies or produce new misleading or irrelevant content. I will focus on one kind of hallucination: the inherent propensity of AI to exaggerate and skew.