Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development
Scientists Say Fabricated AI Responses Could Lead to New Discoveries and Innovation
Hallucinations are considered one of the most worrisome flaws of emerging artificial intelligence technology. But some scientists see the tendency of AI and large language models to fabricate responses as a tool for discovery in fields such as chemistry and pharmaceuticals.
AI hallucinations have already left their mark in fields including the legal profession, media outlets and city services by subjecting the unwitting public to misinformation and embarrassing mistakes such as fabricating legal precedents in court documents or publishing incorrect information in automated news feeds.
But David Baker, a professor at the University of Washington and a winner of the 2024 Nobel Prize in chemistry, sees AI hallucinations as happy accidents that could drive innovative thinking. Baker, whose work involves creating proteins that do not occur in nature, has become an unlikely champion of AI's hallucinatory creativity.
Hallucinations have helped his team design more than 10 million novel proteins, leading to breakthroughs in medical research and the treatment of diseases such as cancer and Alzheimer's. "Things are moving fast," Baker said in an interview with The New York Times.
Baker’s perspective is part of a larger trend among scientists who see AI hallucinations as opportunities, using them as a springboard for new ideas.
Hallucinations have so far helped scientists envision novel molecules, proteins and drug compounds that may have otherwise never been considered. AI helps teams rapidly ideate new approaches to complex challenges.
James Collins, a biological engineering professor at MIT, is using AI to design new antibiotics, relying on AI-generated suggestions for entirely new molecular structures. “We’re exploring,” he said, referring to his team’s process of using AI to think outside the box. “We’re asking the models to come up with completely new molecules.”
The Nobel Prize committee also described Baker’s AI-aided initiative as “one imaginative protein creation after another.”
Getting Past the Stigma
But the acceptance of AI hallucinations in science is not without its complexities. Many view hallucinations as a reminder that generative AI technology is still in its infancy and that developers don’t fully understand the problem.
Some researchers, including Dr. Anima Anandkumar of Caltech, avoid the term “hallucination,” preferring to describe these outputs as “creative” or “prospective” rather than illusory. The distinction is important because it frames the outputs as possibilities to explore rather than falsehoods to be dismissed.
According to Anandkumar, AI hallucinations can aid in designing new medical devices. Anandkumar and her team used AI to design a new kind of catheter aimed at reducing bacterial contamination, a major cause of urinary tract infections. The AI model they used generated thousands of potential designs, many of which had never been considered by human engineers. The result was a breakthrough.
But the scientific community remains divided on the broader implications of AI hallucinations. Some researchers, like Georgia Tech’s Santosh Vempala, argue that hallucinations are unavoidable due to the probabilistic nature of large language models: AI models strive for general accuracy, but often fail when confronted with questions or scenarios that lack sufficient data.
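The mechanism Vempala points to can be illustrated with a toy sketch. Language models pick each next token by sampling from a probability distribution over the vocabulary, and a sampling "temperature" controls how sharp that distribution is: higher temperatures give low-probability (potentially fabricated) continuations a real chance of being chosen. The scores and token labels below are invented for illustration, not taken from any actual model.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw model scores into a probability distribution.
    Higher temperature flattens the distribution, so unlikely
    tokens get a larger share of the probability mass."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores: the model strongly favors the
# well-supported continuation over a fabricated one.
tokens = ["known fact", "plausible guess", "fabrication"]
logits = [5.0, 2.0, 0.5]

low_temp = softmax(logits, temperature=0.5)   # sharp, near-deterministic
high_temp = softmax(logits, temperature=2.0)  # flat, fabrication plausible

# At high temperature, sampling can surface the unlikely token.
random.seed(0)
choice = random.choices(tokens, weights=high_temp, k=1)[0]
```

At low temperature the model almost always emits the best-supported token; at high temperature the same model, with the same scores, starts producing speculative output. That is why the same sampling behavior reads as a bug in a legal brief and as brainstorming in a protein lab.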
This is particularly true for tasks that demand factual precision, such as legal or medical applications. The stakes are high in these fields, and even small inaccuracies can have devastating consequences. A well-known example is the case in which attorneys submitted a legal brief with fabricated legal precedents generated by ChatGPT, resulting in sanctions and ridicule.
Hallucinations and Creativity
While AI hallucinations can be a hindrance in situations in which accuracy is critical, they can potentially spark creativity in other areas. For instance, AI hallucinations are finding a role in artistic endeavors, where the boundary between fact and fiction can be fluid.
Some AI researchers, such as Vectara’s Amin Ahmad, argue that generative models should retain their ability to hallucinate in certain contexts, particularly when brainstorming new ideas or creating art. “LLMs should be capable of producing things without hallucinations,” Ahmad said, “but then we can flip them into a mode where they can produce hallucinations and help us brainstorm.”
This creative potential is also seen in video game design, marketing and content generation, where AI can be used to generate new storylines, characters and dialogue that would not have emerged through traditional brainstorming methods. In research, AI’s ability to present new combinations or novel perspectives on existing data can lead to discoveries that might otherwise remain hidden.
The debate surrounding AI hallucinations is far from settled, but AI's ability to "dream up" new ideas can be a powerful tool – if users can balance the creative potential with the need for accuracy.