How to Detect and Minimise Hallucinations in AI Models

by Parth Sonara, July 25th, 2024

Stories of successful AI implementations are published nearly every day. With ChatGPT, Midjourney, and other models now available to the general public, a growing number of people are starting to rely on AI in their daily lives.

While it is evident that machine learning algorithms can handle increasingly challenging tasks, they are not yet perfect. Frequent hallucinations make artificial intelligence an unreliable substitute for human judgement. And while for an ordinary user an AI error is just a glitch to laugh at, for business processes such unpredictability can have real consequences, from loss of client trust to lawsuits.


Some countries have begun drafting regulations around AI models to provide a framework around usage and applicability. Let’s figure out why and how neural networks start to hallucinate and how this can be minimised.

What is an AI hallucination?

Though it is sometimes impossible to identify the cause of an AI error, hallucinations often result from how generative systems produce text. When responding to a user's query, the AI suggests a likely sequence of words based on patterns in its training data. The likelihood that some words follow others is not a reliable way of ensuring the final sentence is accurate. The AI can piece together terms that sound plausible but are not necessarily correct, and to a human eye the result may look like complete nonsense. An example I struggled with was asking ChatGPT for examples of countries with matching and non-matching settlement markets. While it was able to offer ‘Continuous Net Settlement’ (CNS) as an example of a matching settlement system, I was interested in the country the system operates in (the United States in this case), and the model gave the wrong answer.
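
To make this concrete, here is a minimal, illustrative sketch of next-word (token) sampling. The candidate continuations and their probabilities are invented for demonstration; in a real LLM they would come from a neural network conditioned on the entire preceding text.

```python
import random

# Toy next-token distribution; the values are made up for illustration.
# A real LLM computes these probabilities from the whole preceding context.
next_token_probs = {
    "the United States": 0.55,
    "Canada": 0.25,
    "Switzerland": 0.15,
    "Atlantis": 0.05,   # low-probability but still possible: a hallucination
}

def sample_next_token(probs: dict) -> str:
    """Pick one continuation at random, weighted by its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "Continuous Net Settlement (CNS) operates in"
print(prompt, sample_next_token(next_token_probs))
```

The model optimises for what is statistically likely to come next, not for what is verifiably true, which is why a fluent but wrong continuation can slip through.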


At times, however, detecting an AI hallucination can be trickier. While some errors are obvious, others are subtle and may go unnoticed, especially when the output is processed automatically or handled by a person with limited expertise in the field. Undetected AI issues can lead to unforeseen and unwanted consequences, especially in areas where accurate and reliable information is critical. In addition, the more specialised the prompt, the more the model's accuracy tends to vary, due to the lack of supporting material it can refer to. The CNS example above illustrates this: I was unable to find a list of countries through a Google search and hoped ChatGPT could provide a consolidated list, but hit a similar hurdle there.


These are the common types of issues that occur due to AI hallucinations:

  1. Inaccurate decision-making: AI hallucinations can lead to incorrect decisions and diagnoses, especially in fields where accuracy is critical, such as healthcare or information security, and can be detrimental to people and businesses alike.


  2. Discriminatory and offensive results: Hallucinations can lead to the generation of discriminatory or offensive results, which can damage an organisation's reputation and cause ethical and legal issues.


  3. Unreliable analytics: If AI generates inaccurate data, it can lead to unreliable analytical results. Organisations may make decisions based on incorrect information, and the outcomes can be costly. Sometimes the data is simply out of date; ChatGPT's free version, for example, only carries data up to 2022, so numbers gleaned from it may be unreliable.


  4. Ethical and legal concerns: Hallucinations can cause AI models to reveal sensitive information or generate offensive content, leading to legal issues. Walled-garden deployments can mitigate some of the risks around sensitive information.


  5. Misinformation: Generating false information can cause a range of problems for companies and end users, such as eroding trust and negatively influencing public opinion.


Why do LLMs hallucinate?

AI hallucinations are a complex problem, and their causes are not fully clear to users and developers alike. Here are a few key factors that may cause or contribute to such hallucinations:


  • Incomplete or biased training data. If the training dataset is limited or does not cover the scenarios a prompt touches on, the model may not respond adequately. If the data used to train the AI contains biases, model outputs will reflect those biases.


  • Overtraining and lack of context. Models overtrained on specific data can lose the ability to respond appropriately to new, unforeseen situations, especially if they lack contextual information. It is recommended to split the dataset into three parts: training data, validation data, and testing data. This split helps ensure that the model performs well not only on the data it was trained on but also on out-of-sample data (a minimal example of such a split is sketched after this list).


  • Inappropriate model size. An improperly sized model, with too many or too few parameters, can behave unpredictably, especially with complex queries or unusual situations.


  • Unclear prompts. On the user-facing side, ambiguous or overly general user queries can result in unpredictable or irrelevant responses.
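
As a minimal sketch of the train/validation/test split mentioned above, assuming scikit-learn is available and using a synthetic dataset in place of real training data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real training corpus.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Hold out a test set first (20%), then split the rest into
# training (60% of the total) and validation (20% of the total).
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # 600 200 200
```

The validation set guides tuning decisions during development, while the test set is touched only once, to estimate how the model behaves on data it has never seen.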

How to avoid hallucinations?

It's important to remember that LLMs work like a "black box": not even data scientists can fully follow the generation process and predict the output. This is why it's not possible to fully safeguard your business from AI hallucinations. For now, companies that use AI models need to focus on preventing, detecting, and minimising them. Here are some tips for maintaining the "hygiene" of ML models:


  • Thoroughly clean and prepare the data used for training and tuning AI models. This involves not only removing irrelevant or erroneous information, but also ensuring that the data is diverse and represents different perspectives.


  • Be mindful of the size and complexity of your AI model. Many companies are striving for larger and more complex artificial intelligence models in order to increase their capabilities. However, this can also lead to model oversaturation and make interpreting and explaining its behaviour a challenge even for the developers themselves.


To avoid this uncertainty and confusion from the start, it's important to plan the development of AI models with an emphasis on interpretability and explainability. This means documenting your model-building processes, maintaining transparency with key stakeholders, and choosing an architecture that makes it easy to interpret and explain model behaviour even as data volumes and user requirements grow. It will also help with regulatory compliance as the field comes under greater government scrutiny.


  • Apply thorough testing. AI model testing should cover not only standard queries and common input formats, but also the model's behaviour under extreme conditions and with complex queries. Testing the AI's response to a wide range of inputs can predict how the model will behave in a variety of situations, and can help improve the data and model architecture before users encounter inaccurate results (a minimal test-suite sketch follows this list).


  • Keep a human element in the verification process. This can be key in identifying nuances that may escape the attention of automated checks. People engaged in this task should have a balanced set of skills and experience in AI and technology, customer service, and compliance.


  • Regularly gather feedback from end users, especially once the model has been implemented and is in active use. Users of AI models can provide valuable insights about hallucinations and other deviations. To make this process effective, it is important to create convenient and accessible feedback channels.


  • Monitor and update AI models on a regular basis to maintain their effectiveness. These improvements should be based on user feedback, team research, current industry trends, and performance data from QA and monitoring tools. Continuous monitoring of model performance, and active improvement based on the analytical information gathered, can significantly reduce the risk of hallucinations.
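
To make the testing and human-review points above more concrete, here is a minimal sketch of a factual regression suite. The prompts, expected answers, and the `ask_model` stub are all placeholders; in practice `ask_model` would call whatever model or API you actually use, and failures would be routed to a human reviewer.

```python
# Minimal sketch of a factual regression suite for an AI model.
# Everything here is illustrative; `ask_model` is a stub to replace
# with a real call to your model or API.

TEST_CASES = [
    # (prompt, substring the answer must contain to count as correct)
    ("Which country operates Continuous Net Settlement (CNS)?", "United States"),
    ("In what year did Apollo 11 land on the Moon?", "1969"),
]

def ask_model(prompt: str) -> str:
    # Stub so the sketch runs as-is; replace with a real model call.
    return "I'm not sure."

def run_suite() -> list:
    """Return the prompts whose answers failed the check, for human review."""
    flagged = []
    for prompt, expected in TEST_CASES:
        answer = ask_model(prompt)
        if expected.lower() not in answer.lower():
            flagged.append(prompt)
    return flagged

if __name__ == "__main__":
    for prompt in run_suite():
        print("Needs human review:", prompt)
```

Running a suite like this after every model or prompt change gives reviewers a short, focused list to check rather than the entire output stream.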


If you don't use the AI model to handle sensitive information, you can also try search-augmented generation (a retrieval-style approach) to reduce the risk of hallucinations. Instead of relying only on its training data and the context provided by the user, the AI searches for relevant information online and grounds its answer in what it finds. However, this technique hasn't shown very reliable results yet: the output of an unfiltered search can sometimes be just as untrue as a model hallucination.
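
Here is a minimal, illustrative sketch of the idea. The "search" is naive keyword overlap over a tiny in-memory document list rather than a real search engine or vector index, and the final call to a language model is deliberately left out; the point is only to show how retrieved text gets folded into the prompt.

```python
# Illustrative search-augmented generation: retrieve supporting text first,
# then ask the model to ground its answer in that text.

DOCUMENTS = [
    "Continuous Net Settlement (CNS) is a settlement system operated in the United States.",
    "Retrieval grounding reduces, but does not eliminate, hallucinations.",
]

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Rank documents by how many words they share with the query; return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query, DOCUMENTS))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

# The resulting prompt would then be sent to the model of your choice.
print(build_prompt("Which country operates Continuous Net Settlement?"))
```

Grounding the model in retrieved text narrows the space for invention, but as noted above, it only helps if the retrieved material itself is accurate.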


Are hallucinations always bad?


The relationship between hallucinations and creativity in AI systems seems similar to the processes in the human imagination. Humans often come up with creative ideas by letting their minds wander beyond reality.


The AI models that generate the most innovative and original results also tend to sometimes create content that is not based on real facts. Some experts believe that getting rid of hallucinations completely could harm creative content creation.


However, it is important to understand that such types of output often lack a factual basis and logical thought, making them unsuitable for fact-based tasks.
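
One concrete mechanism behind this trade-off, not covered above but commonly cited, is sampling temperature: raising it flattens the next-token distribution, which yields more varied, "creative" output but also lets low-probability, weakly grounded continuations through more often. A toy sketch with invented scores:

```python
import math
import random

# Invented scores ("logits") for candidate continuations, for illustration only.
logits = {"a well-known fact": 3.0, "a plausible guess": 1.5, "an invented detail": 0.2}

def sample(logits: dict, temperature: float) -> str:
    """Apply a temperature-scaled softmax to the logits, then sample one token."""
    scaled = [v / temperature for v in logits.values()]
    peak = max(scaled)
    weights = [math.exp(v - peak) for v in scaled]   # numerically stable softmax weights
    return random.choices(list(logits.keys()), weights=weights, k=1)[0]

print("low temperature: ", sample(logits, 0.2))  # almost always the top-scoring token
print("high temperature:", sample(logits, 2.0))  # rarer tokens show up much more often
```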