AI21 Labs recently launched “Contextual Answers”, a question-answering engine for large language models (LLMs).
When connected to an LLM, the new engine allows users to upload their own data libraries so that model outputs are restricted to the information those libraries contain.
The launch of ChatGPT and similar artificial intelligence (AI) products has changed the paradigm for the AI industry, but a lack of reliability makes adoption difficult for many companies.
According to research, employees spend almost half of their working days searching for information. This represents a huge opportunity for chatbots capable of performing search functions; however, most chatbots are not suitable for businesses.
AI21 developed Contextual Answers to bridge the gap between chatbots designed for general use and enterprise-grade question-answering services by giving users the ability to draw on their own libraries of data and documents.
According to an AI21 blog post, Contextual Answers lets users steer AI responses without retraining models, thereby mitigating some of the biggest barriers to adoption:
“Most companies find it difficult to adopt [AI], citing the cost, complexity and lack of specialization of models in their organizational data, leading to incorrect, ‘hallucinated’ or context-inappropriate responses.”
One of the major challenges in developing useful LLMs, such as OpenAI’s ChatGPT or Google’s Bard, is teaching them to express uncertainty.
Typically, when a user queries a chatbot, it generates a response even when its dataset does not contain enough information to answer factually. In these cases, rather than giving a lower-confidence answer such as “I don’t know,” LLMs will often make up information with no factual basis.
Researchers call these outputs “hallucinations” because the machines generate information that does not appear anywhere in their datasets, much like humans seeing things that aren’t really there.
We are excited to introduce Contextual Answers, an API solution where answers are based on organizational knowledge, leaving no room for AI hallucinations.
➡️ https://t.co/LqlyBz6TYZ pic.twitter.com/uBrXrngXhW
— AI21 Labs (@AI21Labs) July 19, 2023
According to AI21, Contextual Answers should mitigate the hallucination problem entirely, either by producing information only when it is relevant to the user-provided documentation or by producing nothing at all.
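In practice, that description amounts to document-grounded question answering with the option to abstain. The minimal Python sketch below illustrates the pattern; the endpoint path, request fields and response fields (such as "answerInContext") are assumptions made for illustration only, not AI21’s documented interface, so the real Contextual Answers API may differ.

import os
import requests

# Hypothetical endpoint and field names, used only to illustrate the
# grounded question-answering pattern; consult AI21's own documentation
# for the actual Contextual Answers interface.
API_URL = "https://api.ai21.com/studio/v1/answer"  # assumed path
API_KEY = os.environ.get("AI21_API_KEY", "")

def ask_with_context(question, context):
    # Ask a question that must be answered from the supplied context only.
    resp = requests.post(
        API_URL,
        headers={"Authorization": "Bearer " + API_KEY},
        json={"context": context, "question": question},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    # Key behavior: if the answer is not supported by the supplied
    # documentation, abstain instead of hallucinating.
    if not data.get("answerInContext", False):
        return "Answer not found in the provided documents."
    return data.get("answer", "")

if __name__ == "__main__":
    policy = "Quarterly reports are due on the fifth business day after quarter end."
    print(ask_with_context("When are quarterly reports due?", policy))

The abstention branch is the point of the example: the engine returns an answer only when it can be traced to the uploaded documents, and otherwise declines to respond.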
In industries where precision is more important than automation, such as finance and law, the advent of generative pretrained transformer (GPT) systems has had mixed results.
Experts continue to recommend caution when using GPT systems in finance due to their tendency to hallucinate or conflate information, even when connected to the internet and able to link to sources. And in the legal sector, a lawyer now faces fines and sanctions after relying on output generated by ChatGPT during a case.
By grounding AI systems in relevant data and intervening before the system can hallucinate non-factual information, AI21 appears to have demonstrated a way to alleviate the hallucination problem.
This could lead to mass adoption, especially in the fintech space, where traditional financial institutions have been reluctant to adopt GPT technology and the cryptocurrency and blockchain communities have had mixed success at best when employing chatbots.
Related: OpenAI launches “custom instructions” for ChatGPT so users don’t have to repeat themselves on every prompt