Observe.ai Unveils 30 Billion Parameter Contact Center LLM and Generative AI Product Suite


Conversational intelligence platform Observe.ai today introduced its 30-billion-parameter contact center large language model (LLM), along with a generative AI suite designed to improve agent performance. The company says that unlike general-purpose models such as GPT, its proprietary LLM is trained on a large dataset of real-world contact center interactions.

Although a few similar offerings have been announced recently, Observe.ai pointed out that the distinguishing value of its model lies in the calibration and control it offers users. The platform allows users to fine-tune and customize the model to suit their specific contact center needs.

The company said its LLM has undergone specialized training on multiple contact center datasets, equipping it to handle various AI-based tasks (call summarization, automated QA, coaching, etc.) tailored to customers' contact center teams.

Using those LLM capabilities, Observe.ai's generative AI suite works to improve agent performance across all customer interactions: the phone calls and chats, queries, complaints, and day-to-day conversations handled by contact center teams.


Observe.ai believes these features will enable agents to provide a better customer experience.

“Our LLM underwent extensive training on a dataset specific to the field of contact center interactions. The training process involved using a substantial body of data points extracted from the hundreds of millions of conversations that Observe.ai has processed over the past five years,” Swapnil Jain, CEO of Observe.ai, told VentureBeat.

Jain highlighted the importance of quality and relevance in the instruction dataset, which included hundreds of instructions curated for tasks directly applicable to contact center use cases.

This meticulous approach to curating datasets, he said, has enhanced the LLM’s ability to provide the precise, context-specific answers the industry needs.

According to the company, its contact center LLM outperformed GPT-3.5 in initial benchmarks, showing a 35% increase in accuracy for conversation summarization and a 33% improvement in sentiment analysis. Jain said those numbers should improve further with continued training.

Additionally, the LLM has been trained exclusively on redacted data, ensuring the absence of personally identifiable information (PII). Observe.ai highlights its use of redaction techniques to prioritize customer data privacy while leveraging generative AI capabilities.
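As a rough illustration of the kind of redaction described above (not Observe.ai's actual pipeline, which is proprietary and likely model-based), a minimal regex pass might replace common PII patterns with placeholder tags before a transcript enters a training corpus:

```python
import re

# Illustrative sketch only: production contact center redaction typically
# relies on trained NER models, not regexes. Patterns and placeholder tags
# here are hypothetical.
PII_PATTERNS = [
    (re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"), "[CARD]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def redact(transcript: str) -> str:
    """Replace PII-looking spans with placeholder tags."""
    for pattern, tag in PII_PATTERNS:
        transcript = pattern.sub(tag, transcript)
    return transcript

print(redact("Call me at 555-123-4567 or jane.doe@example.com"))
# → "Call me at [PHONE] or [EMAIL]"
```

Replacing spans with typed tags rather than deleting them outright preserves conversational structure for training, which is consistent with Jain's later point about "retaining maximum information for LLM training."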

Eliminating hallucinations to provide accurate information and context

According to Jain, the widespread adoption of generative AI has prompted around 70% of companies across industries to explore its potential benefits, especially in areas such as customer experience, loyalty and revenue growth. Contact center leaders are among those most eager to take advantage of these transformative technologies.

However, despite their promise, Jain believes that generic LLMs face challenges that hinder their effectiveness in contact centers.

These challenges include a lack of specificity and control, an inability to distinguish between correct and incorrect responses, and limited skill in understanding human conversation and real-world contexts. Therefore, he said these generic models, including GPT, often produced inaccuracies and fabrications, also known as “hallucinations”, making them unsuitable for professional environments.

“The generic models are trained on open Internet data. Therefore, these models do not learn the nuances of spoken human conversation (think disfluencies, repetitions, broken sentences, etc.) and also have to contend with transcription errors introduced by speech-to-text models,” Jain said. “So they can be useful for general tasks like summarizing a conversation, but lack the relevant context for conversations within the contact center.”

Jain explained that his company overcame these challenges by incorporating five years of curated, relevant data into its model, gathered from hundreds of millions of customer interactions to train it on specific contact center tasks.

“We have a nuanced and precise understanding of what ‘successful’ customer experiences look like in real-world settings. Our customers can then refine and tailor this to their unique business needs,” Jain said. “Our approach provides a comprehensive framework for contact centers to calibrate the machine and verify that the actual results match their expectations. This is the nature of a ‘glass box’ AI model that provides full transparency and engenders trust in the system.”

The company’s new generative AI suite empowers agents throughout the customer interaction lifecycle, he added.

The Knowledge AI feature facilitates quick, accurate responses to customer inquiries by eliminating manual searches through numerous internal knowledge bases and FAQs, while the Auto Summarize feature lets agents focus on the customer, reducing post-call work while ensuring the quality and consistency of call notes.

The Auto Coaching tool provides agents with personalized, evidence-based feedback immediately after the conclusion of a customer interaction. This facilitates upskilling and aims to enhance the learning experience for agents, complementing their regular supervisor-based coaching sessions.

Observe.ai claims that its proprietary model's edge over GPT in consistency and relevance marks a significant step forward.

“Our LLM is trained only on data that is completely stripped of all sensitive customer information and PII. Our redaction record in this regard is exemplary for the industry: we have redacted sensitive information in 150 million instances across 100 million calls, with fewer than 500 reported errors,” explained Jain. “This ensures sensitive information is protected and privacy and compliance are maintained while retaining maximum information for LLM training.”

He also said the company has implemented a robust data protocol to store all customer data, including data generated by the LLM, in full compliance with regulatory requirements. Each client/account is assigned a dedicated storage partition, ensuring data encryption and unique identification for each client/account.

Jain said we are witnessing a pivotal moment amid the blossoming of generative AI. He pointed out that the contact center industry is full of repetitive tasks and believes that generative AI will allow human talents to perform their work with remarkable efficiency and speed, exceeding their current capabilities tenfold.

“I think successful disruptors in this industry will focus on creating fully controllable generative AI; trustworthy with full visibility into results; and secure,” Jain said. “We are focused on building trustworthy, reliable and consistent AI that ultimately helps human talent do their job better. Our goal is to create AI that allows humans to focus more on creativity, strategic thinking, and creating positive customer experiences.”

