Vectara Launches Hallucination Corrector for More Reliable AI Agents


June 19, 2025

Vectara, a platform specializing in enterprise Retrieval-Augmented Generation (RAG) and AI-powered agents, has introduced its Hallucination Corrector. This novel feature, integrated as a "guardian agent" within the Vectara platform, builds upon the company's expertise in detecting and mitigating hallucinations in enterprise AI systems. The Hallucination Corrector not only identifies inaccuracies but also provides detailed explanations and multiple options for correcting them, aiming to enhance the reliability and accuracy of AI agents and assistants. This capability will initially be available as a tech preview for Vectara customers.

Quick Intel

  • Vectara launches Hallucination Corrector, a first-of-its-kind "guardian agent."
  • It automatically corrects hallucinations in AI agent responses.
  • Provides explanations for identified inaccuracies and offers corrected versions.
  • Significantly reduces hallucination rates for smaller LLMs (under 7B parameters).
  • Can be used with Vectara's Hughes Hallucination Evaluation Model (HHEM).
  • Vectara also released an open-source Hallucination Correction Benchmark.

Addressing the Critical Challenge of AI Hallucinations

Amr Awadallah, Founder and CEO of Vectara, emphasized the importance of overcoming the "trust deficit" created by hallucinations in large language models (LLMs). He stated that while LLMs have made progress, their accuracy still falls short for highly regulated industries. Vectara's Hallucination Corrector is designed to address this challenge, providing organizations with a powerful new tool to achieve unprecedented levels of accuracy and realize the full benefits of AI.

A Guardian Agent for Enhanced Workflow Reliability

As a guardian agent, the Hallucination Corrector actively works to safeguard agentic workflows. It has demonstrated the ability to consistently reduce hallucination rates in smaller LLMs (those with fewer than 7 billion parameters, commonly used in enterprise AI) to below 1%. This level of accuracy reportedly matches that of leading models from Google and OpenAI.

Complementary Use with Hallucination Evaluation Model

The Hallucination Corrector can also be used in conjunction with Vectara's Hughes Hallucination Evaluation Model (HHEM), which has garnered significant adoption within the AI community. The HHEM works by comparing AI-generated responses against their source documents to pinpoint any unsupported or inaccurate statements. The Hallucination Corrector then builds upon this by providing a two-part output: a clear explanation of why a statement is considered a hallucination and a corrected version of the summary that incorporates only the necessary changes for accuracy.

Flexible Integration Options for Developers

The structured output provided by the Hallucination Corrector offers developers various ways to integrate hallucination correction into their applications and agentic workflows, depending on the specific use case:

  • Seamlessly use the corrected output for end-users.
  • Display full explanations alongside suggested fixes during testing.
  • Highlight changes in the corrected summary, with explanations available on demand.
  • Flag potential issues in the original summary while offering the corrected version as an option.
  • Refine misleading responses to reduce uncertainty.
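Because the corrector returns both the explanation and the corrected text, an application can choose how much of that structure to surface. The sketch below illustrates a few of these presentation modes; the mode names and rendering logic are hypothetical, not part of Vectara's API.

```python
from enum import Enum

# Illustrative display modes for the corrector's structured output
# (original summary, corrected summary, explanation). Names are assumptions.
class DisplayMode(Enum):
    SILENT_FIX = "silent_fix"    # show only the corrected text to end-users
    FULL_DETAIL = "full_detail"  # show explanation plus fix, e.g. during testing
    FLAG_ONLY = "flag_only"      # keep the original, flag it, offer the fix

def render(original: str, corrected: str, explanation: str,
           mode: DisplayMode) -> str:
    """Turn one correction result into user-facing text for the given mode."""
    if mode is DisplayMode.SILENT_FIX:
        return corrected
    if mode is DisplayMode.FULL_DETAIL:
        return f"{corrected}\n[why: {explanation}]"
    # FLAG_ONLY: preserve the original response but surface the concern.
    return f"{original}\n[flagged: {explanation}; corrected version available]"
```

Keeping rendering separate from correction lets the same corrector output serve end-user, testing, and review workflows without re-running the model.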

New Open-Source Benchmark for Industry Standardization

Alongside the launch of the Hallucination Corrector, Vectara has also released a new open-source Hallucination Correction Benchmark. This benchmark provides the broader AI industry with a standardized toolkit for evaluating the performance of the Vectara Hallucination Corrector. This initiative underscores Vectara's commitment to transparency and aims to establish objective metrics for progress in the critical area of hallucination mitigation.

Eva Nahari, Chief Product Officer at Vectara, highlighted Vectara's role in the industry-wide effort to build reliable and trustworthy AI applications. She stated that the new Hallucination Corrector is a significant step forward in this mission, further enhancing the quality of AI applications built on the Vectara platform. Vectara plans to continue expanding its platform with additional guardian agents to help organizations safely adopt and leverage the power of generative AI while mitigating the risks associated with its limitations.

 

About Vectara

Vectara provides an enterprise-grade platform for building AI assistants and agents with extraordinary accuracy. As an end-to-end Retrieval-Augmented Generation (RAG) service, available on-premises, in a VPC, or as SaaS, Vectara delivers the shortest path to a correct answer or action while mitigating hallucinations and providing high-precision results. Vectara offers secure, granular access controls and comprehensive explainability, helping companies avoid risk and ensure iron-clad data protection.
