Summary

  • Vectara has created a new service called the Vectara Hallucination Corrector, which reduces hallucinations in AI models.
  • The firm’s approach includes guardian agent software components that monitor and protect AI workflows.
  • Vectara aims to correct errors in LLM-generated output within workflows while explaining any changes made.
  • The system reduces hallucination rates for smaller language models (under 7 billion parameters) to less than 1%.
  • Alongside the Hallucination Corrector, the company is releasing HCMBench, an open-source tool for evaluating hallucination correction models.
  • The offering could enable enterprises to deploy AI in previously restricted use cases while maintaining accuracy standards.
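The guardian-agent idea described above can be illustrated with a minimal sketch: an agent checks each claim in a model's answer against source passages, keeps supported claims, removes unsupported ones, and records an explanation for every change. All function names, the word-overlap groundedness check, and the data below are illustrative assumptions, not Vectara's actual API or detection model.

```python
def grounded(claim: str, sources: list[str]) -> bool:
    # Toy groundedness check: a claim counts as "supported" if every
    # word appears somewhere in the source passages. A real guardian
    # agent would use a trained hallucination-detection model instead.
    source_text = " ".join(sources).lower()
    return all(word in source_text for word in claim.lower().split())

def guardian_correct(answer_claims: list[str], sources: list[str]):
    """Keep supported claims, drop unsupported ones, and return an
    explanation of every change made (hypothetical interface)."""
    kept, explanations = [], []
    for claim in answer_claims:
        if grounded(claim, sources):
            kept.append(claim)
        else:
            explanations.append(f"Removed unsupported claim: {claim!r}")
    return kept, explanations

# Example run on toy data:
sources = ["the cat sat on the mat"]
claims = ["the cat sat", "the dog barked"]
kept, explanations = guardian_correct(claims, sources)
# kept retains only the claim supported by the sources; explanations
# documents why the other claim was removed.
```

The key property of the pattern, as the summary describes it, is that corrections are never silent: every edit to the model's output is paired with an explanation the enterprise can audit.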

By Sean Michael Kerner
