Aimon Rely
Hallucination Detection Sandbox
Aimon Rely is a state-of-the-art, multi-model system for detecting LLM quality issues such as hallucinations, both offline and online, at 1/10th the cost of GPT-4. LLMs are probabilistic text generators, and it is common practice to supply them with relevant context at query time. This detector tells you which sentences in the test text (i.e., the LLM-generated text) are hallucinated relative to the provided source of truth (the context).
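The detector takes two inputs, a context and a test text, and returns a per-sentence hallucination verdict. The sketch below shows one plausible way to call such a detector over HTTP; the endpoint URL, payload fields, and response shape are all illustrative assumptions, not the official Aimon Rely API.

    # A minimal sketch of calling a hallucination detector over HTTP.
    # The endpoint, payload, and response format are hypothetical.
    import requests

    API_URL = "https://api.example.com/v1/detect"  # hypothetical endpoint

    payload = {
        # Source of truth the generated text is checked against
        "context": "The Eiffel Tower is 330 metres tall and stands in Paris.",
        # LLM-generated text to evaluate, sentence by sentence
        "generated_text": "The Eiffel Tower is 500 metres tall. It stands in Paris.",
    }

    response = requests.post(API_URL, json=payload, timeout=30)
    response.raise_for_status()
    result = response.json()

    # Assumed response shape: one verdict per sentence, e.g.
    # {"sentences": [{"sentence": "...", "hallucinated": true, "score": 0.92}, ...]}
    for item in result.get("sentences", []):
        flag = "HALLUCINATED" if item["hallucinated"] else "ok"
        print(f"[{flag}] {item['sentence']} (score={item['score']:.2f})")

With the example inputs above, the first sentence would be flagged as hallucinated (the height contradicts the context) while the second would pass.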