dhanushk-offl/hallx
hallx

hallx logo

Lightweight hallucination-risk scoring for production LLM pipelines


What Is Hallx

Hallx is a practical guardrail layer that evaluates LLM responses before they are trusted in downstream systems.

It scores responses using:

  • schema validity
  • consistency across repeated generations
  • grounding against provided context
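The consistency signal can be illustrated with a toy version: sample the model several times for the same prompt and measure pairwise agreement between generations. The sketch below uses Jaccard token overlap as the similarity metric; this is an illustrative stand-in, not Hallx's actual implementation.

```python
# Toy version of "consistency across repeated generations":
# score how much N generations agree with one another, in [0, 1].
# The token-overlap metric is illustrative, not Hallx's internals.
from itertools import combinations


def token_overlap(a: str, b: str) -> float:
    """Jaccard similarity over lowercase whitespace tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)


def consistency_score(generations: list[str]) -> float:
    """Mean pairwise similarity across all generation pairs."""
    pairs = list(combinations(generations, 2))
    if not pairs:
        return 1.0  # a single generation is trivially self-consistent
    return sum(token_overlap(a, b) for a, b in pairs) / len(pairs)


print(consistency_score([
    "Paris is the capital of France.",
    "The capital of France is Paris.",
    "The capital is Berlin.",
]))
```

Low agreement across samples is a classic hallucination tell: a model that is guessing tends to produce divergent answers on repeated runs.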

It returns:

  • confidence
  • risk_level
  • issues
  • recommendation
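To make the mapping from signals to outputs concrete, here is a minimal sketch of how the three scores might be blended into a confidence value and bucketed into a risk level. The weights, thresholds, and function name are hypothetical, chosen only for illustration; they are not Hallx's internals.

```python
# Hypothetical blend of the three signals into the fields Hallx returns.
# Weights and thresholds are illustrative assumptions, not the library's.
def combine_signals(schema_ok: bool, consistency: float, grounding: float) -> dict:
    """consistency and grounding are assumed to lie in [0, 1]."""
    confidence = (0.2 * float(schema_ok)
                  + 0.4 * consistency
                  + 0.4 * grounding)
    if confidence >= 0.8:
        risk_level, recommendation = "low", "accept"
    elif confidence >= 0.5:
        risk_level, recommendation = "medium", "review"
    else:
        risk_level, recommendation = "high", "reject"
    return {"confidence": confidence,
            "risk_level": risk_level,
            "recommendation": recommendation}


print(combine_signals(schema_ok=True, consistency=0.9, grounding=0.85))
```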

Quick Start

Install from PyPI:

pip install hallx

Then score a response:

from hallx import Hallx

checker = Hallx(profile="balanced")
result = checker.check(prompt="p", response="r", context=["c"])
print(result.confidence, result.risk_level, result.recommendation)
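In a production pipeline, the result typically gates what happens to the response next. A minimal routing sketch, assuming `risk_level` values like "low"/"medium"/"high" per the output list above (the route names and the 0.7 threshold are illustrative assumptions):

```python
# Illustrative downstream gate on a Hallx check result.
# Field names follow the README's output list; the route names
# and the 0.7 confidence threshold are assumptions for the sketch.
def gate(confidence: float, risk_level: str) -> str:
    if risk_level == "low" and confidence >= 0.7:
        return "use_response"        # safe to pass downstream
    if risk_level == "medium":
        return "human_review"        # escalate instead of trusting blindly
    return "regenerate_or_refuse"    # high risk: retry or fail closed


print(gate(0.92, "low"))
```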

Workflow

Hallx workflow
