1 booster for "evaluation-framework": AI-graded, open source, ready to install
Promptfoo is an open-source LLM evaluation and testing toolkit that lets developers systematically test, benchmark, and validate prompts, model outputs, and RAG pipelines. It's essential for teams building production LLM applications who need confidence in prompt quality and model behavior.
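As a sketch of how such an evaluation is typically declared, a minimal Promptfoo config (a `promptfooconfig.yaml`) pairs prompts and providers with test cases and assertions; the prompt text, variable values, and rubric below are illustrative placeholders, not from the original entry:

```yaml
# Hypothetical promptfooconfig.yaml — a minimal sketch, not a definitive setup.
prompts:
  - "Summarize the following text in one sentence: {{text}}"

providers:
  - openai:gpt-4o-mini   # example provider; swap in whichever model you test against

tests:
  - vars:
      text: "Promptfoo runs each prompt against each provider and grades the output."
    assert:
      # Deterministic check on the output string.
      - type: icontains
        value: "prompt"
      # AI-graded check: an LLM judges the output against this rubric.
      - type: llm-rubric
        value: "The summary is a single concise sentence."
```

With a config like this in place, `npx promptfoo@latest eval` runs the matrix of prompts, providers, and test cases and reports pass/fail per assertion, which is how the "AI-graded" evaluation in the description is realized.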