1 booster for "llm-red-teaming" — open source, verified on GitHub, ready to install
A multi-agent red-teaming framework that coordinates AI security testing, using an arbiter to consolidate findings and maintain an immutable audit trail. Security engineers and AI developers use it to test LLM vulnerabilities systematically, with repeatable, deterministic results.
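A minimal sketch of the pattern described above, assuming hypothetical names (`Finding`, `Arbiter`, `AuditTrail` are illustrative, not the framework's real API): several red-team agents submit findings, an arbiter consolidates them, and every event is appended to a hash-chained, append-only audit trail.

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    agent: str          # which red-team agent produced the finding
    category: str       # e.g. "prompt-injection", "data-exfiltration"
    prompt: str         # the attack prompt that triggered the behavior
    severity: int       # 1 (low) .. 5 (critical)


class AuditTrail:
    """Append-only log; each record carries the hash of the previous record."""

    def __init__(self) -> None:
        self._records: list[dict] = []

    def append(self, event: dict) -> None:
        prev_hash = self._records[-1]["hash"] if self._records else "0" * 64
        body = json.dumps(event, sort_keys=True)
        self._records.append({
            "event": event,
            "prev_hash": prev_hash,
            "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest(),
        })

    def verify(self) -> bool:
        # Recompute the hash chain; any tampering breaks the link.
        prev_hash = "0" * 64
        for record in self._records:
            body = json.dumps(record["event"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
            if record["hash"] != expected or record["prev_hash"] != prev_hash:
                return False
            prev_hash = record["hash"]
        return True


class Arbiter:
    """Consolidates findings: dedupes by (category, prompt), keeps the highest severity."""

    def __init__(self, trail: AuditTrail) -> None:
        self._trail = trail
        self._consolidated: dict[tuple[str, str], Finding] = {}

    def submit(self, finding: Finding) -> None:
        self._trail.append({"type": "finding", **finding.__dict__})
        key = (finding.category, finding.prompt)
        existing = self._consolidated.get(key)
        if existing is None or finding.severity > existing.severity:
            self._consolidated[key] = finding

    def report(self) -> list[Finding]:
        findings = sorted(self._consolidated.values(), key=lambda f: -f.severity)
        self._trail.append({"type": "report", "count": len(findings)})
        return findings


if __name__ == "__main__":
    trail = AuditTrail()
    arbiter = Arbiter(trail)
    arbiter.submit(Finding("agent-a", "prompt-injection", "Ignore previous instructions...", 4))
    arbiter.submit(Finding("agent-b", "prompt-injection", "Ignore previous instructions...", 3))
    for f in arbiter.report():
        print(f.category, f.severity, f.agent)
    print("audit trail intact:", trail.verify())
```

The hash-chained records give the "immutable audit trail" property in a deterministic way: the same sequence of findings always yields the same chain, and any retroactive edit is detected by `verify()`.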