A multi-agent AI security testing framework that orchestrates red-team analyses, consolidates findings through an arbiter, and records every step in an immutable audit ledger. A deterministic demo mode makes runs repeatable.
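The three pieces named above (arbiter consolidation, an immutable ledger, and deterministic replay) can be sketched as follows. This is a minimal illustration, not the framework's actual API: the `AuditLedger` hash-chaining scheme, the `arbiter` merge rule, and the seeded finding generation are all assumptions chosen to show the general pattern.

```python
import hashlib
import json
import random

class AuditLedger:
    """Append-only ledger sketch: each entry embeds the hash of the previous
    entry, so altering any record breaks the chain (hypothetical design)."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

def arbiter(findings: list[dict]) -> list[dict]:
    """Hypothetical consolidation rule: deduplicate findings reported by
    several agents, keeping the highest-severity report per issue."""
    merged: dict[str, dict] = {}
    for f in findings:
        key = f["issue"]
        if key not in merged or f["severity"] > merged[key]["severity"]:
            merged[key] = f
    return sorted(merged.values(), key=lambda f: -f["severity"])

# Deterministic demo mode: a fixed seed makes the simulated agent
# outputs identical on every run.
rng = random.Random(42)
findings = [
    {"agent": "injector-a", "issue": "prompt-injection", "severity": rng.randint(1, 10)},
    {"agent": "leaker", "issue": "system-prompt-leak", "severity": rng.randint(1, 10)},
    {"agent": "injector-b", "issue": "prompt-injection", "severity": rng.randint(1, 10)},
]

ledger = AuditLedger()
for finding in arbiter(findings):
    ledger.append(finding)
print(ledger.verify())  # prints True: the chain is intact
```

Because the arbiter collapses the two duplicate `prompt-injection` reports, the ledger receives two entries; tampering with either one afterwards makes `verify()` return `False`.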