3 boosters for "prompt-injection" — open source, verified from GitHub, ready to install
This project includes deepsafe-scan, a preflight security scanner for AI agent environments. When the user asks to audit, scan, or check the security of their AI agent setup, skills, or MCP servers, run the scanner. A full scan with LLM analysis is performed if OPENAI_API_KEY is set.
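A minimal sketch of the gating logic described above, assuming only the OPENAI_API_KEY convention mentioned in the listing; the scanner's actual command-line name and flags are not shown here, so the commented invocations are hypothetical.

```shell
#!/bin/sh
# Choose scan mode based on whether OPENAI_API_KEY is set in the environment.
if [ -n "${OPENAI_API_KEY}" ]; then
  echo "full scan with LLM analysis"
  # deepsafe-scan --full    # hypothetical flag, not a documented CLI
else
  echo "static scan only (no LLM analysis)"
  # deepsafe-scan           # hypothetical default invocation
fi
```

The `[ -n ... ]` test treats both an unset and an empty variable as "no key", which is usually the safer default for API-key gating.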
SkillGuard is a security reviewer for Claude/Cursor Skills that detects prompt injection, tool injection, data exfiltration, and unsafe automation risks. It is intended for developers and organizations installing or developing AI skills who need safe, policy-compliant code execution.
AgentTrust provides identity, trust verification, and secure orchestration for autonomous AI agents communicating with each other (A2A), with built-in protections against prompt injection and human-in-the-loop controls. Developers building multi-agent systems, especially those requiring security and auditability, benefit from its official A2A partnership and MCP integration.