AI Summary

This booster automates reconnaissance of LLM API endpoints to identify models, authentication methods, and configuration details for security testing. Red team operators and security researchers benefit from its structured enumeration workflows.
Install
Copy this and paste it into Claude Code, Cursor, or any AI assistant:
I want to install the "kali-ai-redteam" skill in my project. Please run this command in my terminal:

# Install skill into the correct directory (6 files)
mkdir -p .claude/skills/commands && \
curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/commands/SKILL.md "https://raw.githubusercontent.com/mayflower/kali-ai-redteam/main/.claude/commands/recon.md" && \
curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/commands/extract.md "https://raw.githubusercontent.com/mayflower/kali-ai-redteam/main/.claude/commands/extract.md" && \
curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/commands/jailbreak.md "https://raw.githubusercontent.com/mayflower/kali-ai-redteam/main/.claude/commands/jailbreak.md" && \
curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/commands/probe.md "https://raw.githubusercontent.com/mayflower/kali-ai-redteam/main/.claude/commands/probe.md" && \
curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/commands/report.md "https://raw.githubusercontent.com/mayflower/kali-ai-redteam/main/.claude/commands/report.md" && \
curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/commands/scan.md "https://raw.githubusercontent.com/mayflower/kali-ai-redteam/main/.claude/commands/scan.md"

Then restart Claude Code (or reload the window in Cursor) so the skill is picked up.
Description
Enumerate target LLM API endpoints and model information
Reconnaissance Steps
• Endpoint Discovery
  • Identify API endpoints (chat, completions, embeddings)
  • Check for OpenAPI/Swagger documentation
  • Look for health/status endpoints
• Authentication Analysis
  • Detect authentication method (API key, OAuth, session)
  • Check for exposed keys in responses or errors
  • Test rate limiting behavior
• Model Identification
  • Detect model provider (OpenAI, Anthropic, local, custom)
  • Identify model version if possible
  • Check for model switching capabilities
• Input/Output Mapping
  • Document request format (JSON structure, parameters)
  • Map response format and fields
  • Identify streaming vs batch modes
• Configuration Exposure
  • Check for debug endpoints
  • Look for configuration leaks in errors
  • Test for verbose error messages

Save all findings to /pentest/recon/ with a timestamp. Create a summary file with key findings for the next phase.
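The endpoint-discovery and authentication-analysis steps above can be sketched as a small helper. This is a minimal illustration, not part of the skill itself: the path list, the `target.example` host, and the header heuristics are assumptions chosen for the example, not a definitive fingerprint database.

```python
# Sketch of endpoint discovery for an LLM API target.
# COMMON_PATHS is an illustrative assumption; extend it per engagement.
from urllib.parse import urljoin

COMMON_PATHS = [
    "/v1/chat/completions",  # OpenAI-style chat endpoint
    "/v1/completions",       # legacy completions endpoint
    "/v1/embeddings",        # embeddings endpoint
    "/v1/models",            # model listing (helps identify provider)
    "/openapi.json",         # OpenAPI/Swagger documentation
    "/docs",                 # Swagger UI
    "/health",               # health/status endpoint
]

def candidate_endpoints(base_url: str) -> list[str]:
    """Build the list of URLs to probe for a given target base URL."""
    if not base_url.endswith("/"):
        base_url += "/"
    return [urljoin(base_url, p.lstrip("/")) for p in COMMON_PATHS]

def guess_auth_scheme(www_authenticate: str) -> str:
    """Rough classification of an HTTP WWW-Authenticate challenge header."""
    value = www_authenticate.lower()
    if value.startswith("bearer"):
        return "bearer token (API key or OAuth)"
    if value.startswith("basic"):
        return "HTTP basic auth"
    return "unknown"

# Usage (only against systems you are authorized to test):
# for url in candidate_endpoints("https://target.example"):
#     probe url, record the status code and WWW-Authenticate header,
#     and write findings under /pentest/recon/ as described above.
```

Keeping the URL construction and header classification as pure functions makes the probing loop (whatever HTTP client you wrap around it) easy to rate-limit and log per engagement rules.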