saferun

by Cocabadger

AI Summary

SafeRun provides automatic safety guardrails for AI agents by classifying shell commands as BLOCK, ASK, or ALLOW before execution, preventing dangerous operations like force pushes and recursive deletes. Developers and AI systems that execute shell commands benefit from this protection without needing manual configuration.

Description

Safety guardrails for AI agents. Classifies shell commands as BLOCK, ASK, or ALLOW before execution. Prevents dangerous operations like force pushes, recursive deletes, and credential destruction. Works automatically — no configuration needed.
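The BLOCK/ASK/ALLOW decision can be pictured with a minimal sketch. SafeRun's real rule set and API are not shown here; the patterns and the `classify` function below are illustrative assumptions, not SafeRun's implementation:

```python
import re

# Hypothetical pattern lists, for illustration only.
# SafeRun's actual rules are more extensive.
BLOCK_PATTERNS = [
    r"\bgit\s+push\s+.*--force\b",  # force push rewrites remote history
    r"\brm\s+-rf\s+/(\s|$)",        # recursive delete of the filesystem root
]
ASK_PATTERNS = [
    r"\brm\s+-r",                   # any recursive delete: confirm first
    r"\bgit\s+reset\s+--hard\b",    # discards uncommitted work
]

def classify(command: str) -> str:
    """Return BLOCK, ASK, or ALLOW for a shell command string."""
    for pattern in BLOCK_PATTERNS:
        if re.search(pattern, command):
            return "BLOCK"
    for pattern in ASK_PATTERNS:
        if re.search(pattern, command):
            return "ASK"
    return "ALLOW"
```

With rules like these, `classify("git push --force origin main")` returns `BLOCK`, `classify("rm -rf build/")` returns `ASK`, and an everyday command such as `ls -la` falls through to `ALLOW`.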

Install

# Add to your project root as SKILL.md
curl -o SKILL.md "https://raw.githubusercontent.com/Cocabadger/saferun-openclaw/main/SKILL.md"

Quality Score

C (Acceptable) — 62/100

Standard Compliance: 65
Documentation Quality: 62
Usefulness: 78
Maintenance Signal: 100
Community Signal: 0

Scored today.

GitHub Signals

Issues: 0
Updated: 20 days ago

Trust & Transparency

Open Source (MIT): source code publicly auditable

Verified Open Source: hosted on GitHub, publicly auditable

Actively Maintained: last commit 20 days ago


Works With

Claude Code