8 boosters for "guardrails": open source, verified on GitHub, ready to install
An intelligent API performance governor that autonomously optimizes system execution while preventing cost overruns and security breaches through strict guardrails. Ideal for developers managing cloud APIs and ML systems where runaway costs are a critical concern.
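The core guardrail idea here, blocking a call before it pushes cumulative spend past a cap, can be sketched in a few lines. This is a hypothetical illustration, not this project's actual API; the class name `CostGuardrail` and its methods are invented for the example.

```python
# Hypothetical cost-overrun guardrail; not the project's real interface.
class BudgetExceededError(Exception):
    """Raised when a call would push cumulative spend past the budget cap."""

class CostGuardrail:
    """Tracks cumulative API spend and blocks calls that would overrun it."""

    def __init__(self, budget_usd: float):
        self.budget_usd = budget_usd
        self.spent_usd = 0.0

    def charge(self, cost_usd: float) -> None:
        # Refuse the call entirely rather than letting it overshoot the cap.
        if self.spent_usd + cost_usd > self.budget_usd:
            raise BudgetExceededError(
                f"${cost_usd:.2f} call would exceed ${self.budget_usd:.2f} budget"
            )
        self.spent_usd += cost_usd

guard = CostGuardrail(budget_usd=1.00)
guard.charge(0.40)          # allowed, total $0.40
guard.charge(0.50)          # allowed, total $0.90
blocked = False
try:
    guard.charge(0.20)      # would total $1.10 -> blocked
except BudgetExceededError:
    blocked = True
```

The point of the pattern is that the guardrail is checked before execution, so a runaway loop of API calls fails fast instead of accruing cost.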
LC-StudyLab provides Cursor IDE rules and documentation links for LangChain v1.0, enabling developers to quickly reference core components and advanced features while coding with LangChain's full ecosystem.
"name": "ultraship", "description": "Claude Code plugin — 36 tools, 42 skills, 12 agents. Elite SEO strategy with AI traffic tracking, IndexNow, GSC-GA4 cross-reference, CTR anomalies, brand filtering, keyword intelligence, index doctor, pentest, ship, launch, grow, rescue.", "name": "Houseofmvps",
"schemaVersion": "1.0", "name": "slidev-dev-marketplace", "description": "Evidence-based presentation creation with Slidev, enforced design guardrails, and multi-platform diagrams",
Spotdb is an ephemeral data sandbox for AI workflows, providing secure, isolated database environments for agentic AI systems. It's useful for developers building AI agents and LLM applications that need safe data isolation and guardrails.
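The "ephemeral sandbox" concept can be sketched with an in-memory SQLite database that exists only for the duration of one agent task and is discarded afterward. This is a minimal illustration of the idea, assuming nothing about Spotdb's real interface.

```python
# Minimal sketch of an ephemeral data sandbox; Spotdb's actual API differs.
import sqlite3
from contextlib import contextmanager

@contextmanager
def ephemeral_db():
    """Yield an isolated in-memory database that vanishes when the task ends."""
    conn = sqlite3.connect(":memory:")
    try:
        yield conn
    finally:
        conn.close()  # sandbox and every row in it are discarded here

with ephemeral_db() as db:
    db.execute("CREATE TABLE scratch (k TEXT, v TEXT)")
    db.execute("INSERT INTO scratch VALUES ('agent', 'test-run')")
    rows = db.execute("SELECT v FROM scratch WHERE k = 'agent'").fetchall()
# rows == [('test-run',)], but the table itself no longer exists anywhere
```

The guardrail property is that the agent can read and write freely inside the sandbox without any path to production data, and cleanup is guaranteed by the context manager even if the agent's code raises.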
"description": "Docs-first, TDD-driven development workflow for Claude Code", "author": { "name": "hardness1020" }, "homepage": "https://github.com/hardness1020/VibeFlow",
Atlas Guardrails provides context packing and duplicate detection tools to help AI coding assistants manage large codebases efficiently and avoid redundant code generation. Developers working on large projects benefit from cleaner context and reduced code duplication.
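One common way to detect duplicates in a large codebase is to hash whitespace-normalized code chunks and flag collisions. The sketch below is a hypothetical illustration of that general technique, not Atlas Guardrails' actual implementation; all names in it are invented.

```python
# Hypothetical duplicate detection via hashing of normalized code chunks;
# Atlas Guardrails' real approach may differ.
import hashlib

def normalize(snippet: str) -> str:
    """Collapse whitespace so formatting differences don't hide duplicates."""
    return " ".join(snippet.split())

def find_duplicates(snippets: dict[str, str]) -> list[tuple[str, str]]:
    """Return pairs of snippet names whose normalized content is identical."""
    seen: dict[str, str] = {}
    dupes: list[tuple[str, str]] = []
    for name, code in snippets.items():
        digest = hashlib.sha256(normalize(code).encode()).hexdigest()
        if digest in seen:
            dupes.append((seen[digest], name))
        else:
            seen[digest] = name
    return dupes

snippets = {
    "utils.py:parse": "def parse(s):\n    return s.strip()",
    "helpers.py:parse": "def parse(s):  return s.strip()",  # same logic, reformatted
    "io.py:load": "def load(p):\n    return open(p).read()",
}
print(find_duplicates(snippets))  # [('utils.py:parse', 'helpers.py:parse')]
```

Feeding such duplicate reports back into an assistant's context is what lets it reuse an existing helper instead of generating a near-identical copy.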
SafeRun provides automatic safety guardrails for AI agents by classifying shell commands as BLOCK, ASK, or ALLOW before execution, preventing dangerous operations like force pushes and recursive deletes. Developers and AI systems that execute shell commands benefit from this protection without needing manual configuration.
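The BLOCK/ASK/ALLOW triage described above can be sketched as a pattern-based classifier that runs before any command executes. This is a simplified illustration with a handful of made-up rules; SafeRun's real classification is more thorough.

```python
# Simplified sketch of a BLOCK/ASK/ALLOW command classifier; SafeRun's
# actual rules are more sophisticated than these example patterns.
import re

BLOCK_PATTERNS = [
    r"\brm\s+(-\w*r\w*f|-\w*f\w*r)\b",   # recursive force delete
    r"git\s+push\s+.*--force",           # force push
]
ASK_PATTERNS = [
    r"\bgit\s+push\b",                   # plain push: confirm first
    r"\brm\b",                           # any delete: confirm first
]

def classify(command: str) -> str:
    """Triage a shell command before execution: BLOCK, ASK, or ALLOW."""
    for pat in BLOCK_PATTERNS:
        if re.search(pat, command):
            return "BLOCK"
    for pat in ASK_PATTERNS:
        if re.search(pat, command):
            return "ASK"
    return "ALLOW"

print(classify("rm -rf /tmp/build"))     # BLOCK
print(classify("git push origin main"))  # ASK
print(classify("ls -la"))                # ALLOW
```

Ordering matters: BLOCK rules are checked before ASK rules so that a dangerous variant of an otherwise-confirmable command (a force push versus a plain push) is stopped outright rather than merely prompted.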