AI Summary
An adversarial validation agent that challenges expertise claims and mental models to prevent overconfident reasoning. Useful for researchers, engineers, and decision-makers who need rigorous validation of their assumptions.
Description
An adversarial validation agent that actively tries to DISPROVE expertise claims. It prevents confident drift by challenging mental models before they auto-update.
Install
# Add AGENTS.md to your project root
curl -o AGENTS.md "https://raw.githubusercontent.com/DNYoussef/context-cascade/main/agents/foundry/expertise/expertise-adversary.md"
Quality Score
C (Acceptable): 67/100, scored today
- Standard Compliance: 72
- Documentation Quality: 65
- Usefulness: 58
- Maintenance Signal: 80
- Community Signal: 61
Trust & Transparency
- Verified Open Source (MIT): hosted on GitHub, source code publicly auditable
- Actively Maintained: last commit 1mo ago
- 20 stars, 6 forks
Works With
- Claude Code
- claude_desktop