Skill

claw-compactor

by aeromomo

AI Summary

Claw Compactor is a 6-layer token compression skill for OpenClaw agents that reduces workspace token spend by 50–97% through deterministic rules and an LLM-driven memory system called Engram. It's designed for developers building token-efficient AI agents who need automatic cost optimization at session start.

Install

Copy this and paste it into Claude Code, Cursor, or any AI assistant:

I want to install the "claw-compactor" skill in my project.

Please run this command in my terminal:
```bash
# Install skill into the correct directory
mkdir -p .claude/skills/claw-compactor && curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/claw-compactor/SKILL.md "https://raw.githubusercontent.com/aeromomo/claw-compactor/main/SKILL.md"
```

Then restart Claude Code (or reload the window in Cursor) so the skill is picked up.

Description

Claw Compactor — 6-layer token compression skill for OpenClaw agents. Cuts workspace token spend by 50–97% using deterministic rule-engines plus Engram: a real-time, LLM-driven Observational Memory system. Run at session start for automatic savings reporting.

Overview

Claw Compactor reduces token usage across the full OpenClaw workspace using 6 compression layers:

| Layer | Name | Cost | Notes |
|-------|------|------|-------|
| 1 | Rule Engine | Free | Dedup, strip filler, merge sections |
| 2 | Dictionary Encoding | Free | Auto-codebook, $XX substitution |
| 3 | Observation Compression | Free | Session JSONL → structured summaries |
| 4 | RLE Patterns | Free | Path/IP/enum shorthand |
| 5 | Compressed Context Protocol | Free | Format abbreviations |
| 6 | Engram | LLM API | Real-time Observational Memory |

Skill location: `skills/claw-compactor/`
Entry point: `scripts/mem_compress.py`
Engram CLI: `scripts/engram_cli.py`

---
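To illustrate the idea behind Layer 2 (Dictionary Encoding), here is a minimal, hypothetical sketch of auto-codebook substitution: frequent long tokens are mapped to short `$XX` codes. This is not the skill's actual implementation (which lives in `scripts/mem_compress.py`); the function names and thresholds here are illustrative assumptions.

```python
import re
from collections import Counter

def build_codebook(text: str, min_len: int = 8, min_count: int = 3) -> dict:
    """Map frequently repeated long tokens to short $XX codes (illustrative sketch)."""
    words = re.findall(r"\S+", text)
    counts = Counter(w for w in words if len(w) >= min_len)
    # Cap at 256 entries so every code is exactly two hex digits ($00..$FF),
    # which keeps decode substitutions unambiguous.
    frequent = [w for w, c in counts.most_common(256) if c >= min_count]
    return {w: f"${i:02X}" for i, w in enumerate(frequent)}

def encode(text: str, codebook: dict) -> str:
    """Replace each codebook word with its short code."""
    for word, code in codebook.items():
        text = text.replace(word, code)
    return text

def decode(text: str, codebook: dict) -> str:
    """Reverse the substitution, restoring the original words."""
    for word, code in codebook.items():
        text = text.replace(code, word)
    return text
```

Because the codebook is derived from the text itself, the encoded form plus the codebook round-trips losslessly while shortening every repeated occurrence.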

Prerequisites

```bash
export ANTHROPIC_API_KEY=sk-ant-...  # Preferred
```

Auto Mode (Recommended — Run at Session Start)

```bash
python3 skills/claw-compactor/scripts/mem_compress.py <workspace> auto
```

Automatically compresses all workspace files, tracks token counts between runs, and reports savings. Run this at the start of every session.

---
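The between-run savings reporting could work along these lines: persist the previous run's token count and report the percentage change on the next run. This is a hypothetical sketch, not the skill's actual code; the state-file name `.compactor_state.json` is an assumption for illustration.

```python
import json
from pathlib import Path

def report_savings(workspace: str, current_tokens: int,
                   state_file: str = ".compactor_state.json") -> str:
    """Compare this run's token count with the previous run and report savings.

    Hypothetical sketch: the real skill tracks this inside mem_compress.py.
    """
    path = Path(workspace) / state_file
    previous = None
    if path.exists():
        previous = json.loads(path.read_text()).get("tokens")
    # Record the current count for the next session's comparison.
    path.write_text(json.dumps({"tokens": current_tokens}))
    if previous is None or previous == 0:
        return f"baseline recorded: {current_tokens} tokens"
    saved = 100 * (previous - current_tokens) / previous
    return f"{previous} -> {current_tokens} tokens ({saved:.1f}% saved)"
```

Running this at the start of each session yields a baseline on the first run and a percentage-saved figure on every subsequent run.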

Full Pipeline (All Layers)

```bash
python3 scripts/mem_compress.py <workspace> full
```

Runs all 5 deterministic layers in optimal order. Typically yields 50%+ combined savings.


Health Signals

- Maintenance: Committed 1mo ago (Active)
- Adoption: 1.3k ★ on GitHub (Popular)
- Docs: README + description (Well-documented)

GitHub Signals

- Stars: 1.3k
- Forks: 110
- Issues: 3
- Updated: 1mo ago
- License: MIT


Works With

Claude Code