ByteRover CLI (brv) - Interactive REPL with React/Ink TUI
Stack: oclif v4, TypeScript (ES2022, Node16 modules, strict), React/Ink (TUI), Zustand, axios, socket.io, Mocha + Chai + Sinon + Nock
Install
Copy this and paste it into Claude Code, Cursor, or any AI assistant:
I want to add the "byterover-cli — Copilot Instructions" prompt rules to my project.
Repository: https://github.com/campfirein/byterover-cli
Please read the repo to find the rules/prompt file, then:
1. Download it to the correct location (.cursorrules, .windsurfrules, .github/prompts/, or project root — based on the file type)
2. If there's an existing rules file, merge the new rules in rather than overwriting
3. Confirm what was added
Description
ByteRover CLI (brv) - The portable memory layer for autonomous coding agents (formerly Cipher)
CLAUDE.md
ByteRover CLI (brv) - Interactive REPL with React/Ink TUI
Commands
```bash
npm run build                                        # Compile to dist/
npm run dev                                          # Kill daemon + build + run dev mode
npm test                                             # All tests
npx mocha --forbid-only "test/path/to/file.test.ts"  # Single test
npm run lint                                         # ESLint
npm run typecheck                                    # TypeScript type checking
./bin/dev.js [command]                               # Dev mode (ts-node)
./bin/run.js [command]                               # Prod mode
```

Test dirs: test/commands/, test/unit/, test/integration/, test/hooks/, test/learning/, test/helpers/, test/shared/

Note: Run tests from project root, not within test directories
Development Standards
TypeScript:
• Avoid `as Type` assertions - use type guards or proper typing instead
• Avoid `any` type - use `unknown` with type narrowing or proper generics
• Functions with >3 parameters must use object parameters
• Prefer `type` for data-only shapes (DTOs, payloads, configs); prefer `interface` for behavioral contracts with method signatures (services, repositories, strategies)

Testing (Strict TDD — MANDATORY):
• You MUST follow Test-Driven Development. This is non-negotiable.
• Step 1 — Write failing tests FIRST: Before writing ANY implementation code, write or update tests that describe the expected behavior. Do NOT write implementation and tests together or in reverse order.
• Step 2 — Run tests to confirm they fail: Execute the relevant test file to verify the new tests fail for the right reason (missing implementation, not a syntax error).
• Step 3 — Write the minimal implementation: Write only enough code to make the failing tests pass. Do not add untested behavior.
• Step 4 — Run tests to confirm they pass: Execute tests again to verify all tests pass.
• Step 5 — Refactor if needed: Clean up while keeping tests green.
• If you catch yourself writing implementation code without a failing test, STOP and write the test first.
• 50% coverage minimum; critical paths must be covered.
• Suppress console logging in tests to keep output clean.
• Unit tests must run fast and run completely in memory. Proper stubbing and mocking must be implemented.

Feature Development (Outside-In Approach):
• Start from the consumer (oclif command, REPL command, or TUI component) - understand what it needs
• Define the minimal interface - only what the consumer actually requires
• Implement the service - fulfill the interface contract
• Extract entities only if needed - when shared structure emerges across multiple consumers
• Avoid designing in isolation - always have a concrete consumer driving requirements
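The first two TypeScript rules above can be sketched together: a type guard narrows `unknown` input instead of a bare `as` assertion, and a function with more than three parameters takes a single object. `DaemonStatus` and `startDaemon` are hypothetical names invented for illustration, not part of the brv codebase.

```typescript
// Hypothetical payload shape (data-only, so `type` rather than `interface`).
type DaemonStatus = { pid: number; uptimeMs: number };

// Type guard: callers pass `unknown` and get a checked narrowing,
// instead of asserting `value as DaemonStatus` and hoping.
function isDaemonStatus(value: unknown): value is DaemonStatus {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>; // narrow cast confined to the guard
  return typeof v.pid === "number" && typeof v.uptimeMs === "number";
}

// More than three parameters collapsed into one object parameter,
// so call sites stay readable and argument order can't be confused.
type StartOptions = { host: string; port: number; verbose: boolean; retries: number };
function startDaemon({ host, port, verbose, retries }: StartOptions): string {
  return `${host}:${port} verbose=${verbose} retries=${retries}`;
}
```

A call site then reads `startDaemon({ host: "localhost", port: 8080, verbose: false, retries: 2 })`, which documents every argument at the point of use.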
Source Layout (`src/`)
• agent/ — LLM agent: core/ (interfaces/domain), infra/ (22 modules), resources/ (prompt YAML, tool .txt)
• server/ — Daemon infrastructure: config/, core/ (domain/interfaces), infra/ (27 modules), utils/
• shared/ — Cross-module: constants, types, transport events, utils
• tui/ — React/Ink TUI: app (router/pages), components, features (20 modules), hooks, lib, providers, stores
• oclif/ — Commands, hooks, lib (daemon-client, JSON response utils)

Import boundary (ESLint-enforced): tui/ must not import from server/, agent/, or oclif/. Use transport events or shared/.
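One common way to enforce a boundary like this is ESLint's built-in `no-restricted-imports` rule scoped to the `tui/` tree. The repo's actual ESLint setup is not shown here, so the config below is a minimal sketch under assumed file locations and glob patterns, not the project's real configuration.

```typescript
// eslint.config.ts — illustrative flat-config fragment (assumed, not from the repo).
// Restricts tui/ source files from importing server/, agent/, or oclif/ modules.
export default [
  {
    files: ["src/tui/**/*.{ts,tsx}"],
    rules: {
      "no-restricted-imports": [
        "error",
        {
          patterns: [
            {
              group: ["**/server/**", "**/agent/**", "**/oclif/**"],
              message: "tui/ must not import server/, agent/, or oclif/. Use transport events or shared/.",
            },
          ],
        },
      ],
    },
  },
];
```

With this shape, a violation fails `npm run lint` with the custom message, which is usually clearer than a review comment after the fact.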
Works With
Any AI assistant that accepts custom rules or system prompts