
Forge — Execution Agents & Provider Layer

by rtuosto

AI Summary

Forge provides a framework and provider abstraction layer for building execution agents that autonomously write, review, test, debug, and deploy code across multiple LLM platforms. Developers building AI-driven development tools, CI/CD automation, and code generation systems will benefit from this structured agent foundation.

Install

Copy this and paste it into Claude Code, Cursor, or any AI assistant:

I want to set up the "Forge — Execution Agents & Provider Layer" agent in my project.

Please run this command in my terminal:

```shell
# Add AGENTS.md to your project root
curl --retry 3 --retry-delay 2 --retry-all-errors -o AGENTS.md "https://raw.githubusercontent.com/rtuosto/forge/main/specs/EXECUTION_AGENTS.md"
```

Then explain what the agent does and how to invoke it.

Description

> Execution agents write code, review it, test it, debug it, and deploy it. This document covers the provider abstraction, agent framework, prompts, tool loop, and agent configuration.
>
> Cross-references: DATA_MODELS.md for types | INFRASTRUCTURE.md for sandbox and tools | ORCHESTRATION.md for how agents are scheduled

Core Interface

```typescript
// packages/types/src/provider.ts
// ⚠️ Canonical definition: DATA_MODELS.md §5 — repeated here for context.

export interface LLMProvider {
  readonly name: string; // 'anthropic', 'openai', 'ollama'

  complete(request: CompletionRequest): Promise<CompletionResponse>;
  countTokens(content: string | Message[]): Promise<number>;

  supportsTools(): boolean;
  supportsStreaming(): boolean;
  supportsImages(): boolean;
  getModelCapabilities(model: string): ModelCapabilities;
}

export interface CompletionRequest {
  model: string;
  systemPrompt: string;
  messages: Message[];
  tools?: ToolDefinition[];
  temperature?: number;      // Default: 0 for deterministic, 0.3 for creative
  maxTokens?: number;
  stopSequences?: string[];
  responseFormat?: 'text' | 'json';
}

export interface CompletionResponse {
  id: string;                // Provider's response ID
  content: ContentBlock[];   // Text and/or tool calls
  stopReason: 'end_turn' | 'tool_use' | 'max_tokens' | 'stop_sequence';
  usage: {
    inputTokens: number;
    outputTokens: number;
  };
  model: string;             // Actual model used (may differ from request if aliased)
  latencyMs: number;
}

export interface Message {
  role: 'user' | 'assistant' | 'tool_result';
  content: string | ContentBlock[];
}

export interface ContentBlock {
  type: 'text' | 'tool_use' | 'tool_result' | 'image';
  // For text:
  text?: string;
  // For tool_use:
  toolName?: string;
  toolInput?: Record<string, unknown>;
  toolUseId?: string;
  // For tool_result:
  toolResultId?: string;
  toolOutput?: string;
  isError?: boolean;
  // For image:
  mediaType?: string;
  base64Data?: string;
}
```
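
The description above mentions a tool loop driving these types. As an illustration only (this is a hedged sketch, not Forge's documented implementation; `runToolLoop` and its parameters are assumed names), here is how an agent turn could run against `CompletionResponse.stopReason`, executing tool calls and feeding `tool_result` blocks back until the model stops:

```typescript
// Hypothetical tool-loop sketch over the provider types above.
// Types are trimmed to only the fields the loop touches.

type StopReason = 'end_turn' | 'tool_use' | 'max_tokens' | 'stop_sequence';

interface ContentBlock {
  type: 'text' | 'tool_use' | 'tool_result';
  text?: string;
  toolName?: string;
  toolInput?: Record<string, unknown>;
  toolUseId?: string;
  toolResultId?: string;
  toolOutput?: string;
  isError?: boolean;
}

interface Message {
  role: 'user' | 'assistant' | 'tool_result';
  content: string | ContentBlock[];
}

interface CompletionResponse {
  content: ContentBlock[];
  stopReason: StopReason;
}

type ToolFn = (input: Record<string, unknown>) => Promise<string>;

async function runToolLoop(
  complete: (messages: Message[]) => Promise<CompletionResponse>,
  tools: Record<string, ToolFn>,
  messages: Message[],
  maxTurns = 10, // Assumed safety cap; not a documented Forge setting.
): Promise<string> {
  for (let turn = 0; turn < maxTurns; turn++) {
    const response = await complete(messages);
    messages.push({ role: 'assistant', content: response.content });

    if (response.stopReason !== 'tool_use') {
      // Terminal turn: return the concatenated text blocks.
      return response.content
        .filter((b) => b.type === 'text')
        .map((b) => b.text ?? '')
        .join('');
    }

    // Execute every tool call and feed results back as tool_result blocks.
    const results: ContentBlock[] = [];
    for (const block of response.content) {
      if (block.type !== 'tool_use') continue;
      const fn = tools[block.toolName ?? ''];
      try {
        const output = fn
          ? await fn(block.toolInput ?? {})
          : `Unknown tool: ${block.toolName}`;
        results.push({
          type: 'tool_result',
          toolResultId: block.toolUseId,
          toolOutput: output,
          isError: !fn,
        });
      } catch (err) {
        results.push({
          type: 'tool_result',
          toolResultId: block.toolUseId,
          toolOutput: String(err),
          isError: true,
        });
      }
    }
    messages.push({ role: 'tool_result', content: results });
  }
  throw new Error('Tool loop exceeded maxTurns without end_turn');
}
```

The cap on iterations guards against a model that keeps requesting tools indefinitely; errors are returned to the model as `isError: true` results rather than aborting the loop, so it can self-correct.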

Model Registry

Stored in `config/models.yaml`. The system uses this registry to make model selection decisions.

```yaml
# config/models.yaml
providers:
  anthropic:
    baseUrl: "https://api.anthropic.com"
    authEnvVar: "ANTHROPIC_API_KEY"
    models:
      claude-sonnet-4-5-20250929:
        displayName: "Claude Sonnet 4.5"
        contextWindow: 200000
        maxOutputTokens: 16384
        supportsTools: true
        supportsImages: true
        supportsStreaming: true
        pricing:
          inputPer1k: 0.003
          outputPer1k: 0.015
        capabilities:
          coding: 9 # 1-10 rating
          reasoning: 9
          instruction: 9
          speed: 7
        tier: "standard" # standard | premium | economy
      claude-haiku-3-5-20241022:
        displayName: "Claude Haiku 3.5"
        contextWindow: 200000
        maxOutputTokens: 8192
        supportsTools: true
        supportsImages: true
        supportsStreaming: true
        pricing:
          inputPer1k: 0.0008
          outputPer1k: 0.004
        capabilities:
          coding: 7
          reasoning: 7
          instruction: 8
          speed: 10
        tier: "economy"
  openai:
    baseUrl: "https://api.openai.com/v1"
    authEnvVar: "OPENAI_API_KEY"
    models:
      gpt-4o:
        displayName: "GPT-4o"
        contextWindow: 128000
        maxOutputTokens: 16384
        supportsTools: true
        supportsImages: true
        supportsStreaming: true
        pricing:
          inputPer1k: 0.0025
          outputPer1k: 0.01
        capabilities:
          coding: 8
          reasoning: 8
          instruction: 8
          speed: 8
        tier: "standard"
  ollama:
    baseUrl: "http://localhost:11434"
    authEnvVar: null
    models:
      "codellama:34b":   # quoted: key contains a colon
        displayName: "CodeLlama 34B"
        contextWindow: 16384
        maxOutputTokens: 4096
        supportsTools: false
        supportsImages: false
        supportsStreaming: true
        pricing:
          inputPer1k: 0
          outputPer1k: 0
        capabilities:
          coding: 6
          reasoning: 5
          instruction: 5
          speed: 6
        tier: "free"
```
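
To make "model selection decisions" concrete, here is a hedged sketch of a selector over the registry data above. `selectModel`, `ModelEntry`, and the blended-cost weighting are all assumptions for illustration, not part of Forge's documented API; only the field names and values come from `config/models.yaml`:

```typescript
// Hypothetical model-selection helper over the models.yaml registry.
// Picks the cheapest model meeting a minimum coding rating and,
// optionally, tool support.

interface ModelEntry {
  provider: string;
  model: string;
  supportsTools: boolean;
  pricing: { inputPer1k: number; outputPer1k: number };
  capabilities: { coding: number; reasoning: number; instruction: number; speed: number };
}

function selectModel(
  models: ModelEntry[],
  opts: { minCoding: number; needsTools: boolean },
): ModelEntry | undefined {
  return models
    .filter((m) => m.capabilities.coding >= opts.minCoding)
    .filter((m) => !opts.needsTools || m.supportsTools)
    // Cheapest first by blended per-1k price; the 3:1 output:input
    // weighting is an assumption, not a Forge convention.
    .sort(
      (a, b) =>
        a.pricing.inputPer1k + 3 * a.pricing.outputPer1k -
        (b.pricing.inputPer1k + 3 * b.pricing.outputPer1k),
    )[0]; // undefined when nothing qualifies
}
```

With the registry values above, requiring `coding >= 8` plus tools would select GPT-4o over Claude Sonnet 4.5 on price, while relaxing both constraints would fall through to the free local CodeLlama entry.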

Health Signals

- Maintenance: committed 1mo ago (active)
- Adoption: under 100 stars (0 ★, niche)
- Docs: README + description (well-documented)

GitHub Signals

- Issues: 0
- Updated: 1mo ago
- License: none


Works With

- Claude Code
- Claude.ai