
para-zero — Copilot Instructions

by SecScholar


Install

Copy this and paste it into Claude Code, Cursor, or any AI assistant:

I want to add the "para-zero — Copilot Instructions" prompt rules to my project.
Repository: https://github.com/SecScholar/para-zero

Please read the repo to find the rules/prompt file, then:
1. Download it to the correct location (.cursorrules, .windsurfrules, .github/prompts/, or project root — based on the file type)
2. If there's an existing rules file, merge the new rules in rather than overwriting
3. Confirm what was added

Description

Autonomous Zero-Day Synthesis Engine. [Internal Use Only]

Architecture Overview

Three-Layer Pipeline:
• Intelligence (pulse_monitor.py): Aggregates CVE data from NVD/RSS feeds → data/research_queue/
• Synthesis (core/template_gen.py): LLM generates verification code → validates with AST firewall → deploys to modules/
• Execution (engine.py): Runs verification modules on test traffic; logs findings to findings.json

Critical Data Flow:

```
CVE JSON (research_queue/)
    ↓ [TemplateSynthesizer reads]
    ↓ [LLMClient generates code]
    ↓ [ASTValidator firewall checks]
    ├→ SAFE:   modules/cve_XXXX.py  [hot-loaded by engine.py]
    └→ UNSAFE: data/quarantine/     [rejected]
```
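A minimal sketch of how the AST firewall step might work, assuming a deny-list approach (the real ASTValidator in core/ may differ; the function name and deny list here are illustrative): parse the generated source, walk the tree, and reject any module that imports or directly calls a forbidden name.

```python
import ast

# Deny list mirroring the FORBIDDEN names from the coding prompt (illustrative subset).
FORBIDDEN_NAMES = {"os", "sys", "subprocess", "eval", "exec"}

def is_safe(source: str) -> bool:
    """Return True only if the module uses no forbidden imports or calls."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False  # unparseable code goes to quarantine, not modules/
    for node in ast.walk(tree):
        # Reject `import os`, `import subprocess as sp`, etc.
        if isinstance(node, ast.Import):
            if any(alias.name.split(".")[0] in FORBIDDEN_NAMES for alias in node.names):
                return False
        # Reject `from os import path` style imports.
        if isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] in FORBIDDEN_NAMES:
                return False
        # Reject direct calls such as eval(...) or exec(...).
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in FORBIDDEN_NAMES:
                return False
    return True

print(is_safe("import requests\nx = 1"))  # benign module → True
print(is_safe("import subprocess"))       # forbidden import → False
print(is_safe("eval('1 + 1')"))           # forbidden call → False
```

A production validator would also need to catch indirect access (e.g. `__import__`, attribute lookups), which is why the quarantine path exists for anything rejected.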

Copilot Instructions for Para-Zero

Para-Zero is a Dynamic Application Security Testing (DAST) research framework that automates vulnerability verification through AI-generated code. This document gives AI agents the context they need to be immediately productive in the codebase.

Multi-Provider LLM Architecture (NEW)

Para-Zero now supports local-first (Ollama) + cloud-fallback LLM providers.

Provider Hierarchy:
• Primary: Ollama (local, OpenAI-compatible, http://localhost:11434/v1)
• Fallback: OpenAI (GPT-4), Anthropic (Claude)

Task-Aware Model Selection:
• task_type="reasoning" → uses MODEL_REASONING (default: llama3)
• task_type="coding" → uses MODEL_CODING (default: deepseek-coder-v2)

Configuration: all settings live in config.py and are driven by environment variables:

```python
LLM_PROVIDER = os.getenv("LLM_PROVIDER", "ollama")
OLLAMA_BASE_URL = "http://localhost:11434/v1"
MODEL_REASONING = "llama3"
MODEL_CODING = "deepseek-coder-v2"
```

Academy Mode (new): train.py ingests CTF writeups and local knowledge:

```bash
python train.py --url "https://ctf-writeup.com" --tags "RCE" "Spring"
python train.py --file research.md --tags "CVE-2024-1234"
```

It extracts the content, sanitizes it down to 6,000 words, and saves it to data/research_queue/ with an ACADEMY_MODE tag.
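The task-aware selection described above can be sketched as a small helper (illustrative only; the actual config.py/LLMClient wiring may differ, and `select_model` is a hypothetical name):

```python
import os

# Defaults mirror the config.py snippet; all are overridable via environment variables.
LLM_PROVIDER = os.getenv("LLM_PROVIDER", "ollama")
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434/v1")
MODEL_REASONING = os.getenv("MODEL_REASONING", "llama3")
MODEL_CODING = os.getenv("MODEL_CODING", "deepseek-coder-v2")

def select_model(task_type: str) -> str:
    """Map a task type to the configured model name."""
    models = {
        "reasoning": MODEL_REASONING,  # CTF/knowledge analysis
        "coding": MODEL_CODING,        # verification module synthesis
    }
    try:
        return models[task_type]
    except KeyError:
        raise ValueError(f"unknown task_type: {task_type!r}")

print(select_model("reasoning"))  # default: llama3
print(select_model("coding"))     # default: deepseek-coder-v2
```

Because Ollama exposes an OpenAI-compatible endpoint, the same client code can talk to the local primary and the cloud fallbacks by swapping only the base URL and model name.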

System Prompts (Task-Specific)

Reasoning Prompt (for CTF/knowledge analysis):
• Extracts key vulnerability concepts
• Summarizes exploit techniques educationally
• Focuses on defensive lessons

Coding Prompt (for verification module synthesis):

```
"Inherit from BaseVerifier. Implement verify(target_url, session).
Return True if vulnerable, False otherwise. Non-destructive only.
FORBIDDEN: os, sys, subprocess, eval, exec, open(write), while True."
```
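The coding-prompt contract implies a verifier interface roughly like the following. This is a hypothetical sketch of the shape the prompt enforces, not the project's actual BaseVerifier; the placeholder subclass deliberately contains no probe logic.

```python
from abc import ABC, abstractmethod

class BaseVerifier(ABC):
    """Contract from the coding prompt: non-destructive checks, boolean result."""

    @abstractmethod
    def verify(self, target_url: str, session) -> bool:
        """Return True if the target appears vulnerable, False otherwise."""

class ExampleVerifier(BaseVerifier):
    """Trivial stand-in for a generated module (placeholder logic only)."""

    def verify(self, target_url: str, session) -> bool:
        # A real generated module would send a benign probe via `session`
        # and inspect the response; this placeholder always reports clean.
        return False

v = ExampleVerifier()
print(v.verify("http://example.test", session=None))  # False
```

Keeping the interface this narrow is what makes the AST firewall tractable: every generated module has one entry point and one boolean outcome to audit.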



