
RAG-prod — System Prompt

by MirxaWaqarBaig

AI Summary

A lightweight, production-ready RAG system prompt that combines contextual chunking with vector retrieval via Chroma, aimed at developers building AI-enhanced code editors and IDEs. Developers working with Claude, ChatGPT, or Cursor who need retrieval-augmented generation without heavy infrastructure get a ready-to-deploy foundation.

Install

Copy this and paste it into Claude Code, Cursor, or any AI assistant:

I want to add the "RAG-prod — System Prompt" prompt rules to my project.
Repository: https://github.com/MirxaWaqarBaig/RAG-prod

Please read the repo to find the rules/prompt file, then:
1. Download it to the correct location (.cursorrules, .windsurfrules, .github/prompts/, or project root — based on the file type)
2. If there's an existing rules file, merge the new rules in rather than overwriting
3. Confirm what was added

Description

System Prompt RAG (Prompt-Only)

A trimmed-down version of the production Graph RAG server focused on contextual chunking, Chroma vector retrieval, and a unified system-prompt generation path. Neo4j knowledge-graph features and SQL/business-data fusion are deliberately removed to keep the stack lightweight.
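The repo's actual chunking logic lives in optimized_chunking.py and isn't shown here; as a hedged sketch of the general "contextual chunking" idea, assuming a simple approach that prepends a document-level context header to each overlapping chunk before embedding (function and parameter names below are illustrative, not the repo's API):

```python
def contextual_chunks(doc_title, text, chunk_size=200, overlap=50):
    """Split `text` into overlapping chunks, each prefixed with document context.

    Prepending a context header lets the vector retriever match on
    document-level cues (e.g. the title) as well as the chunk body.
    """
    chunks = []
    step = chunk_size - overlap
    for start in range(0, max(len(text), 1), step):
        piece = text[start:start + chunk_size]
        if not piece:
            break
        # Prefix each chunk with a short document-context header.
        chunks.append(f"[Document: {doc_title}] {piece}")
        if start + chunk_size >= len(text):
            break
    return chunks
```

The overlap keeps sentences that straddle a chunk boundary retrievable from both sides; the real pipeline may use semantic or LLM-assisted boundaries instead of fixed character windows.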

Contents

• core.py, core_hybrid.py, cache.py: retrieval helpers, hybrid fusion, and an optional PostgreSQL cache manager.
• optimized_chunking.py, local_qwen_llm.py: contextual chunking pipeline and optional Qwen enhancer.
• build_chroma_only.py: CLI to rebuild contextual chunks and refresh the Chroma index.
• system_rag_service.py, system_rag_server.py: system-prompt RAG service and ZMQ entrypoint.
• rag_system_prompt.md: system prompt used for both concise and detailed responses.
• input/, input_chunked/, chromadb/, cache/: document sources, generated chunks, vector store, and auxiliary cache data.
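The fusion method inside core_hybrid.py isn't documented here. Reciprocal rank fusion (RRF) is one common way a hybrid retriever combines ranked lists from multiple sources (e.g. vector and keyword search); the sketch below assumes that technique and uses illustrative names, not the repo's actual API:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked result lists into one combined ranking.

    `rankings` is a list of ranked lists of document ids (best first).
    Each document scores sum(1 / (k + rank)) over the lists it appears in;
    k=60 is the conventional smoothing constant from the RRF literature.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)
```

RRF needs only ranks, not comparable scores, which is why it is a popular choice when fusing retrievers whose raw similarity scales differ.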

Getting Started

• Install dependencies: pip install -r requirements.txt
• Populate input/ with .txt documents.
• Run python build_chroma_only.py to generate contextual chunks and build the Chroma index.
• Launch the server: python system_rag_server.py serve

Environment variables (e.g. LLM credentials) go in .env; copy .env.example to .env and fill in the values. The PostgreSQL cache is optional: if the connection fails, the server simply runs without caching.
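The steps above can be run end to end as a short shell session (a sketch assuming the repo's defaults; adjust paths and credentials to your setup):

```shell
# Install dependencies
pip install -r requirements.txt

# Configure credentials: copy the template, then edit .env
cp .env.example .env

# Add .txt source documents, then build contextual chunks and the Chroma index
mkdir -p input
python build_chroma_only.py

# Start the ZMQ server
python system_rag_server.py serve
```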

Discussion


Health Signals

Maintenance: committed 6 months ago (stale)
Adoption: under 100 stars (0 ★, niche)
Docs: missing or thin (undocumented)

GitHub Signals

Issues: 0
Updated: 6 months ago
License: none


Works With

Any AI assistant that accepts custom rules or system prompts

Claude
ChatGPT
Cursor
Windsurf
Copilot
+ more