AI Summary
ArmBench-LLM is a system prompt for benchmarking large language models on Armenian character-to-numeric matching tasks. It is designed for developers evaluating LLM performance across multiple coding platforms.
Install
Copy this and paste it into Claude Code, Cursor, or any AI assistant:
I want to add the "ArmBench-LLM — System Prompt" prompt rules to my project.
Repository: https://github.com/Metricam/ArmBench-LLM
Please read the repo to find the rules/prompt file, then:
1. Download it to the correct location (.cursorrules, .windsurfrules, .github/prompts/, or project root — based on the file type)
2. If there's an existing rules file, merge the new rules in rather than overwriting
3. Confirm what was added
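If you prefer to install without an AI assistant, the merge step above (step 2) can be sketched in a few lines. This is a minimal sketch, not part of the ArmBench-LLM repo: `merge_rules` is a hypothetical helper, and the target filename depends on which tool you use (e.g. `.cursorrules` for Cursor).

```python
from pathlib import Path

def merge_rules(target: Path, new_rules: str) -> str:
    """Append new prompt rules to an existing rules file instead of
    overwriting it; create the file if it does not exist yet.
    Hypothetical helper illustrating step 2 of the install prompt."""
    existing = target.read_text() if target.exists() else ""
    if new_rules in existing:
        return existing  # rules already present; leave the file unchanged
    merged = (existing.rstrip() + "\n\n" + new_rules + "\n") if existing else new_rules + "\n"
    target.write_text(merged)
    return merged
```

Calling it a second time with the same rules is a no-op, so re-running an install script will not duplicate the block.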
Description
A comprehensive Armenian model evaluation framework for benchmarking large language models (LLMs).
Health Signals
Maintenance: Committed 1y ago (Dead)
Adoption: Under 100 stars (6 ★, Niche)
Docs: README + description (Well-documented)
License: No License
Works With
Any AI assistant that accepts custom rules or system prompts
Claude
ChatGPT
Cursor
Windsurf
Copilot
+ more