AI Summary
ArmBench-LLM is a system prompt for benchmarking large language models using Armenian character-to-numeric matching tasks. It is designed for developers evaluating LLM performance across multiple coding platforms.
Description
A comprehensive Armenian model evaluation framework for benchmarking large language models (LLMs).
Install
# Download the system prompt
curl -o SYSTEM_PROMPT.md "https://raw.githubusercontent.com/Metricam/ArmBench-LLM/main/eval_datasets/armenian_history/task4_system_prompt.md"
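The downloaded file is plain Markdown text, so it can be loaded and passed as the system message of any chat-style LLM API. A minimal sketch, assuming the SYSTEM_PROMPT.md path from the command above and a generic role-tagged message format (no specific provider SDK), with a placeholder prompt so it runs even before the download:

```python
import json
from pathlib import Path

# Path assumed from the install step above; adjust if you saved elsewhere.
prompt_path = Path("SYSTEM_PROMPT.md")

# Fall back to a placeholder so this sketch runs before the file is downloaded.
system_prompt = (
    prompt_path.read_text(encoding="utf-8")
    if prompt_path.exists()
    else "You are an evaluator for Armenian benchmark tasks."  # placeholder only
)

# Build a provider-agnostic chat payload: most LLM chat APIs accept a list of
# role-tagged messages with the system prompt first.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Run benchmark task 4."},  # hypothetical task request
]

print(json.dumps(messages, indent=2))
```

The same payload shape can then be handed to whichever client library your platform uses.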
Quality Score
Grade: D (Below Average), 55/100
Standard Compliance: 72
Documentation Quality: 65
Usefulness: 48
Maintenance Signal: 40
Community Signal: 31
Trust & Transparency
No License Detected
Review source code before installing
Verified Open Source
Hosted on GitHub — publicly auditable
Maintained
Last commit 10 months ago
6 stars
0 forks
Works With
Claude Code
Claude Desktop
Cursor
Windsurf
ChatGPT