AI Summary
ArmBench-LLM is a system-prompt framework for evaluating large language models (LLMs) on Armenian-language tasks through structured multiple-choice questions. It is designed for developers and AI researchers who need a standardized benchmark that works across popular coding assistants and chat platforms.
Description
A comprehensive Armenian model evaluation framework for benchmarking large language models (LLMs).
Install
# Download the system prompt
curl -o SYSTEM_PROMPT.md "https://raw.githubusercontent.com/Metricam/ArmBench-LLM/main/eval_datasets/mathematics/task1_system_prompt.md"
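Once the system prompt is downloaded, it can be paired with a benchmark question in a chat-completion-style request. A minimal sketch, assuming an OpenAI-style message format; the helper name `build_eval_messages`, the placeholder prompt text, and the sample question are illustrative assumptions, not part of ArmBench-LLM itself:

```python
# Hypothetical sketch: wrap one multiple-choice item together with the
# downloaded system prompt into a chat-style message list.
def build_eval_messages(system_prompt: str, question: str,
                        choices: list[str]) -> list[dict]:
    """Return a chat-completion-style message list for one MCQ item."""
    # Label choices A), B), C), ... as is typical for MCQ benchmarks.
    lettered = "\n".join(f"{chr(65 + i)}) {c}" for i, c in enumerate(choices))
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"{question}\n{lettered}"},
    ]

if __name__ == "__main__":
    # In practice the prompt would be read from the file fetched above:
    # system_prompt = open("SYSTEM_PROMPT.md", encoding="utf-8").read()
    system_prompt = "Answer with a single letter."  # placeholder assumption
    msgs = build_eval_messages(system_prompt, "2 + 2 = ?", ["3", "4", "5"])
    print(msgs[1]["content"])
```

The resulting list can be passed to whichever model client you are benchmarking; scoring then reduces to comparing the model's single-letter answer against the gold label.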
Quality Score
D (Below Average) — 59/100, scored today
- Standard Compliance: 72
- Documentation Quality: 68
- Usefulness: 65
- Maintenance Signal: 40
- Community Signal: 31
Trust & Transparency
- No license detected: review the source code before installing
- Verified open source: hosted on GitHub, publicly auditable
- Maintained: last commit 10 months ago
- 6 stars, 0 forks
Works With
Claude Code
Claude Desktop
Cursor
Windsurf
ChatGPT