55 points by mrqjr | 3 comments

I recently built a small open-source tool to benchmark different LLM API endpoints — including OpenAI, Claude, and self-hosted models (like llama.cpp).

It runs a configurable number of test requests and reports two key metrics:

• First-token latency (ms): how long it takes for the first token to appear
• Output speed (tokens/sec): overall output fluency
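For context, here is a minimal sketch of how these two metrics can be measured against any streaming OpenAI-compatible endpoint. The URL, model name, and chunk-counting approach are illustrative placeholders, not code taken from the tool itself.

    # Rough measurement of first-token latency and tokens/sec for a
    # streaming OpenAI-compatible chat endpoint. Endpoint, key, and model
    # are placeholders.
    import json
    import os
    import time

    import requests

    API_URL = "https://api.openai.com/v1/chat/completions"  # any compatible endpoint
    API_KEY = os.environ.get("OPENAI_API_KEY", "")
    MODEL = "gpt-4o-mini"  # placeholder model name

    def benchmark_once(prompt: str) -> dict:
        headers = {"Authorization": f"Bearer {API_KEY}"}
        payload = {
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "stream": True,
        }
        start = time.perf_counter()
        first_token_at = None
        chunks = 0

        with requests.post(API_URL, headers=headers, json=payload,
                           stream=True, timeout=120) as resp:
            resp.raise_for_status()
            for line in resp.iter_lines():
                if not line or not line.startswith(b"data: "):
                    continue
                data = line[len(b"data: "):]
                if data == b"[DONE]":
                    break
                delta = json.loads(data)["choices"][0].get("delta", {}).get("content")
                if delta:
                    if first_token_at is None:
                        first_token_at = time.perf_counter()
                    chunks += 1  # one streamed chunk is roughly one token

        end = time.perf_counter()
        return {
            "first_token_latency_ms": (first_token_at - start) * 1000 if first_token_at else None,
            "output_tokens_per_sec": chunks / (end - first_token_at)
            if first_token_at and end > first_token_at else None,
        }

    if __name__ == "__main__":
        print(benchmark_once("Write one sentence about benchmarking."))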

Demo: https://llmapitest.com/
Code: https://github.com/qjr87/llm-api-test

The goal is to provide a simple, visual, and reproducible way to evaluate performance across different LLM providers, including the growing number of third-party “proxy” or “cheap LLM API” services.

It supports:

• OpenAI-compatible APIs (official + proxies)
• Claude (via Anthropic)
• Local endpoints (custom/self-hosted)

You can also self-host it with docker-compose. The config is clean, and adding a new provider only requires a simple plugin-style addition.
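To illustrate what "plugin-style" could mean here, below is a hypothetical sketch of a provider registry; the names (Provider, register, stream_tokens) are illustrative and may differ from the project's actual structure. The idea is that supporting a new endpoint type means adding one class plus one registration line.

    # Hypothetical plugin-style provider registry; not the project's actual API.
    from abc import ABC, abstractmethod
    from typing import Dict, Iterator, Type

    PROVIDERS: Dict[str, Type["Provider"]] = {}

    def register(name: str):
        """Class decorator that adds a provider implementation to the registry."""
        def wrap(cls: Type["Provider"]) -> Type["Provider"]:
            PROVIDERS[name] = cls
            return cls
        return wrap

    class Provider(ABC):
        def __init__(self, base_url: str, api_key: str, model: str):
            self.base_url = base_url
            self.api_key = api_key
            self.model = model

        @abstractmethod
        def stream_tokens(self, prompt: str) -> Iterator[str]:
            """Yield output chunks as they arrive; the benchmark times these."""

    @register("openai-compatible")
    class OpenAICompatible(Provider):
        def stream_tokens(self, prompt: str) -> Iterator[str]:
            # Would send a streaming chat-completions request (see the earlier
            # sketch) and yield each content delta.
            yield from ()

    @register("anthropic")
    class Anthropic(Provider):
        def stream_tokens(self, prompt: str) -> Iterator[str]:
            # Would call Anthropic's streaming Messages API and yield text deltas.
            yield from ()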

Would love feedback, PRs, or even test reports from APIs you’re using. Especially interested in how some lesser-known services compare.

1. swyx
idk what it is, but buying that domain made it seem more commercial and therefore less trustworthy. Also, most people probably just want to use Artificial Analysis' numbers rather than self-run benchmarks (but this is fine if you want to run your own).
2. mrqjr
I honestly don't know how to make this project feel more credible to you. It's just a demo site with no paid features; I simply felt I was making something that might be of some use to someone else as well.
3. MitPitt
If it was named 'api arena' everyone would eat it up