
86 points by grace77 | 4 comments

I’ve been using AI to generate some repetitive frontend (guilty), and while most outputs felt vibe-coded, some results were surprisingly good. So I cleaned it up and made a ranking game out of it with friends, and you can check it out here: https://www.designarena.ai/vote

/vote: Your prompt will be answered by four random, anonymous models. You pick the one you prefer and crown the winner, tournament-style.

/leaderboard: See the current winning models, as dictated by voter preferences (a sketch of one way votes could feed such a ranking follows this list).

/play: Iterate quickly by seeing four models respond to the same input and pressing space to regenerate the results you don't lock in.
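
A note on the /leaderboard mechanics: the post doesn't say how pairwise votes become a ranking, but arena-style leaderboards commonly use an Elo or Bradley-Terry fit over head-to-head votes. Below is a minimal Elo-style sketch under that assumption; the K-factor, starting rating, and model names are illustrative placeholders, not anything the site has confirmed.

    // Minimal Elo-style leaderboard update from pairwise votes (assumed
    // scheme; the site's actual ranking method isn't described in the post).
    type Ratings = Map<string, number>;

    const K = 32;        // update step size (assumed)
    const START = 1000;  // starting rating for an unseen model (assumed)

    function expectedScore(a: number, b: number): number {
      // Probability that a rating-a model beats a rating-b model under Elo.
      return 1 / (1 + Math.pow(10, (b - a) / 400));
    }

    function recordVote(ratings: Ratings, winner: string, loser: string): void {
      const ra = ratings.get(winner) ?? START;
      const rb = ratings.get(loser) ?? START;
      const ea = expectedScore(ra, rb);
      ratings.set(winner, ra + K * (1 - ea)); // winner gains more for an upset
      ratings.set(loser, rb - K * (1 - ea));  // loser drops by the same amount
    }

    // Example: a single vote where a hypothetical "model-a" beats "model-b".
    const ratings: Ratings = new Map();
    recordVote(ratings, "model-a", "model-b");
    console.log([...ratings.entries()].sort((x, y) => y[1] - x[1]));

Sorting the map by rating then gives a leaderboard ordered purely by voter preferences.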

We were especially impressed with the quality of DeepSeek and Grok, and with the variance between categories (judging by the results so far, OpenAI is very good for game dev but seems to suck everywhere else).

We’ve learned a lot, and are curious to hear your comments and questions. Excited to make this better!

1. a2128 No.44543250
I tried the vote and both results always suck; there's no option to say that neither is a winner. Also, it seems from the network tab that you're sending 4 (or 5?) requests but only displaying the first two that respond, which biases it toward the small models that respond more quickly and usually results in showing two bad results.
replies(2): >>44543261 >>44543361
2. ethan_smith No.44543261
Adding a "neither is good" option would improve data quality by preventing forced choices between two poor designs.
replies(1): >>44543308
3. grxxxce No.44543308
this is a great note — will be sure to add!
4. grace77 No.44543361
Yes, great point. We originally waited for all model responses and randomized the vote order, but that made for a very bad user experience: some models, especially open-source ones, took over 4 minutes to respond, leading to a high voter drop-off rate.

To preserve the voter experience while limiting bias, our current approach waits for the slower model within each binary comparison, so even if one model is faster, we don't display the pair until both are ready. You're right that this still introduces some bias toward the two smallest models, and we'd love to hear suggestions for how to make this better!

As for the 5th request: we kick off one reserve model alongside the four randomly selected for the tournament. The backup isn't shown unless one of the four fails; it isn't picked for speed or low latency, just a randomly selected fallback to keep the system robust without skewing results.
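
For concreteness, here is a rough TypeScript sketch of that flow: all five requests (four contestants plus the reserve) start up front, each head-to-head pair is rendered only once both of its own responses are in, and the reserve is substituted only if a contestant's request fails. The function names, model ids, and timings below are placeholders, not the site's actual code.

    // Sketch of the described flow: pair-wise waiting plus a reserve fallback.
    async function generate(model: string, prompt: string): Promise<string> {
      // Stand-in for the real model call.
      await new Promise(r => setTimeout(r, Math.random() * 1000));
      return `<${model} output for: ${prompt}>`;
    }

    function render(label: string, left: string, right: string): void {
      console.log(label, "ready:", left, "vs", right);
    }

    async function runRound(models: string[], reserve: string, prompt: string) {
      // All five requests start immediately, so the fallback adds no extra wall time.
      const reservePending = generate(reserve, prompt);
      const contestants = models.map(m =>
        generate(m, prompt).catch(() => reservePending) // swap in the reserve on failure
      );

      // Each pair waits for the slower of its own two responses, rather than
      // showing whichever two models across the whole round answer first.
      const pairA = Promise.all([contestants[0], contestants[1]]);
      const pairB = Promise.all([contestants[2], contestants[3]]);

      pairA.then(([a, b]) => render("pair A", a, b));
      pairB.then(([a, b]) => render("pair B", a, b));

      await Promise.all([pairA, pairB]);
    }

    // Example round with placeholder model names.
    runRound(["model-a", "model-b", "model-c", "model-d"], "model-reserve", "a landing page");

One caveat of this toy version: if two contestants fail, both slots would show the same reserve output, so a real setup would presumably want one fallback per slot or a deeper reserve pool.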