
86 points by grace77 | 1 comment

I’ve been using AI to generate some repetitive frontend (guilty), and while most outputs felt vibe-coded, some results were surprisingly good. So I cleaned it up and made a ranking game out of it with friends, and you can check it out here: https://www.designarena.ai/vote

/vote: Your prompt will be answered by four random, anonymous models. You pick the one you prefer and crown the winner, tournament-style.

/leaderboard: See the current winning models, as ranked by voter preferences (one way votes could become ratings is sketched after this list).

/play: Iterate quickly by seeing four models respond to the same input and pressing space to regenerate the results you don't lock in.
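
The post doesn't say how votes become a leaderboard, but a common approach for preference data like this is an Elo-style rating, where each 4-way vote counts as the crowned model beating the three losers head-to-head. A minimal sketch of that idea in Python (hypothetical names throughout, not the site's actual implementation):

```python
from collections import defaultdict

# K controls how much a single vote moves a rating; 32 is the classic chess default.
K = 32

ratings = defaultdict(lambda: 1000.0)  # every model starts at the same baseline

def expected_score(r_a: float, r_b: float) -> float:
    """Elo's predicted probability that a player rated r_a beats one rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def record_vote(winner: str, losers: list[str]) -> None:
    """Treat one 4-way vote as the winner beating each rival pairwise."""
    for loser in losers:
        e = expected_score(ratings[winner], ratings[loser])
        delta = K * (1.0 - e)  # upsets move ratings more than expected wins
        ratings[winner] += delta
        ratings[loser] -= delta

# Example: a voter crowns model_a over the three other anonymous contestants.
record_vote("model_a", ["model_b", "model_c", "model_d"])
print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```

Under this scheme the leaderboard is just the models sorted by rating, and new models can be dropped in mid-stream without restarting the tournament.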

We were especially impressed by the quality of DeepSeek and Grok, and by the variance between categories (judging by the results so far, OpenAI is very good for game dev but seems to suck everywhere else).

We’ve learned a lot, and are curious to hear your comments and questions. Excited to make this better!

coryvirok No.44543135
This is really good! It would be cool to get human designs in the mix to see how the models compare. I bet there are curated design datasets with descriptions that you could pass to each of the models, then run voting as a "bonus" question (comparing the human and AI-generated versions) after the normal genAI voting round.
debesyla No.44544648
This would be extra interesting for unique designs, something more experimental and new. As for now, even when you ask AI to break all the rules, it still outputs standard BS.