
1303 points | serjester | 1 comment
xnx ◴[] No.42955622[source]
Glad Gemini is getting some attention. Using it is like a superpower. There are so many discussions about ChatGPT, Claude, DeepSeek, Llama, etc. that don't even mention Gemini.
replies(3): >>42955696 #>>42955982 #>>42956190 #
throwaway314155 ◴[] No.42955696[source]
Google had a pretty rough start compared to ChatGPT and Claude. I suspect that left a bad taste in many people's mouths, in particular because evaluating so many LLMs is a lot of effort on its own.

Llama and DeepSeek are no-brainers; the weights are public.

replies(1): >>42955879 #
beastman82 ◴[] No.42955879[source]
No-brainer if you're sitting on a >$100k inference server.
replies(2): >>42956391 #>>42969919 #
throwaway314155 ◴[] No.42956391[source]
Sure, that's fair if you're aiming for state-of-the-art performance. Otherwise, you can get close on reasonably priced hardware by using smaller distilled and/or quantized variants of Llama/R1.
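
For a rough sense of what that looks like in practice, here is a minimal sketch (mine, not part of the thread) of loading one of the distilled DeepSeek-R1 variants with 4-bit quantization via Hugging Face transformers + bitsandbytes. The specific repo id and the VRAM estimate in the comments are assumptions to verify, not claims from the discussion:

    # Sketch: run a distilled, 4-bit-quantized R1 variant on a single consumer GPU.
    # Assumptions: the repo id below and the ~8-12 GB VRAM estimate; verify both.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # distilled variant, public weights

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                      # 4-bit weights cut memory roughly 4x vs fp16
        bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for speed/stability
    )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb_config,
        device_map="auto",                      # place layers on whatever GPU/CPU is available
    )

    prompt = "Briefly: distillation vs. quantization?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(out[0], skip_special_tokens=True))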

Really though, I just meant "it's a no-brainer that they are popular here on HN".