
Google AI Ultra

(blog.google)
320 points by mfiguiere | 5 comments
charles_f ◴[] No.44045393[source]
This is the kind of pricing that I expect most AI companies are gonna try to push for, and it might get even more expensive with time. When you see the delta between what's currently being burnt by OpenAI and what they bring home, the sweet spot is going to be hard to find.

Whether you find that you get $250 worth out of that subscription is going to be the big question

replies(5): >>44045528 #>>44045820 #>>44045959 #>>44046010 #>>44058223 #
Wowfunhappy ◴[] No.44046010[source]
> When you see the delta between what's currently being burnt by OpenAI and what they bring home, the sweet spot is going to be hard to find.

Moore's law should help as well, shouldn't it? GPUs will keep getting cheaper.

Unless the models also get more GPU-hungry. But 2025-level performance, at least, shouldn't get more expensive.
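A back-of-the-envelope sketch of that point (the halving period and costs are illustrative assumptions, not measured figures): if hardware cost per FLOP halves every ~2.5 years, serving a fixed 2025-level model keeps getting cheaper even with zero model-side improvements.

```python
# Back-of-the-envelope: cost of serving a FIXED 2025-level model,
# assuming hardware cost per FLOP halves every `halving_years` years.
# All numbers here are illustrative assumptions, not measured data.

def serving_cost(base_cost: float, years: float, halving_years: float = 2.5) -> float:
    """Relative serving cost after `years`, given a cost-per-FLOP halving period."""
    return base_cost * 0.5 ** (years / halving_years)

for year in (0, 2.5, 5, 10):
    print(f"year {year:>4}: {serving_cost(100.0, year):6.2f}% of today's cost")
```

Of course this only bounds the cost of *standing still*; if frontier models keep growing faster than the hardware curve, the subscription math doesn't improve.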

replies(3): >>44046119 #>>44046175 #>>44046799 #
1. godelski ◴[] No.44046175[source]
Not necessarily. The prevailing paradigm is that performance scales with size (of data and compute power).

Of course, this is observably false: we have a long list of smaller models that require fewer resources to train and/or deploy yet match or beat larger ones. That's without even using distillation, reduced precision/quantization, pruning, or similar techniques[0].

The real thing we need is more investment in reducing the computational resources needed to train and deploy models, and in model optimization (the best example being llama.cpp). I can tell you from personal experience that there is much less interest in this type of research; I've seen plenty of works rejected because "why train a small model when you can just tune a large one?" or "does this scale?"[1] I'd also argue that this is important because there isn't infinite data or compute.

[0] https://arxiv.org/abs/2407.05694

[1] Those works will outperform the larger models. The question is good, but it creates a barrier to funding: testing at scale costs a lot, you can't get funding without good evidence, and results often aren't considered evidence until they're published. There are always more questions and every work is limited, but small-compute works face a higher bar than big-compute works.
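One of the cheapest techniques mentioned above, reduced precision, can be sketched without any framework: symmetric int8 quantization stores each weight in one byte instead of four, trading a bounded rounding error for a ~4x memory reduction. A minimal illustration (the toy weights are made up, not from any real model):

```python
# Minimal sketch of symmetric int8 weight quantization:
# 4 bytes/weight (float32) -> 1 byte/weight (int8) plus one shared scale,
# with rounding error bounded by scale / 2. Toy weights are illustrative.

def quantize(weights, num_bits=8):
    """Map floats to signed ints in [-(2^(b-1)-1), 2^(b-1)-1] with a shared scale."""
    qmax = 2 ** (num_bits - 1) - 1              # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

w = [0.42, -1.27, 0.003, 0.9]
q, scale = quantize(w)
w_hat = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, f"max abs error = {max_err:.4f}")      # error stays under scale / 2
```

Real deployments (llama.cpp's 4-bit formats, for instance) use per-block scales and fancier rounding, but the storage-versus-error trade is the same idea.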

replies(2): >>44046239 #>>44050627 #
2. jorvi ◴[] No.44046239[source]
Small models will get really hot once they start hitting good accuracy & speed on 16GB phones and laptops.
replies(1): >>44046795 #
3. godelski ◴[] No.44046795[source]
Much of this already exists. But if you're expecting identical performance as the giant models, well that's a moving goalpost.

The paper I linked explicitly mentions how Falcon 180B is outperformed by Llama-3 8B. You can find plenty of similar cases all over the LMArena leaderboard. This year's small model is better than last year's big model. But the Overton window shifts. GPT-3 was going to replace everyone. Then 3.5 came out and GPT-3 was shit. Then o1 came out and 3.5 was garbage.

"Good accuracy" is not a fixed metric. If you move this to the domain of classification, detection, and segmentation, the same applies: I've had multiple papers rejected where our model matched a large model's performance with <10% of its parameters (and, obviously, was much faster too).

But yeah, there are diminishing returns with scale, and I suspect you're right that these small models will become more popular once those limits bite harder. One of the critical things slowing progress, though, is that we evaluate research as if it were a product. Methods that work for classification very likely work for detection, segmentation, and even generation. But this won't always be tested, because frankly the people working on model efficiency tend to have far fewer computational resources themselves, which forces them to run fewer experiments. That's fine if you're not evaluating a product, but you end up reinventing techniques when you are.

4. sgarland ◴[] No.44050627[source]
> I've seen plenty of works rejected because "why train a small model when you can just tune a large one?" or "does this scale?" I'd also argue that this is important because there isn't infinite data or compute.

Welcome to cloud world, where devs believe that compute is in fact infinite, so why bother profiling and improving your code? You can just request more cores and memory, and the magic K8s box will dutifully spawn more instances for you.

replies(1): >>44058165 #
5. godelski ◴[] No.44058165[source]
My favorite is retconning Knuth's "Premature optimization is the root of all evil" from "get a fucking profiler" to "you heard it! Don't optimize!"
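In that spirit, "get a profiler" is a one-liner in Python's stdlib; the workload below is a toy stand-in for whatever your real hot path is.

```python
# "Get a profiler": profiling a function with Python's stdlib cProfile/pstats.
# slow_sum is a toy workload; profile your actual hot path instead.
import cProfile
import io
import pstats

def slow_sum(n):
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
slow_sum(200_000)
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())   # top 5 entries by cumulative time, slow_sum among them
```

Which is the whole point of the quote in context: measure first, then optimize the 3% that actually matters.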