577 points simonw | 6 comments
1. maksimur ◴[] No.44725348[source]
A $xxxx, 2.5-year-old laptop, one that's probably much more powerful than the average laptop bought today, and probably next year as well. I don't think it's a fair reference point.
replies(3): >>44725487 #>>44725524 #>>44727645 #
2. parsimo2010 ◴[] No.44725487[source]
The article is pretty good overall, but the title did irk me a little. Reading "2.5 year old," I assumed the laptop was fairly low-spec, only to find out it was an M2 MacBook Pro with 64 GB of unified memory, so it can run models bigger than what an Nvidia 5090 can handle.

I suppose it could be read as "my laptop is only 2.5 years old, and therefore fairly modern/powerful," but I doubt that was the intention.
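As a rough back-of-the-envelope check (my own sketch, not from the article): a model's weight footprint is roughly parameter count × bits per weight / 8, plus some overhead for the KV cache and activations. The `overhead` factor below is an assumption, not a measured value:

```python
def est_memory_gib(n_params_billion, bits_per_weight, overhead=1.2):
    """Rough memory estimate in GiB for loading an LLM's weights.

    overhead is a guessed multiplier covering KV cache and activations.
    """
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes / 2**30 * overhead

# A 70B model at 4-bit quantization: ~39 GiB -- fits in 64 GB of
# unified memory, but not in a 5090's 32 GB of VRAM.
print(f"{est_memory_gib(70, 4):.1f} GiB")

# The 7B model from the 2023 quote, at 4-bit: ~4 GiB.
print(f"{est_memory_gib(7, 4):.1f} GiB")
```

By this estimate, 64 GB of unified memory is what puts ~70B-class quantized models in reach on that laptop.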

replies(1): >>44725526 #
3. bprew ◴[] No.44725524[source]
His point isn't that you can run a model on an average laptop, but that the same laptop can still run frontier models.

It speaks to advances in the models themselves, not just throwing more compute/RAM at the problem.

Also, his laptop isn't that fancy.

> It claims to be small enough to run on consumer hardware. I just ran the 7B and 13B models on my 64GB M2 MacBook Pro!

From: https://simonwillison.net/2023/Mar/11/llama/

replies(1): >>44733482 #
4. simonw ◴[] No.44725526[source]
The reason I emphasize the laptop's age is that it is the same laptop I have been using ever since the first LLaMA release.

This makes it a great way to illustrate how much better the models have got without requiring new hardware to unlock those improved abilities.

5. nh43215rgb ◴[] No.44727645[source]
About a $3,700 laptop...
6. ◴[] No.44733482[source]