
My Impressions of the MacBook Pro M4

(michael.stapelberg.ch)
245 points
dr_pardee ◴[] No.45775823[source]
> I still don’t like macOS and would prefer to run Linux on this laptop. But Asahi Linux still needs some work before it’s usable for me (I need external display output, and M4 support). This doesn’t bother me too much, though, as I don’t use this computer for serious work.

“I don’t use this computer for serious work.” Dropped $3K on MBP to play around with. Definitely should have gotten MBA

replies(4): >>45775861 #>>45775875 #>>45776174 #>>45778211 #
criddell ◴[] No.45775861[source]
If you are going to start making a list of expensive hobbies, $3K for a computer isn't going to be anywhere near the top of the list.
replies(5): >>45776000 #>>45776238 #>>45779286 #>>45782264 #>>45783277 #
asdff ◴[] No.45776238[source]
The type of person shelling out 3k for a computer is not running it until the wheels come off.
replies(5): >>45776964 #>>45779606 #>>45781683 #>>45781766 #>>45782195 #
gcr ◴[] No.45781766[source]
Bullshit. I shelled $3k for my MBP M1 back in 2021 and I intend to use it until I can’t anymore.

It depends on the person and the use case. Different personalities etc

replies(1): >>45781823 #
omni ◴[] No.45781823[source]
That's not particularly rational given how quickly computers progress in both performance and cost, a current-gen $1k Macbook Air will run circles around your M1. You'd probably be much better off spending the same amount of money on cheaper machines with a more frequent upgrade cadence. And you can always sell your old ones on eBay or something.
replies(3): >>45782236 #>>45782651 #>>45783060 #
gcr ◴[] No.45782651[source]
Respectfully, this is also bullshit for my use case. For me, the M1 purchase was a step up compared to Intel; the rest is diminishing returns for now.

It's also not true if you care about certain workloads like LLM performance. My biggest concern, for example, is memory size and bandwidth, and older chips compare quite favorably there: "GPU VRAM size" is what now differentiates the premium market and becomes a further upsell, making newer hardware less cost-effective. :( I can justify $3k for "run a small LLM on my laptop for my job as an ML researcher," but I still can't justify $10k for "run a larger model on my Mac Studio."

See https://github.com/ggml-org/llama.cpp/discussions/4167#discu...
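The arithmetic behind this point can be sketched roughly: single-stream LLM decoding reads essentially every model weight once per generated token, so decode throughput is bounded by memory bandwidth divided by model size. The sketch below uses approximate published bandwidth figures (≈400 GB/s for M1 Max, ≈120 GB/s for a base M4) and an assumed ~4 GB footprint for a 4-bit 7B model; all of these numbers are illustrative assumptions, not benchmarks.

```python
# Back-of-envelope ceiling on LLM decode speed: bandwidth-bound, one full
# pass over the weights per token. Figures are approximate assumptions.

def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on single-stream decode throughput (tokens/sec)."""
    return bandwidth_gb_s / model_size_gb

model_gb = 4.0  # ~7B parameters at 4-bit quantization (assumed)

chips = {
    "M1 Max (~400 GB/s)": 400.0,
    "M4 base (~120 GB/s)": 120.0,
}

for name, bw in chips.items():
    print(f"{name}: ~{max_tokens_per_sec(bw, model_gb):.0f} tok/s ceiling")
```

Real throughput lands below these ceilings, but the ratio is the point: an older chip with a wide memory bus can out-decode a newer chip with a narrow one.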