
447 points stephenheron | 3 comments

Hi,

My daily workhorse is an M1 Pro that I purchased on release day, and it has been one of the best tech purchases I have made; even now it handles anything I throw at it. My daily workload regularly has an Android emulator, an iOS simulator, and a number of Docker containers running simultaneously, and I never hear the fans. Battery life has taken a bit of a hit, but it is still very respectable.

I wanted a new personal laptop and was debating between a MacBook Air and a Framework 13 with Linux. I wanted to lean into learning something new, so I went with the Framework, and I must admit I am regretting it a bit.

The M1 was released back in 2020, and I bought the Ryzen AI 340, one of AMD's newest 2025 chips, so AMD has had 5 years of extra development. I had expected them to get close to the M1 in terms of battery efficiency and thermals.

The Ryzen uses TSMC's N4P process compared to the M1's older N5. I managed to find a TSMC press release showing the performance/efficiency gains from the newer process: "When compared to N5, N4P offers users a reported +11% performance boost or a 22% reduction in power consumption. Beyond that, N4P can offer users a 6% increase in transistor density over N5."

I am sorely disappointed; using the Framework feels like using an older Intel-based Mac. If I open too many tabs in Chrome, I can feel the bottom of the laptop getting hot; open a YouTube video and the fans will often spin up.

Why haven't AMD/Intel been able to catch up? Is x86 simply unable to keep up with the ARM architecture? When can we expect an x86 laptop chip to match the M1 in efficiency and thermals?

To be fair, I haven't tried Windows on the Framework yet; it might be my Linux setup being inefficient.

Cheers, Stephen

ben-schaaf ◴[] No.45023206[source]
Battery efficiency comes from a million little optimizations across the technology stack, most of which come down to using the CPU as little as possible. As such, the instruction set architecture and process node aren't usually that important for battery life.

If you fully load the CPU, calculate how much energy an AI 340 needs to perform a fixed workload, and compare that to an M1, you'll probably find similar results. But that only matters for battery life if you're doing things like Blender renders, big compiles, or gaming.

Take for example this battery-life gaming benchmark for an M1 Air: https://www.youtube.com/watch?v=jYSMfRKsmOU. 2.5 hours is about what you'd expect from an x86 laptop, possibly even worse than the FW13 you're comparing here. But turn down the settings so that the M1 CPU and GPU are mostly idle, and bam, you get 10+ hours.
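The arithmetic behind those two numbers is just battery capacity over average draw. A sketch (the ~50 Wh capacity is roughly the M1 Air's battery; the draw figures are illustrative assumptions, not measurements):

```python
def battery_hours(capacity_wh: float, avg_draw_w: float) -> float:
    """Rough battery-life model: hours = capacity (Wh) / average draw (W)."""
    return capacity_wh / avg_draw_w

# ~50 Wh battery, ~20 W sustained gaming draw vs ~5 W mostly-idle draw:
print(battery_hours(50, 20))  # 2.5 hours under load
print(battery_hours(50, 5))   # 10.0 hours mostly idle
```

The point is that the 4x battery-life difference falls entirely out of average draw, which is dominated by how often the chip can stay idle, not by peak efficiency.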

Another example would be a ~5-year-old mobile Qualcomm chip. It's on a worse process node than the AMD AI 340, much much slower, with significantly worse performance per watt, and yet it barely gets hot and sips power.

All that to say: the M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I had to enable GPU video decoding on my FW16 and haven't noticed the fans on YouTube since.
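For anyone hitting the same thing, a rough checklist (assumes an AMD/Intel iGPU with VA-API drivers installed; package names vary by distro, and Chromium's flag names have changed between versions, so treat this as a starting point rather than a recipe):

```shell
# 1. Check whether hardware video decode is exposed at all.
#    (vainfo is in the libva-utils package on most distros; it should
#    list profiles like VAProfileVP9Profile0 / VAProfileHEVCMain.)
vainfo

# 2. Chromium gates Linux hardware decode behind a feature flag
#    (this name is from recent builds; older ones used VaapiVideoDecoder):
chromium --enable-features=VaapiVideoDecodeLinuxGL

# 3. Verify in the browser: open chrome://gpu and look for
#    "Video Decode: Hardware accelerated" while a video plays.
```

In Firefox the equivalent toggle is `media.ffmpeg.vaapi.enabled` in about:config.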

replies(14): >>45023243 #>>45023603 #>>45023693 #>>45023904 #>>45023939 #>>45023972 #>>45024390 #>>45024405 #>>45024494 #>>45025515 #>>45026011 #>>45026727 #>>45026857 #>>45027696 #
aurareturn ◴[] No.45023972[source]

  All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.
This isn't true. Yes, uncore power consumption is very important, but so is CPU efficiency under load. The faster the CPU can finish a task, the sooner it can go back to sleep, aka "race to sleep".
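Race to sleep is easy to see with a toy energy model: over a fixed time window, a chip that bursts at higher power but finishes sooner can use less total energy than a "slower but lower-power" one, because it spends more of the window at idle draw. All the wattages below are made-up illustrative numbers:

```python
def energy_joules(active_w: float, active_s: float,
                  idle_w: float, window_s: float) -> float:
    """Energy over a fixed window: burst at active_w for active_s,
    then idle at idle_w for the rest of the window."""
    return active_w * active_s + idle_w * (window_s - active_s)

# Same task over a 10 s window (hypothetical numbers):
fast = energy_joules(active_w=15, active_s=1, idle_w=0.3, window_s=10)
slow = energy_joules(active_w=8,  active_s=3, idle_w=0.3, window_s=10)
print(fast, slow)  # 17.7 vs 26.1 joules: the faster burst wins
```

This only holds when the faster core's energy per task (active_w x active_s) is competitive, which is the commenter's point about Apple's load efficiency.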

Apple Silicon is 2-4x more efficient than AMD and Intel CPUs under load, while also having a higher top-end speed.

Another thing that makes Apple laptops feel much more efficient is that they use a true big.LITTLE design, while AMD's and Intel's little cores are actually designed for area efficiency, not power efficiency. In Intel's case, they stuff in as many little cores as possible to win MT benchmarks. In real-world applications the little cores are next to useless, because most applications prefer a few fast cores over many slow ones.

replies(2): >>45024922 #>>45027555 #
jandrewrogers ◴[] No.45027555[source]
> Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

This is not true. For high-throughput server software, x86 is significantly more efficient than Apple Silicon. Apple Silicon optimizes for idle states and x86 optimizes for throughput, which assume very different use cases. One of the challenges of using x86 in laptops is that the microarchitectures are server-optimized at heart.

ARM in general does not have the top-end performance of x86 if you are doing any kind of performance engineering. I don't think that is controversial. I'd still much rather have Apple Silicon in my laptop.

replies(1): >>45028607 #
aurareturn ◴[] No.45028607[source]

  For high-throughput server software x86 is significantly more efficient than Apple Silicon.
In the server space, x86 has the highest performance right now. That's true, but it's also because Apple does not make server parts. Look for Qualcomm to try to win the server performance crown in the next few years with their Oryon cores.

That said, Graviton is at least 50% of all AWS deployments now, so it is winning against x86.

  ARM in general does not have the top-end performance of x86 if you are doing any kind of performance engineering. I don't think that is controversial.
I think you'll have to define what top-end means and what performance engineering means.
replies(1): >>45034069 #
ksec ◴[] No.45034069[source]
I don't think the point of Amazon using ARM was performance; it was purely cost optimisation. At one point, nearly 40% of Intel's server revenue was coming from Amazon. They just figured out that at their scale it would be cheaper to do it themselves.

But I am purely guessing that ARM has raised its price per core, so it makes less financial sense to do a yearly CPU update. ARM is also going into the server CPU business, meaning it now has some incentive to keep it all to itself. Which makes Nvidia's move really smart, as they decided to go for the ISA licence and do it themselves.

replies(1): >>45036937 #
aurareturn ◴[] No.45036937[source]
Server CPUs do not win on performance alone. They win on performance/$, LTV/$, etc. That's why Graviton is winning on AWS.