
447 points | stephenheron

Hi,

My daily workhorse is an M1 Pro that I purchased on release day, and it has been one of the best tech purchases I have made; even now it handles anything I throw at it. My daily workload regularly has an Android emulator, an iOS simulator and a number of Docker containers running simultaneously, and I never hear the fans. Battery life has taken a bit of a hit, but it is still very respectable.

I wanted a new personal laptop and was debating between a MacBook Air and a Framework 13 running Linux. I wanted to lean into learning something new, so I went with the Framework, and I must admit I am regretting it a bit.

The M1 was released back in 2020, and the Ryzen AI 340 I bought is one of AMD's newest 2025 chips, so AMD has had 5 years of extra development, and I expected them to at least get close to the M1 in terms of battery efficiency and thermals.

The Ryzen uses TSMC's N4P process compared to the M1's older N5 process. I managed to find a TSMC press release describing the performance/efficiency gains from the newer process: “When compared to N5, N4P offers users a reported +11% performance boost or a 22% reduction in power consumption. Beyond that, N4P can offer users a 6% increase in transistor density over N5”
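For perspective, here's a quick back-of-the-envelope sketch of what those process figures imply on their own (the 10 W baseline is just an illustrative number, not a measurement of either chip):

```python
# Back-of-the-envelope: apply TSMC's quoted N4P-vs-N5 figures to an
# illustrative 10 W baseline. The baseline is made up; only the
# percentage comes from the press release.
baseline_n5_power_w = 10.0
n4p_power_w = baseline_n5_power_w * (1 - 0.22)  # 22% lower power at the same performance

print(f"N5  baseline : {baseline_n5_power_w:.1f} W")
print(f"N4P estimate : {n4p_power_w:.1f} W at iso-performance")
```

A ~22% process-level saving is real, but it seems nowhere near the kind of gap I'm feeling in day-to-day use.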

I am sorely disappointed: using the Framework feels like using an older Intel-based Mac. If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, and if I open a YouTube video the fans will often spin up.

Why haven’t AMD/Intel been able to catch up? Is x86 just not able to keep up with the ARM architecture? When can we expect an x86 laptop chip to match the M1 in efficiency/thermals?!

To be fair, I haven’t tried Windows on the Framework yet, so it might just be my Linux setup being inefficient.

Cheers, Stephen

ben-schaaf ◴[] No.45023206[source]
Battery efficiency comes from a million little optimizations in the technology stack, most of which come down to using the CPU as little as possible. As such, the instruction set architecture and process node aren't usually that important when it comes to your battery life.

If you fully load the CPU, calculate how much energy an AI 340 needs to perform a fixed workload and compare that to an M1, you'll probably find similar results, but that only matters for your battery life if you're doing things like Blender renders, big compiles or gaming.
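As a sketch of what that fixed-workload comparison looks like (every number below is a made-up placeholder, not a benchmark result):

```python
# Energy for a fixed workload = average package power (W) x time to finish (s).
# The power and time figures are invented placeholders to show the arithmetic,
# not measurements of either chip.
chips = {
    "M1 (placeholder)":     {"avg_power_w": 20.0, "task_seconds": 100.0},
    "AI 340 (placeholder)": {"avg_power_w": 35.0, "task_seconds": 60.0},
}

for name, c in chips.items():
    joules = c["avg_power_w"] * c["task_seconds"]
    print(f"{name}: {joules / 1000:.1f} kJ for the same task")
```

A faster chip that draws more power can land at roughly the same energy per task, which is why the fully loaded case can look similar even when the idle behaviour is very different.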

Take, for example, this battery-life gaming benchmark for an M1 Air: https://www.youtube.com/watch?v=jYSMfRKsmOU. 2.5 hours is about what you'd expect from an x86 laptop, possibly even worse than the FW13 you're comparing here. But turn the settings down so that the M1's CPU and GPU are mostly idle, and bam, you get 10+ hours.
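The runtime difference follows directly from average draw; a rough sketch (the ~50 Wh pack is roughly an M1 Air class battery, the wattages are illustrative guesses):

```python
# Battery life (h) = battery capacity (Wh) / average system draw (W).
# The draw figures are illustrative guesses, not measurements.
battery_wh = 50.0  # roughly an M1 Air class battery

scenarios = {
    "gaming, CPU/GPU loaded": 20.0,             # W
    "settings turned down, mostly idle": 5.0,   # W
}

for name, avg_draw_w in scenarios.items():
    print(f"{name}: {battery_wh / avg_draw_w:.1f} h")
```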

Another example would be a ~5-year-old mobile Qualcomm chip. It's on a worse process node than an AMD AI 340, much, much slower, with significantly worse performance per watt, and yet it barely gets hot and sips power.

All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I had to enable GPU video decoding on my FW16 and haven't noticed the fans on YouTube since.
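If anyone wants to check the same thing, a minimal sketch of the driver-side sanity check I'd start with (this assumes the vainfo tool from libva-utils is installed; the browser's own hardware-decode setting, e.g. Firefox's media.ffmpeg.vaapi.enabled, still has to be turned on separately):

```python
# Check whether the GPU driver exposes any VA-API decode entrypoints at all.
# Assumes `vainfo` (from libva-utils) is installed and on PATH; the browser
# still needs its own hardware video decoding setting enabled afterwards.
import shutil
import subprocess

if shutil.which("vainfo") is None:
    print("vainfo not found - install libva-utils first")
else:
    result = subprocess.run(["vainfo"], capture_output=True, text=True)
    output = result.stdout + result.stderr
    if "VAEntrypointVLD" in output:
        print("VA-API decode entrypoints present - driver side looks OK")
    else:
        print("No decode entrypoints reported - VA-API driver likely missing or misconfigured")
```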

jonwinstanley ◴[] No.45024390[source]
A huge reason for the low power usage is the iPhone.

Apple spent years incrementally improving the efficiency and performance of their chips for phones. Intel and AMD were more desktop-focused, so power efficiency wasn't the primary goal. By the time Apple's chips got good enough to transition into laptops, x86 wasn't in the same ballpark.

Also, the iPhone is the most lucrative product of all time (I think), and Apple poured a tonne of that money into R&D and into hiring top engineers from Intel, AMD, and ARM, building one of the best silicon teams.

Cthulhu_ ◴[] No.45027391[source]
I vaguely remember Intel tried to get into the low-power / smartphone / tablet space at the time with their Atom line [0] in the late 2000s, but due to core architecture issues they could never reach the efficiency of ARM-based chips.

[0] https://en.wikipedia.org/wiki/Intel_Atom

aidenn0 ◴[] No.45027775[source]
I don't think it was core architecture issues. My impression is that over the years their efforts to get into low-power devices never got the full force of their engineering prowess.
kimixa ◴[] No.45033823[source]
I worked for an IP vendor whose IP was in some Atom SoCs (over a decade ago now, though). From what I remember, the perf/W was actually pretty competitive with contemporary ARM devices when we supplied the IP, but it took so long to actually end up in products that it fell behind - other customers were already on the next generation by that point, even though the initial projects started at about the same time. And the Atoms were buggy as hell; I never had more problems with dumb cache/fabric/memory controller issues.

To me the Atom team always felt like a dead end inside Intel - everyone seemed to be trying to get into a different, higher-status team ASAP, and our engineering contacts often changed monthly, if we even knew who our "contacts" were meant to be at any given time. I think any product developed like that would struggle.