I can't speak to the battery life, however, since it is dismal on my Dev Kit ;-)
[0] https://www.techradar.com/phones/android/ive-seen-it-its-inc...
MS put out the Surface and Surface Laptop with it, Lenovo did the ThinkPad X1 with it, and Dell put it in the XPS line.
Things might improve when laptop OEMs stop catering to the lowest-common-denominator corporate IT purchasers (departments which don't care about screen quality, speaker quality, or much of anything else beyond whether the spec sheet on paper matches their requirements and whether it's cheap).
The Prism binary emulation for x86 apps that don't have an ARM equivalent has been stellar with near-native performance (better than Rosetta in macOS). And I've tried some really obscure stuff!
The X13s was confirmed to be sunset; another T14s is the most likely candidate among the ThinkPads.
The only major exception is Android Studio's emulator (although the IDE itself does work).
M-series instant wake from sleep is also years ahead of the Windows wakeup roulette, so even if this new processor helps with time away from chargers... we still have the Windows sleep/hibernate experience.
Clearly it's a priority, because ChromeOS/Android support is a big headline this year.
[1] https://discourse.ubuntu.com/t/ubuntu-24-10-concept-snapdrag...
Also worth noting that not all the bits needing support are inside of the Snapdragon, so specific vendor support from Dell, Lenovo etc is required.
Day-to-day, it's all fine, but I may be returning to x64 next time around. I'm not sure that I'm receiving an offsetting benefit for these downsides. Battery life isn't something that matters for me.
This is one place where Windows has an advantage over Linux. Windows' long-term support for device drivers is generally really good: a driver written for Vista is likely to run on 11.
Ironically M1 chip is better supported on Linux.
What I'm trying to say is that the scope is very different / smaller there. There's a tonne of things that didn't work on Macs both before and after, and the migration wasn't that perfect either.
Microsoft is trying to retain binary compatibility across architectures with ARM64EC stuff which is intriguing and horrifying. They, however, didn't put any effort into ensuring Qualcomm is implementing the hardware side well. Unlike Apple, Qualcomm has no experience in making good desktop systems and it shows.
Anything modern and popular can probably be recompiled to ARM64.
I've always been curious about just how much Rosetta magic is the implementation and how much is TSO; Prism in Windows 24H2 is also no slouch. If the recompiler is decent at tracing data dependencies it might not have to fence that much on a lot of workloads even without hardware TSO.
You effectively get an actual Linux distro + most of Android, with a side of Chrome. It's way closer to "a real computer" than an iPad, for instance, and only loses to the Surface Pro/Z13 line in terms of versatility IMHO.
It really wasn't bad; my only deal-breakers were keyboard remapping being nonexistent and the Bluetooth stack being flaky.
It really depends on which laptop line you buy. Dells have overwhelmingly become garbage, right next to HP.
Speaker quality on a laptop, on the other hand? Couldn't care less; I use headphones/earbuds 99% of the time, because if I'm on a portable computer, I'm traveling and I don't want to be an inconsiderate arse.
Most Apple Silicon is much less than 800 GB/s.
The base M4 is only 120GB/s and the next step up M4 Pro is 273GB/s. That’s in the same range as this part.
It’s not until you step up to the high end M4 Max parts that Apple’s memory bandwidth starts to diverge.
For the target market, with long battery life as a high priority, this memory bandwidth is reasonable. Buying one of these as a local LLM machine isn't a good idea, though.
Though having said that, in the past year I've replaced ChromeOS with desktop Linux (postmarketOS) and I love it even more now. 4GB of RAM was a bit slim for running everything in micro-VMs for "security," which is what ChromeOS does. I've had no trouble with battery life or Android emulation (Waydroid) since switching.
Translation: departments which don't care about workers' wellbeing.
Desktop games that have mobile ports generally seem to run well, emulation is pretty solid too (e.g. Dolphin). Warcraft III runs OK-ish.
Old situation: "Android drivers" are technically Linux drivers in that they are drivers which are built for a specific, usually ancient, version of Linux with no effort to upstream, minimal effort to rebase against newer kernels, and such poor quality that there's a reason they're not upstreamed.
New situation: "Android drivers" are largely moved to userspace, which does have the benefit of allowing Google to give them a stable ABI so they might work against newer kernels with little to no porting effort. But now they're not really Linux drivers.
In neither case does it really help as much as you'd hope.
I'm not a huge fan of working in WSL, because I actively dislike the Windows GUI.
ARM seems to be popular in the server space and it’s nice to see it trickling down to the PC market.
Maybe more ISA diversity will incentivize publishers to improve long-term software support but I have little hope.
https://www.videocardbenchmark.net/gpu.php?gpu=Snapdragon+X+...
https://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GTX+6...
Given that their top model underperforms the most common M4 chip and the M5 is about to be released, it's not very impressive at all.
Even the old M2 Max in my early 2023 MacBook Pro has 400GB/s.
When I'm home, I often just remote desktop into my laptop.
I'm wondering if remoting into ARM Windows is as good?
Adobe apps that ran fine on Rosetta didn't work at all on Prism.
https://www.pcmag.com/articles/how-well-does-windows-on-arms...
There is one issue I ran into that I haven't hit on my (self-built) Windows desktops: when Windows Hello (fingerprint lock) is enabled and neither machine is on a Windows domain, the RDP client will just refuse to authenticate.
I had to use a trick to "cache" the password on the "server" end first, see https://superuser.com/questions/1715525/how-to-login-windows...
https://www.qualcomm.com/content/dam/qcomm-martech/dm-assets...
I think the pure hardware specs compare reasonably against AS, aside from the lack of a Max of course. Apple's vertical integration and power efficiency make their product much more compelling though, at least to me. (Qualcomm, call me when the Linux support is good.)
Linux is different. Decades of being tied to x86 made the OS far more coupled to the processor family than one might think.
Decades of bugfixes, optimizations and workarounds were made assuming a standard BIOS and ACPI standards.
Especially on the desktop side.
That, and the fact that SoC vendors are decades behind on driver quality. They remind me of the NDiswrapper era.
Also, a personal theory of mine is that people have unfair expectations of ARM Linux. Back then, when x86 Linux had similar compatibility problems, there was nothing to compare it with, so people just accepted that Linux was going to be a pain and that was it.
Now the bar is higher. People expect Linux to work the way it does on x86, in 2025.
And manpower in FOSS is always limited.
Plus they’ve been through the Apple Silicon change, so it’s not the first time they’ve been on non-x86 either.
They also know the score. Intel is not in a good place, and Apple has been showing them up in lower power segments like laptops, which happen to be the #1 non-server segment by far.
They don’t want to risk getting stuck the way Apple did three times (68k, PowerPC, Intel), where someone else was limiting their sales.
So they’re laying groundwork. If it’s a backup plan, they’re ready. If ARM takes off and x86 keeps going well, they’re even better off.
I agree it seems incredibly unlikely that you’re doing multiple days of eight hours of work without charging.
Longer is always better, so if it's true at all, great for them.
The "enterprise" manageability and reduced attack surface is driving Google to jack up Chromebook prices. The "Chromebook Plus" models are nearing the same price as a midrange Dell Inspiron, HP OmniBook, or Lenovo IdeaPad. You may have also noticed M4 MacBook Airs can be bought for the price of an iPhone 17, and I suspect that's partially a response from Apple to the Chromebook price increases. Buying a $600 Chromebook might have been sane for someone tired of Microsoft and not interested in a $1000 MacBook Air, but in 2025, with MacBook Air prices going down significantly[2], Chromebooks are not as appealing to regular consumers (different story for businesses).
[0] https://support.google.com/chromeosflex/answer/11513094?sjid...
[1] https://chromeos.google/business-solutions/use-case/contact-...
[2] https://www.zdnet.com/article/the-m4-macbook-air-is-selling-...
edit: Also, not knocking the Qualcomm folks working on Linux here, just observing that the lack of hardware documentation doesn't exactly help reel in contributors.
[^1]: Maybe in some cases not as useful as it could be when bringing up some OS on hardware, but certainly better than nothing
Not to mention 68K -> PowerPC.
Rhapsody supported x86, and I think during the PowerPC era Apple kept creating x86 builds of OS X just in case. This may have helped to keep things like byte order dependencies from creeping in.
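A hedged illustration of the kind of byte-order dependency that maintaining parallel x86 builds would flush out — reading a 32-bit on-disk field by reinterpreting raw memory gives different answers on big-endian PowerPC and little-endian x86 (function names are mine, not from any Apple code):

```c
#include <stdint.h>
#include <string.h>

/* Endian-unsafe: the result depends on host byte order, so the same
 * file bytes parse differently on PowerPC and x86. */
uint32_t read_len_host(const unsigned char *p) {
    uint32_t v;
    memcpy(&v, p, 4);
    return v;
}

/* Portable: assemble the bytes explicitly (big-endian file format
 * assumed), independent of host byte order. */
uint32_t read_len_be(const unsigned char *p) {
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}
```

Code written like the first function works by accident on one architecture and silently breaks on the other, which is exactly what a continuously maintained second-architecture build catches early.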
Not sure what a prime core is.
For comparison the M4 Pro can go as high as 10 performance cores and 4 efficiency cores.
So it is rather easy to end up dealing with nested virtualization, even for those of us that seldom use WSL.
Cool if one wants to do CLI stuff alongside Web and Android apps, but that is as far as it goes for GNU/Linux, with many a "yes, but".
https://chromium.googlesource.com/chromiumos/docs/+/1792b43f...
Mind you, Geekerwan managed to push the A19 Pro to 4019 in Geekbench 6 by using active cooling. https://youtu.be/Y9SwluJ9qPI
This is a misinterpretation of what the author wrote! There is a real and significant performance impact in emulating x86 TSO semantics on non-TSO hardware. What the author argues is that enabling TSO process-wide (like macOS does with Rosetta) resolves this impact but it carries counteracting overhead in non-emulated code (such as the emulator itself or in ARM64EC).
The claimed conclusion is that it's better to optimize TSO emulation itself rather than bruteforce it on the hardware level. The way Microsoft achieved this is by having their compiler generate metadata about code that requires TSO and by using ARM64EC, which forwards any API calls to x86 system libraries to native ARM64 builds of the same libraries. Note how the latter in particular will shift the balance in favor of software-based TSO emulation since a hardware-based feature would slow down the native system libraries.
Without ecosystem control, this isn't feasible to implement in other x86 emulators. We have a library forwarding feature in FEX, but adding libraries is much more involved (and hence currently limited to OpenGL and Vulkan). We're also working on detecting code that needs TSO using heuristics, but even that will only ever get us so far. FEX is mainly used for gaming though, where we have a ton of x86 code that may require TSO (e.g. mono/Unity) but wouldn't be handled by ARM64EC, so the balance may be in favor of hardware TSO either way here.
For reference, this is the paragraph (I think) you were referring to:
> Another common misconception about Rosetta is that it is fast because the hardware enforces Intel memory ordering, something called Total Store Ordering. I will make the argument that TSO is the last thing you want, since I know from experience the emulator has to access its own private memory and none of those memory accesses needs to be ordered. In my opinion, TSO is a red herring that isn't really improving performance, but it sounds nice on paper.
So if you support macOS/x86, macos/ARM, and Windows/x86, then the additional work to add Windows/ARM is rather small, unless you do low-level stuff (I remember Fortnite WoA port taking almost a year from announcement to release due to anticheat).
Acer, Asus, Dell, HP, Lenovo, Microsoft, Samsung
Looking at the SOCs used, only Dell, Microsoft, and Samsung used the 2nd fastest SoC, the X1E-80-100 - the Dell and Microsoft laptops could be configured with 64GB soldered.
Samsung also used the fastest SoC (the only OEM to do so), the X1E-84-100. From a search of their USA website, you're stuck with only 16GB on any of their Snapdragon laptops. :(
I'd hope whichever OEM(s) uses the Snapdragon X2 Elite Extreme SoC (X2E-96-100) allows users to configure RAM up to 64GB or 128GB.
This doesn't pass the smell test when Linux powers so many smart or integrated devices and IoT on architectures like ARM, MIPS, Xtensa, and has done so for decades.
I didn't even count Android here which is Linux kernel as first class citizen on billions of mostly ARM-based phones.
https://www.pcworld.com/article/2375677/surface-laptop-2024-...
X2 Elite shouldn't be that different I think.
also see https://wccftech.com/snapdragon-x2-elite-extreme-die-package...
Anyway, before dropping 32bit, they've dropped PowerPC.
Another consideration: Apple is the king of dylibs; you're usually dynamically linking to the OS frameworks/libs, so they can plan their glue smarter and keep the frameworks working in the native arch. (That was really important with PPC->Intel, where you also had big-endian...)
I think another reason is Apple's control over the platform vs Microsoft's. Apple has the ability to say "we're not going to make any more x86 computers, you're gonna have to port your software to ARM", while Microsoft doesn't have that ability. This means that Snapdragon has to compete against Intel/AMD on its own merits. A couple months after X Elite launched, Intel started shipping laptops with the Lunar Lake architecture. This low-power x86 architecture managed to beat X Elite on battery life and thermals without having to deal with x86 emulation or poor driver support. Of course it didn't solve Intel's problems (especially since it's fabricated at TSMC rather than by Intel), but it demonstrated that you could get comparable battery life without having to switch architectures, which took a lot of wind out of X Elite's sails.
As an example, the Snapdragon 700-series had Prime, Gold, and Silver branding on its cores.
I will not spend money on hardware no one can reliably patch or write drivers for. I also want other operating system maintainers to be able to write drivers and get booting.
With them only merging upstream now, it'll be a while before you can actually use Linux on these devices. You can build your own kernel from upstream, but it's probably a better idea to wait until Arch or Gentoo package the necessary pre-configured kernels.
From what I can tell, the Elite SoCs are a lot less outdated-semi-proprietary-Linux-fork-y than many other Qualcomm chips.
> In my opinion, TSO is a red herring that isn't really improving performance, but it sounds nice on paper.
That's the author directly saying that TSO isn't the major emulation performance gain that people think it is. You're correct that there are countering effects between TSO's benefits to the emulated code vs. the negative effects on the emulator and other non-emulated code in the same process that are fine running non-TSO, but to users, this distinction doesn't matter. All that matters is the performance of emulated program as a whole.
As for the volatile metadata, you're correct that MSVC inserts additional data to aid the emulation. What's not so great is that:
- It was an almost undocumented, silent addition to MSVC.
- In some cases, it will slow down the generated x64 code slightly by adding NOPs where necessary to disambiguate the volatile access metadata.
- It only affects code statically compiled with a recent version of MSVC (late VS2019 or later). It doesn't help executables compiled with non-MSVC compilers like Clang, nor any JIT code, nor is there any documentation indicating how to support either of these cases.
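As an illustration of the distinction that metadata enables (this is not MSVC's actual metadata format, just the idea): the compiler records which source accesses were `volatile`, so the emulator only enforces x86 ordering there, and everything else can be translated with relaxed ARM loads/stores:

```c
/* Marked in the (hypothetical) volatile metadata: the emulator gives
 * accesses to this acquire/release semantics. */
volatile int shared_flag;

/* Hot loop over ordinary memory: no metadata entry, so the emulator
 * can translate these accesses with fast relaxed ordering. */
int wait_and_sum(const int *buf, int n) {
    while (!shared_flag)   /* only this access pays the ordering cost */
        ;
    int s = 0;
    for (int i = 0; i < n; i++)
        s += buf[i];       /* runs at relaxed-ordering speed          */
    return s;
}
```

Code from Clang or a JIT carries no such annotations, so Prism has to fall back to heuristics or conservative barriers for all of it — which is the gap the bullet points above describe.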
Unfortunately, firmware and OS support are hard for any vendor, especially one as small (compared to, say, Lenovo or HP) and fast-moving as Framework. Spreading that to yet another ISA and driver ecosystem seems like it would drag down quality and pace of updates on every other system, which IMHO would be a bad trade.
BTW. A more common term for what Rosetta does is "binary translation". A "transpiler" typically compiles from one high-level language to another, never touching machine code.
Note that when the Windows host is invisibly running under Hyper-V, your other Hyper-V VMs are its "siblings" and not nested children, so you're not using nested virtualization in that situation. Nesting only happens when you run a Hyper-V VM inside another Hyper-V VM. WSL2 is a Hyper-V VM, so if you want to run WSL2 inside a Windows Hyper-V VM which is inside your Windows host, it ends up needing to nest.
Qualcomm has been mainlining Snapdragon X drivers into the 6.x kernel tree for over a year now. There have been multiple frontpage HN posts about this in the past 12 months.
Webcam/mic/speaker support may be a WIP depending on your model, but the Snapdragon X Elite has been booting Linux for months now, using only drivers in Linus' tree. The budget chips (Snapdragon X Plus) have far less direct support from Qualcomm, but some independent hackers have put in heroic effort to make those run Linux too.
https://lore.kernel.org/lkml/20250925-v3_glymur_introduction...
EL2 support is huge: it means virtualization will work on non-Windows OSes (e.g. Linux KVM), unlike with the previous gen.
(the downstream asahi kernel supports TSO)
That's just creation of a recovery drive for anything that Microsoft itself makes. It's the same process for the Intel Surface devices too.
>no Media Creation Tool
Why would anyone care about that? Most actively avoid Microsoft's media creation tool and use Rufus instead.
A better question: can a small company like Framework or even MNT Research build and support an open laptop around this chip?
I think we agree in our understanding, but condensing it down to "TSO isn't as much of a deal as claimed" is misleading:
* Efficient TSO emulation is crucial (both on Windows and elsewhere)
* The blog claims hardware TSO is non-ideal on Windows only (because Microsoft adapted the ecosystem to facilitate software-based TSO emulation). (Even then, it's unclear if the author quantified the concrete impact)
* Hardware TSO is still of tremendous value on systems that don't have ecosystem support
> [volatile metadata] doesn't help executables compiled with non-MSVC compilers like Clang, nor any JIT code, nor is there any documentation indicating how to support either of these cases.
That's funny, I hadn't considered third party compilers. Those applications would still benefit from ARM64EC (i.e. native system libraries), but the actual application code would be affected quite badly by the TSO impact then, depending on how good their fallback heuristics are. (Same for older titles that were compiled before volatile metadata was added)
Yes, but this is not in contention...? No one is disputing that TSO semantics in the emulated x86 code need to be preserved and that it needs to be done fast, we're talking about the tradeoffs of also having TSO support on the host platform.
> The blog claims hardware TSO is non-ideal on Windows only (because Microsoft adapted the ecosystem to facilitate software-based TSO emulation). (Even then, it's unclear if the author quantified the concrete impact)
> Hardware TSO is still of tremendous value on systems that don't have ecosystem support
That isn't what the author said. From the article:
> Another common misconception about Rosetta is that it is fast because the hardware enforces Intel memory ordering, something called Total Store Ordering. I will make the argument that TSO is the last thing you want, since I know from experience the emulator has to access its own private memory and none of those memory accesses needs to be ordered. In my opinion, TSO is a red herring that isn't really improving performance, but it sounds nice on paper.
That is a direct statement on Rosetta/macOS and does not mention Prism/Windows. How correct that assessment may be is another matter, but it is not talking about Windows only.
> Those applications would still benefit from ARM64EC (i.e. native system libraries), but the actual application code would be affected quite badly by the TSO impact then, depending on how good their fallback heuristics are.
I will have to check this; I don't think it's that bad. JITted programs run much, much better on my Snapdragon X device than on the older Snapdragon 835, but there are a lot of variables there (CPU much faster/wider, Windows 11 Prism vs. the Windows 10 emulator, x86 vs. x64 emulation). I have a program with native x64/ARM64 builds that runs about 25% slower in emulated x64 than native ARM64; I'm curious myself to see how it runs with volatile metadata disabled.
The interesting part is when the compatibility settings for the executables are modified to change the default multi-core setting from Fast to Strict Multi-Core Operation. In that mode, the build without volatile metadata runs about 20% slower than the default build. That indicates that the x64 emulator may be taking some liberties with memory ordering by default. Note that while this application is multithreaded, the worker threads do little and it is very highly single thread bottlenecked.
As someone with a first gen, the device trees are, as I understand it, one of the issues with trying to just install any distro, except for that special Ubuntu one.
I can't just (for example) grab the latest fedora, and try and run that.
Now, I haven't tried the latest beta of Fedora 43, but my guess is this won't change.
ACPI enters the chat... It can hand the kernel pieces of bytecode (AML) to interpret on any hardware event.
I have a Framework laptop and yeah the ACPI firmware is totally buggy and the Linux kernel fails at interpreting it in various cases.
The reality is this company is notoriously a law firm with a small technical staff on the side.