I can't speak to the battery life, however, since it is dismal on my Dev Kit ;-)
Day-to-day, it's all fine, but I may be returning to x64 next time around. I'm not sure that I'm receiving an offsetting benefit for these downsides. Battery life isn't something that matters for me.
What I'm trying to say is that the scope is very different (and smaller) there. There were a tonne of things that didn't work on Macs both before and after, and the migration wasn't that perfect either.
Microsoft is trying to retain binary compatibility across architectures with the ARM64EC stuff, which is both intriguing and horrifying. They didn't, however, put any effort into ensuring Qualcomm implemented the hardware side well. Unlike Apple, Qualcomm has no experience in making good desktop systems, and it shows.
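For context, ARM64EC is Microsoft's ABI that lets native Arm64 code and emulated x64 code coexist in the same process. A minimal sketch of what opting in looks like, assuming an MSVC toolchain on an Arm64 box (the /arm64EC compiler flag; treat the exact command line as an assumption rather than gospel):

    /* hybrid.c - nothing architecture-specific in the source itself.
       Assumed build command (MSVC developer prompt on Arm64):
         cl /arm64EC /c hybrid.c
       The resulting object uses the x64-compatible ABI, so it can sit
       in the same process as emulated x64 code. That's the intriguing
       and horrifying part: the compatibility lives in the ABI and the
       loader, not in the application source. */
    #include <stdio.h>

    void hello_from_arm64ec(void)
    {
        printf("running as ARM64EC code\n");
    }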
When I'm home, I often just remote desktop into my laptop.
I'm wondering if remoting into ARM Windows is as good?
There is one issue I ran into that I haven't on my (self-built) Windows desktops: when Windows Hello (fingerprint lock) is enabled, and neither machine is on a Windows domain, the RDP client will just refuse to authenticate.
I had to use a trick to "cache" the password on the "server" end first, see https://superuser.com/questions/1715525/how-to-login-windows...
Linux is different. Decades of being tied to x86 made the OS way more coupled with the processor family than one might think.
Decades of bugfixes, optimizations, and workarounds were made assuming a standard BIOS and ACPI.
Especially on the desktop side.
That, and the fact that SoC vendors are decades behind on driver quality. They remind me of the NDiswrapper era.
Also, a personal theory of mine is that people have unfair expectations of ARM Linux. Back when x86 Linux had similar compatibility problems, there was nothing to compare it with, so people just accepted that Linux was going to be a pain and that was it.
Now the bar is higher. People expect Linux to work the way it does on x86 in 2025.
And manpower in FOSS is always limited.
Not to mention 68K -> PowerPC.
Rhapsody supported x86, and I think during the PowerPC era Apple kept creating x86 builds of OS X just in case. This may have helped to keep things like byte order dependencies from creeping in.
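As a hypothetical illustration of the kind of byte-order dependency that creeps in (not anything from actual OS X code), consider reinterpreting the bytes of a wider integer:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t value = 0x11223344;
        /* Reading the first byte through a narrower pointer is
           endian-dependent: little-endian x86 prints 44, big-endian
           PowerPC prints 11. Code only ever built for one byte order
           quietly accumulates assumptions like this, and keeping an
           x86 build alive flushes them out early. */
        uint8_t first = *(const uint8_t *)&value;
        printf("first byte in memory: %02x\n", first);
        return 0;
    }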
So it is rather easy to end up having to deal with nested virtualization, even for those of us who seldom use WSL.
So if you support macOS/x86, macOS/ARM, and Windows/x86, then the additional work to add Windows/ARM is rather small, unless you do low-level stuff (I remember the Fortnite WoA port taking almost a year from announcement to release due to anticheat).
This doesn't pass the smell test when Linux has powered so many smart/integrated devices and IoT on architectures like ARM, MIPS, and Xtensa for decades.
I didn't even count Android here, which runs the Linux kernel as a first-class citizen on billions of mostly ARM-based phones.
Anyway, before dropping 32-bit, they dropped PowerPC.
Another consideration: Apple is the king of dylibs; you're usually dynamically linking against the OS frameworks/libs, so they can actually plan their glue smarter and keep the frameworks working in the native arch. (That was really important with PPC->Intel, where you also had big-endian issues...)
I think another reason is Apple's control over the platform vs Microsoft's. Apple has the ability to say "we're not going to make any more x86 computers, you're gonna have to port your software to ARM", while Microsoft doesn't have that ability. This means that Snapdragon has to compete against Intel/AMD on its own merits. A couple months after X Elite launched, Intel started shipping laptops with the Lunar Lake architecture. This low-power x86 architecture managed to beat X Elite on battery life and thermals without having to deal with x86 emulation or poor driver support. Of course it didn't solve Intel's problems (especially since it's fabricated at TSMC rather than by Intel), but it demonstrated that you could get comparable battery life without having to switch architectures, which took a lot of wind out of X Elite's sails.
BTW. A more common term for what Rosetta does is "binary translation". A "transpiler" typically compiles from one high-level language to another, never touching machine code.
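To make the distinction concrete, here's a deliberately toy sketch (invented opcodes, nothing to do with how Rosetta actually works): binary translation rewrites machine code into other machine code, whereas a transpiler rewrites source text into other source text.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy "source ISA":  0x01 imm8 = ADD r0, imm8   0x02 = HALT
       Toy "target ISA":  0x10 imm8 = ADDI r0, imm8  0xFF = STOP
       The translator never sees any source code, only bytes. */
    static size_t translate(const uint8_t *in, size_t n, uint8_t *out)
    {
        size_t i = 0, o = 0;
        while (i < n) {
            if (in[i] == 0x01 && i + 1 < n) {   /* ADD  -> ADDI */
                out[o++] = 0x10;
                out[o++] = in[i + 1];
                i += 2;
            } else if (in[i] == 0x02) {         /* HALT -> STOP */
                out[o++] = 0xFF;
                i += 1;
            } else {
                i += 1;                         /* skip unknown bytes */
            }
        }
        return o;
    }

    int main(void)
    {
        const uint8_t program[] = {0x01, 0x2A, 0x02}; /* ADD r0, 42; HALT */
        uint8_t out[8];
        size_t len = translate(program, sizeof program, out);
        for (size_t k = 0; k < len; k++)
            printf("%02X ", out[k]);
        printf("\n");  /* prints: 10 2A FF */
        return 0;
    }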
Note that when the Windows host is invisibly running under Hyper-V, your other Hyper-V VMs are its "siblings" and not nested children. You're not using nested virtualization in that situation. It's only when running a Hyper-V VM inside another Hyper-V VM. WSL2 is a Hyper-V VM, so if you want to run WSL2 inside a Windows Hyper-V VM which is inside your Windows host, it ends up needing to nest.
That's just creation of a recovery drive for anything that Microsoft itself makes. It's the same process for the Intel Surface devices too.
>no Media Creation Tool
Why would anyone care about that? Most people actively avoid Microsoft's Media Creation Tool and use Rufus instead.