As another datapoint, Ian (of Anandtech) estimated that the M1 would need to be clocked at 3.25GHz to match Zen 3, and these systems are showing a 3.2GHz clock: https://twitter.com/IanCutress/status/1326516048309460992
M2, M3... that is when I think we will see stellar performance against things like Ryzen.
It's almost certainly better per watt, which I'd expect because the 5950X (and the 6-core 65W TDP 5600X, which also tops the MBA multi-core Geekbench result) are still desktop processors.
I’m excited for whatever is next.
The 5950X cores are actually reasonably power efficient. Anandtech has nice charts here: https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-di...
TL;DR is that 5950X cores draw about 6W each with all cores loaded at around 3.8GHz per core. They scale up to 20W in the edge case where a single core is loaded at a full 5GHz.
> And the M1 is running at a lower clock.
Comparing a power-optimized laptop chip to an all-out, top of the line desktop chip isn't a great way to compare efficiency because power scaling is very nonlinear. The AMD could be made more efficient on a performance-per-watt basis by turning down the clock speed and reducing the operating voltage, but it's a desktop chip so there's no reason to do that.
Look at the power consumption versus frequency scaling in the Anandtech chart for the 5950X: Going from 3.8GHz to 5.0GHz takes the power from about 6W to 20W. That's 230% more power for 30% more clockspeed. Apple is going to run into similar nonlinear power scaling when they move up to workstation class chips.
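To put numbers on that, here's a trivial sketch in C using just the two operating points from that chart (the closing comment about voltage is the standard dynamic-power explanation, not something measured there):

    #include <stdio.h>

    int main(void) {
        /* Operating points from the Anandtech 5950X scaling chart above */
        double p_lo = 6.0,  f_lo = 3.8;   /* watts, GHz: all cores loaded  */
        double p_hi = 20.0, f_hi = 5.0;   /* watts, GHz: single-core boost */

        printf("power: +%.0f%%\n", (p_hi / p_lo - 1) * 100);  /* ~ +233% */
        printf("clock: +%.0f%%\n", (f_hi / f_lo - 1) * 100);  /* ~ +32%  */

        /* Dynamic power goes roughly as f * V^2, and the boost point also
         * needs a higher voltage -- which is why power grows so much
         * faster than frequency. */
        return 0;
    }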
If you really wanted to compare power efficiency, you'd have to downclock and undervolt the AMD part until it performs similarly to the Apple part. But there's no reason to do that, because no one buying a top of the line 5950X cares about performance per watt, they just want the fastest possible performance.
An upcoming Zen 3 laptop chip would be a more relevant comparison. The Apple part is still going to win on power efficiency, though.
EDIT: sorry, I misread/skipped the "chip" part.
The difference being that Apple only sells theirs inside of $1000+ computers, and AMD has to make up the entire margin on their CPU alone.
Behold, the power of vertical integration.
Apple often has 2-3 future generations in development. This was just the first complete design they turned into a product.
That RAM design, tho...
It'd be more surprising at this point if it _wasn't_ more powerful.
You can check the clock speeds: https://browser.geekbench.com/v5/cpu/4620493.gb5
Up to 5050MHz is stock behavior for the 5950X and it's using standard DDR4 3200 memory.
Doubtful. You know they've been using ARM-based Macs with the requisite version of macOS for at least a year inside of Apple.
They've done a processor transition two other times; unlike the last two times, this time Apple controls the entire stack, which wasn't the case going from 68K to PowerPC or from PowerPC to Intel.
Apple has been designing their own processors for a decade now. There's nothing in the smartphone/tablet market that even comes close to the performance of the A series in the iPhone and iPad; there's no reason to believe this will be any different.
- Given that you can't add RAM after the fact and 256GB is anemic, the cheapest laptop that is a reasonable choice is $1,400.
- The cheapest desktop option is $6,000 with an 8-core CPU, or $8,000 with a 16-core.
- The average end user spends $700 on a computer.
- We literally have marketing numbers and a worthless synthetic benchmark.
I think it's entirely fair to say that the new Macs are liable to be fantastic machines, but there is no reason to believe that the advent of Apple-CPU Macs marks the end of open hardware. Were you expecting them to sell their CPUs to the makers of the cheap computers most people actually buy?
Apple has been running a version of OS X on these CPUs for 10 years now. The only thing which is "beta" here is Rosetta.
I'm not saying they do that, considering how much their products cost, I'm saying they could. That's what vertical integration brings to their table, above all else.
This includes a massive number of corporate desktops, which Apple often doesn't really compete with.
> The cheapest desktop option is $6000 with an 8 core cpu or 8000 with a 16 core.
?? The Mac mini is $699 with an M1, which is likely a far faster computer than most Windows desktops at that price. Likely significantly faster.
I don't think Apple is going to eat Windows alive; too many businesses have massive piles of Windows apps. I do see the potential for Apple to increase market share significantly, though.
I wouldn't expect them to sell their CPUs to others.
It’s weird though that they’re so vertically integrated and able to push performance as high as they have. I really enjoy my Linux system so I’m going to keep on doing that.
And also with RAM and SSD idiotically soldered in so 2 years later you need to spend another $6000, while a couple weeks ago I spent a grand total of $400 to upgrade my 2TB SSD to 4TB.
Like secure boot, just without an off switch
Not really. The M1 may objectively and factually be a very good CPU, but it comes bundled with the cost of being locked into a machine with a locked bootloader and not being able to boot any OS other than macOS.
And many people will find such a cost unacceptable.
This is the first of their CPUs. The iMac will almost certainly be running a higher end CPU which at the very least supports more RAM. It's likely the 16" MacBook Pro and the higher end 13" MacBook Pro will share a CPU with the iMac the same way the Mac mini and the MacBook Air share a CPU.
BlackBerry was the competing "smart" phone [1], and the newest releases were well under half the price of the iPhone with the same 2-year discount.
I had the BlackBerry Curve myself at that time, and the iPhone seemed way overpriced.
[1] https://techcrunch.com/2007/07/25/iphone-v-blackberry-side-b...
Now that things have settled a bit, I think maybe it isn't as bad as I thought. Had the MacBook Air been priced any lower, it would have seriously hurt sales of the 16" MBP. Once the MacBook Pro transitions to ARM, it's rumoured to get a Mini-LED screen refresh as 14" and 16" models. (Ming-Chi Kuo has been extremely accurate with regard to the display technology used on iPad and Mac.) So the MBP won't be lower in price but will offer more features (Mini-LED is quite costly). And possibly an M2 with HBM? I am not sure how Apple is going to cope with the bandwidth requirement. It would need to be quad-channel LPDDR5 at 200GB/s, or HBM2, if we assume the M2 will double the GPU cores again.
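Back-of-envelope on that bandwidth figure (a sketch; the M1 numbers are public, but the LPDDR5 speed grade and bus width are my guesses, not anything Apple has announced):

    #include <stdio.h>

    /* bytes/s = (bus width in bits / 8) * transfers per second */
    int main(void) {
        double m1 = (128 / 8.0) * 4266e6 / 1e9;  /* M1: 128-bit LPDDR4X-4266   */
        double m2 = (256 / 8.0) * 6400e6 / 1e9;  /* guess: 256-bit LPDDR5-6400 */

        printf("M1:  ~%.0f GB/s\n", m1);  /* ~68 GB/s                        */
        printf("M2?: ~%.0f GB/s\n", m2);  /* ~205 GB/s, i.e. the 200GB/s ask */
        return 0;
    }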
Maybe only then could Apple afford to offer a 12" MacBook at $799, with an educational price of $699. Although I am not sure if that is enough; Chromebooks in many classrooms are going for $299. Apple doesn't have to compete dollar for dollar on pricing, but a 2x difference is going to be a hard battle to fight. But at least it would allow Apple to win key areas of the education market where TCO and cost are not as stringent.
Maybe Apple will do just one more final update for some Intel Macs like the Mac Pro. (At least I hope they do, for those who really need an x86 Mac.)
Oh, and the M3 in 2022, still within the 2-year transition period: I think we are going to see a 3nm monster chip for the Mac Pro, while Intel is still on their 10nm. And I think 2022 is when we will see an Apple console, because I don't think the Mac Pro monster SoC's volume is enough to justify its own investment. Some other product will need to use it, and a game console seems like a perfect fit. (At least that is how I can make some sense of the Apple console rumours.)
The simpler ARM ISA has advantages in very small / energy-efficient CPUs, since the instruction-decode logic can be smaller, but this advantage grows increasingly irrelevant when you are scaling to bigger, faster cores.
IMHO these days ISA implications on performance and efficiency are being overstated.
Desktop CPUs differ from mobile CPUs mainly in how far they can boost more/all cores.
"Don't upgrade MacOS to x.0 version" is already a common idea. Why would it be any different for their hardware?
The iPhone helped clarify what a good interface looked like while prices came down and performance went up, positioning Apple well as a product category that was already a thing went mainstream.
Laptops aren't a new category and the majority will continue to buy something other than apple in large part because of the price.
Because hardware and software are very different. The M1 is the next stage of Apple's A series of SoCs—and they've shipped over 1.5 billion of those. I'd like to think all of the R&D and real-world experience Apple has gained since the A4 in 2010 has led to where we are today with the M1.
If anything, this simplifies things quite a bit compared to using an Intel processor, a Radeon GPU (on recent Macs with discrete graphics), Intel's EFI, etc. This transition has been in the works for several years and Apple knows they only get one shot at making a first impression; I'm pretty sure they wouldn't be shipping if they weren't ready. I'm not concerned in the least about buggy hardware. They just reported the best Mac quarter in the history of the company; it's not as if there's pressure to ship the new hotness because the current models aren't selling [1].
The release version of Big Sur for Intel Macs is 11.0.1 and I've been running it for 2 days now. It's been the smoothest macOS upgrade I've done in a long time—and I've done all of them, going back to Mac OS X Public Beta 20 years ago.
[1]: https://www.theverge.com/2020/10/29/21540815/apple-q4-2020-e...
Isn't the M1 fabbed on TSMC 5nm? Zen 3 is on 7nm. If a Zen 3 APU will run close to Apple Silicon I will be mightily impressed.
Generally, people are absolutely terrible at taking long term effects into account. I don't think many people are going to think twice about giving up their computing freedom.
But I think Apple's positioning as premium brand is going to ensure that open hardware keeps existing. And maybe we can even look forward to RISC-V to shake the CPU market up again.
Noooo, besides simply copying instructions 1-to-1, the process is way too involved, and imposes 40-year-old assumptions on the memory model and many other things, which greatly limits the number of ways you can interact with the CPU, adds to transistor count, and makes writing efficient compilers really hard.
(that was sarcasm. My take is this performance is impressive but you should not be surprised if it does not completely outperform CPUs that should be less efficient)
>Whilst in the past 5 years Intel has managed to increase their best single-thread performance by about 28%, Apple has managed to improve their designs by 198%, or 2.98x (let’s call it 3x) the performance of the Apple A9 of late 2015.
https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...
* 3800X (105W desktop) scores 2855
* 4900H (45W mobile) scores 2707 or 95% of 3800X
* 4750U (15W mobile) scores 2596 or 91% of 3800X
Doing it while not burning lots of Watts and being energy efficient is what Apple aims for.
And I doubt that AMD and especially Intel will offer an alternative here soon. Desktop yes, but not on mobile.
Rather, I suspect that the main benefit the M1 has in many real-world benchmarks is its in-package memory: cache-miss latency is a huge cost in the real world (it's why games have drifted towards data-oriented design internally), so sidestepping that issue to a large extent by integrating memory into the package gives it a great boost.
I'm betting once they've reverse engineered the M1 perf, we will see multi-GB caches on AMD/Intel chips within 4 years.
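A toy illustration of the cache-miss point (nothing M1-specific, just the textbook contrast that data-oriented design exploits): summing a contiguous array streams through memory and prefetches well, while chasing a pointer per element to randomly placed nodes pays cache-miss latency on most hops.

    #include <stdio.h>
    #include <stdlib.h>

    struct node { double value; struct node *next; };

    int main(void) {
        enum { N = 1 << 20 };
        size_t *order = malloc(N * sizeof *order);
        struct node *nodes = malloc(N * sizeof *nodes);
        double *arr = malloc(N * sizeof *arr);

        for (size_t i = 0; i < N; i++) order[i] = i;
        for (size_t i = N - 1; i > 0; i--) {        /* Fisher-Yates shuffle */
            size_t j = rand() % (i + 1), t = order[i];
            order[i] = order[j]; order[j] = t;
        }
        for (size_t i = 0; i < N; i++) {
            arr[i] = 1.0;
            nodes[order[i]].value = 1.0;            /* link in shuffled order */
            nodes[order[i]].next = i + 1 < N ? &nodes[order[i + 1]] : NULL;
        }

        double s1 = 0, s2 = 0;
        for (size_t i = 0; i < N; i++)              /* streams, prefetches */
            s1 += arr[i];
        for (struct node *n = &nodes[order[0]]; n; n = n->next)
            s2 += n->value;                         /* a miss on most hops */

        printf("%.0f %.0f\n", s1, s2);
        free(order); free(nodes); free(arr);
        return 0;
    }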
Not on Mac they don't. macOS isn't tied to the App Store in the same way that iOS devices are, and it probably accounts for a tiny percentage of third-party Mac software sales by value.
I, however, cannot find anything from Apple that says differently, or a source showing how unsigned systems can be booted on this chip.
The only thing I could find was Apple's statement that your system is even more secure now because unsigned code won't be run.
Do you have any resources I can read so we can clear up this misunderstanding?
Or are you referencing my auto-correct error, which replaced "cant" with "can"? If that is the case... I'm sorry for that, but it's too late to fix, and my intent is (I think) quite clear considering I said that they're both locked and this lock is without an off switch.
Any mac user could have seen this transition coming many years ago, and given up their platform of choice then on that prospect, but what good would that have done them? They wouldn't have got to enjoy anything.
Lastly, I do simply see it as a bit of a false dichotomy (or whichever fallacy is more accurate) to suggest that by using a mac that can't run other operating systems, you're giving up computing freedom. If I found it necessary to have a Windows or Linux machine, I'd simply just go get something that probably has better hardware support anyway. Yes conceivably Apple is setting some precedent that other manufacturers could follow, but in the previous example Apple is also just pushing you to buy their products instead.
The 16" MacBook Pro is only available with a discrete GPU, which I don't need but causes me tons of issues with heat and fan noise. The dGPU has to be enabled to run external monitors, and due to an implementation detail, the memory clocks always run at full tilt when the resolution of driven monitors doesn't match, resulting at a constant 20W power draw even while idle.
In the market, I think M1 systems will not alienate Apple-app-only users (Logic, Final Cut, Xcode-for-iPhone development) and may attract some purely single-page-application users.
Mostly, Zoom call efficiency will drive its broader adoption this year among the general population. If the Air is fast, quiet, and long lasting for Zoom calls, it will crush.
I won't buy one. I have a 32GB 6-core MBP that will satisfy my iOS dev needs until the M2 (and a clearer picture of the transition has developed). But I might start recommending Airs to the folks sitting around our virtual yule log this year.
This cannot be implemented in AMD's current 7nm process due to size restrictions.
The SoC side of the story also runs contrary to the core design of a general-purpose CPU. RAM, GPU, and extension cards for specialised tasks are already covered by 3rd-party products on the PCIe and USB4 buses, and AMD has no interest in cannibalising their GPU and console business...
With their upcoming discrete GPUs and accelerator cards, Intel might be in the same boat w.r.t. SoC design.
> Any mac user could have seen this transition coming many years ago, and given up their platform of choice then on that prospect, but what good would that have done them? They wouldn't have got to enjoy anything.
This could easily devolve into a "to Mac or not" type of discussion which I don't want to delve into, but I've personally never used a Mac (I have tried one) and I don't feel like I'm missing out because of it. Certainly the freedom to run any software and not be beholden to a large corporate interest is more important to me.
> Yes conceivably Apple is setting some precedent that other manufacturers could follow, but in the previous example Apple is also just pushing you to buy their products instead.
Yes, precedent, but also increased market share if they were to become more popular. One day, an alternative might not exist if we do not vote financially early enough. Therefore, my immediate urge is to say: no, I do not want to participate in this scheme. Make your hardware open or I will not buy it.
Apple is already doing quite well in the low-end education market with the base model iPad. These are competitive with Chromebooks on price. They also do a better job of replacing paper with Notability or GoodNotes and open up project opportunities with the video camera. Most kids seem to be fine with the on-screen keyboard, but that part is not ideal without an external keyboard/keyboard case.
I'm probably not the first or last to suggest this but... it seems awfully tempting to say: why can't we throw away the concept of maintaining binary compatibility and target some level of "internal" ISA directly (if Intel/AMD could provide such an interface in parallel to the high-level ISA)... with the accepted cost of knowing that ISA will change in not necessarily forward-compatible ways between CPU revisions.
From the user's perspective we'd either end up with more complex binary distribution, or needing to compile for your own CPU FOSS style when you want to escape the performance limitations of x86.
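We sort of have a taste of the "compile for your own CPU" mode already with GCC/Clang's -march=native, which targets exactly the features of the host CPU at the cost of portability to older machines — the same trade-off as an unstable internal ISA. A minimal sketch (the file and function names are just for illustration):

    /* build: cc -O2 -march=native dot.c */
    #include <stdio.h>

    /* A loop the compiler can auto-vectorize with whatever SIMD width
     * the host CPU happens to have (SSE2, AVX2, AVX-512, ...). The
     * resulting binary may not run on an older machine. */
    static double dot(const double *a, const double *b, size_t n) {
        double s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }

    int main(void) {
        double a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
        printf("%.0f\n", dot(a, b, 4));  /* 20 */
        return 0;
    }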
Source:
[1]: https://browser.geekbench.com/processors/amd-ryzen-7-3700x
[2]: https://browser.geekbench.com/processors/amd-ryzen-7-3700u
Back then, Intel was still betting on Itanium. It was a time when AMD was ahead of Intel. Wintel lasted longer, and it's only since the smartphone revolution that they got caught up. In hindsight, even a Windows computer on Intel gave a user more freedom than the locked-down stuff on, say, iOS. OTOH, sometimes user freedom is a bad thing, arguably if the user isn't technically inclined, or if you can sell a locked-down platform like PlayStation or Xbox relatively cheap (kind of like the printer business).
I'm sure other people can add to this as well. :-)
Renoir is 7nm Zen 2 aka the 4000 series. https://en.wikichip.org/wiki/amd/cores/renoir
Matisse is also 7nm Zen 2 aka the desktop 3000 series. https://en.wikichip.org/wiki/Matisse
Picasso is 12nm Zen+ aka the mobile 3000 series. https://en.wikichip.org/wiki/amd/cores/picasso
There is a social experiment about that, running since at least 2007. It's the smartphone and the tablet. I think I don't have to detail it and all of us can assess the benefits and the problems. We could have different views though.
By the way, I wonder if the makers of smartphone hardware and/or software could do all of their work, including the creation of new generations of devices, using the closed systems they sell (rent?). I bet they couldn't, not all of their work, but it's an honest question.
Also notice this result is using clang9 while the MacBook results are using clang12. I assume clang12 has more and better optimizations.
* 3800XT = 1357 (100%)
* 4800H = 1094 (~80%)
* 4800U = 1033 (~76%)
I would expect a 5800U to score at best around 1500, but realistically closer to 1300-1450. That's still behind the M1, but pretty darn close for being a node behind (and it will still probably be faster for applications that would require x86 translation).
I understand you are being sarcastic, but no, that's not what I'm saying.
It is Apple Silicon that is faster (at least on paper). I'm saying that even though AMD will have worse perf/watt, I think it will get impressively close despite its less efficient fabrication process.