[1] https://browser.geekbench.com/v5/cpu/search?q=MacBook+pro+16
You can see all Air results so far here: https://browser.geekbench.com/v5/cpu/search?q=MacBookAir10%2...
I'll be interested to know how much active cooling and binning really do for Rustc or LLVM compilation times.
And the GPGPU: https://browser.geekbench.com/v5/compute/compare/1800719?bas...
1.5x single-core perf.
M1 MacBook Pro vs Intel MBP (top specs) show same performance: https://browser.geekbench.com/v5/cpu/compare/4652718?baselin...
Likely because GB5 doesn't run long enough to trigger thermal throttling on the M1 MBA.
M1 is beating all CPUs on the market in single-core scores: https://browser.geekbench.com/processor-benchmarks (M1 at 1719, vs AMD Ryzen 9 5950X at 1628).
Anandtech on the memory-affinity of GeekBench vs SPEC:
> There’s been a lot of criticism about more common benchmark suites such as GeekBench, but frankly I've found these concerns or arguments to be quite unfounded. The only factual differences between workloads in SPEC and workloads in GB5 is that the latter has less outlier tests which are memory-heavy, meaning it’s more of a CPU benchmark whereas SPEC has more tendency towards CPU+DRAM.
For what it's worth, I have a fully specced out 16 inch MacBook Pro with the AMD Radeon Pro 5600M and even with that I'm regularly hitting 100% usage of the card, not to mention the fan noise.
Looking forward to a version from Apple that is made for actual professionals, but I imagine these introductory M1 based devices are going to be great for the vast majority of people.
Intel is now #3
Hopefully they'll reverse that decision. (or their GPUs in the higher end machines will have to be really good)
That said, I am definitely waiting for at least one generation to pass before I jump on the train.
I’ve been through this before. It will be great, but Apple is a master at smoke & mirrors. Things will not go as smoothly as the sizzle reels make it seem.
Also the i7-1165G7 is a 12-28w part, configurable by the OEM. I'd assume the XPS 13 is running it at top spec, but that'd also need validation.
I'm just making sure everyone understands that it does not mean the Air will have the same real-life performance as the Pro or Mini. If you compare the top MacBook Air 2020 (Intel)[1] with the lowest MacBook Pro 2020 (Intel)[2], their results are almost identical, but their real-life performance was not even close - the Air starts to throttle after just a few minutes of work. At the moment there is no reason to believe that Apple's CPU will behave differently (after all, the fan is in the Pro and Mini for a reason).
[1] https://browser.geekbench.com/v5/cpu/4652268 [2] https://browser.geekbench.com/v5/cpu/4653210
https://developer.apple.com/documentation/apple_silicon/abou...
The more interesting thing is the power efficiency, which doesn't have that much impact on single thread performance because higher power CPUs don't actually use their entire power budget for a single thread. But that's an impressive multi-threaded score for that TDP. It gets stomped by actual desktop CPUs for the obvious reason, but it has better multi-threaded performance than anything with the same TDP. Though that's also partially because the low-TDP Zen 3 CPUs aren't out yet.
What I'd really like to see is some benchmarks that aren't geekbench.
Everyone is on Xcode these days and using much higher level frameworks. Adobe and Microsoft already announced early 2021 availability of native binaries.
Besides, Rosetta 2 is even more impressive than the original was. I’m betting it will be a breeze. 6 months in and almost everything will be native.
Check out for example the per core power charts that Anandtech does: https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-di...
Compare for example the 1 core power numbers between the chips. The 5600X 1 core result is 11w @ 4.6ghz, whereas the other two chips boost higher and hit 4.8-4.9ghz 1 core turbos, but it costs 17-18w to do it. Huge increase in power for that last 1-2% performance. So you really can't or shouldn't compare more power-concious configurations with the top end desktop where power is infinite and well worth spending for even single digit percentage gains.
And then of course you should also note that the single-core power draw in all of those is vastly lower than their TDP numbers (65w for the 5600x, and 125w for the 5800x/5900x).
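To put rough numbers on that, a back-of-the-envelope sketch in a few lines of Swift using only the figures quoted above (clock per watt only, ignoring IPC and platform power, so purely illustrative):

    // Single-core clock-per-watt from the Anandtech figures quoted above.
    // 4.85 GHz / 17.5 W are midpoints of the quoted 4.8-4.9 GHz / 17-18 W ranges.
    let r5600x = (ghz: 4.6, watts: 11.0)
    let higherBoostParts = (ghz: 4.85, watts: 17.5)
    print(r5600x.ghz / r5600x.watts)                      // ~0.42 GHz per watt
    print(higherBoostParts.ghz / higherBoostParts.watts)  // ~0.28 GHz per watt

Roughly a third of the clock-per-watt disappears chasing that last 200-300MHz.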
https://www.realworldtech.com/forum/?threadid=185109&curpost...
> fun fact: retaining and releasing an NSObject takes ~30 nanoseconds on current gen Intel, and ~6.5 nanoseconds on an M1
> …and ~14 nanoseconds on an M1 emulating an Intel
https://mobile.twitter.com/Catfish_Man/status/13262384342355...
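If you want to reproduce that kind of number yourself, here's a minimal Swift sketch of my own (not Catfish_Man's methodology; build with -O, and expect the loop overhead to add a little noise):

    import Foundation

    final class Dummy: NSObject {}

    let obj = Dummy()
    let unmanaged = Unmanaged.passUnretained(obj)
    let iterations = 10_000_000

    let start = DispatchTime.now().uptimeNanoseconds
    for _ in 0..<iterations {
        _ = unmanaged.retain()   // objc_retain
        unmanaged.release()      // objc_release
    }
    let elapsed = DispatchTime.now().uptimeNanoseconds - start
    print("\(Double(elapsed) / Double(iterations)) ns per retain/release pair")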
I won't be replacing my workstation this weekend but then, I won't be updating it to Big Sur just yet either. I am getting a new personal machine sometime in the next year though I don't imagine I'll be buying an Intel Mac.
https://browser.geekbench.com/v5/cpu/compare/4654605?baselin...
Opinion: pretty impressive story
The reason that graph doesn't include the A14 Firestorm -> M1 jump was simply timing. We know the thermal envelopes of the M1 and the cooling designs. We now have clock info thanks to GB5. So yes, the data is pretty solid. No one's saying that the iPhone beats the Mac (or a PC) at performance when you consider the whole system. Just that the CPU architecture can and will deliver higher performance given the M1 clock, thermals and cooling. Remember that the A14/M1 CPUs are faster even at lower clock speeds.
That's why the A14 only runs at a 1.8GHz base and 3GHz boost; that's how it keeps power consumption low. And similarly, Intel pushing 5GHz is why its power consumption is so high.
TSMC's 5nm will have a raw transistor performance/watt advantage, but it's not huge
It will be very interesting to see what the performance will be of the more "pro" chip that overcomes those limitations that they'd put in the 16" and iMacs
Doesn't seem very "pro" to me. The MBP16" intel has 4 x USB-c ports, can drive two monitors, and can have >= 32GB ram.
These types of results make me hopeful for the future of Apple hardware. I've always been a huge fan of OSX, but I was pretty sure this MBP would be the last Apple laptop I would buy. Looking forward to being wrong.
Mid-2015 15" MacBook Pro vs MacBookAir10,1 https://browser.geekbench.com/v5/cpu/compare/4643216?baselin...
I’d love to see one of these go head to head compiling a large project etc.
Here is the image: http://www.cgchannel.com/wp-content/uploads/2020/06/200623_A...
https://browser.geekbench.com/v5/cpu/compare/4306696?baselin...
Single core similar and multicore 50% better for MacBook.
Microsoft is working on enabling x64 emulation on ARM, it should roll out in preview this month[1]. I can see Windows 10 ARM-edition working inside Parallels with its own x64 emulation inside. The issue right now is that MS does not sell Win 10 ARM, it is available for OEMs only.
x86 emulation on Windows 10 ARM was already done a few years ago, when MS shipped their Surface ARM notebook.
[1] https://blogs.windows.com/windowsexperience/2020/09/30/now-m...
Which makes it even more impressive.
Power/thermal management looks very good - low heat and long battery life without sacrificing performance.
Presumably the Air will have to throttle performance for some workloads, but not in this benchmark apparently.
https://browser.geekbench.com/macs/457
M1 is comparable to baseline Mac Pro on multicore performance and better on single core performance. And several thousand dollars cheaper (and smaller).
Everybody throws the term around and no two people have the same definition! What in the world is an actual professional? There are professional journalists that just need a browser and text editor. There are professional programmers working on huge code bases in compiled languages that do need a beefy machine, and there are professional programmers that just need a dumb terminal to ssh into a dev machine in the cloud.
And then of course what the largest subset of people seem to mean is professional video editors or content creators. What percent of the working population are video editors? Some tiny fraction, how did that become the default type of professional in the context of talking about computers?
And then a lot of things that people also complain about, like how replacing the wider variety of ports with USB-C or Thunderbolt is contradictory on a “professional” machine, also don't really make sense. Professionals can use dongles like anyone else. In fact many professionals will have more specific needs that require a dongle anyway; for instance, a built-in SD card reader doesn't help a professional photographer using CFexpress cards.
https://ark.intel.com/content/www/us/en/ark/products/130411/...
Although it's worth remembering the m1 is just Apple's CPU for their low-end machines. They haven't announced their high-end 13" & 16" MacBook Pro, iMac and Mac Pros.
Also, I know Electron has a history of high-memory, but right now my Slack client is currently running at 500MB and my VS Code windows (3 projects) at < 1.5GB.
And even though my current MBP16 is spec'd at 64GB RAM, I don't think 16GB would feel all that slow these days.
Assuming they can build it (and they have implied that they can scale their silicon designs up in terms of cores, power, and clock rate), an Apple Silicon Mac Pro will be a pretty interesting machine.
If they wanted to, Apple could even bring back an Apple Silicon powered Xserve, or the legendary, mythical, modular desktop Mac (I know, now we're in the realm of pure fantasy, but one can dream.)
So can anyone comment on how we should interpret these benchmarks:
a) Across the various Mac models (i.e. why buy a pro vs the air vs the mini). If they all benchmark the same, what am I paying the difference for?
b) The M1 chip vs a ryzen desktop (naively interpreting this its punching up near a 5950X, which seems TGTBT?). How do these chips compare against the larger TDP competitors if portability and batteries aren't an issue?
b2) Compared to the AMD mobile chips? (i don't have much insight into these)
c) someone else on here posted a Tiger-lake intel chip benchmark beating the M1. Now that makes even less sense to me now, because now there's an intel mobile chip beating the top of the line Ryzen desktop?
Help out an honestly confused individual.
https://browser.geekbench.com/macs/mac-pro-late-2019-intel-x...
How is Apple so far ahead even with the same instruction set?
Given their performance/watt, this sounds like it could be potentially game changing.
It's unfortunately drowned out by the CPU throttling scandal when you google it, but it's well known in AR dev (and if you get to talk to an Apple engineer away from the stage lights at WWDC) that you have to proactively choose to tune for performance, or you'll get killed after a minute or two due to thermal throttling.
https://www.realworldtech.com/forum/?threadid=136526&curpost...
https://www.realworldtech.com/forum/?threadid=185109&curpost...
Personally I'm not convinced, until I see something like SPECint/SPECfp results.
It doesn't make much sense for the Macbook Pro to have a lower score than the Macbook Air.
Assuming it is sampling error?
https://appleinsider.com/articles/20/11/11/how-apple-silicon...
I’m not sure if it supports it immediately, but this isn’t a difficult change, so it will certainly come soon.
If you want to make an apples to oranges comparison that’ll really be the one to make.
Say what? I have an LG 5K and two 27” Apple Thunderbolt displays (four screens total including the laptop display) hooked up to my 16” MBP, and the fans are definitely nowhere near full speed, unless I’m compiling or in a Google Hangout that is...
And it's doing this while using more than an order of magnitude less power (10W vs. a TDP of 125W for that Intel part).
That's stunning.
Intel, Qualcomm, Microsoft - all have to build products that work for the lowest common denominator. Loss of focus is a major problem.
Apple has a handful of products. One OS. One developer platform.
This kind of agility is extremely powerful. They can switch fabs whenever it makes sense. They can switch ISAs whenever it makes sense.
Contrast with Microsoft, that has to support so many hardware platforms. They’re not helping themselves with so many software frameworks - Win32, WinRT, .NET, MFC, WinJs? I’ve lost count.
Intel is handicapped stuck to their process nodes.
Qualcomm, while they’ve effectively captured the mobile SOC market, they too have the same problem. They can’t control what handset makers do. So they can only go so far.
Apple can make a single CPU core and mix and match that with variations. Things get a lot easier if you just have to deal with yourself e2e - even as far as retail sales.
The Vega II is even faster (but quite a bit more expensive).
Apple is ahead because they have more money than Arm, Qualcomm, Samsung, etc. combined.
https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...
I’m also interested in seeing what M1 can do once people get their hands on real hardware running on Mac OS where so many more details can’t be hidden like in iOS, but all signs point to it being an absolute monster.
Docker still doesn't run on Apple Silicon Macs, so the migration path is already disrupted here.
Well, that is an easy question - actual professionals are people involved in making, marketing and distributing porno
And that's just web design.
Now open your eyes and look around your room. Everything you see was designed by someone, most likely on a Pro.
For GCC targeting arm64 macOS, the dev branch is at https://github.com/iains/gcc-darwin-arm64/ currently.
For Rosetta, everything runs except virtualisation apps.
But this is different. There's the convenience factor here. How far does M1 have to go before commodity DL training is feasible on commodity compute, not just GPU?
Look at N64 emulation, for example.
Interesting times.
This is an interesting thing to say considering that 1. most apps on iOS use large dynamic libraries and 2. Apple silicon Macs run on 16K pages, just like iOS.
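An easy way to see the page size for yourself (an illustrative Swift one-liner, nothing from the parent comment):

    import Darwin

    // Prints 16384 on Apple silicon Macs (and recent iOS devices), 4096 on Intel Macs.
    print("page size:", sysconf(_SC_PAGESIZE), "bytes")

The same value is visible from the shell via `sysctl hw.pagesize`.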
I think it’s because youtubers like linustech etc are the type of people who review these laptops, they live in their bubble that only “real” professionals are those who edit videos.
The point is not a professional using a computer, it's a professional computer user - someone who doesn't just use a computer to do their work, but someone for whom the limits of the computer are the limits of the work they can do.
Current (and presumably future) macOS does this and you can’t turn it off, except with Little Snitch. New APIs in macOS 11 means that Little Snitch will no longer be able to block OS processes, so it will require external network filtering hardware.
I’ll likely end up with Linux on the 16”, and use the new one for things that are not secret/private.
Last time I read, the Windows ARM version isn't ready for everyone yet, not to mention there's no transition kit at all for everyone to move.
Why would anyone use Windows anymore unless your infra site needs IE?
Single customer. Apple can optimize directly for iOS workload and consider nothing else.
Intel sells into the general market and has to hit sometimes conflicting goals so that Dell, HP, Lenovo, etc. will all buy their chip.
e.g. In this definition, if you use a PC at your office and a laptop at home for non-work stuff, your work PC is being used by a "professional" and your home laptop isn't.
I like the bigger screen, so I'll hold out on this platform for now. Pretty impressed with where the M series is going, though; might hold out two iterations instead of the four I had in mind before the M1 dropped.
[1] https://www.extremetech.com/computing/315733-64-bit-x86-emul...
[2] https://www.microsoft.com/en-us/surface/business/surface-pro...
Now, I can say hey if you get one of the new macbooks / mac minis, you can run your iphone apps natively. He probably will be one of their first customers.
I wouldn't be surprised if there was a huge 'professional' market that is untapped: not the traditional professional market, but the people who want something as simple & familiar as an iPhone, on a desktop.
In this context, a professional is someone whose productivity is limited by the power of the computer. A developer compiling large code base on their computer. A person doing video editing. These are examples. And maybe “professional” is the wrong term, but I think that is what people are aiming for when they use that word in this context.
I wonder if that includes AirPlay screens, or just wired.
People also regularly misplace their importance and prevalence in industry. For instance, you see more Linux than Mac in the big color and VFX houses.
It's important to trust and value your tools, which is how prosumers generally feel about their Macs, and they do make nice frontends for the computers that perform the actual work.
But regardless of professional requirements, "Pro" in Apple's product line just means "the more expensive slightly better version." Nobody's arguing that AirPods Pro are the wireless earbuds of choice for people who make money by listening to wireless earbuds.
They get outsized influence because they're the ones that make the shiny youtube reviews (using their video editing skills).
https://www.microsoft.com/en-us/surface/business/surface-pro...
But Apple absolutely dominates:
https://browser.geekbench.com/v5/cpu/compare/4648107?baselin...
For most people, their ability to use the computer is the blocker. This also involves the ability to make sane file format/compression decisions if they work with graphics or video.
Last thing I need is a MacBook Air equivalent with an unnecessarily loud and annoying fan.
That's the wrong conclusion to make. For instance, the Lenovo ThinkBook 14s (with a Ryzen 4800u) with a 15W TDP posts the same Geekbench multicore scores [1] as the M1 Macbook. But the ThinkBook isn't in any way faster than the top-end iMac for real world compute intensive tasks.
The M1 certainly looks efficient, but there's little you can conclude from a single benchmark running for a very short period of time.
As another datapoint Ian (of Anandtech) estimated that the M1 would need to be clocked at 3.25Ghz to match Zen 3, and these systems are showing a 3.2Ghz clock: https://twitter.com/IanCutress/status/1326516048309460992
It's been a running joke in corporate for years that Apple's "premium fan noise" is a brilliant branding move because you can identify the Mac users as soon as they unmute.
I suspect the M1 powered Macs will be hugely successful and very useful for multiple types of users.
It's not really Apples to Apples (even if it is in name), so to speak.
The fanless Intel Core-M CPUs could post excellent benchmark scores (for its time). But if you give it a lengthy compile task, it'll slow down dramatically.
AMD Radeon Pro 5600m memory bandwidth is 394.2 GB/s (2048 bit HBM2)
https://www.techpowerup.com/gpu-specs/radeon-pro-5600m.c3612
I'm a Software Engineer / pro photographer / videographer.
I sling code from time to time, edit thousands of RAW files from my cameras and edit together 1080p footage day in and day out.
I did that for years on a 2013 MBA with 8GB of RAM. Now I have a 2015 MBP with 16GB of RAM. It's perfectly adequate.
So happy Apple is dumping Fantel chips
Back in the day I had 2 Sun 20" GDM20E20s (1997), which was major $$, and after that I always had 2 monitors, moving at some point to a single ultrawide LG (which are pretty neat). One day I looked at my setup and how I used it and realized I did not look at all of the screen. I swapped it for a small single Apple LG 4K and it turns out I am very happy. The dense nature of the 4K was a game changer. I plan on getting an 8K when it comes out.
https://appleterm.com/2020/10/20/macos-big-sur-firewalls-and...
Looks like it will be impossible to use Apple Silicon (without external network hardware) without revealing your track log to the CIA. How cool is that?!
M2, M3... that is when I think we will see stellar performance against things like Ryzen.
- All M1 models only have 1 Thunderbolt Controller, thus can only handle 2 Thunderbolt ports on all announced models so far.
- All M1 models only support 1 monitor, but up to 6K.
- No M1 model supports >16GB of RAM or 10Gb Ethernet.
All of the above seem like bandwidth limitations to save cost that a future "M1X" or "M2" or "X1" would be extremely likely to fix, and that's where you'll see the bandwidth increase.
If anything, it's a bonus to have the fan so you can have prolonged boost performance, while the MBA chip will throttle under continuous load.
It's almost certainly better per watt, which I'd expect because the 5950X (and the 6-core 65W TDP 5600X, which also tops the MBA multi-core Geekbench result) are still desktop processors.
Given how powered up the Air has become, this is a thin envelope.
You can certainly choose to view this as a step towards bolstering the walled garden... but my money is on Apple wanting to innovate and knowing that they can. Vertical integration does a lot for efficiency, cost, etc... as well and not just keeping people in your walled garden.
People forget: Apple has been shipping their own silicon for years. If you had stellar cpus you could make in house, wouldn’t you rather use them instead?
It's an apples and oranges comparison. The M1 GPU is an integrated solution targeted at the cheapest Macs. The 5600M is external GPU targeted at $4000+ top of the line Macs.
There's no reason that higher end Apple silicon machines can't also use external GPUs like the 5600M (after development, of course).
I think I would rather take a small performance hit and some heat than have Apple be quick to pull the trigger on a fan blasting noise as I'm trying to focus (if they yield comparable stats).
Will wait for more info. If this chip is really that much more efficient hopefully we are back to the good old days where MBP fan is tolerable. Otherwise I’m all in on Air
I’m excited for whatever is next.
Actually, very, very few tasks on a desktop can stress a CPU enough to saturate the bandwidth on a single task.
Proper HPC programs written to have absolutely zero cache misses can. A JavaScript app eating 1GB of RAM to show some text and pictures cannot.
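For what it's worth, even a naive single-threaded streaming loop gets a long way toward the bandwidth ceiling. A rough STREAM-triad-style Swift sketch (assumptions: arrays sized well past the caches, built with -O; one core alone usually can't saturate the whole memory system, so this understates the aggregate number):

    import Foundation

    let n = 64_000_000                                  // ~1.5 GB touched per pass
    var a = [Double](repeating: 0, count: n)
    let b = [Double](repeating: 2, count: n)
    let c = [Double](repeating: 3, count: n)

    let t0 = Date()
    for i in 0..<n { a[i] = b[i] + 2.5 * c[i] }         // triad: 2 reads + 1 write per element
    let seconds = Date().timeIntervalSince(t0)

    let bytesMoved = Double(3 * n * MemoryLayout<Double>.size)
    print(String(format: "~%.1f GB/s (check: %.1f)", bytesMoved / seconds / 1e9, a[n - 1]))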
Video editing's all done on disk isn't it? It's not like editing programs are loading a 20 GB file into memory?
Even just 8 GB has never given me any problems with video editing.
What am I missing here?
We could even compare some cross platform apps across both OS and cpu and see how the total package performs.
The Gravitons are based on Cortex-A76 aren't they? Don't phones with that architecture benchmark similar to an Apple A10?
The 5950X cores are actually reasonably power efficient. Anandtech has nice charts here: https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-di...
TL;DR is that 5950X cores draw about 6W each with all cores loaded at around 3.8GHz per core. They scale up to 20W in the edge case where a single core is loaded at a full 5GHz.
> And the M1 is running at a lower clock.
Comparing a power-optimized laptop chip to an all-out, top of the line desktop chip isn't a great way to compare efficiency because power scaling is very nonlinear. The AMD could be made more efficient on a performance-per-watt basis by turning down the clock speed and reducing the operating voltage, but it's a desktop chip so there's no reason to do that.
Look at the power consumption versus frequency scaling in the Anandtech chart for the 5950X: Going from 3.8GHz to 5.0GHz takes the power from about 6W to 20W. That's 230% more power for 30% more clockspeed. Apple is going to run into similar nonlinear power scaling when they move up to workstation class chips.
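A quick sanity check on that nonlinearity in a few lines of Swift, assuming the usual dynamic-power rule of thumb (P ≈ C·V²·f with voltage scaling roughly with frequency, so power grows roughly with f³; the model is an assumption, the wattages are from the chart):

    // Per-core figures quoted from the Anandtech 5950X data above.
    let low  = (ghz: 3.8, watts: 6.0)
    let high = (ghz: 5.0, watts: 20.0)

    let clockRatio = high.ghz / low.ghz                    // ~1.32x clock
    let powerRatio = high.watts / low.watts                // ~3.33x power
    let cubeRule   = clockRatio * clockRatio * clockRatio  // ~2.3x predicted by the f^3 rule
    print(clockRatio, powerRatio, cubeRule)

The measured jump is even steeper than the cube rule predicts, which is typical right at the top of the voltage/frequency curve.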
If you really wanted to compare power efficiency, you'd have to downclock and undervolt the AMD part until it performs similarly to the Apple part. But there's no reason to do that, because no one buying a top of the line 5950X cares about performance per watt, they just want the fastest possible performance.
Comparing to an upcoming Zen3 laptop chip would be a more relevant comparison. The Apple part is still going to win on power efficiency, though.
The air is basically 50% faster... incredible.
EDIT:// sorry, I misread/skipped the "chip" part.
I would say that generally, a "professional" user of pretty much any tool, is someone for whom the tool's quality is a constraint on their professional productivity.
A professional paint user is an artist. A professional telescope user is an astronomer—or a sniper. A professional typewriter user is a stenographer. A professional shoe user is an athlete.
In all these cases, it's the quality and innovation in the tool, that's holding these professionals back from being even better at their job than they already are.
Also, take special note of the case of the stenographer: professionals often require special professional variants of their tools, which trade off a longer learning curve for a higher productivity ceiling once learned. A stenographic keyboard takes years to learn, but all non-stenographic keyboards cap out at 150WPM, while stenographic keyboards allow those trained in their use to achieve 300+WPM.
And to make one more point: a professional car driver isn't a race-car driver. A professional car driver is a chauffeur. Rolls-Royce's cars aren't famous for how luxurious they are to drive; they're famous for having all the amenities needed by professional drivers — chauffeurs — to allow them to efficiently cater to their clients' needs. Limousines are the same kind of "professional tools" that stenographic keyboards are: they increase chauffeuring productivity.
> How did [video editing] become the default type of professional in the context of talking about computers?
Because all tech vloggers and most tech pundits — the people who review tech — edit videos as part of their jobs, of course ;)
The difference being that Apple only sells theirs inside of $1000+ computers, and AMD has to make up the entire margin on their CPU alone.
Behold, the power of vertical integration.
Apple, it's been good knowing you, time to move on.
Apple is a vertical. They own their market, so the investment has more predictable returns.
Many use cases, especially when it comes to microservices, can almost always be better served by something else
Apple often has 2-3 future generations in development. This was just the first complete design they turned into a product.
That RAM design, tho...
Here's a 173 page thread on MacRumors about it: https://forums.macrumors.com/threads/16-is-hot-noisy-with-an...
It'd be more surprising at this point if it _wasn't_ more powerful.
For laptops, you can use an external dongle. It has its uses, like for a NAS on a 10G network for video or other data-intense work.
You can check the clock speeds: https://browser.geekbench.com/v5/cpu/4620493.gb5
Up to 5050MHz is stock behavior for the 5950X and it's using standard DDR4 3200 memory.
Doubtful. You know they've been using ARM-based Macs with the requisite version of macOS for at least a year inside of Apple.
They've done a processor transition two other times; unlike the last two times, this time Apple controls the entire stack, which wasn't the case going from 68K to PowerPC or from PowerPC to Intel.
Apple has been designing their own processors for a decade now. There's nothing in the smartphone/tablet market that even comes close to the performance of the A series in the iPhone and iPad; there's no reason to believe this will be any different.
I had the same problem, when connected to a USB-C monitor I wanted to use the keyboard, but not the built in monitor. Even with the display backlight off the fan would still run. After a lot of searching I found that you can disable the built in monitor by:
- Booting into recovery mode
- Opening Terminal
- Entering `sudo nvram boot-args="niog=1"`
- Restarting
- Close the clamshell
- Plug in the external monitor
- Open the monitor
I hope that helps.
Yes, the main computing constraint of mobile devices is heat management (This doesn't really reflect the CPU but the complete device. Putting the CPU in a more ideal setup like a traditional desktop or water cooling will improve the CPU's performance in longer tasks)
The machine I'm testing on is the 2019 Core i9 2.3GHz. The Apple "Genius" Bar tells me this is expected, no replacement under warranty. Maybe the 2020 is better, but after getting that as an answer I won't be buying another Apple MacBook.
The prior 2018 Core i9 Apple MacBook Pro had known issues with insufficient power delivery to the CPU core (this might be fixed, but I can't even sustain full clocks due to thermal throttling, so I can't really tell).
Think about it: the M1-based MacBook Air has better CPU and graphics performance than Intel-based laptops—including Apple’s—that are targeted at professionals.
[1]: https://daringfireball.net/2020/11/one_more_thing_the_m1_mac...
From Apple’s somewhat uninformative slides, we should expect peak power draw around 15W? The A series draws half that for > 90% monocore performance, so scaling seems on target.
Where did you find this? In the Macbook Pro 13 landing page I find the specs per Thunderbolt port which states the display capabilities for each while not restricting anywhere to just one monitor.
So I'd expect one monitor per Thunderbolt port, leading to a total of 3 displays for the Pro ARM model: one internal and two external displays. I assume the restriction you refer to just applies to the Air model.
- Given that you can't add ram after the fact and 256GB is anemic the cheapest laptop that is a reasonable choice is $1400.
- The cheapest desktop option is $6000 with an 8 core cpu or 8000 with a 16 core.
- The average end user spends $700 on a computer
- We literally have marketing numbers and a worthless synthetic benchmark.
I think it entirely fair to say that the new macs are liable to be fantastic machines but there is no reason to believe that the advent of apple cpu macs marks the end of open hardware. Were you expecting them to sell their cpus to the makers of the cheap computers most people actually buy?
Apple has been running a version of OS X on these CPUs for 10 years now. The only thing which is "beta" here is Rosetta.
I'm not saying they do that, considering how much their products cost, I'm saying they could. That's what vertical integration brings to their table, above all else.
This includes a massive number of corporate desktops which often Apple doesn't really compete with.
> The cheapest desktop option is $6000 with an 8 core cpu or 8000 with a 16 core.
?? The Mac mini is $600 with an M1 which is likely a far faster computer than most $600 Windows desktop computers. Likely significantly faster.
I don't think Apple is going to eat Windows alive, too many businesses have massive piles of Windows apps. I do see the potential Apple to increase market share significantly though.
I wouldn’t expect them to sell their cpus to others.
It’s weird though that they’re so vertically integrated and able to push performance as high as they have. I really enjoy my Linux system so I’m going to keep on doing that.
And also with RAM and SSD idiotically soldered in so 2 years later you need to spend another $6000, while a couple weeks ago I spent a grand total of $400 to upgrade my 2TB SSD to 4TB.
And then there is the OS which is getting more and more locked down so that you can not run unsigned software without increasingly difficult workarounds.
On one hand, alternative OS support on macbooks has gotten worse and worse over the last few years but it is sad to see the final nail in the coffin.
Like secure boot, just without an off switch
Not really. The M1 may objectively and factually be a very good CPU, but it comes bundled with the cost of being locked into a machine with a locked bootloader and not being able to boot any other OS than MacOS.
And many people will find such a cost unacceptable.
Nope, he pretty much died right on time in terms of median life expectancy for his form of cancer: https://pancreas.imedpub.com/prognostic-factors-in-patients-...
Two and a half times higher cost to build a slower, more power hungry CPU is not actually very similar.
This is the first of their CPUs. The iMac will almost certainly be running a higher end CPU which at the very least supports more RAM. It's likely the 16" MacBook Pro and the higher end 13" MacBook Pro will share a CPU with the iMac the same way the Mac mini and the MacBook Air share a CPU.
Obviously it depends greatly on what kind of software you are writing, but at all my dev jobs, I eventually end up waiting for some amount of time while the CPU heats the room up.
Notably, VIM is inevitably more gentle on my CPU than VSCode and similar.
(And the small storage, but that can be remedied with a NAS or other external storage, and then fast local scratch space only needs to be big enough for a couple of projects at a time)
https://www.apple.com/newsroom/2020/06/apple-announces-mac-t...
Most of the working population could get away with using basically any computer - if you're marketing a computer as 'Pro' you're talking to a specific subset of that. (Or you're just banking on it working as aspirational marketing and don't actually care about professionals)
> Simultaneously supports full native resolution on the built-in display at millions of colors and:
>
> One external display with up to 6K resolution at 60Hz
Pro to me = Provides more power. That's it.
Looking forward to the next MBP
I'm hoping the rest of the PC market benefits from this increased market for ARM compatible desktop software. ARM has less of an issue with patent encumbrance and more competition. Desktop CPUs could get a lot cheaper in the future if ARM Windows and ARM Linux get more use as well as Mac.
(In case you didn't know, you can run ARM Windows now (sort of unofficially, you'll need to google)! App support is a bit spotty, but it can also run 32 bit x86 apps through a translation layer (why it's limited to 32 bit I don't know, I guess it's easier to do?))
Edit: The Switch is ARM too, so there's a reason for some AAA games to come to the otherwise small market too.(for traditionally Console/PC games, I'm aware mobile gaming is a huge market, it's just not one I find to produce much quality output)
Programmers have to deal with compilation wait times, lack of RAM slowing down workflows, network latency, etc. We have to wait for the computer to do their job.
I await the day that I won't have to deal with this and computers would be as fast as our train of thought(human input latency nonwithstanding). But I won't hold my breath.
The problem here is that in WWDC, Apple showed Docker Desktop running on an Apple Silicon system (maybe suggesting that it is at least running) and here we are in November it is still not running or not known if Apple Silicon is supported.
I don't see any patches in the Docker repositories on such support, thus maybe Apple has a private fork for Apple Silicon. In general it is not available to us and we don't know when it will be.
Blackberry was the competing “smart” phone [1] and the newest releases were well under half the price of the iPhone with the same 2-year discount.
I had the blackberry curve myself at that time and iPhone seemed way high-priced.
[1] https://techcrunch.com/2007/07/25/iphone-v-blackberry-side-b...
Now that things have settled a bit, I think maybe it isn't as bad as I thought. Had the MacBook Air been priced any lower, it would have seriously hurt sales of the 16" MBP. Once the MacBook Pro transitions to ARM, it's rumoured to get a Mini-LED screen refresh as 14" and 16" models (Ming-Chi Kuo has been extremely accurate with regard to the display technology used on iPad and Mac). So the MBP won't be lower in price but will offer more features (Mini-LED is quite costly). And possibly an M2 with HBM? I am not sure how Apple is going to cope with the bandwidth requirement. It would need to be quad-channel LPDDR5 at 200GB/s, or HBM2, if we assume the M2 will double the GPU cores again.
Maybe only then could Apple afford to offer a MacBook 12" at $799, and an educational price at $699. Although I am not sure that is enough; Chromebooks in many classrooms are going for $299. Apple doesn't have to compete dollar for dollar on pricing, but a 2x difference is going to be a hard battle to fight. But at least it would allow Apple to win key areas of the education market where TCO and cost are not as stringent.
Maybe Apple will do just one more final update for some Intel Macs like the Mac Pro (at least I hope they do, for those who really need an x86 Mac).
And M3 in 2022, still within the two-year transition period: I think we are going to see a 3nm monster chip for the Mac Pro, while Intel is still on 10nm. And I think 2022 is when we will see an Apple console, because I don't think the Mac Pro monster SoC's volume is enough to justify its own investment. Some other product will need to use it, and a game console seems like a perfect fit. (At least that is how I can make some sense of the Apple console rumours.)
Does the M1 performance have to be ramped down during sustained use due to exceeding thermal envelope of the fanless MBA? Of the fan’d MBP?
We’ll know soon enough!
And any thoughts and expectations around using the (fast?) M1 Mac Mini for web and java dev?
I'm right now getting frustrated with my company's lead times ordering a new HP ZBook laptop. (It's not like I particularly want the ZBook, it's just that's what devs get here.)
But I'm a remote home worker, and I'm thinking "M1 Mac Mini available next week. It's like 1/3rd the cost of a high-end laptop and probably outperforms it, and is silent...?"
I was just calling out the idea that professionals being a small portion of literally everyone is a useful argument to make. How many markets aren't?
You need to be competitive on single thread performance to have a chance at datacenter. Amdahl's law is still very relevant. Up until very very recently, the CPUs were not up to par.
> The Mac mini with M1 chip that was benchmarked earned a single-core score of 1682 and a multi-core score of 7067.
> Update: There's also a benchmark for the 13-inch MacBook Pro with M1 chip and 16GB RAM that has a single-core score of 1714 and a multi-core score of 6802. Like the MacBook Air , it has a 3.2GHz base frequency.
So single core we have: Air 1687, Mini 1682, Pro 1714
And multi core we have: Air 7433, Mini 7067, Pro 6802
I’m not sure what to make of these scores, but it seems wrong that the Mini and Pro significantly underperform the Air in multi core. I find it hard to imagine this benchmark is going to be representative of actual usage given the way the products are positioned, which makes it hard to know how seriously to take the comparisons to other products too.
> When compared to existing devices, the M1 chip in the MacBook Air outperforms all iOS devices. For comparison's sake, the iPhone 12 Pro earned a single-core score of 1584 and a multi-core score of 3898, while the highest ranked iOS device on Geekbench's charts, the A14 iPad Air, earned a single-core score of 1585 and a multi-core score of 4647.
This seems a bit odd too - the A14 iPad Air outperforms all iPad Pro devices?
I was going off normal recommendations for video editing on x86 desktops/laptops. But it makes sense they'd go the extra mile on the software end to make it work on phones.
2020: "I'll build a Hackintosh, its slightly cheaper"
My XPS15 regularly throttles and sounds like it’s about to take off.
The simpler ARM ISA has advantages in very small / energy-efficient CPUs, since the instruction-decode logic in silicon can be smaller, but this advantage grows increasingly irrelevant when you are scaling to bigger, faster cores.
IMHO these days ISA implications on performance and efficiency are being overstated.
Locked bootloader only booting stuff signed by Apple.
So these CPUs can only be used to run MacOS, no Linux or other alternate open platforms.
Desktop CPUs differ from the mobile CPUs mainly in how much they can boost more/all cores.
"Don't upgrade MacOS to x.0 version" is already a common idea. Why would it be any different for their hardware?
iPad pro - the current 2020 gen iPad pro has A12Z (essentially the same chip as 2018 A12X with extra GPU cores) - significantly older chip than A14. I think there will be an A14 iPad Pro refresh with mini led display in early 2021.
So add dozens of tabs open in a couple of browsers, maybe a VM or two, and 16GB of RAM is going to be insufficient.
If I’m a professional driver, but my passenger likes being discreet for example, then maybe I drive a Camry instead of a Rolls Royce. In your case, you probably don’t need a professional-grade laptop.
Me however, also a professional programmer, I run about 10 docker containers, a big ide, and lots of other hungry programs. I definitely am less limited when my computer is faster.
cf. Anandtech article from 2003:
- OpenJDK is being ported to the new hardware, but it's not there yet. I'm not sure if it works under the emulator, or how well.
- If you use anything by JetBrains, like I do, you'll likely have to wait for them to pick that up and package a new release. It's a resource hog as it is, and emulation is probably not going to improve things, assuming it works at all.
- Things like Android Studio that rely on virtualization for running emulated phones will likely need upgrades too; but since the emulated hardware is ARM, that should be doable.
- Things like Docker, which most server-side developers like me rely on, won't work until Docker releases an ARM version. Then the next problem is going to be that the vast majority of Docker images out there are x86.
- If you are a web developer and want to test on something other than Safari, you'll have to wait or accept the worse performance of Chrome, Firefox, Edge, etc. in emulated mode.
- Pro hardware with >64GB; 16GB is a non-starter for me, I upgraded to that in 2014. Frankly I'd want that much dedicated to GPU memory in a modern setup. If you are a pro video or graphics user, I imagine those are not strange requirements. If your workflow involves third-party tools and plugins that are x86, I'd wait a couple of years before even considering it. Long term, this could be great; short term it's going to be rough depending on what you use.
- Also nice would be for things like OpenCL (Darktable) and games to work. I have a few Steam games and X-Plane. I expect that Steam on Mac is going to be decimated once more: with the last release, 32-bit games stopped working, and essentially all of the remaining games are x86-only. I look forward to seeing some benchmarks of how these run under Rosetta 2, but I don't get my hopes up. I'm guessing Apple is looking to unify the iOS and Mac gaming ecosystems instead and is actively working to kill off the PC gaming ecosystem on the Mac. This will be an App Store-only kind of deal. Likewise, if they finally make moves in VR, it will be ARM and App Store only.
So in short, for me it doesn't make sense to upgrade until most of that stuff is covered. None of it is impossible, but it's going to take some time. I would be in the market for a 16" x86 MacBook Pro, but of course with this transition, long-term support for that is very much going to be an afterthought. I might have to finally pull the trigger on a move to Linux.
I was hoping for a good open-source iOS replacement; I guess now I'll soon have to wait for a good laptop competitor as well.
I feel like I'm stuck in a jail made out of gold.
They could do that because they've been selling overpriced products.
The iPhone helped clarify what a good interface looked like, while prices came down and performance went up, positioning Apple well as a product category that was already a thing became mainstream.
Laptops aren't a new category and the majority will continue to buy something other than apple in large part because of the price.
Thin and light is great for short bursts of activity, but, when you need sustainable heavy usage, you'll need a bigger computer, even if it's just to have a bigger heatsink.
Because hardware and software are very different. The M1 is the next stage of Apple’s A series of SoCs—and they've shipped over 1.5 billion of those. I’d like to think all of the R&D and real-world experience Apple has gained since the A4 in 2010 has led to where we are today with the M1.
If anything, this simplifies things quite a bit compared to using an Intel processor, a Radeon GPU (on recent Macs with discrete graphics), Intel’s EFI, etc. This transition has been in the works for several years and Apple knows they only get one shot at making a first impression; I'm pretty sure they wouldn't be shipping if they weren't ready. I’m not concerned in the least about buggy hardware. They just reported the best Mac quarter in the history of the company; it's not like there's pressure to ship the new hotness because the current models aren't selling [1].
The release version of Big Sur for Intel Macs is 11.0.1 and I've been running it for 2 days now. It's been the smoothest macOS upgrade I've done in a long time—and I've done all of them, going back to Mac OS X Public Beta 20 years ago.
[1]: https://www.theverge.com/2020/10/29/21540815/apple-q4-2020-e...
It comes from an era when computers were so genuinely slow that doing almost anything - even using a page layout app like PageMaker, or setting up just a single-track song in a DAW - had tons of render lag. Even spreadsheets would take significant amounts of crunch time to run their calcs, in the very early days. This meant that most work done on computers, with anything larger than a "toy/hello-world" dataset, was going to be painfully slow. That's why they called it "professional" - because you were actually using datasets large enough to burden the beast. Actually balancing a whole company's finances with a spreadsheet, rather than tallying up a little 10-item list.
I want to emphasize that "majoritarian" aspect of it - it wasn't just a few specific kinds, it was a majority of ALL kinds of work.
And that's changed.
We're now at a point where only a tiny minority of tasks done on computers actually have throughput limits based on the computer rather than the operator.
I still bet on the i9, but it'd be interesting to run a test.
By the way, “throttling” refers to the CPU _slowing down_ despite cooling working at full capacity, so loud fans by themselves aren't throttling.
e: Another way to explain thermal throttling would be “thermal fading”, like brake fading on a car. Whether brake fading is considered a design fault or a feature that allows bursts of stronger braking is semantics.
I ordered a 16GB Pro the other day to be my personal dev machine. I’m sure it’ll be more than fine. I’m upgrading from a 2013 8GB Pro which was only just starting to slow me down.
I've become something of a CPU collector in recent years, and I have a nice line of P6 CPUs from the Pentium Pro -> Pentium 2 -> Pentium 3 -> Pentium M -> Core 2 that conveniently sidesteps those awful NetBurst P4 CPUs.
It feels like this (P6+) microarch has finally run out of road and needs a rethink. What 'saved' Intel was a change in philosophy: rather than chasing MHz, they chased power savings. And with Apple's new chips that history is repeating itself (and appears to be headed for a similar outcome).
It's an exciting time for hardware again, because Intel and AMD are going to have to react to this. And I think there's still legs in x86; it's survived everything that's been thrown at it so far...
Why would anyone (who is not forced) buy an Intel PC laptop when these are available and priced as competitive as they are?
You can disable the built in display on (some?) Intel MBPs via `sudo nvram boot-args="niog=1"` according to another poster. Whether this is supported on M1's remains to be seen.
For me atm that's a dealbreaker... but I still want one
Isn't the M1 fabbed on TSMC 5nm? Zen 3 is on 7nm. If a Zen 3 APU will run close to Apple Silicon I will be mightily impressed.
Plus there's the brouhaha about Electron apps.
I for one really wouldn't mind if Apple would build native apps to replace Electron apps, e.g. a chat app that works as a good client for (what I have open right now in Rambox) Discord, FB Messenger, WhatsApp and multiple Slack channels. Or their own Spotify client. Or a big update to Xcode so it can use language servers the way VS Code does, making it viable for (what I do right now) TypeScript, PHP and Go development.
They have more than enough money to invest in dozens of development teams or startups to push out new native apps.
One day I'll switch away from Chrome in favor of Safari as well. Maybe.
(I am taking recommendations for native alternatives to apps)
As a power user I will not be touching anything Apple ARM until all my hundreds of software apps are certified to work exactly the same as on x86_64. I will not rely on Rosetta to take care of this; I need actual testing.
Besides this, 8GB of RAM is how much a single instance of Chrome uses. I run 3 Chrome instances, 2 Firefox and 2 Safari, and this is just for the web.
This could be a good time to jump the Apple ship. It's pretty clear their focus is not their power users' focus.
As such I was looking into a Lenovo ThinkStation P340 Tiny. You can configure it with 64GB RAM and a Core i9 with 10 cores and 20 threads for less $$$ than what an underpowered 6-core Mac mini is selling for.
There are enough people who do not want to deal with MacOS and Darwin regardless the hardware specs.
Also the way of least friction is usually to use whatever the rest of your team uses. There are even relevant differences between Docker for macOS and Docker for Linux that make cross-platform work difficult (in particular I'm thinking about host.docker.internal, but there are certainly more). Working with C/C++ is another pain point for cross-platform, which already starts with Apple's own Clang and different lib naming conventions.
Going away from x86 makes this situation certainly not better.
That said, it's promising and I'm really curious to see where this development will lead to in a few years' time.
[1] https://www.anandtech.com/show/12689/cpu-design-guru-jim-kel...
Well yeah, every year for the last bunch of years the A series of chips have had sizeable IPC improvements, such that the A12-based iPad Pros are slower than the new Air. Apple's chip division is just industry leading here.
So for things like software development where you compile frequently your projects, the new Apple computers are a little slower than similar computers with AMD CPUs.
So even when taking only CPU performance into consideration, there are reasons to choose other computers than those with Apple Silicon, depending on what you want to do.
Of course, nobody will decide to buy or not buy products with "Apple Silicon" based on their performance.
Those who want to use Apple software will buy Apple products, those who do not want to use Apple software will not buy Apple products, like until now, regardless which have better performance.
I see that statement a lot, and yes, at some point that is going to happen.
But the analysis seems to fail to take into account what utterly amazingly low power devices these chips are. So while it will happen, it might take a long time.
That's exactly the reason why you would choose Apple Silicon right now where you can choose between Intel and an Apple SoC. There are of course other reasons, such as battery life and price.
Apple is at day 1 of their two year migration to Apple Silicon. Your judgement seems not just a little premature.
I can see Unreal Engine not being available for Apple Silicon for quite a while. During the presentation of the new notebooks, Apple showed quite a bit of gaming. If there are no Unreal Engine titles for Apple Silicon, this could hurt them.
Nevertheless, because Apple has chosen to not increase their manufacturing costs by including more big cores, the multi-threaded performance is not at all impressive, being lower than that of many much cheaper laptops using AMD Ryzen 7 4800U CPUs.
So for any professional applications, like software development, these new Apple computers will certainly not blow away their competition performance-wise, and that before taking into account their severe limitations in memory capacity and peripheral ports.
I'm looking for those 2% computers "in the same class" (from the apple keynote 2 days ago) that are faster than these M1 laptops. From the looks of it they surpass their Apple store siblings, but how about x86 PC laptops? I'm currently shopping for my next dev machine, in the lighter side of the form-factor.
I've seen some name-dropping here and in other threads, like HP ZBook and Lenovo, but I'm completely out of the loop since I moved to a fully specced i7 2014 MacBook Air back in the day.
https://www.reddit.com/r/macbookpro/comments/gs6bal/2019_mbp...
Generally, people are absolutely terrible at taking long term effects into account. I don't think many people are going to think twice about giving up their computing freedom.
But I think Apple's positioning as premium brand is going to ensure that open hardware keeps existing. And maybe we can even look forward to RISC-V to shake the CPU market up again.
Noooo, it's not simply copying instructions 1-to-1; the process is way too involved, and it imposes 40-year-old assumptions about the memory model, and many other things, which greatly limits the ways you can interact with the CPU, adds to transistor count, and makes writing efficient compilers really hard.
I'm not an Apple fan, but the change in value is stunning. I don't need a new laptop currently...
You might be better served by wiping it and installing Linux though
But it's not really an Apples to Apples comparison.
Apple no longer sells computers. You can rent some shiny gizmo from them to run software of their choosing provided by people they deign to allow on their platform and in a manner they approve of¹. It's not really yours anymore.
¹ "But you can still do X". Well, and last year you could still do W, and the year before that V.
It is a much more complete SoC than other processors, which makes its performance even more impressive if these indications hold up. I am still very skeptical; nothing comes for free and the real world is a b
A walk in the park for anyone that has had to deal with coding in C or C++ across UNIX flavours.
Anandtech's deep dive provides several examples of advances in Apple's core design that didn't involve magic or breaking the laws of physics. For example...
Instruction Decode:
>What really defines Apple’s Firestorm CPU core from other designs in the industry is just the sheer width of the microarchitecture. Featuring an 8-wide decode block, Apple’s Firestorm is by far the current widest commercialized design in the industry. Other contemporary designs such as AMD’s Zen(1 through 3) and Intel’s µarch’s, x86 CPUs today still only feature a 4-wide decoder designs
Instruction Re-order Buffer Size:
>A +-630 deep ROB is an immensely huge out-of-order window for Apple’s new core, as it vastly outclasses any other design in the industry. Intel’s Sunny Cove and Willow Cove cores are the second-most “deep” OOO designs out there with a 352 ROB structure, while AMD’s newest Zen3 core makes due with 256 entries, and recent Arm designs such as the Cortex-X1 feature a 224 structure.
Number of Execution Units:
>On the Integer side, we find at least 7 execution ports for actual arithmetic operations. These include 4 simple ALUs capable of ADD instructions, 2 complex units which feature also MUL (multiply) capabilities, and what appears to be a dedicated integer division unit.
>On the floating point and vector execution side of things, the new Firestorm cores are actually more impressive as they feature a 33% increase in capabilities, enabled by Apple’s addition of a fourth execution pipeline.
https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...
The pro and the mini have fans so they should be able to maintain these speeds for longer than the air which will presumably have to throttle down as it heats up.
Seems pretty obvious to me that there will be another more higher end variant of the M1, though maybe the only difference will be the amount of RAM, the number of GPU cores, the number of supported USB4 ports or something like that, not raw CPU performance.
Either way, it seems obvious to me that the M1 is their low end Mac chip.
That will be interesting to watch.
> Rust just brought their arm support to the tier 1 support level
(for Linux)
https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...
https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...
And even in raw GPU processing power, the AMD Radeon Pro 5600M is 5.274 TFLOPS vs. the M1's Apple-rated 2.6 TFLOPS
It's a downgrade in GPU processing power for the user.
(that was sarcasm. My take is this performance is impressive but you should not be surprised if it does not completely outperform CPUs that should be less efficient)
>Whilst in the past 5 years Intel has managed to increase their best single-thread performance by about 28%, Apple has managed to improve their designs by 198%, or 2.98x (let’s call it 3x) the performance of the Apple A9 of late 2015.
https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...
My resource hogs are Slack and, mainly, the browser, and Zoom calls are apparently the most computationally intensive thing in the world, especially if you screen share while you have an external monitor plugged in.
Memory-wise, the reason I had to go from 8GB to 16GB on my personal laptop was literally just for TravisCI.
Honestly, adding external monitors cripples MacBooks pretty quickly; even two unscaled 2K monitors will slow a 2015 15" down significantly (don't try to leave YouTube on either), and it gets worse from there once you start upgrading to 4K monitors. A 2017 15" is good for a 4K and a 2K, and gets a bit slow if you try to go dual 4K.
I planned on looking into eGPU solutions until IT offered me a new Macbook, and I convinced them I needed a 16" Pro.
tldr: External monitors or badly optimized applications (Zoom, YouTube, or browser based CI) will make most MacBooks feel sluggish pretty quick.
Use Apple Music, Messages, Safari, Swift if you want first-class support.
Or one of the better options now might be to use the iOS apps for Slack, Spotify etc.
ARM doesn't have a generic platform like PC but I'm sure someone will figure out how the device tree works if they haven't already.
Another thing is that you can buy a "gaming" laptop for $999. Something like an i7-10750H with a GTX 1650. And it's powerful enough to run almost any game on high to medium settings. Apple's GPU is awesome compared to Intel's GPU, but compared to a dedicated Nvidia GPU - not so much. So if you need a GPU for gaming, that's another area where Apple does not compete with their new laptops. At least for now.
Ultrabook with focus on portability and long battery life - Apple is awesome here.
In raw performance per buck you could always get a custom PC setup for cheaper, especially in desktop form.
In some countries, even a $300 laptop comes to half a year's salary...
I think their new architecture will be awesome. Personally, I look forward to being able to debug iOS Bluetooth apps in the simulator, and I think some of the new form factors will be pretty cool.
But I can’t help but notice the current dearth of AAA apps that are already universal.
A transition like this is a big deal; especially if you have an app with thousands of function points, as it requires a complete, top-to-bottom re-test of every one.
In the unlikely case that you won’t find issues, it will still take a long time. Also, most companies won’t bet the farm on prerelease hardware; instead, using it to solve issues. They will still need to test against the release hardware before signing off for general distribution.
Also, I have a couple of eGPUs that I use. I don’t think the new architecture plays well with them.
I’m hoping that the next gen will obviate the need for them. They are a pain.
* 3800X (105W desktop) scores 2855
* 4900H (45W mobile) scores 2707 or 95% of 3800X
* 4750U (15W mobile) scores 2596 or 91% of 3800X
I think many professionals who need new hardware will use this as the catalyst to make them move back to PC hardware. The M1 looks amazing, but I need more than just Apple software to do my work. It’ll be a while before all the things I use get migrated.
Doing it while not burning lots of Watts and being energy efficient is what Apple aims for.
And I doubt that AMD and especially Intel will offer an alternative here soon. Desktop yes, but not on mobile.
I'm using Tableau Desktop in my daily work and until it is available on this new platform - these Macs are not an option for me even if they are 10x more performant. I guess there are a lot of professionals that are constrained in a similar manner. So we will see if M1 is adopted in such scenarios at all.
This comparison looks at different segments of the fab<>manufacturer<>OEM relationship. Add the user in there and you might say that you can buy an AMD CPU for $100 but an Apple CPU will cost you $1000. Not very meaningful as a comparison.
- locked bootloader - no bootcamp - can't install or boot linux or windows
- virtualization limited to arm64 machines - no windows x86 or linux x86 virtual machines
- only 2 thunderbolt ports
- limited to 16GB RAM
- no external gpu support/drivers - can't use nvidia or amd cards
- no AAA gaming
- can't run x86 containers without finding/building for arm64 or taking huge performance hit with qemu-static
- uncertain future of macos as it continues to be locked down
I also imagine not all customers are ready to jump on ARM day 1. Some will want to wait until the software ecosystem has had time to make the transition.
Rather, I suspect that the main benefit the M1 has in many real-world benchmarks is its on-package memory; cache-miss latency is a huge cost in the real world (it's why games have drifted towards data-oriented design internals), so sidestepping that issue to a large extent by integrating memory into the package gives it a great boost.
I'm betting once they've reverse engineered the M1 perf, we will see multi-GB caches on AMD/Intel chips within 4 years.
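To make the cache-miss-latency point concrete, here is a minimal pointer-chasing sketch, the usual way to expose raw memory latency; the buffer size, iteration count, and crude rand() shuffle are arbitrary assumptions, nothing M1-specific:

    /* Rough pointer-chasing sketch: every load depends on the previous one,
       so out-of-order hardware can't hide the misses and you see raw memory
       latency. Build: cc -O2 chase.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N ((size_t)(64 * 1024 * 1024) / sizeof(size_t))  /* ~64 MB, bigger than any cache */

    int main(void) {
        size_t *next = malloc(N * sizeof *next);
        if (!next) return 1;
        for (size_t i = 0; i < N; i++) next[i] = i;
        /* Sattolo's shuffle: produces one big cycle, so the chase visits everything. */
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }
        const size_t iters = 10 * 1000 * 1000;
        clock_t t0 = clock();
        size_t p = 0;
        for (size_t i = 0; i < iters; i++) p = next[p];
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("~%.1f ns per dependent load (p=%zu)\n", secs * 1e9 / iters, p);
        free(next);
        return 0;
    }

Run it once with a buffer that fits in cache and once with this oversized one and you see exactly the gap that a faster memory subsystem is trying to shrink.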
Not on Mac they don't. macOS isn't tied to the App Store in the same way that iOS devices are, and it probably accounts for a tiny percentage of third-party Mac software sales by value.
Mark my words, this is going to be a massive shit show for people using those ecosystems, for 5 years if not 10. It already happened with the PPC transition.
One of the big hopes for Rosetta2 is the possibility of intercepting library calls and passing them to the native library where possible. So a well-behaved app using OS libraries for everything it can, and really only driving the business logic itself, would be running mostly-native with the business logic emulated/translated.
(This is hopes/dreams/speculation with no insider knowledge.)
If Windows could do the same, then letting windows-arm do the translation of windows-x86/64 binaries would allow it to leverage windows-arm libraries - so an app could be running in mostly-virt with some-emu. If we let parallels/qemu/etc do the emu, it can only ever be 100% emu.
I however cannot find anything from Apple that says differently, or a source showing how unsigned systems can be booted on this chip.
The only thing I could find was Apple's statement that your system is even more secure now because unsigned code won't be run.
Do you have any resources I can read so we can clear up this misunderstanding?
Or are you referencing my auto-correct error which replaced "can't" with "can"? If that is the case... I'm sorry for that, but it's too late to fix, and my intent is (I think) quite clear considering I said that they're both locked and this lock is without an off switch.
macOS has plenty of warts, but my experience with high quality equipment (Thinkpad, XPS, Alienware) has left me ultimately disappointed with Windows in many day to day situations compared to Mac.
Windows is still clunky, despite many improvements. And aside from a Thinkpad Carbon X1, I haven't used any laptop with the performance and build quality (for the size/money) as a Macbook Air.
Wouldn't it be awesome if Apple's CPU/IC design segment became a separate company and sold these CPUs and maybe SoC by themselves?
I think that would make a big dent into AMD/Intel market shares. Since Apple's part/die should be quite a bit smaller than most of the x86 dies, the fab costs should also be smaller and so should the final price.
I'm not an Apple fanboy, and I'm still very displeased with many of their decisions (touchbar being #1 on MBPs). But if you consider the packaging (small, light, sturdy, now-decent keyboard), and consider their performance, and then consider macOS, I think they are more than competitive.
Even if you match every spec, including size/weight and durability, it comes down to Windows vs macOS. Ironically, macOS is free while Windows is not, but macOS is worth more (to me and many others).
This is, arguably, a disadvantage of any Mac.
But Apple Silicon may actually improve the situation over time, as having the same GPUs and APIs on Macs and iOS devices means there is now a much bigger market for game developers to target with the same codebase.
That's true of many other common goods worldwide. Unless you can buy a locally made item in a lower purchasing power country, you will usually pay a currency exchange equivalent price for the item. Actually you often pay more because the local shop selling the product cannot get bulk pricing and pass along the discount to you.
Finally, when you add the local taxes - 23% in Portugal, for example - the price can be much higher compared to Alaska, US (< 2%). That last bit is really not Apple's fault.
Any mac user could have seen this transition coming many years ago, and given up their platform of choice then on that prospect, but what good would that have done them? They wouldn't have got to enjoy anything.
Lastly, I do simply see it as a bit of a false dichotomy (or whichever fallacy is more accurate) to suggest that by using a mac that can't run other operating systems, you're giving up computing freedom. If I found it necessary to have a Windows or Linux machine, I'd simply just go get something that probably has better hardware support anyway. Yes conceivably Apple is setting some precedent that other manufacturers could follow, but in the previous example Apple is also just pushing you to buy their products instead.
This thing ONLY EXISTS in the first place because of Apple's continual vertical integration push, and because other parts of the business were able to massively subsidise the R&D costs necessary to come up with a competitive SOC in an established market that's otherwise a duopoly. If their CPU/IC design segment were its own company, the M1 would never have seen the light of day. Period.
Furthermore, this chip is not meant to be a retail product. It's optimised for the exact requirements that Apple's products have. The whole reason why they're able to beat Intel/AMD is because they don't have to cater to the exact same generic market that the established players do, but instead massively optimise for their exact needs.
I genuinely don't understand how anyone who wishes to break up Apple cannot see these things.
I wouldn't buy a Pro now because I would wait for the next version, but I wouldn't trade a current Pro for a new Air just for the CPU bump...
As crazy as this sounds, using the left hand side ports for charging causes the fans to kick in more often[0].
[0]https://apple.stackexchange.com/questions/363337/how-to-find...
This seems pretty grounded.
> The whole reason why they're able to beat Intel/AMD is because they don't have to cater to the exact same generic market that the established players do, but instead massively optimise for their exact needs.
I'm less convinced of this. Their exact needs seem to be making laptops... and so these chips would make interesting candidates for other laptops, if split off from Apple.
It's never going to happen, and an independent company might struggle for R&D money, but if these prove to be better laptop CPUs there is a market there.
Everything from the memory model, to the secure enclave for TouchID/FaceID, to countless other custom features, are parts that other SOCs do not need to have present on the die, and cannot optimise for.
For good or bad, this is truly a piece of engineering that could only have come out of Apple.
Their focus is not on power users? They just completed the first, small step of the migration to ARM. They only updated the very low-end models, the ones that were never targeted at power users anyway, and we're seeing that their cheapest, lowest-end models are whooping the i9 MBPro's ass.
Sure, the features and RAM may not be there yet, but again, these are the low-end models. If we're seeing this level of performance out of an MBAir or Mini, I can't wait to see what the Mac Pro is going to be capable of.
I'm starting to wonder if Apple's "faster than 98% of PC laptops" is an understatement, at least within its class.
Depends on the definition of "power user". Music producers, video editors, and iOS developers will be served quite well.
> lenovo thinkstation p340 tiny. you can configure it with 64gb ram and core i9 with 10 cores and 20 threads for less $$$ than what an underpowered 6 core mac mini is selling for.
When making that calculation, one should also take power consumption into account. $ per cycle is very low now with the new CPU.
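As a back-of-the-envelope way to put a number on that, a tiny sketch; every figure in it (wattages, usage hours, electricity price, lifetime) is an assumption to swap for your own values:

    /* Rough electricity-cost comparison; all inputs are assumptions. */
    #include <stdio.h>

    int main(void) {
        double watts_pc      = 65.0;   /* assumed average draw of a small desktop PC */
        double watts_mini    = 20.0;   /* assumed average draw of an M1 mini under load */
        double hours_per_day = 8.0, days_per_year = 365.0, years = 4.0;
        double usd_per_kwh   = 0.15;   /* pick your local rate */

        double kwh_saved = (watts_pc - watts_mini) * hours_per_day * days_per_year * years / 1000.0;
        printf("~%.0f kWh saved, roughly $%.0f over %.0f years\n",
               kwh_saved, kwh_saved * usd_per_kwh, years);
        return 0;
    }

With those made-up numbers the savings are real but modest for a single desk, so the per-cycle cost argument matters most for always-on machines or fleets.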
- An 8x cdrom narrowly beats 10meg ethernet.
- 1x dvdrom narrowly beats 100meg ethernet.
- ATA133 narrowly beats 1gbit ethernet.
Original SATA is 1.5gbit, so 1Gbit ether bottlenecks us to 1999 storage speeds.
Many don't even want to pay for the MacBook Pro's Touch Bar and many will probably see an Air's fanless design as an advantage over Pro, even if its CPU is throttled a little more often in sustained high CPU workloads. Complete silence is just that good. And it's going to be so much cheaper.
I think the star of the show yesterday was definitely the MacBook Air.
Generally speaking, I’ve never found that to be genuine, but assume best intent and all as the site rules say, so here goes...
For me, I often am doing multiple things at once and juggling between unrelated tasks which actually need my attention sporadically. The LG 5K with its beautiful display gets my primary attention and is what I want to be focused on. Apps there are what I should ideally be working on. The two Apple TB displays then flank either side, and they get the “distractions”, but stuff important enough to be allowed to distract me when needed. What that is varies from day to day (sometimes Slack makes the list, sometimes it doesn’t, as one example), but it’s intentionally in my peripheral vision so I only “look” for motion/changes in certain areas, not actually try to read. If I need to read, I context shift by rotating my chair slightly to the left or right (better for you than rotating your head).
End of the day, do whatever works for you. Yes, there are folks who can legitimately take advantage of lots of screens like me. Some folks who have tried multiple don’t, and are happier when they switch back, but I’m not one of them and it’s something I routinely experiment with to ensure I’m still using the best “for me” setup. I’ve gone as high as nine screens attached (with eGPU) to my laptop (eGPU seems to keep laptop fans on elevated, but not full power btw, back to original thread purpose), but I found I was too easily distracted and hence am back to four. Ideally I’d like to do two 8K 32” or less monitors, but haven’t justified buying them yet.
The 16" MacBook Pro is only available with a discrete GPU, which I don't need but causes me tons of issues with heat and fan noise. The dGPU has to be enabled to run external monitors, and due to an implementation detail, the memory clocks always run at full tilt when the resolution of driven monitors doesn't match, resulting at a constant 20W power draw even while idle.
https://www.theage.com.au/technology/apple-fans-burned-by-ho...
https://www.infoq.com/news/2020/09/microsoft-windows-mac-arm...
> Why would anyone (who is not forced) buy an Intel PC laptop when these are available and priced as competitive as they are?
Apple devices are definitely not priced competitive outside first world countries.
A future MacBook with an M2 chip will be an even better buy, and software availability will be even better.
I'm not a fan of Apple's domineering business strategies, but this SoC is impressive. I have to imagine AMD and Intel will follow up with something similar (a tightly integrated SoC aimed at higher performance applications).
GB deliberately avoids running up the heat because it is focused on testing the chip, not the machine's cooling ability.
Cinebench, as you say, tests "real-world" conditions, meaning the entire machine, not just the chip.
Of course the iPhone chip isn't as beefy as the M1, but the results still speak for themselves.
https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...
Has an interesting comparison of an iPhone 12 mini doing similar work to an i9 iMac
Now, I haven't dug into the details to verify that both produced the same results. I believe most of the difference is from software encoding versus hardware encoding; the follow-up tweets suggest similar output.
It does show how workloads can cause people to jump to conclusions from a single test, without all the details needed to support the conclusion they want to arrive at.
Jonathan Morrison posted a video [0] comparing a 10-core Intel i9 2020 5K iMac with 64GB RAM against an iPhone 12 Mini for 10-bit H.265 HDR video exporting and the iPhone destroyed the iMac exporting the same video, to allegedly the same quality, in ~14 seconds on the iPhone vs 2 minutes on the iMac! And the phone was at ~20% battery without an external power source. Like that is some voodoo and I want to see a lot of real world data but it is pretty damn exciting.
Now whether these extreme speed ups are limited to very specific tasks (such as H.265 acceleration) or are truly general purpose remains to be seen.
If they can be general purpose with some platform specific optimisations that is still freakin' amazing and could easily be a game changer for many types of work providing there is investment into optimising the tools to best utilise Apple Silicon.
Imagine an Apple Silicon specific version of Apple's LLVM/Clang that has 5x or 10x C++ compilation speed up over Intel if there is a way to optimise to similar gains they have been able to get for H.265.
Some very interesting things come to mind and that is before we even get to the supposed battery life benefits as well. Having a laptop that runs faster than my 200+W desktop while getting 15+ hours on battery sounds insane, and perhaps it is, but this is the most excited I have been for general purpose computer performance gains in about a decade.
[0] https://www.youtube.com/watch?v=xUkDku_Qt5c
Edit:
A lot of people seem to just be picking up on my H.265 example which is fine but that was just an example for one type of work.
As this article shows the overall single-core and multi-core speeds are the real story, not just H.265 video encoding. If these numbers hold true in the real world and not just a screenshot of some benchmark numbers that is something special imho.
Generally, HW encoders offer worse quality at smaller file sizes and are used for real-time streaming, while CPU-based ones are used in offline compression in order to achieve the best possible compression ratios.
- crappy webcam,
- no built-in SD card reader (a 1TB SD card is ~$200, and my music does not need to be stored on an expensive SSD)
- MagSafe... if this were the only downgrade, I'd upgrade, but TBH I love MagSafe on my Mac and I would miss it if I upgraded.
Citation Needed.
Apple detractors LOVE to bring this idea up, but there's nothing to it in any real sense. Do Macs ship with a checkbox filled in that limits software vendors? Yes. This is a good thing. Is it trivial to change this setting? Also yes.
Anyone who buys a Mac can run any software on it they like. There is no lockdown.
That is a big deal as it means Adobe, Sony, BlackMagic, etc. will be able to optimise to levels impossible to do elsewhere. If that 8x speed up scales linearly to large video projects you would have to have a Mount Everest sized reason to stick to PC.
A large body of encoding software is however, perfectly capable of taking advantage of them.
The comparison is kind of silly.
First of all, adding this hardware encoder to the Apple Silicon chips definitely has a cost, and you pay it when you buy their products.
Second, there are Intel CPUs available with hardware encoders (google Intel QuickSync). The only difference is that you can choose to not pay for it if you don't need it.
Dedicated HW for specific computing applications are nothing new, back in the 90s you had dedicated MJPEG ASICs for video editing. Of course, they became paperweights the moment people decided to switch to other codecs (although the same thing could be said for 90s CPUs given the pace of advancement back then).
Thing is, your encoding block takes up precious space on your die, and is absolutely useless for any other application like video effects, color grading, or even rendering to a non-supported codec.
Incidentally, this is philosophically the idea behind processors with built in FPGA capability. The hardware acceleration would just be a binary blob that could be loaded in and used when necessary. It could be continually updated, and provided with whatever software needed it.
In the market, I think M1 systems will not alienate Apple-app-only users (Logic, Final Cut, Xcode-for-iPhone development) and may attract some purely single-page-application users.
Mostly, Zoom call efficiency will drive its broader adoption this year among the general population. If the Air is fast, quiet, and long lasting for Zoom calls, it will crush.
I won't buy one. I have a 32GB 6-core MBP that will satisfy my iOS dev needs until the M2 (and a clearer picture of the transition has developed). But I might start recommending Airs to the folks sitting around our virtual yule log this year.
This cannot be implemented in AMD's current 7nm process due to size restrictions.
The SoC-side of the story is also contrary to the very core design of a general purpose CPU. RAM, GPU, and extension cards for specialised tasks are already covered by 3rd party products on the PCIe and USB4 buses and AMD has no interest in cannibalising their GPU and console business...
With their upcoming discrete GPUs and accelerator cards, Intel might be in the same boat w.r.t. SoC design.
I disagree. Sure we have things like NVENC for accelerated H.265 encoding but that is an additional hardware expense and means only the machines you have that hardware in benefits. This will literally be all Macs from a $699 Mini and up.
I don't know enough about Intel QuickSync to compare but it clearly isn't used on the iMac in that video for some reason (perhaps the software does not support it? I don't know)
That is pretty exciting for video professionals IMHO.
I'm not saying it is world changing but being able to pick up a MacBook Air for $999 and get performance similar (or maybe better?) than a PC that costs two or three times that with dedicated hardware is very cool.
edit:
I appear to be missing why this is a ridiculous thing to say?
Could somebody please explain to me why the comparison is "silly"?
I think I would be excited if it were not built by Apple. That functionally means it’s only going to be in Apple products at a 300% markup.
what laptop are you buying where you need to purchase a Windows license?
"Rosetta is meant to ease the transition to Apple silicon, giving you time to create a universal binary for your app. It is not a substitute for creating a native version of your app.”
https://developer.apple.com/documentation/apple_silicon/abou...
Gotta take their 30% cut of everyone's revenue.
If you're only looking for computers that are comparable according to the usual hardware specs (cpu, ram, etc.), a Mac costs 25-50% more than the cheapest comparable PC.
If you also throw ergonomic factors like weight and battery life into the comparison, there's no price difference.
(This was USA prices.)
It's a useless benchmark, what I want to see is things like time to compile a bunch of different software, things that take long enough for the processor/cooling to reach thermal equilibrium etc.
I.e. stuff that more closely matches the real world
I.e. "We raised the walls on our garden further"
Balls to that, if I buy hardware I want to be able to run what I want on it or it's not a general purpose computer, it's something else.
Or if you buy a bare system or build your own, you need to buy Windows yourself.
Apple gives their OS away, but in theory you can only run it on their hardware.
Eh, that's some very biased thinking.
In the real hardware world, both mechanical and electrical, I can approach a company and say "we want something with these specs, we'll buy 10 million pieces per month, what can you do for us?" and that kicks off R&D efforts after some ground contractual agreements to commit both parties.
You know, exactly what Microsoft did with AMD when they commissioned unique SOC designs for their consoles with a host of never before implemented features such as direct gpio <-> ssd io.
All modern GPUs already have HW-accelerated encoding, including integrated Intel GPUs, and Nvidia and AMD dedicated ones.
Despite that, HW encoding is not used that much by video professionals, because CPU encoders produce better compression given enough time. You only have to do the compression once, while the video will be watched who knows how many times, so there is no real point in making your encoding faster if the file size goes up.
Your HW encoder is absolutely useless for anything else. It does not make your FX rendering faster, and cannot be used for any other codecs.
Even if, say, your HW matches a CPU-based encoder at first, it is fixed and cannot be updated unless you buy new HW, which takes millions to design. Meanwhile, any old dev can contribute to x265 and figure out new psychovisual enhancements that improve quality while minimising file size.
Specialized HW (i.e. ASICs) has been in existence for decades, yet despite that, there are very good reasons as to why we still use general-purpose CPUs (and GPUs) for most computing applications.
Yeah comparing TDP is meaningless even within the same processor. The 4 core workload in this table uses 94W and the 16 core workload uses 98W. There is also an anomaly at 5 cores where the CPU uses less power than if it only used 4 cores.
If you tried to derive conclusions about the power efficiency of the CPU you would end up making statements like "This CPU is 3-4 times more power efficient than itself"
This is one place where the 64-bit ARM ISA design shines: since all instructions are exactly 4 bytes wide and always aligned to 4 bytes, it's easy to make a very wide decoder, since there's no need to compute the instruction length and align the instruction stream before decoding.
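A toy illustration of that structural difference (not real ARM or x86 decoding; the "length" encoding below is hypothetical):

    /* Toy sketch: fixed-width vs variable-length instruction fetch. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Fixed 4-byte instructions: slot i sits at byte 4*i, independent of all
       earlier slots, so eight decoders can each grab their word in parallel. */
    static uint32_t fetch_fixed(const uint8_t *stream, size_t i) {
        uint32_t insn;
        memcpy(&insn, stream + 4 * i, sizeof insn);
        return insn;
    }

    /* Variable-length instructions: the address of slot i depends on the
       lengths of slots 0..i-1, so finding boundaries is inherently serial
       (or needs speculative length pre-decode, which costs transistors). */
    typedef size_t (*length_fn)(const uint8_t *);   /* bytes used by one insn */

    static const uint8_t *fetch_variable(const uint8_t *stream, size_t i,
                                         length_fn insn_len) {
        const uint8_t *p = stream;
        for (size_t k = 0; k < i; k++)
            p += insn_len(p);                       /* must walk every prior insn */
        return p;
    }

    /* Hypothetical length rule: low 2 bits of the first byte encode 1..4 bytes. */
    static size_t toy_len(const uint8_t *p) { return (size_t)(p[0] & 3) + 1; }

    int main(void) {
        uint8_t prog[64] = {0};
        (void)fetch_fixed(prog, 7);              /* address known up front */
        (void)fetch_variable(prog, 7, toy_len);  /* needs 7 sequential length lookups */
        return 0;
    }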
In a majority of cases, burst performance only affects things like responsiveness, and those things should be measured instead for a better reflection of the benefits.
I thought Google, Microsoft, Nvidia, etc. were all pushing streaming gaming services that will run on any hardware with a decent internet connection. I would imagine the hardware video decoder in the M1 chip would allow 4K streaming video pretty well.
Let's back up a second: Tim Cook said this transition would take place over two years. This is just the first batch of computers running Apple Silicon.
I certainly hope and think that Apple can come out with a beefy 16 inch MacBook Pro with 32 gigs of ram within the next two years. Also, in that time I imagine everything in Homebrew would be ported over natively.
> Any mac user could have seen this transition coming many years ago, and given up their platform of choice then on that prospect, but what good would that have done them? They wouldn't have got to enjoy anything.
This could easily devolve into a "to Mac or not" type of discussion which I don't want delve into, but I've personally never used a Mac (I have tried it) and I don't feel like I'm missing out because of it. Certainly the freedom to run any software and not be beholden to a large corporate interest is more important to me.
> Yes conceivably Apple is setting some precedent that other manufacturers could follow, but in the previous example Apple is also just pushing you to buy their products instead.
Yes, precedent, but also increased market share if they were to become more popular. One day, an alternative might not exist if we do not vote financially early enough. Therefore, my immediate urge is to say: no, I do not want to participate in this scheme. Make your hardware open or I will not buy it.
You're also going to be in a bind if Apple decides they don't care about the long tail and stops supporting emulation before all of your plugins have been converted (if they ever are).
When my current Mac dies, that's where I'm headed, but running Linux; Microsoft is less of a danger, so I don't outright boycott anymore, but I still find Windows super annoying to use.
Imagine going on a hike and climbing an exponential slope like 2^x. You go up to 2^4 and then go down again, and repeat this three times, so you have hiked 12km (3×4) in total. Then there is an athlete who goes up to 2^8. He says he has hiked 8km and you laugh at him because of how sweaty he is despite having walked a shorter distance than you. In reality 3×2^4 (48) is nowhere near 2^8 (256). The athlete put in a lot more effort than you.
The x64 options from Apple are also uncompetitive with existing PCs already because they're using Intel processors when AMD's are faster.
There's no record of Jobs being anti-vax (your comment is already the fifth Google search result for "Steve Jobs anti-vax", and the top four are nothing to do with him being anti-vax).
As for "eschewed almost all forms of modern medicine": completely false. He delayed surgery for his cancer - which was of a form that was known to be slow-growing and not especially lethal - for just 9 months, then decided to have conventional surgery. This is not "eschew[ing] almost all forms of modern medicine". It's just delaying for a relatively short period.
He later spoke of his regret about this delay to his biographer, so others would be warned against doing the same thing [1].
Even still, he lived for another 8 years after the diagnosis and was in good health for most of it.
Some experts dispute that his approach likely caused his death, and even suggest it might have extended his life [2].
[1] https://www.forbes.com/sites/alicegwalton/2011/10/24/steve-j...
[2] https://www.livescience.com/16551-steve-jobs-alternative-med...
https://browser.geekbench.com/v5/cpu/search?q=Macmini9%2C1
https://browser.geekbench.com/v5/cpu/search?q=MacBookPro17%2...
https://browser.geekbench.com/v5/cpu/search?utf8=%E2%9C%93&q...
it looks like they're all in the same ballpark (i.e. the Air is not leading others, just comparable).
edit: I've tried both sides of the laptop, I have iStat Menus and keep an eye on temps, etc.
edit2: they "only" spin to 3.5k-4k at idle, but go up as soon as I do anything with Chrome or am on a video call, which is most of my job
It's really only intended to be one of many benchmarks that together tell the whole story; of course Linus would attack it because it doesn't make sense for his use and isn't the full story for him. If burst performance were not tested, benchmarks would not cover the majority of computing uses and would weigh CPUs that have poor turbo or burst performance unfairly high for most uses.
Geekbench is kinda like 0-60MPH times and other tests (like SPEC2006) are like top speed I guess? The whole story is between them.
Unfortunately, although applications like that exist, they're not the common case.
I guess there will still be issues for people who need to run VMs or media apps like Adobe CC etc, and also it will take a while for some dev environments to be fully supported (https://github.com/Homebrew/brew/issues/7857 for example shows it will take some time to get to feature parity).
Overall though a lot of the hard work has already been done, and I'm sure in 2 years time or whenever the transition is 'complete', mac owners will be getting much more value for money with few drawbacks (the main one being higher walls around the garden)
Now, the thing is: Intel's AVX-512 instructions are supposed to accelerate this sort of work, but in practice they are getting lapped by the T2 chip. That signals that Apple's ability to tune hardware designs to the needs of content creators is greater than Intel's.
The benchmark is just ... that accurate.
But consider:
1. Apple just introduced their “starter” chip for everyday consumers. They would not WANT that chip to smoke their top-end Intel Macs, as it would cannibalize margins until the more powerful chips are ready.
2. This is a “two year transition,” with a team that has been shipping a meaningfully better chip every 12 months for about a decade.
From those two observations, I would expect that we’ll see an M1x or M2 chip in the next 12 months which nudges the 16”MBP to 20% or so better than what we’re seeing today, and 12 months after that the transition is completed with the introduction of the M3 series, where there’s an M3, M3x, and M3z for the low, mid, and max performance machines.
And when that happens, I expect the max-performance M3z is going to smoke anything else on the market.
This is not the first time Apple has had a “five-year lead” on the industry, and I wouldn’t be surprised if it takes Intel and AMD some time to catch up.
And frankly we should all be thrilled about that, because more competition and innovation is just going to accelerate our entire field. I can’t wait to see all the new stuff this will power over the next decade :)
Apple is already doing quite well in the low-end education market with the base model iPad. These are competitive with Chromebooks on price. They also do a better job of replacing paper with Notability or GoodNotes and open up project opportunities with the video camera. Most kids seem to be fine with the on-screen keyboard, but that part is not ideal without an external keyboard/keyboard case.
I just ordered the new Apple Silicon MacBook Pro and I fully expect to record a full album on it using Logic X. As presumably Apple rewrote it from the ground up for the new silicon, I expect to be absolutely blown away.
Hence why in that multi-core result the 4c Intel is way closer to the 8c M1 than it "should" be.
There is an exception for apps with JIT and those will perform poorly (think Chrome and every Electron app).
I'm probably not the first or last to suggest this, but... it seems awfully tempting to say: why can't we throw away the concept of maintaining binary compatibility and target some level of "internal" ISA directly (if Intel/AMD could provide such an interface in parallel to the high-level ISA)... with the accepted cost of knowing that the ISA will change in not-necessarily-forward-compatible ways between CPU revisions.
From the user's perspective we'd either end up with more complex binary distribution, or needing to compile for your own CPU FOSS style when you want to escape the performance limitations of x86.
I wonder what the world is going to be like when companies own entire stack including all hardware (even things like cameras and displays) and applications (including app stores).
There is going to be no competition as any new player would have to first join an existing stack that keeps tight grip and ensures competition is killed off before gaining momentum.
So, basically, dystopian future with whole world divided into company territories.
Excited because x86 has been stagnant for years. I'm not an EE, but my understanding is that it's a pretty messy architecture with lots of "glue and duct-tape" fixes.
Nervous because, although x86 had flaws, the fact that pretty much everything ran on it allowed for more open environments and development practices and fewer walled gardens.
> If you still own any Intel stock it is probably good time to dump it.
The market already has the information you have, so it is unlikely that your evaluation of Intel's expected future profits is significantly better/more insightful that others'.
Until Intel stops making huge amounts of money, I'm not sure, as a company, they're in huge trouble. Apple isn't going to be selling the M1 to other companies and other companies have proven they don't have the mettle to spend what Apple is spending to make chips like this. Really, AMD should be a little worried since they have some counterparty risk getting all of their chips made by TSMC. At least Intel controls their own production.
Nonetheless, the translated code is going to be slower than ordinary native code because a lot of the information compilers use for optimization isn't available in the resulting binary, so the translator has to be extremely conservative in its assumptions.
Source:
[1]: https://browser.geekbench.com/processors/amd-ryzen-7-3700x
[2]: https://browser.geekbench.com/processors/amd-ryzen-7-3700u
They are not binning for how high the cores will clock, which is just how business is done with Intel and AMD.
My guess: in Geekbench the Air and Pro score the same, because Geekbench is short-lived and not thermally constrained. In Cinebench you'll see the Pro pulling ahead.
https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...
Chip fab is probably a short term risk but not a medium long term one, as the situation with Huawei has made it very clear to world governments that having competitive chip fab capabilities is incredibly important from a geopolitical power point of view. I would expect to see significant investment by various governments in the next ten years into addressing it.
(If you have a proven skillset in that area and no particular ties to the country you live in I would look into buying a yacht.)
By buying stock you say: I believe the company is going to be better than the market thinks it is going to be.
For example, if someone thought M1 was thermally constrained, they might decide to rip mini out of the case and attach a different cooling method.
this is what some of the intricate systems related to technological progress and markets we're dealing with converge on, yes.
In these terms, I'd rather look at this with a different perspective: the platforms are becoming more mature.
Think of it as an organism; there is nothing natural about outsourcing the control flow of your own components. My intuition for natural design would be something along the lines of microservices (like our biological organs), organically defining the greater entity they themselves form (the company).
Apple is just one illustration of systems getting out of control. Think of international politics, or edgecases in financial markets. Our systems didn't escape us, because they were never truly under control.
Therefore, this isn't dystopian. It is dystopian from a reference frame within which you claim to have control, but you don't, and you never had. To exercise any control over technological and economic progress is _way_ beyond our individual scope, by a margin of at least one level of abstraction.
Instead, this is a transition from one system (one with decentralized and autonomous components (subcontractors et al)) to the next (one with organic components).
Within our current game, that feels dystopian, but that won't matter, because it won't be the same game anymore. Iteratively, that is.
Buyers don't especially care about performance either, to be honest, unless one of those factors actually matters for what they need.
source: https://finance.yahoo.com/news/intel-now-ordering-chips-tsmc...
Not really. The business models for desktop gaming are completely different to mobile devices, and there is no meaningful common market.
I think people will actually be surprised at how few games from iOS will even run on an ARM Mac because developers will block them.
It used to be possible to do some gaming on a Mac - the vast, vast majority of Steam users have graphics hardware of a level that was perfectly achievable on a Mac, especially with an eGPU. The end of x86 is the end of that market, forever.
They don't have desktop UIs, and will be a big step down for most users. You can't seriously argue the UI doesn't matter on a Mac.
It's the same on these phone chips, sure the encode is much quicker, but it's not a fair comparison because you have much more control over the process using software. We'll have to wait and see how the quality and file size on the M1 encoder stacks up.
Back then, Intel was still betting on Itanium. It was a time when AMD was ahead of Intel. Wintel lasted longer, and it's only since the smartphone revolution that they got caught up. In hindsight, even a Windows computer on Intel gave a user more freedom than the locked-down stuff on, say, iOS. OTOH, sometimes user freedom is a bad thing, arguably, if the user isn't technically inclined or if you can sell a locked-down platform like PlayStation or Xbox for relatively cheap (kind of like the printer business).
I'm sure other people can add to this as well. :-)
Intel can still make as many mistakes as they feel like, and AMD had better hold on to their game console deals.
For travelling, I don't think anything beats a Macbook due to how light, thin, and resilient they are. But my 2016 MBP is a pretty shit machine for its price. It's also loud (like every other laptop I've had). I avoid using it. Sure, if you take size/design/mechanical quality into account, it is probably unmatched. But for 95% of my computer usage, those are irrelevant, as I just sit at my desk. I had a company provided HP laptop (not sure if stock or upgraded by our IT staff) at my previous job which was far more performant than my Macbook, so I don't really agree that Windows laptops are necessarily bad, but it was even louder than the Macbook, and of course clunky and ugly.
For me personally, the new Macbooks are disqualified as viable work machines if it's really true that you can't use more than 1 external screen. That's just not a viable computer for me (for work). I will always have a Macbook though just because of how much I love them for travel. But a Macbook is more of a toy than a serious computer, especially if the 1 screen limit is true.
Renoir is 7nm Zen 2 aka the 4000 series. https://en.wikichip.org/wiki/amd/cores/renoir
Matisse is also 7nm Zen 2 aka the desktop 3000 series. https://en.wikichip.org/wiki/Matisse
Picasso is 12nm Zen+ aka the mobile 3000 series. https://en.wikichip.org/wiki/amd/cores/picasso
As for bios (well EFI these days) that should be handled very seamlessly via fwupd on all major Linux distros: https://fwupd.org/lvfs/devices/
(Frankly, it seems much more robust than how it is handled on Windows - not at all, or via half-broken OEM bloatware.)
In this view, it's entirely possible that the Air simply did not have time to throttle before the benchmark ran out.
Exactly. So it was never really the hardware that held back gaming on Mac, but the fact that from a game-development perspective it's an esoteric platform that has limited or no support for the main industry standard APIs (DirectX, Vulkan, etc).
It was never worth the effort for most game developers to bother porting games to the Mac because writing a custom port for Metal was way too expensive to justify for such a niche market.
But now with Apple Silicon, that all changes. If you're going to port your game to iOS (and that's surely tempting - it's a huge platform/market with powerful GPUs and a "spendy" customer base) then you basically get Mac for free with close to zero additional effort.
2) You have to rebuild the UI, which costs money which the Mac version may well not recoup.
3) You have a different version for desktops that costs more upfront with less reliance on in-app mechanics that you don't want to undermine.
OK, but that's no different to Windows and Android.
> "You have to rebuild the UI"
No. Even with apps this is no longer the case (see: "Mac Catalyst"), but it's certainly not true for games. Maybe you'd need to add some keyboard/mouse bindings, but that's about it. Even iPads support keyboards and mice nowadays!
That said, my wife returned the macbook air she bought 3 weeks ago in favor of this new one, so I'll be able to test on that machine before I dive in.
FWIW, I originally thought your mention of Azul was a typo, so I parsed your comment as "Azure and Microsoft" before I realized the tautology, which was why I posted the question. I didn't realize that Azul had pivoted to be a software-based vendor of the JVM.
Hardware encoded H.264 and H.265 don't have any visual quality differences when encoded at the same bitrates in the same containers, as far as I'm aware. Could you list a source for this?
Have never heard of any client or production house requesting an encoding method specifically. Although granted I work at the low end of commercial shooting.
When I said there is no magic, I was warning that we shouldn't expect huge speedups or a crushing advantage, at least not for long. The edge M1 has is due to a simpler ISA (which is less demanding to run efficiently, freeing more resources for optimization and execution) and a faster memory interface (which makes an L3 miss less of a punishment). This fast memory interface also limits it to, for now, 16GB of memory. If the dataset has 17GB, it'll suffer. Another difference is that all of the i9 cores are designed to be fast, whereas only 4 cores of the M1 are. This added flexibility can be put to good use by moving CPU-bound processes to the big cores and IO-bound and low-priority ones to the little ones.
In the end, they are very different chips (in design and TDP). It'd be interesting to compare them with actual measurements, as well as newer Intel ones.
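On the big/little point above: on macOS you don't pin work to specific cores yourself; you tag it with a QoS class and the scheduler tends to route high-QoS work to the performance cores and background-QoS work to the efficiency cores. A minimal sketch with the plain C libdispatch API; the exact core placement is up to the OS, and the sleep is just a crude way to let the work run before exit:

    /* Minimal QoS sketch with libdispatch (macOS). */
    #include <dispatch/dispatch.h>
    #include <stdio.h>
    #include <unistd.h>

    static void crunch(void *ctx)    { (void)ctx; puts("CPU-bound work, high QoS"); }
    static void housekeep(void *ctx) { (void)ctx; puts("deferrable work, background QoS"); }

    int main(void) {
        dispatch_queue_t fast = dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0);
        dispatch_queue_t slow = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0);

        dispatch_async_f(fast, NULL, crunch);     /* likely lands on a big core */
        dispatch_async_f(slow, NULL, housekeep);  /* likely lands on a little core */

        sleep(1);  /* crude: give the queues a moment before exiting */
        return 0;
    }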
Exactly that; I think that's the ultimate reason to have a laptop, and if not, it might make sense to rethink the setup. Why should I buy a $1500 Intel/AMD mobile workhorse when the battery is empty after 2 hours? It usually makes more sense to have a server at home or a VPS for that. Also, a lot of native apps like Steam have first-class support for that nowadays. For the rest, Parsec might work.
There is a social experiment about that, running since at least 2007. It's the smartphone and the tablet. I think I don't have to detail it and all of us can assess the benefits and the problems. We could have different views though.
By the way, I wonder if the makers of smartphone hardware and/or software could do all of their work, including the creation of new generations of devices, using the closed systems they sell (rent?). I bet they couldn't, not all of their work, but it's an honest question.
Final Cut Pro is one of the options for the new Macbook, on the item's page.
That's exactly what I said. It's faster, but not an order of magnitude faster and different workloads will perform differently depending on a multitude of factors (even if benchmarks don't). Do not expect it to outperform a not-too-old top-of-the-line mobile CPU by a large margin.
Rolls-Royce’s cars are designed for use as service cars, in corporate/government motor-pools. They’re essentially the ultimate Uber car. Rich individuals are actively discouraged[1] from buying them to drive themselves. (They’re actually kind of crap for driving yourself in!)
If a rich individual owns an RR model, it’s always because 1. they have retained the services of a professional chauffeur, and 2. the chauffeur has requisitioned one, to use to serve their client’s needs better.
Ask any ridesharing-service driver who worries about customer-experience — the ones that deck out the back with TVs and the like — what they wish they were driving.
A few example features these cars have:
• a silent and smooth ride allowing for meetings or teleconferences to occur in the back seat (this “feature” is actually achieved through many different implementation-level features; it’s not just the suspension. For example, they overbuy on engines — or rather, overbuild[2] — and then RPM-limit them, so that the engine never redlines, so that it’ll never make noise. They make the car heavy on purpose, so that the client won’t even feel speedbumps. Etc.)
• A set of automated rear seat controls... in the front. You know who’s coming, you set the car up the way they’re expecting, quickly and efficiently. This includes separate light and temperature “zones”, in case you have a pair of clients with mutually-exclusive needs. Yes, you can deploy a rear side window-shade from the driver’s seat (presumably in response to your client saying they have a migraine or a hangover.)
• An umbrella that deploys from the driver’s door. This is there for the chauffeur, so they can get out first, have an umbrella snap into their hand, and then use it to shield their client from the rain as they open the client’s door.
• A sliding+tinting sound-isolation window between the front and back, controlled by the client in the back; but then an intercom which the front can use to communicate to the back despite the isolation window — but only one-way (i.e. the front cannot hear the back through the intercom.) Clients can thus trust that their chauffeur is unable to listen into their private conversations if they have the isolation window up.
• A lot of field repair equipment in the boot. These cars even have a specific (pre-populated!) slot for spare sparkplugs; plus a full set of hand-tools required to get at the consumables under the hood. The chauffeur or their maintenance person is supposed to populate this stuff when it’s been used; such that the driver is never caught without this stuff in the field; such that—at least for most problems the car might encounter—the car will never be stalled on the side of the road for more than a few minutes.
Etc etc. These cars (from which most “limousines” are cargo-culting the look, without copying the features) are built from the ground up to offer features for chauffeurs to use to serve client needs; rather than to offer features clients use to serve their own needs.
Which is why these cars are expensive. They’re really not luxury items (as can be seen by the fact that they retain most of their value in the secondary market), but rather:
1. it’s just expensive to build a car this way, because these use-cases, and the parts they require, are somewhat unique;
2. the people who buy these cars — who are by-and-large not individuals, but rather are businesses/governments with a motorpool component — are willing to pay more to get something that can be used at sustained load for decades with low downtime and high maintainability; to serve many different clients with varying needs, changing configuration quickly and efficiently; and to offer a smooth and reliable set of amenities to said clients. In other words, motorpools buy these Rolls-Royce limos instead of forcing regular sedans into that role, for the same reason IT departments buy servers instead of forcing regular PCs into that role.
—————
[1] RR did build their Wraith model so they could actually have something to offer these people who wanted a Rolls-Royce car to drive themselves. But it’s really kind of a silly “collector’s model” — most people in the market for a luxury coupe wouldn’t bother with it. It’s just a halo product for gearhead collectors with RR brand loyalty.
[2] Rolls-Royce Motors, the car maker, is actually owned by their engine manufacturer, Rolls-Royce plc. RR plc exists to engineer and build engines and turbines for these sort of server-like high-reliability low-downtime SLAed use-cases, as in planes, rockets, power plants, etc. RR plc went into the car business for the same reason Tesla did: as a testbed and funding source for their powertrain technologies.
“ fun fact: retaining and releasing an NSObject takes ~30 nanoseconds on current gen Intel, and ~6.5 nanoseconds on an M1”
https://mobile.twitter.com/hhariri/status/132678854650246349...
“…and ~14 nanoseconds on an M1 emulating an Intel”
And the problem with the M1 isn’t performance; single-core is already off the charts. The M2 is going to provide 32GB and 64GB systems with up to four Thunderbolt/USB4 ports and support for dual 6K monitors.
Of course, Apple as an OEM does not support running non-Mac OSes, so virtualization should still be preferred for most use cases.
x86 code translated by Rosetta2 on the M1 retains/releases objects TWICE as fast as native x86 processors.
https://mobile.twitter.com/Catfish_Man/status/13262387851813...
https://mobile.twitter.com/Catfish_Man/status/13262387851813...
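If you want to poke at this yourself, a crude timing sketch using CoreFoundation's C-level retain/release; note this is not exactly the NSObject/objc_retain path those tweets measured, so the absolute numbers will differ:

    /* Crude retain/release timing sketch (macOS, link with -framework CoreFoundation). */
    #include <CoreFoundation/CoreFoundation.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        CFMutableArrayRef obj = CFArrayCreateMutable(NULL, 0, &kCFTypeArrayCallBacks);
        const long iters = 10 * 1000 * 1000;

        clock_t t0 = clock();
        for (long i = 0; i < iters; i++) {
            CFRetain(obj);
            CFRelease(obj);
        }
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        printf("~%.1f ns per retain+release pair\n", secs * 1e9 / (double)iters);
        CFRelease(obj);
        return 0;
    }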
Also notice this result is using clang9 while the MacBook results are using clang12. I assume clang12 has more and better optimizations.
Apple released a killer low-end SoC in the M1. It contains the highest-performance single-core processor in the world along with high-end multi-core performance. But it’s limited to 16GB and two USB4/Thunderbolt ports, so it’s targeted at the low end.
When the M2 is released mid next year, it will be even faster, support four USB4/Thunderbolt ports and will also come in 32GB and 64GB versions.
Greatness takes a small wait sometimes.
Server farms are going to switch rapidly, one leading Mini server farm just announced a 600 unit starter order, and the CEO noted that Big Sur also made significant changes to licensing to make its use in server farms easier.
Zen 3 slightly outperforms the iPhone chip, but it runs its clocks slower to stay inside a 5 watt power draw.
https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...
So, yes. Expect it to outperform Tiger Lake and Zen 3, at least on a per core basis.
I understand that this may be because PC touchpad hardware reports jitter, sometimes more than there really is, and this causes the Precision Touchpad software to increase the hysteresis. MacBook touchpads have low jitter and the driver is tuned to benefit from it.
If anyone at Microsoft with input into the Precision Touchpad reads this, why don't you fix it or work with your licensees to fix it?
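For anyone wondering what "increase the hysteresis" means in practice, a toy dead-zone filter sketch; the threshold value here is hypothetical, not what any real Precision Touchpad driver uses:

    /* Toy dead-zone ("hysteresis") filter: motion below the threshold is
       treated as sensor jitter and ignored. Compile with -lm. */
    #include <math.h>
    #include <stdio.h>

    typedef struct { double x, y; } point;

    static point filter_motion(point last, point raw, double dead_zone) {
        double dx = raw.x - last.x, dy = raw.y - last.y;
        if (sqrt(dx * dx + dy * dy) < dead_zone)
            return last;   /* below threshold: assume jitter, don't move the cursor */
        return raw;        /* above threshold: pass the movement through */
    }

    int main(void) {
        point last  = {100.0, 100.0};
        point noisy = {100.4,  99.7};  /* sub-threshold wiggle from a jittery pad */
        point moved = {104.0, 101.0};  /* a real finger movement */

        /* Noisier hardware forces a bigger dead zone, which then also swallows
           small real movements: that's what feels laggy or imprecise. */
        point a = filter_motion(last, noisy, 1.0);
        point b = filter_motion(last, moved, 1.0);
        printf("jitter -> (%.1f, %.1f), movement -> (%.1f, %.1f)\n", a.x, a.y, b.x, b.y);
        return 0;
    }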
You can be a professional, without being a professional user of all your tools. In fact, for any such tool, there's probably only one or two professions that are professional users of that tool specifically (i.e. where that is the tool that constrains their productivity.)
Many professions aren't constrained by any tools, but rather are constrained by human thinking speed, or human mental capacity for conceptual complexity. These people aren't "professional users" of any tools. They're just regular users of those tools.
So, to sum up — when a tool is described as being "for professionals", what that means is that the tool serves the needs of people who are members of a profession whose productivity is constrained by the quality of that tool. It doesn't mean that it's for anyone who has a profession. Just people who have those professions. They know who they are. They're the people who were frustrated by the tool they have now, and for whom seeing the new tool elicits a joy of the release of that frustration. An "ah, finally, I can get on with my work without [tool] getting in my way so much."
-----
Programming is a profession that is most of the time constrained by thinking speed. (Although, some of the time, we're constrained by grokking speed, which is affected by the quality of the tools known as programming languages, and sometimes the tools known as IDE code-navigation.)
Very little time in a programmer's life is spent waiting for a build to happen, with literally no other productive tasks that they could be doing while they wait.
(Someone whose role comes down solely to QA testing, on the other hand, tends to be a professional user of CI build servers. Faster CI server? More productive QA.)
You can argue that this particular Ryzen has a higher gross margin, say 50%, and lower ASP than $300, but that only gets your cost down to what, $140? And with RAM costing extra.
The iMacs are a mystery to me, but I guess I'm not the target market anyway. (I have a 2018 MBP)
We still can't emulate some 20-year-old machines at full speed on modern hardware due to platform impedance mismatches. Rosetta2 may be good, but until someone runs a DAW on there with a pile of plug-ins and shows a significant performance gain over contemporary Intels (and zero unexpected dropouts), I'm not buying the story of Rosetta2 amazingness.
Just because binary translation is used doesn't mean it's magically as fast as native code. Converting code that runs on architecture A to run on architecture B always has corner cases where things end up a lot slower by necessity.
"Geekbench 5 is a cross-platform benchmark that measures your system's performance with the press of a button. How will your mobile device or desktop computer perform when push comes to crunch? How will it compare to the newest devices on the market? Find out today with Geekbench 5"
It's not even a contest or similarly powerful, spend $3000 on an AMD + Nvidia PC and its significantly more powerful than the $5000 Mac Pro in both CPU and GPU compute.
https://www.macworld.com/article/3572624/tsmc-details-its-fu...
https://mobile.twitter.com/hhariri/status/132678854650246349...
"Make it thicker and heavier" is apparently not the answer that Apple was looking for from Intel.
Thin and light Wintel PCs are known for having a lot of thermal issues too.
Edit: And Apple has already discussed how Rosetta2 handles complexities like self-modifying code. It probably won’t help with performance, but the M1 has a lot of power to keep even that code running fast.
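For a feel of why self-modifying code is the awkward case, here is a tiny toy model (again my own sketch, not Rosetta2's actual design): translations are cached per guest code page, and any write into an already-translated page has to invalidate the cached block so the new bytes get retranslated, which is exactly the kind of thing that eats into translation speedups.

    # Toy translation cache with invalidation on self-modifying writes.
    class ToyTranslator:
        PAGE = 4096

        def __init__(self):
            self.cache = {}  # guest code page -> "translated" block

        def translate(self, memory, page):
            # Stand-in for real translation; the expensive step you want
            # to do once and then reuse on every execution.
            return bytes(memory[page * self.PAGE:(page + 1) * self.PAGE])

        def run(self, memory, page):
            if page not in self.cache:
                self.cache[page] = self.translate(memory, page)
            return self.cache[page]

        def on_guest_write(self, memory, addr, value):
            # Self-modifying code lands here: patch guest memory, then
            # drop the stale translation for that page.
            memory[addr] = value
            self.cache.pop(addr // self.PAGE, None)

    mem = bytearray(2 * ToyTranslator.PAGE)
    t = ToyTranslator()
    t.run(mem, 0)                    # first run: translate page 0
    t.on_guest_write(mem, 16, 0x90)  # guest patches its own code
    t.run(mem, 0)                    # forces a retranslation, not a stale hit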
But more importantly, video/audio apps aren’t going to be using Rosetta2 for very long. 99% of code written for x86 macOS, if not more, is going to be a simple recompile to native. Not going native when your competitors did and got 2-5x faster is corporate suicide.
There will be a long tail of edge case software that runs in emulation, but that won't affect the majority of users.
(Of course, power savings are important in their own right for mobile / battery-operated use cases.)
Apple chips with more cores will come in time as well.
It's the per core performance, especially at a given power draw, that matters going forward.
The reason these sorts of improvements are possible is that most of the power of a video encoder doesn't come from the codec/container format itself but rather from how effectively the encoder selects which data to keep and discard within that format's constraints. There is also a LOT of tuning that can be done depending on the kind of content being encoded, etc.
For high-end work, basically nobody uses hardware encoders for the final product.
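As a hedged illustration of that tuning point (assuming an ffmpeg build with libx264 and VideoToolbox support, and a hypothetical input.mov), the same H.264 target can be hit either by a heavily tuned software encoder or by the fixed-function hardware block; the knobs below are where the software encoder earns its quality per bit, which is why final delivery encodes usually stay in software.

    # Software encode: a slow preset plus content tuning lets the encoder
    # spend far more effort deciding what to keep and what to discard.
    import subprocess

    SRC = "input.mov"  # hypothetical source clip

    subprocess.run([
        "ffmpeg", "-i", SRC,
        "-c:v", "libx264", "-preset", "veryslow",
        "-tune", "film", "-crf", "18",
        "sw_encode.mp4",
    ], check=True)

    # Hardware encode (Apple VideoToolbox): much faster and cheaper on
    # power, but the fixed-function block makes simpler decisions, so
    # quality per bit is usually worse at the same bitrate.
    subprocess.run([
        "ffmpeg", "-i", SRC,
        "-c:v", "h264_videotoolbox", "-b:v", "8M",
        "hw_encode.mp4",
    ], check=True)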
You also have the problem with proprietary software that even if a port exists, it's not the version you have an existing license for, and you may not be able to afford a new laptop and all new software at the same time.
This is an early access build from today https://github.com/microsoft/openjdk-aarch64/releases/tag/16...
Where I could be wrong is that Apple could release two chips. First an upgraded M1, let's call it M1x, that supports a bit more on-chip RAM (24 or 32 GB) and four ports. It would be only for high-end MacBook Pros and again optimized for battery life.
And they would release an M1d for desktops that has more cores but moves RAM off-chip. That would improve multicore performance, but I don't know how much it would hurt single-core with slower memory fetches. They could probably compensate with higher clock speeds, power budgets, and more active cooling.
Single-thread: 1015, multi-thread: 7508.
The M1 is far ahead of the 3.2GHz Xeon in my machine single-threaded, and I suppose having double the cores (vs. the M1's high-performance cores) helped me on the multi-threaded side. The fact that I paid 5 grand vs. a ~$1500 portable is not lost on me here...
They had crappy code-signing policies (only Store apps allowed on the Windows RT tablets), which guaranteed poor adoption, but that was a policy decision, not a technical one.
Personally I never much liked VAX myself, but that was primarily because my first experience of it was with VMS, and I'd previously used Unix. The difference was jarring.
Later in my career, I had no choice but to use VMS on an Alpha cluster, and grew to really appreciate it.
This seems like a really cool piece of technology, and I'm kind of bummed that everyone is so cynical and pessimistic about everything these days (albeit understandably so).
https://centerforhealthjournalism.org/blogs/2011/11/10/what-...
Let alone multicore performance. Apple's cores are also far behind in I/O: 64GB of RAM and 4x Thunderbolt is less than what current-gen laptop chips can do.
Low RAM is still an issue even with such fast SSDs, speaking as someone who ran RAID0 Gen3 NVMe SSDs (so roughly equivalent to what's in there).
* 3800XT = 1357 (100%)
* 4800H = 1094 (~80%)
* 4800U = 1033 (~76%)
I would expect a 5800U to score at best around 1500, but realistically closer to 1300-1450. That's still behind the M1, but pretty darn close for being a node behind (and it will still probably be faster for applications that would require x86 translation).
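Spelling that projection out as arithmetic (the Zen 2 scores are the ones quoted above; the Zen 3 desktop single-core score of ~1620 is my own assumption, used purely for illustration), the same desktop-to-mobile scaling puts a hypothetical 5800U in roughly the ballpark described:

    # Back-of-the-envelope version of the estimate above.
    zen2_desktop = 1357   # 3800XT (100%)
    zen2_mobile_h = 1094  # 4800H  (~80%)
    zen2_mobile_u = 1033  # 4800U  (~76%)

    zen3_desktop = 1620   # assumed Zen 3 desktop single-core score

    ratio_h = zen2_mobile_h / zen2_desktop  # ~0.81
    ratio_u = zen2_mobile_u / zen2_desktop  # ~0.76

    # Apply the same desktop-to-mobile scaling to the assumed Zen 3 score.
    print(f"5800H-class estimate: {zen3_desktop * ratio_h:.0f}")  # ~1306
    print(f"5800U-class estimate: {zen3_desktop * ratio_u:.0f}")  # ~1233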
I understand you are being sarcastic, but no, that's not what I'm saying.
It is Apple Silicon that is faster (at least on paper). I'm saying that even though AMD will have worse perf/watt, I think it will get impressively close despite its less efficient fabrication process.
Genshin Impact is a great game that is on iOS in addition to "real consoles". Oceanhorn 2 is an amazing game that was originally on Apple Arcade and brought to Nintendo's "real console".
There are also quite a number of ports that I think you aren't aware of.
The M1 is a system on a chip, with all the benefits and drawbacks of that including RAM and port limits.
The next releases will likely be A) a tweaked M1 for higher-end MacBook Pros with more RAM and ports, and B) a desktop version with plenty of ports, significantly higher clock speeds, and off-chip RAM.
I think there will always be faster CPUs out there, but nothing remotely near the M series in performance per watt or performance per dollar.
It's like calling yourself a programmer because you can set a timer on your VCR. (dated but still accurate)
If you read my parent comment you'll see how DAWs are going to be using Rosetta2 for years to come, maybe even a decade, for many people. Even if there are ARM versions, you won't be able to use them until all your dozens if not hundreds of plug-ins, some of which won't be actively developed any more or will require a re-purchase for an ARM version, have also migrated.
People invested in such ecosystems aren't just going to up and give up half their collection of software, or spend thousands re-purchasing upgrades to get ARM versions.
You can't run Windows on these things, and Rosetta 2 doesn't fully support kexts, VMs, or certain instruction sets. It's a translator and it's going to be imperfect in practice. That's why it's not intended to supplant development with native instructions.
Your other comment is a tweet regarding one function that is speculatively faster, but tells me nothing about real-world performance -- nor whether the tools I use for my business are going to be supported by Apple Silicon in the next few months.
Although instead of lasting 1 year they only last 7 days; there is no fee for a user to sign and install their own binaries.
Most importantly, Zen 4 is a chiplet design, so for the same number of cores it will be cheaper to make than the M1 chip.
As for performance per watt, Renoir in low power configurations matches the A12. I would really doubt that a laptop Zen 4 on 5nm LPP wouldn't pass the M1/M2 in both performance and performance per watt, because Renoir is on 7nm with an older uArch and gets close.
To clarify, on iOS does the app erase itself after 7 days? Or is it something like you can install an app for only 7 days after downloading/using Xcode?
The only issue might be multi-touch based games on M1
Wouldn't it be easier to just create standard pagination links at the top of the comments page, rather than manually forcing a mod to post this "list of pages" comment on each and every article?
From what I gather AWS also offers AMD and ARM-based instances.
I don't see how it benefits the cloud providers not to offer these choices. No one benefits from a monopoly and they want to drive the cost of their services down.
I don't care that I can't run Linux on my Mac. If I wanted to run Linux, I'd have different hardware.
1) There is a "More" link at the bottom of the page; Dan started making these comments because the UI is too subtle and a lot of people miss the link.
2) I believe that he’s said they are working on performance improvements so that all the comments can be shown on a single page to solve the problem completely.
I am not at all an expert in this field so my remarks are based on my own observations and performance benchmarks from experts in the industry. For example, this graph from Anandtech shows the stagnation (or at least slow improvements) in x86 performance gains while performance gains from Apple have been massive.
The real prize is getting the software fast enough to just render entire pages again, which is what it did for most of HN's existence.
MacBooks are due for a new industrial design, so it makes sense to wait. If you absolutely need a new computer, that's a tricky place to be right now.
It's just that... swapping data in and out of RAM is far slower than the CPU can chew through it, right? So if the bottleneck is RAM, would it make sense to get the most RAM? (i.e. an Intel Mac)
Apple makes it clearer that in the real world, these machines are only going to offer their incredible performance on Metal, on iPad/iPhone apps, and on any Mac apps that happen to have been ported over to M1 by their developers (using Xcode). They will only offer performance similar to existing Intel Macs when running existing Intel Mac apps, because much of that incredible performance will be spent on Apple's Rosetta2 software making those unmodified apps compatible.
But what went unsaid, except during the part where they say they 'learned from their experience in the past processor transitions', is that by introducing the chip at the low end of the lineup first, they create a market for the (few remaining relevant) Mac developers to invest in porting their code over to ARM. Likewise, because these new machines run iPad apps at full speed on up to 6K displays, there is incentive for the iPad/iOS-only devs to expand the functionality beyond what their wares can do on a tablet/phone. (Any Mac dev that drags their feet porting may find that there are 50 iPad apps that now run fullscreen performing 75% of their functionality, costing them sales in the big-volume accounts where licenses are bought by the thousands.) Meanwhile, the type of users who can get by with two USB ports, 16GB of RAM and a single external monitor probably don't run many third-party Mac apps and are going to have an awesome experience with the iPad apps and Apple's native apps.
But the truth is comparing to future offerings is bullshit, and we have to stick to what's available today. Impressive power/performance and all that, I have to say. We will see what sustained load looks like and how it runs non-optimized software. But to put it in perspective, one CCX of Zen 3 performs better on 7nm (but draws up to 65W), with approximately the same die size (although without the GPU and other things the M1 has).
A lot of people run around with way more powerful laptops than they actually need for whatever they are doing, because it's through a business or it's deductible, but news flash: buying a MacBook Pro doesn't make you a pro.
A question: if ALL pros were fine with 16GB of RAM, why does Apple offer 4x as much? Answer: because a lot of people actually need it.
I am happy for the people who will get these new devices and be happy with them; I might get one too. But truth be told, most of us getting these devices could make do with the latest iPad Pro + Magic Keyboard just as well. (OK, I do need to code occasionally, but even for that there are pretty OK apps for iPad I could use.)
Expectations have to come the fuck down from where they are today, because what's being put on these devices is just crazy. It's so overhyped that I think many will be disappointed when compatibility issues surface and people realise that the 3x, 5x, 7x performance figures are mainly down to fixed-function hardware and accelerators, while the general performance increase is just slightly above the generational leap we are used to, with a bigger increase in efficiency.
I think it's more that gaming wasn't held back on the Mac; it's just that Boot Camp was much more common than people think.
> If you're going to port your game to iOS (and why not? It's a huge platform with powerful GPUs and a huge, "spendy" market)
Because mobile gaming and desktop gaming have very little in common. Note that Nintendo didn't port their titles when they released iOS games, they made new games. Users want different experiences, and flat ports of successful console gaming titles to iOS tend to fail. There are, all told, very few ports of successful PC/console games to iOS, and those that exist tend to be brand reuse rather than literal ports.
> then you basically get Mac for free with close to zero additional effort.
Not even remotely. The way you secure your service has to be totally different, the UI paradigm is completely different, you have to cope with totally different aspect ratios etc etc. It's significant effort, and it will be very hard to justify for most game studios. It's certainly more work in most cases than porting a Windows game to MacOS was when using a mainstream engine, and that was not a huge market.
The BSA/SBSA is relatively new as far as I'm aware. The server version was released in 2014, the same year as the iPhone 6, which was already using Apple SoCs.
I don't know when the client version was released but fairly recently AFAIK. I don't know of any systems shipping based on it.
Most ARM systems are using device trees and their own custom slate of devices.
So I should amend my comment I suppose: no one is using any kind of "Standard ARM PC" definition in any quantity, and I'm not sure we should bring over UEFI or ACPI when device trees have been working well so far.
Nevertheless as I noted I'm sure enterprising hackers will figure out how to do it. If you downgrade security the SEP will sign whatever "kernel blob" you like and the system will load and jump to it at boot. Technically that isn't even required - a kext could pause all CPUs, set the correct registers, and jump to a completely different OS if you were really determined.
This has been a claim made about the Macs since the T2 chip came out. It was strictly false then (you just had to boot into Recovery Mode and turn off the requirement that OSes had to be signed by Apple to boot) and we still don't know for sure now. Apple has stated in their WWDC that they're still using SecureBoot, so it's likely that we can again just turn off Apple signature requirements in Recovery Mode and boot into ARM distros.
Whether or not that experience will be good is another thing entirely, and I wouldn't be surprised if Apple made it a bitch and a half for driver devs to make the experience usable at all.
>- virtualization limited to arm64 machines - no windows x86 or linux x86 virtual machines
True, but this isn't a strictly unsolvable limitation of AS; it's more like one of those teething pains you have to deal with, since this is the first-generation chip in an ISA shift. By this logic, you could say that make doesn't even work yet. Give it some time. In a few months I expect all of these quirks to be ironed out. Although, I suppose if you're concerned about containers, it sounds like you want to be in the server market, not the laptop market.
>- only 2 thunderbolt ports, limited to 16GB RAM, no external gpu support/drivers, can't use nvidia or amd cards, can't run x86 containers without finding/building for arm64 or taking huge performance hit with qemu-static
See above about "give it some time".
>- no AAA gaming
I mean, if you're concerned about gaming, you shouldn't buy any Mac at all. Nor should you be in the laptop market, really. Although, this being said, the GPU in the new M1 is strong enough to be noted. In the Verge's benchmarks, Shadow of the Tomb Raider was running on the M1 MacBook Air at 38FPS at 1920x1200. Yes, it was at very low settings, but regardless – this is a playable framerate of a modern triple-A game, in a completely fanless ultrabook ... running through a JIT instruction set translation layer.
>- uncertain future of macos as it continues to be locked down
I disagree. I know we were talking about the M1 specifically, but Apple has shown that the future of ARM on desktop doesn't have to be as dismal as Windows made it out to be. Teething pains aside, the reported battery life and thermal performance on the new AS machines have been absurdly fantastic. I think, going down the road, we'll stop seeing x86 CPUs on all energy-minded machines like laptops entirely.