
1080 points antipaul | 1 comment
satysin ◴[] No.25069364[source]
This is very interesting and in line with Apple's claims. I am looking forward to some real world numbers for different tasks in the next few weeks and months as native apps become available.

Jonathan Morrison posted a video [0] comparing a 10-core Intel i9 2020 5K iMac with 64GB RAM against an iPhone 12 Mini exporting 10-bit H.265 HDR video, and the iPhone destroyed the iMac: the same video, at allegedly the same quality, exported in ~14 seconds on the iPhone vs ~2 minutes on the iMac! And the phone was at ~20% battery without an external power source. That is some voodoo, and I want to see a lot of real-world data, but it is pretty damn exciting.

Now whether these extreme speed ups are limited to very specific tasks (such as H.265 acceleration) or are truly general purpose remains to be seen.

If they can be general purpose with some platform-specific optimisations, that is still freakin' amazing and could easily be a game changer for many types of work, provided there is investment in optimising the tools to best utilise Apple Silicon.

Imagine an Apple Silicon specific build of Apple's LLVM/Clang with a 5x or 10x C++ compilation speed-up over Intel, if there is a way to optimise for gains similar to those they have achieved for H.265.

Some very interesting possibilities come to mind, and that is before we even get to the supposed battery life benefits. Having a laptop that runs faster than my 200+W desktop while getting 15+ hours on battery sounds insane, and perhaps it is, but this is the most excited I have been about general-purpose computing performance gains in about a decade.

[0] https://www.youtube.com/watch?v=xUkDku_Qt5c

Edit:

A lot of people seem to be picking up only on my H.265 example, which is fine, but that was just one example of one type of work.

As this article shows, the overall single-core and multi-core speeds are the real story, not just H.265 video encoding. If these numbers hold up in real-world use, and not just in a screenshot of some benchmark, that is something special imho.

replies(2): >>25069400 #>>25069658 #
joefourier ◴[] No.25069400[source]
Your H.265 example is due to the iPhone having a dedicated HW encoder while the iMac was encoding on the CPU. Fixed-function hardware is almost always going to be faster and more power efficient than a general-purpose CPU at the one task it was designed for. However, a CPU encoder offers more flexibility and can be continually improved to achieve better compression ratios.

Generally, HW encoders offer worse quality at a given file size and are used for real-time streaming, while CPU-based ones are used for offline compression in order to achieve the best possible compression ratios.
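
You can see this tradeoff yourself with a minimal sketch like the one below. It assumes an ffmpeg build that includes both the libx265 software encoder and Apple's hevc_videotoolbox hardware encoder, plus a hypothetical local input.mov; it times a hardware encode against a software encode of the same clip:

    import subprocess
    import time

    SOURCE = "input.mov"  # hypothetical source clip

    def encode(output, codec_args):
        """Run ffmpeg with the given encoder arguments and return elapsed seconds."""
        start = time.monotonic()
        subprocess.run(
            ["ffmpeg", "-y", "-hide_banner", "-i", SOURCE, *codec_args, output],
            check=True,
        )
        return time.monotonic() - start

    # Hardware path: Apple's VideoToolbox HEVC encoder, rate-controlled by bitrate.
    hw_seconds = encode(
        "out_hw.mp4",
        ["-c:v", "hevc_videotoolbox", "-b:v", "8M", "-tag:v", "hvc1"],
    )

    # Software path: libx265 with a quality target (CRF) and a slower preset,
    # which trades encode time for better compression efficiency.
    sw_seconds = encode(
        "out_sw.mp4",
        ["-c:v", "libx265", "-crf", "22", "-preset", "slow", "-tag:v", "hvc1"],
    )

    print(f"hardware: {hw_seconds:.1f}s, software: {sw_seconds:.1f}s")

The hardware path will usually finish far sooner; comparing the resulting file sizes at similar visual quality is where the software encoder typically wins back ground.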

replies(3): >>25069422 #>>25069459 #>>25070290 #
gjsman-1000 ◴[] No.25069422[source]
OK... but let's say I'm a professional: that's a big selling point. Having dedicated HW that does something faster than the CPU is a bonus, not cheating.
replies(3): >>25069443 #>>25069451 #>>25069557 #
joefourier ◴[] No.25069451[source]
You already have HW encoder blocks on certain CPUs and most GPUs. See: Intel Quick Sync, Nvidia NVENC and AMD Video Core Next. Support for them will of course depend on your platform and the applications you are using. IIRC, video editing software will generally use HW decoding for smooth real-time playback, but use CPU encoding for the final output.
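
As a concrete illustration, here is a small sketch (assuming ffmpeg is on your PATH) that checks which of those hardware encoders your local ffmpeg build was compiled with. Note the names are how ffmpeg exposes them: AMD's Video Core Next block appears via the AMF encoders.

    import subprocess

    # HW encoder names as ffmpeg exposes them (Quick Sync, NVENC, VideoToolbox, AMF).
    HW_ENCODERS = {
        "h264_qsv", "hevc_qsv",
        "h264_nvenc", "hevc_nvenc",
        "h264_videotoolbox", "hevc_videotoolbox",
        "h264_amf", "hevc_amf",
    }

    # Ask ffmpeg for its full encoder list and scan it for the names above.
    result = subprocess.run(
        ["ffmpeg", "-hide_banner", "-encoders"],
        capture_output=True, text=True, check=True,
    )

    available = [name for name in HW_ENCODERS if f" {name} " in result.stdout]
    print("hardware encoders in this build:", sorted(available) or "none")

Whether an application actually uses them is a separate question, which is why editors can mix HW decode for playback with CPU encode for the final export.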