
1080 points antipaul | 1 comment
satysin No.25069364
This is very interesting and in line with Apple's claims. I am looking forward to some real world numbers for different tasks in the next few weeks and months as native apps become available.

Jonathan Morrison posted a video [0] comparing a 10-core Intel i9 2020 5K iMac with 64GB RAM against an iPhone 12 Mini exporting the same 10-bit H.265 HDR video. The iPhone destroyed the iMac, finishing in ~14 seconds versus about 2 minutes, at allegedly the same quality. And the phone was at ~20% battery with no external power source. That is some voodoo, and I want to see a lot of real-world data, but it is pretty damn exciting.

Now whether these extreme speed ups are limited to very specific tasks (such as H.265 acceleration) or are truly general purpose remains to be seen.

If they can be general purpose with some platform-specific optimisations, that is still freakin' amazing and could easily be a game changer for many types of work, provided there is investment in optimising the tools to best utilise Apple Silicon.

Imagine an Apple Silicon-specific build of Apple's LLVM/Clang with a 5x or 10x C++ compilation speed-up over Intel, if there is a way to optimise for gains similar to what they achieved for H.265.

Some very interesting things come to mind, and that is before we even get to the supposed battery life benefits. Having a laptop that runs faster than my 200+W desktop while getting 15+ hours on battery sounds insane, and perhaps it is, but this is the most excited I have been about general-purpose computing performance gains in about a decade.

[0] https://www.youtube.com/watch?v=xUkDku_Qt5c

Edit:

A lot of people seem to be picking up only on my H.265 example, which is fine, but that was just one example of one type of work.

As this article shows, the overall single-core and multi-core speeds are the real story, not just H.265 video encoding. If these numbers hold up in the real world, and not just in a screenshot of some benchmark, that is something special imho.

replies(2): >>25069400 #>>25069658 #
joefourier No.25069400
Your H.265 example is down to the iPhone having a dedicated hardware encoder while the iMac was encoding on the CPU. A hardware video encoder is almost always going to be faster and more power-efficient than a CPU-based one. However, a CPU encoder offers more flexibility and can be continually improved to achieve better compression ratios.

Generally, hardware encoders produce worse quality at a given file size and are used for real-time streaming, while CPU-based ones are used for offline compression in order to achieve the best possible compression ratios.
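The trade-off above is easy to see with ffmpeg on a Mac, which exposes both paths. A sketch, assuming an ffmpeg build with libx265 and VideoToolbox support, and a hypothetical `input.mov`:

```shell
# Software (CPU) encode with libx265: slow, but quality-targeted via -crf
# and tunable via -preset; typically the best quality per bit.
ffmpeg -i input.mov -c:v libx265 -preset slow -crf 22 -tag:v hvc1 sw_out.mp4

# Hardware encode via Apple's VideoToolbox block: far faster and more
# power-efficient, but fewer knobs; you usually target a bitrate instead,
# and files tend to be larger at comparable visual quality.
ffmpeg -i input.mov -c:v hevc_videotoolbox -b:v 8M -tag:v hvc1 hw_out.mp4
```

(`-tag:v hvc1` just makes the HEVC output playable in QuickTime; the bitrate and CRF values are illustrative, not recommendations.)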

replies(3): >>25069422 #>>25069459 #>>25070290 #
satysin No.25069459
Yes but that is kind of the point. Going forward all Apple Silicon machines will have this kind of hardware baked into the SoC at no extra cost whereas no Intel system (be it PC or Mac) will.

That is a big deal, as it means Adobe, Sony, Blackmagic, etc. will be able to optimise to levels impossible elsewhere. If that 8x speed-up scales linearly to large video projects, you would need a Mount Everest-sized reason to stick with a PC.

replies(5): >>25069510 #>>25069516 #>>25069531 #>>25069585 #>>25069686 #
joefourier No.25069531
As said below, you already have that HW in many Intel CPUs and all AMD/Nvidia GPUs.

Dedicated HW for specific computing applications is nothing new; back in the 90s you had dedicated MJPEG ASICs for video editing. Of course, they became paperweights the moment people switched to other codecs (although the same could be said for 90s CPUs, given the pace of advancement back then).

Thing is, your encoding block takes up precious space on your die, and is absolutely useless for any other task, like video effects, colour grading, or even rendering to an unsupported codec.
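Each of the vendor blocks mentioned above already has an ffmpeg wrapper, so a quick way to see which ones a given build exposes is to grep the encoder list. A sketch, encoder names as used by recent ffmpeg builds:

```shell
# List the HEVC encoders this ffmpeg build was compiled with.
# hevc_qsv          = Intel Quick Sync
# hevc_nvenc        = Nvidia NVENC
# hevc_amf          = AMD AMF
# hevc_videotoolbox = Apple's media engine
ffmpeg -hide_banner -encoders | grep hevc
```

An encoder appearing in the list only means ffmpeg was built with it; actually using it still requires the matching hardware and drivers on the machine.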