Bought myself an Ampere Altra system

(marcin.juszkiewicz.com.pl)
204 points by pabs3 | 26 comments
1. maz1b ◴[] No.44420099[source]
I've always wondered why there isn't a bigger market / offering for dedicated servers with Ampere at their heart (apart from Hetzner)

If anyone knows of any, let me know!

replies(3): >>44420113 #>>44420905 #>>44421529 #
2. ozgrakkurt ◴[] No.44420113[source]
A lot of software is built and optimized for x86, and EPYC processors are really good, so it is hard to get into ARM. I don't think that many companies use it.
replies(5): >>44420185 #>>44420236 #>>44420304 #>>44421290 #>>44421491 #
3. zxexz ◴[] No.44420185[source]
I don't think a lot of companies realize they are using it. At three companies now, I've witnessed core microservices migrate to ARM seamlessly because engineering was under direct pressure to "reduce cloud spend". The terrifying (and amazing) bit is that moving to ARM was enough to get finance off engineering's back in all cases.
4. adev_ ◴[] No.44420236[source]
> A lot of software is built and optimized for x86, and EPYC processors are really good, so it is hard to get into ARM. I don't think that many companies use it.

That is just not true.

Nowadays, most OSS software and most server-side software will run without a hitch on armv8.

A tremendous amount of work has been done to speed up common software on armv8, partly due to the popularity of mobile as a platform, but also due to the emergence of ARM servers (Graviton / Neoverse) in the major cloud providers' infrastructure.

replies(2): >>44420803 #>>44425743 #
5. Someone ◴[] No.44420304[source]
If you use AWS, lots of software can easily be run on Graviton, and lots of companies do that.

https://www.theregister.com/2023/08/08/amazon_arm_servers/:

“Bernstein's report estimates that Graviton represented about 20 percent of AWS CPU instances by mid-2022”

And that was three years ago. Graviton instances are cheaper than (more or less) equivalent x86 ones on AWS, so I think it's a safe bet that number has gone up since.

replies(1): >>44420583 #
6. baq ◴[] No.44420583{3}[source]
Yeah, if you're running a Node backend, the changes are cosmetic at best (unless you're running Chrome to generate PDFs or whatever). Easiest 20% saved ever. If I were Intel or AMD I would have been very afraid of this... years ago.
replies(1): >>44421650 #
7. p_l ◴[] No.44420803{3}[source]
However, it's hard to get into ARM other than through cloud offerings.

Because those cloud offerings have handled for you the problematic fact that ARM generally operates as a "closed platform" even when everything is open source.

On a PC server, you usually only hit issues if you want to play with something more exotic on either the software or the hardware side. A bog-standard Linux setup is trivial to integrate.

On ARM, even though UEFI is finally available, I recall that even a few years ago there were issues with things like interrupt controller support - and that kind of reputation persists and definitely makes it harder for on-prem ARM to gain traction.

It also does not help that you need to go for pretty pricey systems to avoid vendor lock-in at the firmware compatibility level - or had to, until quite recently.

replies(1): >>44421081 #
8. moffkalast ◴[] No.44420905[source]
They're slow and the arch is less compatible? Arm cores in web hosting are typically known as the shit-tier.

I think the main use case for these is some sort of Android build farm, as a CI/CD pipeline with testing of different OS versions and general app building, since they don't have to emulate arm.

replies(1): >>44421237 #
9. rixed ◴[] No.44421081{4}[source]
Why is it hard to get a Mac or a Pi?
replies(3): >>44421400 #>>44421938 #>>44422629 #
10. dijit ◴[] No.44421237[source]
Well, I've run some Ampere Altra ARM machines in my studio, so I can speak to this:

A) No, you can't use ARM machines as Android build farms, as Android's build tools only work on x86 (go figure).

B) The Ampere Altra delivers higher throughput than x86 on the same lithography and clock frequency; I can't imagine how they'd be slower for web - that's not my experience with these machines under test. Maybe virtualisation has issues (I ran bare-metal containers - as you should).

My original intent was to use these machines as build/test clusters for our Go microservices (and I'd run ARM on GCP), but GCP was a bit too slow to roll out and now we're too far into feature lock for any migration of that.

So I added the machines to the general pool of compute, and they run bots, internal web services, etc., with Kubernetes.

The performance is extremely good, limited only by the fact that we can't use them as build machines for the game due to the architecture difference - however, for storage or heavy compute they really outperform the EPYC Milan machines, which are also on a 7nm lithography.
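
For Go services like the ones mentioned above, the architecture difference is mostly a compile-time flag rather than a port. A minimal sketch (not taken from any real setup) that can be built natively on an Altra box or cross-compiled from an x86 workstation with `GOOS=linux GOARCH=arm64 go build`:

```go
// archinfo.go - minimal sketch: report what a Go binary was built for.
// Cross-compile for an arm64 host with:
//   GOOS=linux GOARCH=arm64 go build archinfo.go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// GOOS and GOARCH are baked in at compile time, so a binary built with
	// GOARCH=arm64 reports "arm64" even if it was cross-compiled on x86.
	fmt.Printf("built for %s/%s, running on %d CPUs\n",
		runtime.GOOS, runtime.GOARCH, runtime.NumCPU())
}
```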

replies(1): >>44421526 #
11. M0r13n ◴[] No.44421290[source]
I am running an ARM64 build of Ubuntu on my MacBook Air using Multipass. I've never had a problem due to missing support/optimisation for ARM - at least I didn't notice any. I even noticed that build times were faster on this virtualised machine than they were natively on my previous Tuxedo laptop, which had an Intel i7 that was a couple of years old. Although I attribute this speed mostly to the sheer horsepower of the newest Apple chips.
12. p_l ◴[] No.44421400{5}[source]
The Pi is relatively underpowered (quite underpowered, even), has a proprietary boot system, and similarly isn't exactly good with things you might want in a professional server (there are some boards using compute modules that provide some of that as an add-on, but it's generally not a given). It's also I/O starved.

The Mac is similarly a proprietary system with no BMC support. Running one in a rack is always going to be at least a partially half-baked solution. Additionally, you're heavily limited in OS support (for all that I love what Asahi has done, it does not mean you can install, say, RHEL on it, even in a virtual machine - because M-series chips do not support the 64kB page size that became the standard on ARM64 installs in the cloud; RHEL defaults to it, for example, and it was quite a pain to deal with in a company using MacBooks).
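
The page-size mismatch described above is easy to check for yourself; a minimal sketch (Go chosen only for illustration) that prints what the running kernel uses - expect 65536 on a 64kB-page arm64 kernel such as RHEL's default, 4096 on typical x86_64, and 16384 under Asahi on Apple Silicon:

```go
// pagesize.go - sketch: print the memory page size the running kernel exposes.
// This is the detail that keeps 64kB-page distro kernels off M-series Macs.
package main

import (
	"fmt"
	"os"
)

func main() {
	fmt.Printf("page size: %d bytes\n", os.Getpagesize())
}
```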

So you end up "shopping" for something that actually matches server hardware, and it gets expensive and sometimes non-trivial, because the ARM server market was (and probably still is) not quite friendly to casually buying a rackmount server with ARM CPUs at affordable prices. Hyperscalers have completely different setups where they can easily absorb the complexity costs, because they can bother with customized hardware all the way up to custom ASICs that provide management, I/O, and the hypervisor boot and control path (like AWS Nitro).

One option is to find a VAR that actually sells ARM servers and not just appliances that happen to use ARM inside, but that's honestly a level of complexity (and pricing) above what many smaller companies want.

So if you're on a budget, it's either cloud(-ish) solutions, or maybe one of your engineers can be spared to spend a considerable amount of time building a server from parts that will resemble something production quality.

replies(1): >>44439816 #
13. ◴[] No.44421491[source]
14. zozbot234 ◴[] No.44421526{3}[source]
> No you can't use ARM as android build farms, as androids build tools only work on x86 (go figure).

Does qemu-user solve that, or are there special requirements due to JIT and the like that qemu-user can't support?

replies(1): >>44424566 #
15. jychang ◴[] No.44421529[source]
Oracle OCI?

Their Ampere A1 free tier is pretty good: a 4-core ARM web server with 24 GB of RAM, for free.

replies(1): >>44421607 #
16. maz1b ◴[] No.44421607[source]
Are there any alternatives to a big cloud provider or Hetzner?
replies(1): >>44426367 #
17. imtringued ◴[] No.44421650{4}[source]
I was scared of ARM taking over in 2017 (e.g. Windows being locked down to just the Microsoft Store), and 8 years later literally nothing has happened.
replies(2): >>44423938 #>>44425505 #
18. delfinom ◴[] No.44421938{5}[source]
Have you ever tried to run a Mac "professionally" in the role of a server?

It's absolute garbage. For the last few years you haven't even been able to run daemons on a Mac without having a user actually log in on boot. And that's just the beginning of how bad they are.

And don't get me wrong, I'm not shitting on Macs here, but Apple does not intend for them to be used as servers in the slightest.

19. haerwu ◴[] No.44422629{5}[source]
Any Apple Mac you can buy now will have an M3 or M4 CPU, while the Asahi team supports only the M1 and M2 families.

So you cannot run Linux natively on currently-in-store Mac hardware.

And the Raspberry Pi is a toy, without any good support in mainline Linux.

20. yjftsjthsd-h ◴[] No.44423938{5}[source]
I would not call Windows RT "literally nothing". It failed, but it clearly was an attempt to lock things down.
21. dijit ◴[] No.44424566{4}[source]
I'm not sure I'm comfortable buying hardware for build systems that has to emulate an instruction set to build.

Doubly so when that hardware's native instruction set is the one you're targeting.

22. p_ing ◴[] No.44425505{5}[source]
Windows S exists today.
23. arp242 ◴[] No.44425743{3}[source]
Most times I've looked at it, ARM still seems to be lagging behind x86 a bit, but the gap is much smaller than it used to be. For example: https://news.ycombinator.com/item?id=41925983

Of course the specifics will depend on what you're doing: for certain applications or code-paths ARM may well have 100% parity, or perhaps even have more optimisations than x86.
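
Part of why parity varies by code path is that optimised implementations are gated on per-architecture CPU features. A rough sketch of what that detection looks like in Go, using the golang.org/x/sys/cpu package (the flags printed here are only illustrative, not tied to any particular library):

```go
// features.go - sketch: the kind of runtime CPU-feature check that gates
// architecture-specific fast paths in portable libraries.
// Requires: go get golang.org/x/sys/cpu
package main

import (
	"fmt"
	"runtime"

	"golang.org/x/sys/cpu"
)

func main() {
	fmt.Println("arch:", runtime.GOARCH)
	switch runtime.GOARCH {
	case "arm64":
		// Typical gates for NEON and crypto-accelerated code paths.
		fmt.Println("ASIMD:", cpu.ARM64.HasASIMD,
			"AES:", cpu.ARM64.HasAES, "SHA2:", cpu.ARM64.HasSHA2)
	case "amd64":
		fmt.Println("AVX2:", cpu.X86.HasAVX2)
	}
}
```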

24. everfrustrated ◴[] No.44426367{3}[source]
PhoenixNAP has them by the hour or month

https://phoenixnap.com/bare-metal-cloud/instances#arm

25. rixed ◴[] No.44439816{6}[source]
The Pi is not that underpowered per dollar, is it?

I think Google demonstrated 20 years ago that server-grade hardware is no match for fault tolerance in software. Plenty of build farms use Pis, running standard, flawless arm64 Linux distros.

replies(1): >>44453880 #
26. p_l ◴[] No.44453880{7}[source]
The RPi does not support everything you may need, and build farms are not the only use case for ARM servers either.

And Google could do what they did because the scale at which they bought cheap hardware changed the calculus for performance.

And even Google switched to denser compute over time - with custom hardware, even.