
169 points by rbanffy | 1 comment
noworld ◴[] No.43620370[source]
The successor IBM mainframes are still alive... for the time being.

https://www.redbooks.ibm.com/redbooks/pdfs/sg248329.pdf

replies(2): >>43620494 #>>43623210 #
froh ◴[] No.43620494[source]
Oh, they'll stay around for a while yet.

They've also moved on three more CPU generations since that redbook, to the z17.

I think it's Linux on Z that makes it sexy and keeps it young, in addition to a number of crazy features: a hypervisor that can share CPUs between tenants, hardware that supports live migration of running processes between sites (via fibre-optic interconnect), and the option to hot-swap any part of a running machine.
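If you're curious what those layers look like from the inside: on Linux on Z the kernel exposes the machine and each virtualization level in /proc/sysinfo. A rough Python sketch — these field names are ones s390x kernels publish, but treat the exact set as machine- and level-dependent:

    # Read /proc/sysinfo (s390x only) and show the machine/LPAR/VM layers.
    def read_sysinfo(path="/proc/sysinfo"):
        info = {}
        with open(path) as f:
            for line in f:
                if ":" in line:
                    key, value = line.split(":", 1)
                    info[key.strip()] = value.strip()
        return info

    info = read_sysinfo()
    # "Type"/"Model" describe the machine; "LPAR ..." and "VM00 ..." the layers above it.
    for key in ("Type", "Model", "LPAR Name", "LPAR CPUs Shared", "VM00 Name"):
        print(key, "->", info.get(key, "n/a"))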

It does a number of things in hardware and in the hypervisor that take a lot of brain power to emulate on commodity hardware.

_And_ it's designed for throughput, from the ground up.

Depending on your workload, there may be very good economic reasons to consider a mainframe instead of a number of rack-frames.

replies(7): >>43620580 #>>43620589 #>>43620617 #>>43620927 #>>43621478 #>>43621799 #>>43623708 #
speed_spread ◴[] No.43621478[source]
A mainframe is the biggest single system image you can get commercially. It's the easiest, most reliable way to scale a classical transactional workload.
replies(1): >>43622147 #
rbanffy ◴[] No.43622147{3}[source]
> A mainframe is the biggest single system image you can get commercially

It depends. As we saw the other day, HPE has a machine with more than 1024 logical cores, and they have machines available to order that grow to 16 sockets and 960 cores in a single image with up to 32TB of RAM. Their Superdome Flex goes up to 896 cores and 48TB of RAM.
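For scale, on any large single-image Linux box you can see how many logical CPUs and NUMA nodes one kernel instance is driving straight from sysfs — a quick sketch using the standard Linux paths:

    # Count the logical CPUs and NUMA nodes visible to this single system image.
    import glob, os

    cpus = len(glob.glob("/sys/devices/system/cpu/cpu[0-9]*"))
    nodes = len(glob.glob("/sys/devices/system/node/node[0-9]*"))
    print(f"logical CPUs: {cpus}, NUMA nodes: {nodes}, os.cpu_count(): {os.cpu_count()}")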

I believe IBM's POWER line also has machines with more memory and more processing power, but, of course, that's not the whole story with mainframes. You count the CPUs that run application code, but there are loads of other computers in there doing the heavy lifting so that those CPUs can keep running application code at 100% capacity with zero performance impact.

> It's the easiest, most reliable way to scale a classical transactional workload.

And that's where they really excel. Nobody is going to buy a z17 to do weather models or AI training.

replies(1): >>43629824 #
ryao ◴[] No.43629824{4}[source]
SGI’s UV 2000 could scale to 4096 logical cores and 64TB of RAM around 13 years ago. The current 4096-core limit in Linux comes from that.
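That limit lives in the kernel's compile-time CONFIG_NR_CPUS option. A quick way to check what your distro kernel was built with — assuming, as most distros do, that the config ships at /boot/config-<release>:

    # Print the compile-time CPU ceiling the running kernel was built with.
    import platform

    with open(f"/boot/config-{platform.release()}") as f:
        for line in f:
            if line.startswith("CONFIG_NR_CPUS="):
                print(line.strip())
                break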