I used it extensively in the late '90s and early '00s and really liked it. As a newb sysadmin at the time, the built-in versioning on the fs saved me from more than one self-inflicted fsck up.
I can't imagine there would be any green-field deployments in the last 10 years or so - I'm guessing it's just supporting legacy environments.
I was just a lowly kid programmer working on a side project, so I can't tell you whether it's still uniquely good at something to justify its usage today. It worked. But it was weird and arcane (not that Unix isn't, but Unix won) and using it today for a new project would come with a lot of friction.
I'm curious about running a VMS system, although the admin side looks a bit daunting. The thing I'd really like to do is run X-Windows on an emulator in my home lab, just to see it run.
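If you just want to see it boot, SIMH is the usual route. Here's a minimal sketch of a vax.ini for its MicroVAX 3900 simulator (assuming the console ROM that ships with SIMH, a blank disk image, and hobbyist install media; the file names here are placeholders):

    ; vax.ini - minimal SIMH MicroVAX 3900 setup (file names are placeholders)
    set cpu 64m                    ; 64 MB of emulated RAM
    load -r ka655x.bin             ; console ROM image, ships with SIMH
    set rq0 ra92                   ; emulate an RA92 disk on the first RQ unit
    attach rq0 vms-system.dsk      ; disk image that will hold OpenVMS
    set rq1 cdrom                  ; second unit pretends to be a CD-ROM
    attach rq1 vms-install.iso     ; install media
    set xq mac=08-00-2B-AA-BB-CC   ; Ethernet; DECwindows needs a network
    attach xq eth0                 ; host NIC name varies; needs libpcap
    boot cpu                       ; drops you at the >>> console prompt

For the X-Windows part, the usual trick is to run DECwindows apps on the emulated VAX and point them over the emulated Ethernet at an X server running on your real desktop.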
No, there is no reason to do a greenfield VMS deployment and hasn't been for a long time.
> I've heard its reliability is legendary, but I've never tried it myself.
I've heard the same things, but I'm doubtful of their veracity in a modern context. Those claims sound like they come from an era when VMS was still a cutting-edge, competitive product. I'm sure VMS on VAXclusters had impressive reliability in the 1980s, but I doubt it's anything special today. If you look at the companies and institutions that need performance and high reliability today (e.g. hyperscalers or the TOP500), they are all using the same thing: Linux on clusters of x86-64 machines.
The corpse of OpenVMS on the other hand is being reanimated and tinkered with, presumably paid for by whatever remaining support contracts exist, and also presumably to keep the core engineers occupied with inevitably fruitless busywork while occasionally performing the contractually required on-call technomancy on the few remaining Alpha systems.
VMS is dead... and buried, deep.
It's a shame it can't be open-sourced, just as NetWare won't be open-sourced, and it probably has less chance of being used for new projects than RISC OS or AmigaOS.
It's interesting in a "what if/parallel universe" kind of way, but I certainly wouldn't touch it for anything new with that licensing.
HP tried to kill it. Somewhere in the neighborhood of 10 years ago they announced the EOL. This company, VMS Software Inc (VSI), was formed specifically to buy the rights and maintain/port it. So you have an interesting situation.
Old VAX and Alpha systems are supported, supposedly indefinitely, but if you have an Itanium system it has to be newer than a certain vintage. HP didn't sell the rights to support the older Itaniums and no longer issues licenses for them. So there is a VMS hardware age gap: really old is OK, really new is OK, but the middle is orphaned.
MCP Release 21 came out in mid-2023, and release 22 is supposed to be out middle of this year, with further releases planned: https://www.unisys.com/siteassets/microsites/clearpath-futur...
Looking at the new features, they seem to be mainly around security (code signing, post-quantum crypto) and improved support for running in cloud environments (with the physical mainframe CPU replaced by a software emulator).
Unisys’ other mainframe platform, OS 2200 is still around too, and seems to follow a similar release schedule - https://www.unisys.com/siteassets/microsites/clearpath-futur... - although I get the impression there are more MCP sites remaining than OS 2200 sites?
VMS (and the hardware it runs on) takes the opposite approach: keep everything alive forever, even through hardware failures.
So the VMS machines of the day had dual-redundant everything: memory interconnected across machines, redundant SCSI interconnects, and a spare for everything else you could think of.
VMS clusters could be configured hot/hot, where two identical cabinets full of redundant hardware could fail over mid-instruction and keep going. You can't do that with the modern approach. The documentation filled almost an entire wall of office bookcases. There was a lot of documentation.
These days, usually nothing inside the box is redundant; instead we duplicate the boxes and treat them as cheap sheep, a dime a dozen.
Which approach is better? That's a great question. I'm not aware of any academic exercises on the topic.
All that said, most people don't need decade-long uptimes. Even the big clouds don't bother trying to get there, as they regularly have outages.
On one hand, I don't see many modern services with years to decades of uptime. Clustering is also bolted onto many products and simply unavailable for most. These were normal for OpenVMS deployments. Seems like a safer bet in that regard.
If people have the $$$, which VMS requires for such goals, they can hire the kind of sysadmins and programmers who can achieve the same on *nix systems. The number of components matching VMS's prior advantages increases annually. Also, these are often open source, with the corresponding advantages for maintenance and extension.
The other thing I notice is VMS systems appear to be used in constrained ways compared to how cloud companies use Linux. It might be more reliable because users stay on the happy path. Linux apps keep taking risks to innovate. FreeBSD is a nice compromise for people wanting more stability or reliability with commodity hardware.
Then, you have operating systems whose designs far exceed VMS in architectural reliability. INTEGRITY RTOS, QNX, and LynxOS-178B come to mind. People willing to do custom, proprietary systems are safer building on those.
Also, I noted that those two roadmaps offered continuity - ClearPath Forward -> "Don't worry about migrating or refactoring your apps" - but also stated that "none of these new features are guaranteed to show up, and if that damages your company financially, it's not our fault".
I don't know if this is just a standard legal cop-out.
I know the Michigan state government uses Unisys MCP (I don’t know for what): https://www.michigan.gov/-/media/Project/Websites/dtmb/Procu...
In 2023, NY State Education Department had an RFP to build a replacement for their Unisys MCP-based grants admin system with a modern non-mainframe solution (don’t know current status of that project): https://www.nysed.gov/sites/default/files/programs/funding-o...
It is generally easier to find out who the government users are, because they are often required to publish contracts with the mainframe vendor, RFPs for replacement systems or services, etc. (The exception is some national security users, where the existence of the system and/or the tech stack it runs on may be classified.) With private companies, by contrast, that kind of info is usually only available under NDA - obscure legacy systems are the kind of "dirty laundry" a lot of businesses don't want publicly aired.
In 2013, it was reported in the media that the Australian retailer Coogans was one of the last (maybe the last?) Unisys mainframe sites in Australia - https://www.smh.com.au/technology/tassie-retailer-rejects-cl... - I don’t know if they kept their mainframe after that or got rid of it, but in 2019 they went out of business - https://www.abc.net.au/news/2019-03-12/hobart-retailer-cooga...
> but also stated that "none of these new features are guaranteed to show up, and if that damages your company financially, it's not our fault".
> I don't know if this is just a standard legal cop-out.
I’m pretty sure that’s just the “standard legal cop-out” - lots of vendors put language like that in their roadmaps, to make it harder for customers to sue them if delivery is delayed or if the planned next version ends up being cancelled
The daughterboards in that machine could hold RAM or CPUs in the same slot, and they could be swapped without a reboot!
With cloud computing, reliability is achieved through software: distributed software, which has to scale horizontally.
On a mainframe, reliability is achieved through hardware (at least as far as user software is concerned), and the software is vertical.
If you need to run vertical, single-system image software, the cloud is useless for making it reliable.
Systems built on the cloud are reliable only insofar as people can write reliable distributed systems which assume components will fail. They are not reliable if you can't, or don't want to, write software like that (which carries a significant engineering cost).
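To make that engineering cost concrete, here's a toy sketch (my own illustration, not from any real stack; the hosts and port are made up) of what "reliability through software" means: the application itself has to own timeouts, retries, and failover across replicas:

    import socket

    # Hypothetical replica set; in the cloud model, any one of these
    # boxes is assumed to be able to die at any moment.
    REPLICAS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

    def query(payload: bytes, port: int = 9000, timeout: float = 0.5) -> bytes:
        """Try each replica in turn; the caller, not the hardware, owns failover."""
        last_err = None
        for host in REPLICAS:
            try:
                with socket.create_connection((host, port), timeout=timeout) as s:
                    s.sendall(payload)
                    return s.recv(4096)
            except OSError as err:  # refused, timed out, reset, ...
                last_err = err      # write the replica off and move on
        raise RuntimeError("all replicas failed") from last_err

In the mainframe/VMS model the application just makes the call, and the lockstep hardware (or the cluster) absorbs the failure underneath it; none of this logic exists in user code.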
The real reason to avoid mainframes (and VMS) is vendor lock-in, not technological.
This is not entirely the case.
I have been writing about VMS for years. The first x86-64 edition, version 9, was released in 2020:
https://www.theregister.com/2022/05/10/openvms_92/
Version 9.0 was essentially a test. 9.1 in 2021 was another test, and v9.2 in 2022 was production-ready.
There's no new Itanium or Alpha hardware, and version 8.x runs on nothing else. Presumably v9.x is selling well enough to keep the company alive because it's been shipping new versions for a while now.
Totally new greenfield deployments? Probably few. But new installs of the new version, surely, yes, because VMS 9 doesn't run on any legacy kit, so these must be new deployments.
It's been growing for a few years. Maybe not growing much, but a major new version and multiple point releases mean somebody is buying it and deploying it. Never mind "no new deployments in a decade"... there have been more new deployments in the last few years than in the previous decade.
Version 9.x has been out for 5 years, stable for 3, and primarily targets and supports hypervisors. It knows about and directly supports VMware, Hyper-V and KVM.
So, yes, get a generic x86-64 box, bung one of the big 3 hypervisors on it, and bang, you are ready to run VMS 9.
It's in active development. They're putting out new versions and selling licenses.
There are much deader OSes out there than VMS, such as NetWare.
I suspect that there are more fresh deployments than there are of Xinuos's catalogue: OpenServer 5, 6, and UnixWare 7.
https://www.xinuos.com/products/
Last updated 2018...
VMS's key feature over Unix is consistency; beyond that, it is interrupt-driven at all levels (no cycles wasted on polling, except for code ported over using POSIX interfaces). VMS was killed by a confluence of business issues, not because it was obsolete or inefficient.
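A toy contrast of the two models (plain Python, nothing VMS-specific, just to show where the wasted cycles go): a polling loop spins checking for readiness, while an event-driven loop sleeps in the kernel until something actually happens:

    import selectors
    import socket

    def handle(data: bytes) -> None:
        ...  # application logic goes here

    # Polling: burns CPU spinning even when nothing is arriving.
    def poll_loop(sock: socket.socket) -> None:
        sock.setblocking(False)
        while True:
            try:
                data = sock.recv(4096)  # usually raises BlockingIOError
                if data:
                    handle(data)
            except BlockingIOError:
                pass                    # nothing yet; spin and retry

    # Event-driven: the process sleeps until the kernel signals readiness,
    # roughly what "interrupt driven at all levels" buys you.
    def event_loop(sock: socket.socket) -> None:
        sock.setblocking(False)
        sel = selectors.DefaultSelector()
        sel.register(sock, selectors.EVENT_READ)
        while True:
            for key, _ in sel.select():  # blocks; zero CPU while idle
                handle(key.fileobj.recv(4096))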
I can also speak from personal experience that it was just that side of Micro Focus that was uninterested in making any sales. The AMC (COBOL compilers) division was great, and I'm happy they ended up at Rocket after the OpenText merger.
They are ridiculously expensive. Their use-cases in modern compute are a rounding error toward zero. We just don't build computers like that anymore, for good reason: memory and CPUs rarely fail, and when they do, you fail the entire box and just replace it. In 99.99% of cases it's cheaper and easier to do it that way.
There are vanishingly few use-cases where hotplug CPU/memory makes sense. They charge accordingly.
Like I said in my parent comment, virtually nobody needs uptimes measured in literal decades. If you are in the .01% (rounded up) of compute that actually needs that, the chances of needing to do it with x86 are even smaller.
One example is the VISA and Mastercard payment processing platforms. The way they are designed requires 24/7 operation over literal decades. When they have partial outages, they make international headlines and end up writing letters like this: https://www.parliament.uk/globalassets/documents/commons-com...
They are completely different mental models and ways of thinking about the problem of reliability and uptime.
VMS (and IBM/360 and other large compute systems) will almost certainly give you stronger uptime guarantees than any modern compute stack, but almost nobody needs uptimes measured in literal decades.
Hyperscaler/TOP500 computing is not optimized for reliability in the same way OpenVMS is.