And deal with the ire of Professor Foo (and grad students Bar and Baz), who want/need to use obscure software XYZ and have done so for years (or decades) without any fuss. And build some interface between your HPC clusters and Github. And keep in line with regulations and agreements on privacy and security. And so on and so forth. All that for the low, low cost of... well, certainly not less than keeping and maintaining one to three unix machines that don't need any fancy hardware or other special attention in the data center you are maintaining anyway?! Why?
edit: By the way, judging from their documentation, the department mentioned by the author runs its own email servers (as many universities do - fortunately, in this world, there is often still a bit more choice than "use Gmail/Outlook in the browser").
If you work on a smaller cluster at a research institution where in silico work represents only a small portion of the research output, management of the cluster will sometimes be subcontracted out to the general IT support shop. There an administrator - usually with nowhere near enough experience - will start receiving support requests from users with decades of unix experience, requests that take hours of research to resolve. Unable or unwilling to handle them (and because inactivity will look bad at the next meeting of department heads), the technician will instead start working on some "security" matter, so it sounds urgent and important. And this is how the elimination of login nodes, the cutting of internet access to compute nodes, the removal of switches from offices because they pose a security risk (someone might connect an additional device to the network), and the implementation of 2FA on pubkey-only login servers come into existence.
Most of the cluster operators are wonderful. But a bad one can make a cluster significantly less useful.