Note that a cloud VM or container probably doesn't work here. You need something with a permanent presence, shared between users (with separate Unix accounts).
"I believe the reason people run IDE backends on our login servers is because they have their code on our fileservers, in their (NFS-mounted) home directories. And in turn I suspect people put the code there partly because they're going to run the code on either or both of our SLURM cluster or the general compute servers."
Bioinformaticians (among others) in, for example, university medical centers won't get much more bang for the buck than on a well-managed Slurm cluster (i.e. with GPU and fat-memory nodes to separate compute loads). You buy the machines, and they are utilized close to 100% over their lifetime.
And deal with the ire of Professor Foo (and grad students Bar and Baz), who want/need to use obscure software XYZ and have done so for years (or decades) without any fuss. And build some interface between your HPC clusters and GitHub. And keep up with regulations and agreements on privacy and security. And so on and so forth. All that for the low, low cost of... well, certainly not less than keeping and maintaining one to three Unix machines that need no fancy hardware or other special attention, in the data center you are maintaining anyway?! Why?
edit: By the way, from their documentation, the department mentioned by the author runs its own e-mail servers (as many universities do - fortunately, there is often still a bit more choice in this world than 'use Gmail/Outlook in the browser').
If you work on a smaller cluster in a research institution where in silico work represents only a small portion of the research output, management of the cluster will sometimes be subcontracted out to the general IT support shop. There, an administrator - usually with nowhere near enough experience - will start receiving support requests from users with decades of Unix experience, requests that take hours of research to solve. Unable or unwilling to solve them (and because inactivity will look bad at the next meeting of department heads), the technician will start working on some "security" matter (so it sounds urgent and important). And this is how the elimination of login nodes, cutting internet access to compute nodes, the removal of switches in offices because they pose security risks (someone might connect an additional device to the network), and the implementation of 2FA on pubkey-only login servers come into existence.
Most of the cluster operators are wonderful. But a bad one can make a cluster significantly less useful.
The author proposed the thought experiment. Ask him, not me.
* https://developer.lsst.io/usdf/batch.html
* https://epcc.ed.ac.uk/hpc-services/somerville
But they're all over the place, from the James Hutton Institute to Imperial College London.
* https://www.cropdiversity.ac.uk
* https://www.imperial.ac.uk/computing/people/csg/guides/hpcom...
https://www.public.outband.net
Probably one of the stupider things I have thrown together, but I had fun making it.
Clients are all irssi on WSL2 or Macs.
Graphical data exploration and stats with R, Python, etc. is a beautiful challenge at that scale.
Now, if all your AWS accounts are only public-facing, then yes, it can get a bit more complicated.
One time X forwarding Matlab saved me a hefty sum of money (for a student) as I could complete an assignment remotely.
Our admins urged people to nice their processes, but my OverTheWire password-cracking sessions were always killed no matter how nice they were.
When I started working, our PCs were thin clients for all practical purposes; we had development servers where everyone logged in to work, via telnet or a remote X session.
Never did I hear the term "login server" as it is being described here.
[0] https://servicedesk.surf.nl/wiki/spaces/WIKI/pages/30660184/...
And then we aren't even talking about the EPD servers that are amortized in 4 years and can easily become compute nodes in the cluster for another 6 (the only problem is the bookkeepers, who just can't live with post-amortized hardware! What a world!!)
Of course, the organization will pay for some of those eventually, so it's not fully fair to leave them out of the IT costs, but there are also lots of ways that non-profits don't pay those costs at the same levels the cloud providers do, due either to differences in overall costs or to providing a lesser level of capability. (As a quick example, cloud providers need extensive physical security for their datacenters. A hospital server needs a locked door, and can leverage the existing hospital security team for free.)
Cloud is great if your need is elastic, or if you have time-sensitive revenue dependent on your calculations. In non-profit research environments, that is often not the case. Users have compute they want done "eventually", but they don't really care whether it's done in 1 hour or 4 hours; they have lots of other good work to do while the compute runs in the background.
I wound up writing a script for users on a jump host that would submit an sbatch job that ran sshd as the user on a random high-numbered port and stored the port in the job's output. The output was available over NFS, so the script parsed the port number and displayed the connection info to the user.
The user could then run a vscode server over ssh within the bounds of CPU/memory/time limits.
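The workflow described above can be sketched roughly as follows. This is a hedged reconstruction, not the original script: the output path, time limit, port range, and the assumption of a per-user `sshd_config` (with a user-owned host key) are all mine.

```shell
#!/bin/bash
# Sketch of a jump-host helper: submit a Slurm job that runs a per-user
# sshd on a random high port, then report the connection info parsed
# from the job's output file (visible here via the NFS home directory).

OUT="$HOME/sshd_job.out"   # assumed output location in the NFS home dir

# Extract a "FIELD=value" line from the job's output file.
parse_field() {  # usage: parse_field FIELD FILE
    sed -n "s/^$1=//p" "$2"
}

submit_sshd_job() {
    # The job records its port and node, then execs sshd in the
    # foreground (-D) so the job lives as long as the daemon does.
    sbatch --output="$OUT" --time=08:00:00 <<'EOF'
#!/bin/bash
PORT=$(( (RANDOM % 20000) + 40000 ))   # random high port
echo "PORT=$PORT"
echo "NODE=$(hostname -s)"
# assumes a per-user sshd_config with a host key the user owns
exec /usr/sbin/sshd -D -p "$PORT" -f "$HOME/.ssh/sshd_config"
EOF
}

if command -v sbatch >/dev/null; then      # only on a real Slurm host
    : > "$OUT"
    submit_sshd_job
    until grep -q '^PORT=' "$OUT" 2>/dev/null; do sleep 2; done
    echo "Connect with: ssh -p $(parse_field PORT "$OUT") $USER@$(parse_field NODE "$OUT")"
fi
```

The user can then point VS Code's Remote-SSH (or plain `ssh -p PORT user@node`) at that endpoint, and everything they run is confined by the job's CPU/memory/time limits rather than loose on a login node.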
I had a co-worker describe it as a giant Linux playground.
Another as ETL nirvana.