
174 points | nicosalm
alkh No.41907921
I swear to God that all of these CS labs at different unis look the same. I am getting flashbacks of labs in Toronto that looked exactly like the pictures in the post.
replies(3): >>41908068 #>>41909767 #>>41915016 #
whimsicalism No.41908068
Even the physics labs I worked in looked like this.
replies(1): >>41908590 #
chaboud No.41908590
The physics computer lab in Chamberlin Hall at UW in the '90s was a secret treasure trove of idle NeXTstation Turbo machines in an almost-always-empty room cooled to near-refrigeration temperatures. I used to light up at least half of that room to run distributed simulations. There's probably still a 30-year-old key to that lab in a junk drawer somewhere.

Eventually I realized that it just made sense to suck it up and get my own hardware, since the alternative was either esoteric "workstation" hardware with a fifth of the horsepower of a Pentium 75 or a room like the UPL jammed with CRTs and the smell of warm Josta.

How do students operate these days? Unless one is interacting with hardware, I'd be very tempted to stay in "fits on a laptop" space or slide to "screw it, cloud instances" scale. Does anyone with contact in the last 5 years have a sense of how labs are being used now?

replies(5): >>41908630 #>>41909129 #>>41909137 #>>41909166 #>>41909593 #
1. hansvm No.41909593
It's been nearly a decade now, but we shared a machine with 128 newish physical cores, a terabyte of RAM, and a lot of fast disk. Anyone with a big job just coordinated with the 1-2 other people who might need it at that level and left 10% of the RAM and disk for everyone else (OS scheduling handled the CPU sharing, though we rarely had real conflicts).
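In practice the "coordination" was just a quick check before launching anything big: is there enough free RAM to leave roughly 10% of the box for everyone else? A minimal sketch of that check on Linux (hypothetical, not the tooling we actually had, and it ignores that freeram undercounts reclaimable cache; MemAvailable in /proc/meminfo is the better number):

    /* Hypothetical sketch: refuse to start a job unless ~10% of the
       machine's RAM would stay free for other users. Not the real
       "coordination" -- that was just talking to the other 1-2 heavy users. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/sysinfo.h>

    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <job-ram-gib>\n", argv[0]);
            return 1;
        }

        struct sysinfo si;
        if (sysinfo(&si) != 0) return 1;

        double total_gib = (double)si.totalram * si.mem_unit / (1ULL << 30);
        double free_gib  = (double)si.freeram  * si.mem_unit / (1ULL << 30);
        double want_gib  = atof(argv[1]);

        /* leave at least 10% of total RAM for everyone else */
        if (free_gib - want_gib < 0.10 * total_gib) {
            fprintf(stderr, "not enough headroom: %.0f GiB free of %.0f GiB\n",
                    free_gib, total_gib);
            return 1;
        }
        printf("ok: %.0f GiB free of %.0f GiB, job wants %.0f GiB\n",
               free_gib, total_gib, want_gib);
        return 0;
    }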

It's firmly at "not a laptop" scale, and for anything that fit, it was much faster than all the modern cloud garbage.

The other lab I was in around that time just collected machines indefinitely and allocated subsets of them for a few months at a time (the usual amount of time a heavily optimized program would take to finish in that field) to any Ph.D. with a reasonable project. They all used the same in-house software for job management and whatnot, with nice abstractions (as nice as you can get in C) for distributed half-sparse half-dense half-whatever linear algebra. You again only had to share between a few people, and a few hundred decent machines per person was solidly better than whatever you could do in the cloud for the same grant money.