Cornell had one of the very few IBM 3090s with a vector unit (built to compete with the Cray) just before I showed up, but by the time I did, IBM had donated a message-passing supercomputer built on the PowerPC architecture. I only ever saw a 3090 (no vector unit) at New Hampshire Insurance, which I got to use as a Computer Explorer.
(2) In grad school in the 1990s I was taught to use floats whenever possible, to reduce the memory requirements of scientific codes if not actually speed up the computation. (In the 1990s floats were twice as fast as doubles on most architectures, though not on the x86.) I really enjoyed taking a course on numerics from Saul Teukolsky. What stood out in the class, as opposed to my reading of the Numerical Recipes book he co-authored, was the material on the numerical stability of discretizing and integrating partial differential equations: do it wrong, and unphysical artifacts of the discretization wreck your calculation. Depending on how you do things, rounding errors can be made better or worse. Forman Acton's Numerical Methods That Work and the later Real Computing Made Real reveal techniques for managing these errors that let you accomplish a lot with floats, and some would point out that going to doubles doesn't buy you much slack to do things wrong.
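That kind of discretization instability is easy to reproduce. Here is a minimal sketch (my own illustration, not an example from the course or the books) using the forward-time centered-space scheme for the 1D heat equation in float32, where the scheme is stable only when r = dt/dx² ≤ 0.5:

```python
import numpy as np

def heat_ftcs(r, steps=200, n=51):
    """Forward-time centered-space scheme for u_t = u_xx,
    with r = dt/dx**2.  Stable only for r <= 0.5."""
    u = np.zeros(n, dtype=np.float32)
    u[n // 2] = 1.0  # initial spike of heat in the middle
    for _ in range(steps):
        # explicit update of the interior points
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

stable = heat_ftcs(0.4)    # spike diffuses outward, as physics demands
unstable = heat_ftcs(0.6)  # highest-frequency grid mode is amplified
print(np.abs(stable).max(), np.abs(unstable).max())
```

At r = 0.4 the solution stays bounded and smooths out; at r = 0.6 the sawtooth grid mode grows every step and the "solution" explodes into an unphysical oscillation that has nothing to do with heat flow. Notice that this failure has nothing to do with float versus double: more precision only delays the blow-up by a few steps.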