
752 points dceddia | 1 comment
Joeri ◴[] No.36448665[source]
600 MHz is quite the machine for NT. I remember running NT4 on a 233 MHz Pentium II with 128 MB RAM and everything felt instant and limitless.

Windows 2000 was quite the hog compared to NT4, and the only addition I had a use for was USB support. I think by that point Dave Cutler was no longer running the show, and Windows performance slowly started degrading.

replies(3): >>36448757 #>>36448879 #>>36448927 #
1. masswerk ◴[] No.36448927[source]
I think the most important factor isn't so much the CPU as loading binaries from disk. And that (I/O) improved massively over the 1990s. (Anything written for spinning disks will start nearly instantly on solid-state storage.)
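Some very rough, assumed numbers to make the ratio concrete (illustrative, not measured): a mid-90s IDE disk might sustain ~5 MB/s with ~10 ms seeks, so cold-loading a 1 MB executable costs on the order of 200 ms before a single instruction runs, while a 20 KB script is one seek plus a few milliseconds of transfer. The CPU could then compile that script faster than the disk could have delivered the bigger binary.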

To illustrate the CPU/disk-access ratio: there's a reason scripting languages became prevalent for web backends in the 1990s. Loading a script's source from disk and compiling it on the fly was still faster than loading a much bigger binary from disk, which had to be done on each CGI request. (E.g., with Perl you could run your normal script, but you could also produce an executable binary from a core dump, via undump. Hardly anyone did the latter, for exactly that reason.)
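
To make the CGI model concrete, here's a minimal sketch of a handler from that era (the filename and output are made up for illustration). The key point is that the web server spawns a fresh process per request, so whatever the handler is, script or binary, it gets loaded from disk every time:

    #!/usr/bin/perl
    # hello.cgi - the server forks and executes this file on every
    # request, so the cost of reading and compiling the source is
    # paid per request. A compiled binary would be loaded per
    # request too, and it's typically much larger on disk than the
    # script that replaces it.
    use strict;
    use warnings;

    # Under CGI, the handler itself emits the HTTP headers.
    print "Content-Type: text/html\r\n\r\n";
    print "<html><body><p>Hello from CGI at ",
          scalar localtime, "</p></body></html>\n";

(That per-request startup cost is also what persistent-interpreter setups like mod_perl and FastCGI later eliminated, by keeping the compiled script in memory between requests.)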