That's what happened.
I was watching Halt and Catch Fire, and in the first season the engineering team goes to great lengths to meet something called the "Doherty Threshold": keeping the machine responsive enough that the user doesn't get frustrated and lose interest. I guess that is lost to time!
Has anyone else noticed how bad sign-on redirect flows have gotten in the past ~5 years?
It used to be you clicked sign in, and then you were redirected to a login page. Now I typically see my browser go through 4+ redirects, stuck at a white screen for 10-60 seconds.
I'm a systems C++ developer and I know nothing about webdev. Can someone _please_ fill me in on what's going on here and how every single website has this new slowness?
Also they added desktop compositing animations.
Opening a command prompt or mspaint is not exactly demanding, and both open instantly on modern computers too - once all desktop compositing animations are turned off.
Nothing will change my mind about this, ever. It's been downhill since then.
Windows is especially bad at this due to so much legacy reliance, which is also kind of why people still bother with Windows. Not to claim that Linux or MacOS don't have similar problems (ahem, Catalyst) but it's not as overt.
A lot of the blame gets placed on easy to see things like an Electron app, but really the problem is so substantial that even native apps perform slower, use more resources, and aren't doing a whole lot more than they used to. Windows Terminal is a great example of this.
Combine this with the fact that most teams aren't given the space to actually maintain (because maintaining doesn't result in direct profits), and you've got a winning combination!
It is not the main thing going on in this twitter post, but it does show a way modern computers feel slower than older machines.
Windows NT 3.51 minimum hardware requirements were a i386 or i486 processor at 25MHz or better and 12MB of RAM for the workstation version. So the 600MHz machine with 128MB RAM is exceeding the minimum requirement by (conservatively) 24x in CPU speed and 10x in RAM, along with all the architectural improvements from going from the i386 to what's presumably a Pentium III-class machine.
If that's actually a Surface Go 2 running Windows 11 - well, it doesn't have a quad-core i5 as the tweet claims - the Surface Go 2 came with a Pentium Gold or a Core m3, both with only two cores, and both ultra-low-power variants.
As such, that exactly meets the minimum CPU specification for Windows 11 and only doubles the minimum 4GB RAM requirement.
I'm not trying to apologize for the difference here, but it's not an entirely like-for-like comparison.
So while all the redirects are annoying, they are probably better than all the hand-rolled auth that failed in various ways.
WinNT 3.51 was released in 1995 - the fastest PC in 1995 was either a Pentium or Pentium Pro at ~100 MHz - in 2000 a 600 MHz machine is likely a Coppermine PIII.
A fairly common amount of RAM for running WinNT in 1995 would have been around 32 megs; 64 megs would be especially generous. 128 megs is a high-end workstation amount of memory.
The ATA interface also doubled in performance between 1995 and 2000.
There were significant security and stability improvements between NT 3.51 and Windows 2000 - particularly with changes to the driver model that increased stability. (even more so between 2000 and Windows 10/11)
I wonder if the 600MHz machine in this video is fitted with an SSD of some sort, which would load applications far faster than the average hard drive from 1998.
This is addressed in the linked thread.
>For those thinking that the comparison was unfair, here is Windows 2000 on the same 600MHz machine. Both are from the same year, 1999. Note how the immediacy is still exactly the same and hadn’t been ruined yet.
Of course I'm not using any heavy duty applications, and I only play classic games for which even a Pentium is a bit of overkill.
A horrific piece of UI/UX/engineering/whatever.
Resizing makes the windows flicker disgustingly. Even the mouse cursor changing to the resize handle often draws a large glitched cursor for a single frame - I've seen it happen on every Windows machine I've used.
If someone ever fixed this trash and made the OS feel solid and not a flickering glitchy mess I’d be in their debt.
I think due to perverse incentives it causes the exact opposite to happen. Why did the Windows calculator need to be remade with a much slower and less responsive version? Telemetry probably showed the calculator was frequently used, so a Project Manager targeted it for "improvement" and it was then ruined.
- the database and/or the Tomcat server have far too little RAM and start swapping without end
- way too many people had admin access in Jira and installed a metric shit ton of plugins
- the AD configuration is messed up and instead of only user accounts it loads (and verifies) tens of thousands of user and machine accounts at each login
I see similar sluggishness opening command prompt on my Ryzen 3700X with 64GB RAM on Windows 11 22H2 with an NVMe SSD. First it draws the outline of the window, then fills it in with content. And that's repeatable!
Not always. AT&T goes through about 25 redirects to sign in. No SSO involved.
Another big chunk of this likely happened when they hardened the graphics subsystem for security. Win32 user calls are unbelievably expensive nowadays. SendMessage etc. have a ton of overhead.
Another chunk is likely the sheer number of expensive DLLs that need to be loaded and initialized with most apps. For example, IIRC, the moment you load COM or WinSock DLLs, your app stops loading snappily. Pretty much anything will load COM even without intending to.
Another chunk is IMM - the ctfmon process you love, for multi-language/keyboard support. ImmDisable(0) can make loading a bit snappier, but then good luck with keyboard switching and the like. It uses window hooks, which are slow Win32 calls as mentioned.
People think it's just a matter of writing plain Win32, but that's not the whole story, although it certainly helps compared to more heavyweight frameworks.
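If you're curious what that overhead looks like in numbers, here's a rough sketch - not anything official, just Python's ctypes on a stock Windows box. WM_NULL is a no-op message, so nearly everything measured is dispatch cost:

    import ctypes
    import time

    user32 = ctypes.windll.user32
    WM_NULL = 0x0000  # no-op message: measures pure dispatch overhead

    # The taskbar ("Shell_TrayWnd") is an always-present window owned by
    # another process, so each send is a full cross-process round trip.
    hwnd = user32.FindWindowW("Shell_TrayWnd", None)

    N = 10_000
    start = time.perf_counter()
    for _ in range(N):
        user32.SendMessageW(hwnd, WM_NULL, 0, 0)
    elapsed = time.perf_counter() - start
    print(f"{elapsed / N * 1e6:.1f} us per SendMessage round trip")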
Dave's no-excuses attitude towards performance and stability is sorely missed.
I know people who ran Windows Server 2003 as a client OS, which was the last version of NT that Dave was in charge of.
It's that manpower can't keep up with the explosion of complexity and use cases that computing has seen over the last few decades.
We went from CLI commands and a few graphical tools for the few who actually wanted to engage with computers, to an entire ecosystem of entertainment where everyone in the world wants 'puters to predict what they could want to see or buy next.
To maintain the same efficiency in code we had in the 90-2000s, we would need to instantly jump the seniority of every developer in the world, right from Junior to Senior+. Yes, you can recruit and train developers, but how many Tanenbaums and Torvalds can you train per year?
The bulk of the cruft didn't go only into dark patterns and program features like animations and rendering that some people regard as "useless" (which is debatable at a minimum). The layers also went into improving "developer experience".
And I'm not talking about NodeJS only. I'm talking about languages like Python, Lua, or even the JVM.
There's a whole universe of hoops and safeguards built so that the not-so-genius developer doesn't shoot themselves in the foot so easily.
I'm sure that you could delete all of that, leave only languages like Rust, C and C++, and get a 100x jump in performance. But you'd also be annihilating 90% of the software development workforce. Good luck trying to watch a movie on Netflix or count calories on a smartwatch.
cmd, control panel, and most of the things in admin tools launch virtually instantly.
This is for a machine that is running on relatively slow spinning disks too.
Terminal ~300ms.
iTerm ~400ms.
Calculator ~300ms.
Firefox ~1s.
Textedit ~200ms.
Slack ~1s to appear, ~5s to load.
Sequel Ace ~800ms.
Fork ~500ms.
All my subjective experience, but it's basically instant in experience. I think it also helps that it doesn't show fade in/out animations, which LOOK sluggish to me.
M1 MacBook Pro from 2020...
When apps have these kinds of interruptions all over the place, that's even worse than just having them at startup.
I also struggle with comparisons between high-end hardware of yesteryear and low-end hardware of today.
Try running win2k on 16MB of RAM, a 300MHz P2, and a 4800rpm drive.
The only times I remember experiencing things this fast in my computing career were (a) with a fair wind, and a fully warmed cache that didn’t hit the disk & was a trivial app (b) the first time I used my Apple M1 Max MBP.
Imagine, if you work on computers for hours every day, what the cumulative impact is.
First, he's probably running those OSes on a monster of a machine, relatively speaking. Stuff is snappy on my desktop too, which has a PCIe4 NVMe drive and 128 GB RAM. Chromium, which is a huge and very complex application, starts up in maybe a second or less.
When you're doing old-school computing today, it's easy to max out the specs to a point that would be unrealistic for the era. You can put 128MB RAM into a machine when most people in the day might have had 16MB. NT4 is from 1996, and that machine is from 2000, so maxed-out specs seem likely. I remember computers back in the day. Windows 2000 ran slowly on the hardware I had.
Second, modern software does way more in the name of convenience. Eg, nobody really uses notepad. We use VS Code for instance, which is an enormous application, the sort that would have seriously challenged a computer back then.
And really everything grew in the same manner. Discord is big, but Discord draws its own widgets and includes a web browser. If you did Discord with native widgets, plain text and without all the fancy stuff like animated emoji and special effects, it'd be much lighter and faster to start too.
Part of the "problem" with Windows is also lack of legacy reliance. As in: MacOS and Linux are at heart Unix systems, with a kernel architecture meant for 1970s hardware. The Windows NT kernel family is a clean-sheet design from the 1990s, a time where compute resources were much more plentiful.
For example, on Linux file system access has (by default) very basic permissions, and uses a closely coupled file system driver and memory system in the kernel. On Windows there is a very rich permission system, and every request goes through a whole stack of Filesystem Filter Drivers and other indirections that can log, verify or modify it. This is great from a functionality standpoint: virus scanners get a chance to scan files as you open them and deny you access if they find something, logging or transparent encryption is trivial to implement, tools like Dropbox have an easy time downloading a file as you access it without having to implement a whole file system, the complex permission system suits enterprise needs, etc. But on the other hand all these steps make the system a lot slower than the lean Linux implementation. And similar resource-intensive things are happening all over the kernel API in Windows, simply because those APIs were conceived at a time when these tradeoffs had become acceptable.
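For the curious, a crude way to put a number on that per-request overhead - a sketch that times bare open/close pairs on a tiny file. On Windows each iteration traverses the whole filter stack (antivirus and friends); on Linux it mostly doesn't:

    import os
    import time

    path = "probe.tmp"
    with open(path, "w") as f:
        f.write("x")

    N = 5_000
    start = time.perf_counter()
    for _ in range(N):
        fd = os.open(path, os.O_RDONLY)  # every open walks the filter stack
        os.close(fd)
    elapsed = time.perf_counter() - start
    print(f"{elapsed / N * 1e6:.1f} us per open/close")
    os.remove(path)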
Win9x was better, you could run a few more apps without too many issues. But blue screens were super common. But it was fairly responsive if you didn't have too much running.
Win2000, yeah it could be that responsive.
What stands out to me, is Windows is very responsive the less stuff that is running, even today. Once those older machines had to do anything else you had to wait.
Those old apps sat much closer to the metal. I can't imagine how many layers sit between Windows apps today - worse if they're built on Electron.
I read somewhere that WebKit, and specifically Chrome, is optimized for more efficient CPU usage at the expense of larger RAM use. That probably makes sense in terms of energy consumption, but you need more RAM.
Around that time, whenever a new BlackBerry device was rebooted, it would take about 45 seconds for the OS to fully boot and become usable. I'm making up the actual times in this story because I can't remember the exact details now. Like in this video, the oldest devices would start up nearly instantly, but the boot sequence for modern devices was agonizing. Especially when you were an engineer trying to run a test that required rebooting the device!
One day, a pair of engineers finally became annoyed and curious enough to look into it and profile the problem. They found that a substantial portion of the boot time was spent parsing XML files that contained descriptions for device themes - that is, information about colors, icons, and text styles that users could choose from to customize their device. Something like a full 15 seconds was spent just on this one feature as part of the boot sequence!
So what happened? I suppose that no other engineers felt they had the responsibility or the time available to look into the long boot times until that point. But imagine being an engineer at a company like Microsoft, and one day saying to your manager or product manager: "Hey, can I spend a week on understanding why it takes a full second for Terminal to boot on Windows and see if we can speed that up?" How many managers are going to be enthusiastic about that when they're staring down a kanban board with enough tasks to fill 4 quarters?
The only reason these engineers at BlackBerry were able to look into the bootup times was because they didn't have to answer to anyone responsible for product. The team lead for this team at the time was "Engineer #2" at BlackBerry who had written all the apps for the first devices. His team was given permission to pursue whatever ideas or prototypes interested them, so they had the breathing room to chase leads like this that wouldn't necessarily pay off.
So part of the answer is that I don't think we as an industry are prioritizing performance. Once the performance is good enough, it's time to ship and move on to the next feature. But the other side is that we're building more dynamic behaviour into our software. More customization, more flexibility, more behaviour that's defined at run-time. And this all probably makes software more useful and makes it easier for engineers to build more complex software. I think it's a similar kind of trade-off to choosing between Python or Rust for a tool you're working on. What kind of performance do you need, and what kind of developer experience do you want?
Regardless, to answer "what happened?", people started demanding more from apps (dark mode, UI parity with web editions, rapid prototyping, simple development, cross-platform) and it became cheaper to use Electron (or Qt, or WinForms) than simple win32 calls.
I have been running a bare XMonad for a couple of years with `xset r rate 300 40`, and it's okay but far from perfect. The WM seems to process its mouse events with a delay sometimes, and often focuses the wrong window whenever the mouse pointer moves.
* initial page provided by the service you're logging into - this gets your email address so it can lookup your account and determine which SSO provider to redirect to
* actual login page served by your SSO provider - here you authenticate to the SSO provider. It can occasionally cause another page load to get your 2FA code if configured, or go through further identity checks
* final "page" that consumes the query parameters sent back by the SSO provider - this is often just a 302 redirect to the home page but sets a session cookie.
The main problem is that all these pages are super bloated, with tons of unnecessary JS and BS. All the code for a login page that takes a username and password should fit entirely on an A4 sheet of paper - it's literally just an HTML form and a few lines of CSS.
Furthermore, even beyond inter-company SSO, there are shitty companies out there which use such flows internally even though everything is part of the same security domain, hosted on the same infrastructure and thus can be hosted on the same top-level domain and use a single session cookie. Microsoft is a pretty bad one - Teams for example will use a redirect to some other Microsoft-owned domain to get your (already existing) Office 365 session; this is completely unnecessary, they can host all those things on the same top-level domain and reuse a single session cookie seamlessly.
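To make the "A4 sheet" claim concrete, here's a minimal sketch of the essential login machinery - Flask chosen purely for illustration, handler names and the cookie value made up - covering the form, the check, and the final 302 with a session cookie:

    from flask import Flask, make_response, redirect

    app = Flask(__name__)

    FORM = """
    <form method="post" action="/login">
      <input name="username" autocomplete="username">
      <input name="password" type="password" autocomplete="current-password">
      <button>Sign in</button>
    </form>
    """

    @app.get("/login")
    def login_form():
        return FORM

    @app.post("/login")
    def login():
        # (real password verification elided)
        resp = make_response(redirect("/"))  # the final 302 described above
        resp.set_cookie("session", "opaque-token", httponly=True, secure=True)
        return resp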
Turning on a TV in 1975: wait 60 seconds for the tube to warm up.
Turning on a TV in 1985: wait 10 seconds for the tube to warm up.
Turning on a TV in 1995: wait 2 seconds for the tube to warm up.
Turning on a TV in 2023: stare at the LG logo for 20 seconds, then wait another 30 seconds "for Smart Services to become available" in order to change the channel.
Office suites have never been good, but office suites in like 2005 seemed to stretch systems to the breaking point.
Lots of consumer software has always sucked out of the box. I guess if you are here, you were possibly a technically savvy kid at some point - is it possible that you were just more selective about the types of programs you ran when you were using the computer for fun?
While it can't easily be used cross-companies (and thus why SAML/OIDC exists), it's perfect for internal company infrastructure, and SAML/OIDC can still be handled somewhat seamlessly by having a minimal service that verifies your Kerberos identity and immediately dispatches you back to whatever third-party service you wanted to authenticate to, with no intermediate login pages or even any kind of UI (this service doesn't need UI because your authentication is managed via Kerberos for which your OS provides the UI).
The problem is that you can't make money (nor "growth & engagement") off stable, battle-tested stuff that already exists and happily works in the background, so Okta/etc shareholders need to peddle worse solutions that waste everyone's time and processing power.
I'm not sure why Msft put that CPU and RAM combo in their own device when it's just barely past the minimum specs for Windows 10 let alone 11.
It might be done for user retention reasons with the idea that people are more likely to use sites they're already signed into, but I really don't need to be signed into YouTube when I sign into my Google work account. Please just skip that and sign in a few seconds quicker.
Notepad? Kind of. Newer UI library so it handles display scaling a lot better. Handles different line endings and encodings much better now. Handles the system UI dark mode. The interface supports tabs.
Metrics-based and KPI-based software development has ruined quality for decades.
After all, why pay for an expensive DMS when you have Jira?
My guess is: yes, it will.
Somehow, in the past 15 years, "progress" seems to include "software keeps getting noticeably worse, but anyone pointing this out has to be shot down because progress."
I don't think I've ever had the mouse event delay issues that you're talking about, and I don't use focus-follows-mouse. But I still marvel at how quickly light programs open on all versions of Windows. I mean programs like terminal, notepad, various control panel stuff, MS default games, etc. I don't think you will find anything that runs on X that will be as responsive as Windows. I don't know if the Wayland universe is any better.
In other words, a modern 2 GHz processor would have time to execute at least one instruction between the time photons leave the screen, and the moment they reach your retina. Probably more than one, with multicore pipelined processors.
And yet today we wait and wait and wait for Windows to open a simple program.
Indeed... what happened?
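The arithmetic behind that claim, for anyone who wants to check it (assuming a ~50 cm viewing distance):

    distance_m = 0.5                 # eye-to-screen distance
    photon_time = distance_m / 3e8   # ~1.7 ns of light travel time
    cycle_time = 1 / 2e9             # 0.5 ns per cycle at 2 GHz
    print(photon_time / cycle_time)  # ~3.3 cycles while the photons fly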
Maybe it is a spinning rust disk. Even then there's a world of difference between a period accurate drive and a late model IDE drive. The last IDE drives had more drive cache than most desktops had RAM when NT was new.
On the other hand, a large part of the delay was due to the slow seek time of the magnetic hard drive (milliseconds). The CPUs only had one core, and RAM was both smaller and much slower. Modern SSDs make seek time negligible, ancient PassMark scores suggest a >10x improvement in single-core CPU performance, and there's been a >20x improvement in RAM transfer rate and a >40x improvement in RAM size. Residential internet bandwidth has seen something like a 100x improvement.
None of that hardware improvement seems visible in modern PCs, except for nicer graphics and (especially) higher-resolution displays - and comparing video games from ~2000 to video games today reveals just how small that graphical difference is in the OS/application space. MS Office was a lot more responsive in the early 2000s, too.
In the early days of Firefox, the developers bragged that every new release was smaller than the previous one. Maybe one day there will be a fad for more responsive software in general.
Sure sounds like there's a ton of gaps in things I really want out of my operating system on Windows 2000...
Also, all of the support APIs and libraries for text editing and image display are orders of magnitude more complex now: e.g., Notepad supports Unicode and emoji.
Old days: Cmd + f, type what you want.
New days: first scroll to the end of the page so that all the contents are actually loaded. Cmd + f, type what you want.
It's just a list of dishes, some with small thumbnails, some without any images at all. If you can't load a page with 30 dishes fast enough, you have a serious problem (you could always lazy-load the thumbnails if you want to cheat).
Put Windows 2000 on that thing and see if it runs just as well.
Devs at MS have to make everything work right in a universe where everything else is dead or crap. And the fact that Windows 11 can even run without crashing daily is an engineering marvel.
PS: Not to defend MS, but I'm sure their current devs are very capable and doing their best.
Does the switch to 64 bit slow things down enough to explain what happened between Windows 2000 and XP?
Does the operating system have to support virtual machines? Seems easy enough to install vmware then run operating systems inside it for most use cases.
I mean, you can keep 'what if'ing me here, but, is it really worth having all the features that you, clearly as a power user or professional, use installed on every computer everywhere? No. No, it really isn't. It's bloat.
It's extremely hard to do that in recent versions of Windows. The most I managed to do the last time I tried was to disable it temporarily but it always comes back after a while.
You are looking back with rose tinted glasses if you think all software was blazing fast back then. There was a reason putting your cursor on a progress bar to track whether it was moving was a thing.
1. Windows Defender anti-virus checking the binary and contacting MS' server for binary signing/black list
2. Kernel32 trampolines
3. All sorts of security mitigation techniques such as stack cookies setup etc.
4. Telemetry
5. Hardware-accelerated GUI initialization vs. 'dump everything to the frame buffer in the kernel GDI library'
6. Load fonts, graphics etc. that can work well beyond 640x480
7. Deal with the scheduler juggling hundreds of processes - everything from letting you access the winsock2 lib immediately, to multiplexed sound mixing, to System Restore - all running in the background.
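All seven of those steps are paid before your program executes its first useful instruction. A rough, Windows-only sketch to put one number on the whole pile - time how long spawning and tearing down a trivial process takes:

    import subprocess
    import time

    N = 20
    start = time.perf_counter()
    for _ in range(N):
        # cmd does nothing but exit, so this measures pure launch overhead
        subprocess.run(["cmd", "/c", "exit"], capture_output=True)
    elapsed = time.perf_counter() - start
    print(f"{elapsed / N * 1000:.0f} ms per process launch")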
Back in the day when I was rocking a Pentium, it impressed me how God damn FAST X was. Windows NT (3.51 and 4.0) seemed sluggish in comparison.
Are you sure this happens every time you start an executable? I assumed the definition list gets updated in fixed intervals instead of whenever you launch a program.
Can you name a few that could explain the 1,000x performance cost?
Also, have you heard the story about the guy who told MS that their terminal was shit and could be fixed, only to be ridiculed by a fleet of "super elite 500k/year engineers" that in the end turned out to be ... wrong?
Bloat is intentional and fills Microsoft’s wallet.
Optimization drains Microsoft’s wallet.
That's a way different experience than running Hyper-V.
> is it really worth having all the features that you, clearly as a power user or professional, use installed on every computer everywhere? No. No, it really isn't. It's bloat.
I also didn't realize that managing WiFi networks or using display scaling are things only power users and professionals would want on their machines. I guess supporting Bluetooth natively in the OS and a modern sound stack is just bloat for most people.
But I think the reason that most modern software performs badly is because of optimization: we're optimizing to reduce production costs over increasing performance.
It's economic in nature. We minimize production costs by using frameworks and other labor-saving tools. The code produced using these tools tends to be poor, but hardware is cheap enough to make up for poorly performing software.
It's an intentional decision.
Let me just hop on my wifi and browse the web. Let's do it on a computer from 1999. 2000. 2001. 2002. 2003. 2004. 2005. 2006. 2007. 2008... etc, etc.
Why is it that every couple years from 2012 onwards, doing the same thing keeps taking longer, even with new hardware, without the same revolutions in quality and experience that came with that new software previously?
If you search the startmenu for "advanced system settings" it will pull up a control panel era System Properties app with an Advanced > Performance option. Turning off visual effects there dramatically increases responsiveness.
Windows 2000 was quite the hog compared to NT4 and all it added that I had a use for was USB support. I think by that point Dave Cutler was no longer running the show and windows performance slowly started degrading.
It's been bothering me for some time ever since I noticed it with the advent of IoT "smart" devices that have the same features as traditional appliances for twice the cost and technical debt. My washing machine still washes clothes and turns itself off, but now I need to set the wash type using my phone because lateral progress dictates a physical interface on the device itself is obsolete.
Kerberos is also difficult to administer and secure (Golden Ticket?). Kerberos also requires the target service be a member of the Kerberos Realm (or otherwise trusted) which again means line of sight between the service and TGS or Realm to Realm.
And then we get into the whole ticket size issue.
Kerberos is not a good candidate for web-based AuthN.
Microsoft has essentially turned the OS into one of those websites which show ads, newsletter dialogs, cookie notices, location permission requests, notification requests and so on, constantly.
Worst thing is, the only reason we have to use it is to log our hours. Because the CIO wanted us to be "agile". Apparently logging one's hours in Jira makes us "agile". Yeah, I don't know how either. Someone ticked a nice box there for themselves. Now we're just creating a useless swamp of data that has no meaning because there are no guidelines on how to log everything. Normally when you implement the full process that stuff is straightforward because you have things in other places in Jira to link to. Not in this case. The only thing we have achieved is making Atlassian a bit richer.
The same with "cloud". We had to be "on cloud". So what do they do? Migrate every physical server. Every time we need a new "server", we still have to fill in the same 18-page excel sheet. Only the tab with the physical rack location has been replaced with one with AWS locations. We still have the delay of several weeks of approvals and everything runs 24/7, nothing scales automatically or is auto provisioned. This is not "cloud". This is fooling oneself. And paying too much. We're technically in the cloud but we don't take advantage of anything it's actually good at. Paying only for resources we actually use? Nope. Auto scaling demand? Nope. Quick provisioning? Lol you wish. And we can't because the infrastructure architect team has locked everything down so nothing can be automated. They only trust themselves to that as they are the high priests.
It's really time for megacorps to stop trying to be like a startup. It doesn't work, unless you basically rebuild the entire org from the ground up. Which will never happen because it will disrupt too much. Too much legacy, too many strings attached to "the business". Too many processes that will never be changed because it means the entire org would have to change.
Just work with what you have and improve that instead of trying to pretend you're something else.
Back in the day plenty things like Photoshop had splash screens, and you got to stare at them for quite a while.
I think this blame is fair. Electron is the most obvious example, but in general desktop software that essentially embeds a full browser instance because it makes development slightly easier is the culprit in almost every case I've experienced.
I use a Windows 10 laptop for work.[1] The app that has the most lag and worst performance impact for as long as I've used the laptop is Microsoft Teams. Historically, chat/conferencing apps would be pretty lightweight, but Teams is an Electron app, so it spawns eight processes, over 200 threads, and consumes about 1GB of memory while idle.
Slack is a similar situation. Six processes, over 100 threads, ~750MB RAM while idle. For a chat app!
Microsoft recently added embedded Edge browser controls into the entire Office 365 suite (basically embraced-and-extended Electron), and sure enough, Office is now super laggy too. For example, accepting changes in a Word doc with change tracking enabled now takes anywhere from 5-20 seconds per change, where it was almost instantaneous before. Eight msedgewebview2.exe processes, ~150 threads, but at least it's only consuming about 250MB of RAM.
Meanwhile, I can run native code, .NET, Java, etc. with reasonable performance as long as the Electron apps aren't also running. I can run multiple Linux VMs simultaneously on this laptop with good response times, or I can run 1-2 Electron apps. It's pretty silly.
[1] Core i5, 16GB RAM, SSD storage. Not top of the line, but typical issue for a business environment.
This doesn't strike me as something "everyone in the world wants", but rather something a small group of leeches is pushing on the rest of the population, to enrich themselves at the expense of everyone else. I've yet to meet a person who would tell me they actually want computers to tell them what to see or buy. And if I met such a person, I bet they'd backtrack if they learned how those systems work.
Exercise for the reader: name one recommendation system that doesn't suck. They all do, and it's not because recommendations are hard. Rather, it's because those systems aren't tuned to recommend what the users would like - they're optimized to recommend what maximizes vendor's revenue. This leads to well-known absurdities like Netflix recommendations being effectively random, and the whole UX being optimized to mask how small their catalogue is; or Spotify recommendations pushing podcasts whether you want them or not; or how you buy a thing and then get spammed for weeks by ads for the same thing, because as stupid as it is, it seems to maximize effectiveness at scale. Etc.
> I'm sure that you could delete all of that, leave only languages like Rust, C and C++, and get a 100x jump in performance. But you'd also be annihilating 90% of the software development workforce. Good luck trying to watch a movie on Netflix or count calories on a smartwatch.
I'll say the same thing I say to people when they claim banning ads would annihilate 90% of the content on the Internet: good. riddance.
Netflix would still be there. So would smartwatches and calorie-counting apps. We're now drowning in a deluge of shitty software, a lot of which is actually malware in disguise; "annihilating 90% of the software development workforce" would vastly improve the SNR.
The reason it's not a thing today is because those progress bars got replaced by spinners and "infinite progress bars". At least back then you had a chance to learn or guess how long slow operations would take. These days, users are considered too dumb to be exposed to such "details".
Of course NT 4.0 was a bit faster, but not that much with "common" programs (Office and similar).
The OS's footprint on disk, however, was over 3x (NT 4.0 was around 180 MB, 2K around 650 MB).
Programmer comfort, unified frameworks, higher level languages over user experience.
Focus on end users instead of professional users.
Stupider programmers and programming through "good enough", "everyone can code" and "salaries are too high, hire someone cheaper".
Computers getting cheaper, therefore users buying new machines to run software faster, instead of programmers trying to get stuff running fast on current hardware.
One of the reasons is the whole "software is a gas" thing. As long as there is faster hardware, more memory, more storage, software will get slower, more bloated, and take up more space, just because a gas always fills its container.
But another reason is there's more people in tech who don't know what they're doing. More people who took a bootcamp and jumped into a job, or came from some other career and barely know how to use a computer, never used Linux/UNIX. Some newer roles have very specific niches, where they don't know much about tech, and then they're asked to write code, which they have almost no idea how to do. I've recently worked with colleagues who were contributing code, and getting in the way of building the product, who shouldn't have been within 10 miles of an IDE. And when the senior developers don't know how environment variables work, I weep.
To illustrate the CPU/disk-access ratio: there's a reason scripting languages became prevalent for web backends in the 1990s. Loading a script's source from disk and precompiling it on the fly was still faster than loading a much bigger binary from disk - which had to be done on each CGI request. (E.g., with Perl, you could have your normal script, but you could also produce an executable binary from a core. But nobody did the latter, for that exact reason.)
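For anyone who never lived through CGI, the model really was one fresh process per request - something like this sketch, where interpreter-plus-script startup is the floor latency of every page view:

    #!/usr/bin/env python3
    # A minimal CGI program. The web server spawns this as a brand-new
    # process for every single request; nothing survives between page views.
    print("Content-Type: text/html")
    print()
    print("<h1>hello from a fresh process</h1>")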
Yes, but still it seems to be useless to implementers, because practically every virus scanner implements braindead stuff like DLL injection for on-access-scanning.
Only to discover that 2/3 of the matches are invisible text that's put there for $deity knows what reason, and the rest only gives you the subset of what you want, as the UI truncates the list of ingredients/toppings and you need to click or hover over it to see it in full.
Tabbed interface
Support for command interpreters other than just CMD
Multiple profiles for different interpreters and settings
Support for a much wider range of console control characters and terminal emulations (ssh'ing into linux boxes works really well)
Way better resizing support
Clickable URL detection
More (and customizable) keyboard shortcuts
Support for background images
Support for transparency
Configuration as easy-to-transfer JSON files
Copying text is a way better experience
Just a few of the features that I use all the time. I can't stand using cmd.exe anymore; it's an absolutely miserable experience in comparison.
The big stuff like Photoshop loads roughly as long as it did before. This is a good indicator of how bad most software is - if applications like Photoshop followed the same performance/feature curve, they'd take hours to start.
You can either do the redirects all at once on login or do them once you use the service first. Since login is already a time-consuming process (username, next, password, next, 2FA, next) I think you may as well take a second to add the redirects and be done with it.
It doesn't make much sense for Google work accounts but it makes sense if those are a minority on the platform. They could definitely patch this out, but then again the login process is something that takes a second extra every month or so, so who really cares.
What does bother me is how every service wants you to enter your username and password separately now. Autofill gets confused and sometimes even stops working because the stupid hidden input fields for the password don't get shown until you click the magical "next" button, just in case you need a special third party auth service.
Either decide that work accounts are important and take out the extra YouTube redirect, or decide they aren't important and let me fill in my username and password on a single form. Both make complete sense individually but combined they're just a massive waste of time.
And that's why modern hardware keeps getting faster while modern software stays the same speed (or slower).
In the past it was understandable, because those primitive 130nm, single-core CPUs had to keep spinning since performance scaling was still a relatively new thing back then.
Despite all the advancements in the field it seems like more often than not each of those cores is up to something at all times.
First thing I do with a new Android device is to limit the number of background processes. This causes occasional crashes of some apps, but the difference in general smoothness is noticeable.
Meanwhile my laptop draws 30W seemingly doing nothing in particular.
Launching HWiNFO halves that number which, if actually true, is properly insane.
SSDs mitigate those issues but it is so painful to run things on mechanical drives, a lot of which is down to the antivirus processes. The practical realities have changed.
(Also things being snappy and fast I don’t think is a common memory of people when the machines the author is writing about were contemporary. The world of software is much bigger than notepad and cmd.exe)
Then again, I guess any other OS might break in the same way. Like my Debian VM just kinda stops responding to part of the screen sometimes if programs are maximised...
They should probably run some sort of antivirus too; this is built-in for Windows now, ya? That'd be my first guess for the small programs. My memory of those days doesn't feel any faster than today, but I never had anything as top-of-the-line as a 600MHz chip in 1999 - more like half of that.
But the real solution is better web page design.
I suspect that the amount of time I spend on just logging in to websites each day is upwards of 5 minutes, and I doubt it will decrease over the coming decades. Such a waste.
Why does it follow that software designed for modern hardware, running on modern hardware, should be slower than software designed for older hardware running on slightly newer hardware?
Ruby on Rails may not be the poster child for speediness as things get big or complex, but if you aren't fighting the ORM, it's consistently quick from click to data.
Also, RoR is definitely not dead.
By requiring more than that, we had to increase the essential complexity. I believe this tradeoff in itself is well worth it (and hopefully we can all agree that going back to a us-ascii-only locale is not a forward direction).
The problem I see is that the layers you also mention, each expose leaky abstractions (note that abstractions are not the problem, no person on Earth could implement anything remotely useful without abstractions — that’s our only tool to fight against complexity, of which a significant amount is essential, that is not reducible). Let’s also add a “definition” I read in a HN comment on what constitutes an ‘expert’: “knowing at least 2 layers beneath the one one is working with” (not sure if it was 1 or 2).
Given that not many people are experts and a tendency of cheaping out on devs, people indeed are only scratching that top layer (often not even understanding that single one!), but the problem might also be in how we organize these layers? When an abstraction works well it can really be a breeze and a huge (or only significant, see Brooks) productivity boost to just add a library and be done with it — so maybe the primitives we use for these layers are inadequate?
Actually much worse than what Microsoft once did with their COM model, ActiveX based on the MFC foundation classes with C++ templates, etc.
And to build those interactive programs, somebody is trained to use React, Vue, etc. with their own ecosystems of tools. This is operated by a stack of build tools, a stack of distribution tools, Kubernetes for hosting and AWS for managing that whole damn thing.
Oh - and don't even get me started on Dependency Management, Monitoring, Microservices, Authorization and so on...
But I really wonder - what would be more complex?
Building interactive programs based on HTML, or on Logo (if anybody remembers it)?
Windows 11 min specs: 1GHz 64-bit CPU, 4 GB RAM, 64GB disk space.
This is a horribly unfair comparison. We were complaining about systems being slow AF back then too. (Insert _turbo button_ meme here)
As a bonus, in Firefox if you hit the ' key (apostrophe) you get a search that looks only within hyperlinks and ignores all the un-clickable plain text. Give it a try, sometimes it can be very useful
Well, all of that type of bloat is presumably useful to someone or it wouldn't have been written. That doesn't change the fact that there's a cost for including it.
> Windows indexes files in the background
But here's an example of the tradeoffs. I hate this behavior. It incurs an overhead that provides no benefit that matters to me. So, your useful feature is my useless bloat.
Everything's a tradeoff.
Still, there's a point to be made about bloat, but the comparison seems to be apples and oranges.
I'm not sure why Windows minimum hardware requirements are relevant at all. If they were, they could get massive performance improvements by raising the hardware requirements. "Sure it's slow, but it's running on literally 1% of minimum recommended RAM!"
That depends on your scale. If your product is "large enough" it is relatively easy to get into the range of several seconds of response time.
Here are some of the steps you may want to execute before responding to a request from your user:
- Get all the dishes that have the filters the user selected
- Remove all dishes from restaurants that don't deliver to the user's location
- Remove all dishes from restaurants that aren't open right now
- Get all discount campaigns for the user and apply their effects to every dish
- Reorder the dish list based on the history of the user interactions
Now imagine that for every step in this list you have, at least, a single team of developers. Add some legacy requirements and a little bit of tech debt... That's it, now you have the perfect stage for a request that takes 5-10 seconds.
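A sketch of how that plays out, with each team's service simulated as a 200ms network hop (latencies invented for illustration). Run sequentially, the five steps above already cost a full second before any rendering starts:

    import asyncio
    import time

    async def call_service(step, delay=0.2):
        await asyncio.sleep(delay)  # stand-in for a cross-team network call
        return step

    async def build_menu():
        # each step waits on the previous one's output
        for step in ("dishes", "delivery-area", "open-now", "discounts", "ranking"):
            await call_service(step)

    start = time.perf_counter()
    asyncio.run(build_menu())
    print(f"{time.perf_counter() - start:.1f} s before the user sees anything")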
Here's a much more apt comparison (still really snappy):
https://twitter.com/jmmv/status/1672073678102872065/mediaVie...
Except that they are not - not at the time they are launched, at least. And even if they were, we have a hundredfold more compute power, with a hundredth of the latency for memory and storage.
Regarding security, it should have negligible effect in most cases. At least, effects should not be perceptible to the human mind.
It really is just a consequence of the way we develop software nowadays. We do not need to optimize programs to make them work at all, so we just do not. We work on new features, and we hire people who can churn new features.
And we decided to optimize for developer time, instead of user time. So, instead of painstakingly developing a Web site, a native application, an Android app, and an iOS app, we just push Web apps everywhere.
Fixing this situation is essentially impossible because it requires rewriting almost everything that modern Windows is built on. Someone else in this thread said you couldn't sell 4 quarters worth of work to fix this, but the reality is that it requires infinite quarters, because it requires throwing away the last 10 years of Windows shell and UI work and that will never happen. You could paper over it by applying performance spotfixes here and there, but it'll never go back to how it could be that way. At a minimum, you'd essentially have to throw away WinRT which has an almost viral negative impact on performance. Never before have high latency, but still synchronous cross process RPCs been that prevalent and everything's a heap allocated object, even if it's within the same binary. It's JuniorFootgunRT.
What really drives me mad is the latency of some file selection dialogs for example which can take like 10 seconds.
Explaining it to the SecOps person, that was painful though.
That’s exactly it, and there’s no shame in that. I can, as a solo developer, build a fully featured app with a responsive UI and produce artifacts that run on Windows, Linux, and Mac. I can do that in a weekend, because of the technologies we have at our disposal. Something that would have taken a team of developers several months to do.
On the other hand, the fact that we’re abstracting everything except the business logic away is a big advantage. As soon as Chrome pushes a performance update we can see apps across the board performing 10% faster.
Seriously recommend trying it out - more responsive than any OS I have ever used - even a lean running Ubuntu or OSX
Now things still work, but the TV needs a restart about once a week if I use the built in apps like YouTube or Netflix.
Finally bought an Apple TV 4K and it "boots" directly into it. Such a relief! (Well, except for that bizarre touch remote that came with the Apple TV, but that's a small price to pay!)
This is also why your browser will stall out when it finishes downloading a large file. Windows Defender kicks in and does a full scan before returning from the close call.
Screenshot: https://blog.codinghorror.com/content/images/uploads/2005/07...
I mean maybe your DB is a single node running on a potato and your load's very high but you're also somehow never hitting cache, but otherwise... no, there's no good reason for that to be slow.
[EDIT] Your last paragraph is the reason, though: it's made extremely poorly. That'll do it.
It's not an easy task, and it's not something that anyone has really done. There are plenty of single platform examples, and Flutter is about as close as you can get in terms of cross platform.
There are also alternatives that can use the engine of an installed OS browser. Tauri is a decent example for Rust. Also, Electron isn't to blame for the issues with Teams. VS Code pretty much proves you can create a relatively responsive application in a browser interface.
If memory serves me, they did not really change much in NT from 4.0 to 2k, other than adding more services and making it more win98-like. So it is maybe not an 'unfair' comparison. But when NT 3.51 came out, getting that sort of computer just would not have been in the cards for most people.
Windows went sideways at Vista. Out of the box, just starting the computer up would use 2-3 gigs of RAM, up from 100-200MB in the XP era. Toss in some corp bloatware items - one place I saw it was 10 gigs just to open the desktop, no productivity software even started yet. Then add in the zillions of indirection layers we have added to make programming easier, and we are now at applications that seem to start at about the same rate as 25 years ago.
All of those old APIs are still there. No one really uses them much anymore. We use the latest cool frameworks, which use the previous cool framework, which eventually uses the old APIs :)
I still have this instinctual reluctance to change screen resolution in a game's setting screen, even though 99% of the time it's an instantaneous thing these days.
If you edit the same 1KB file on each computer side by side the 30 year old computer will be more responsive than the modern one.
That's what people are taking issue with.
Compare to iPhone OS 1: apps had static launch images that the OS animates to be visible so an app has hundreds of milliseconds to load before the user feels a hang.
We say that today, and remember the best part of the experience... but we do forget it was all at the mercy of your (maybe, if you're lucky) UltraATA/33 bus.
That "slightly" is doing a massive amount of heavy lifting in that sentence.
I run a company on the side that produces software for events which require a website and mobile apps for iOS (iPhone and iPad)/Android. I cannot imagine being able to do this all on my own without being able to share a codebase (mobile apps built via Capacitor) across all of them. Would native apps be faster? Almost certainly, but I'm not going to learn Kotlin and Swift and triple the number of codebases I have to work in. It's completely infeasible for me; maybe some of you are able to do that, but I'm not - there aren't enough hours in the day.
I fully understand the cruft/baggage that methods like this bring but I also see first-hand what they allow a single developer to build on their own. I'll take that trade. I'm a little less forgiving of large companies but Discord and Slack (and other Electron apps) work fine for me, I don't see the issues people complain about.
Linux also opens apps instantly, at least for me - if you ignore that some apps, after opening instantly, don't instantly become ready to work.
But that isn't the fault of the OS.
Though some OSes used tricks to hide this loading time.
A good example is cold starting a web browser. Modern web browsers have to handle so much that just loading the amount of code they have all at once can lead to a noticeable delay. I mean, e.g., the network code your browser runs is likely a few hundred to a thousand times more complicated than what you put into a simple HTTP server. There is also the rule of thumb that the difference between something working in your use-case and it working in as many situations as possible for as many people as possible might look small from an external POV but in general comes with an explosion in complexity and code. And for browsers that is not just the case for networking but also rendering HTML+CSS, executing JS, storage handling, window handling, handling boundaries and security, extensions, caching, input handling etc.
It has been done many times.
Not only that, if you want to use a web page for a GUI, then do it by making a local web server back end and just use the web browser.
This idea that Electron is somehow the only way to get cross-platform GUIs is some sort of bizarre twilight zone where a bunch of people who only know JavaScript ignore the last three decades of software.
But it was different...
Yeah. It was. That's exactly my point.
A major problem is the number of places in our code stacks where developers think it's perfectly normal for things to take 50ms or 500ms that aren't. I am not a performance maniac but I'm always keeping a mental budget in my head for how long things should take, and if something that should be 50us takes 50ms I generally at some point dig in and figure out why. If you don't even realize that something should be snappy you'll never dig into why your accidentally quadratic code is as slow as it is.
Another one I think is ever-increasingly to blame is the much celebrated PHP-esque "fully isolated page", where a given request is generated and then everything is thrown away. It was always a performance disaster, but when you go from 1 request to dozens for the simplest page render it becomes extra catastrophic. A lot of my web sites are a lot faster than my fellow developers expect simply because I reject that as a model for page generation. Things are a lot faster if you're only serving what was actually requested and not starting everything up from scratch.
Relatedly, developers really underestimate precomputation, which is very relevant to your point. Your hypothetical page layout is slow because you waited until the user actually clicked "menu" to start generating all that. Why did you do that? You should have computed that all at login time and have it stored right at your fingertips, because it is a reasonable assumption given the sort of page you're talking about that if the user logged in, they are there to make an order, not to look at their credit card settings. Even if it expensive for reasons out of your control (location API, for instance) if you already did the work you can serve the user instantly.
Having precomputed all this data, you might as well shove it all down to the client and let them manipulate it there with zero further network requests. A menu is a trivial amount of information.
It isn't even like precomputation is hard. It's the same code, just running at a different time.
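As a sketch (all names hypothetical), the difference really is just where the call sits:

    def build_menu_view(user):
        """Hypothetical expensive work: filters, discounts, ranking..."""
        ...

    # On demand: the user clicks "menu" and then waits for build_menu_view().
    # Precomputed: do the work at login, while the user is busy anyway.
    def on_login(user, cache):
        cache[user.id] = build_menu_view(user)

    def on_menu_click(user, cache):
        return cache[user.id]  # served instantly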
"But what about when that doesn't work?" Well, you do something else. You've got a huge list of options. I haven't even scratched the surface. This isn't a treatise on how to speed up every conceivable website, this is a cri de coeur to stop making excuses for not even trying, and just try a little.
And it is SO MUCH FUN. Those of you who don't try have no idea what you are missing out on. It is completely normal, on a code base no one has ever profiled before, to find a 50ms process and improve it to 50us with just a handful of lines tweaked. It is completely normal to examine a DB query taking 3 seconds and find that a single ALTER TABLE ADD INDEX cuts it down to 2us. This is the most fun I have at work. Give it a try. It's addictive!
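And getting started costs almost nothing. A sketch using only the Python stdlib profiler against some hypothetical slow handler:

    import cProfile
    import pstats

    def slow_endpoint():
        ...  # the hypothetical request handler you suspect is slow

    cProfile.run("slow_endpoint()", "prof.out")
    pstats.Stats("prof.out").sort_stats("cumulative").print_stats(10)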
IIRC Win8's also the first Windows I found unusable on spinning rust. Part of it may have been AV, but things like opening the start menu had significantly worse delays there than on a flash disk. It seems like they'd simply disregarded all development/design discipline about disk I/O, across the OS.
... which, you keep doing that everywhere, lots of devs making lazy choices to just grab this from disk here or just write a little data synchronously there, and it'll add up to non-negligible delay, even on a flash disk. And it'll make an HDD craaaawl. Which is exactly what happened.
Consider the feature set of JavaFX when used in combination with the AtlantaFX theme/widget pack. It isn't well known, but is maintained and has an active open source community today.
- All the same controls as mui.com shows and more advanced ones too, like a rich text editor, a way more advanced table view, tree views, table tree views, etc.
- Media and video support.
- 3D scene graph support. HTML doesn't have this! If you want to toss some 3D meshes into your UI you have to dive into OpenGL programming.
- When using FXML, semantic markup (<TabView> etc)
- Straightforward layout management.
- A dialect of CSS2.something for styling, a TextFlow widget for styling and flowing rich text.
- Fully reactive properties and collections, Svelte style (or moreso).
- Icon fonts and SVGs work.
- Sophisticated animations and timelines API.
And so on. It's also cross platform on desktop and mobile, and can run in a web browser (see https://jpro.one where the entire website is a javafx app), and can be accessed from many different languages.
Flutter is actually not quite as featureful in comparison - for example, there's no WebView control or multi-window support on desktop - though Flutter has other advantages like the hot reload feature and a better-supported mobile story. The community is lovely too.
Then you have AppKit, which is also very feature rich.
So it's definitely a task that people have done. Many of these toolkits have features HTML doesn't even try to have. The main thing they lack is that, well, they aren't the web. People often discover apps through hypertext, and having a single space for documents and apps is convenient. When you're not heavily reliant on low-friction discovery, though, or have alternatives like the app stores, then web-beating UI toolkits aren't that big of a lift in comparison.
> Electron isn't to blame for the issues with Teams. VS Code pretty much proves you can create a relatively responsive application in a browser interface
Electron is great, but most apps aren't VS Code. On my 2019 Intel MacBook Terminal.app starts in <1 second and WhatsApp starts in about 7 seconds. Electron is Chrome and Chrome's architecture is very specifically designed for being a web browser. The multi-process aspect of Chrome is for example not a huge help for Electron where the whole app is trusted anyway, though because HTML is so easy to write insecurely, sandboxing that part of it can still be helpful even with apps that don't display untrusted data. That yields a lot of overhead especially on Windows where processes are expensive.
(Also there was a period when a lot of laptops used non-Intel x86 implementations, which typically weren't very good. Cyrix, Via et al.)
I have fond memories of that, but basically the editor was a UI into a linked list with a blue screen. So not comparable to what people are being asked to do with Word and 365 today.
My personal beef with Word is that it struggles so much with long documents. Trying to read, say, a 300 page spec from 3GPP is miserable.
> effectively ~90% of the startup duration is spent starting up WinUI and having it draw the tab bar and window frame
I listed "Display scaling support", "Tabbed interface", and "transparency". Is none of that related to WinUI and drawing the tab bar?
It's pretty funny how a pair of crappy videos I recorded in 5 minutes have gone viral and landed here. I obviously did not expect that this would happen, which is why I didn't give a second thought to the comparison. There are many wrong things in there (including inaccuracies, as some have reported), and Twitter really doesn't give room for nuance. (Plus my notifications are now unusable, so I can't even reply where necessary.)
I don't want to defend the "computers of 20 years ago" because they sucked in many aspects. Things have indeed gotten better in many ways: faster I/O, better graphics and insanely fast networks are a few of them, which have allowed many new types of apps to surface. Better languages and the pervasiveness of virtual machines have also allowed for new types of development and deployment environments, which can make things safer. Faster CPUs do enable things we couldn't do, like on-the-fly video transcoding and the like. The existence of GPUs gives us graphics animations for free. And the list goes on.
BUT. That still doesn't mean everything is better. UIs have generally gotten slower as you can see. There is visible lag even in fast computers: I noticed it on a ~2021 Z4 workstation I had at work, I noticed it on an i7 Surface Laptop 3 I had, and I still notice it on the Mac Pro I'm running Windows 11 on (my primary machine). It's mind-blowing to me that we need super-fast multi-core systems and GBs of RAM to approach, but not reach, the responsiveness we used to have in native desktop apps before. And this is really my pet peeve and what prompted the tweets.
Other random thoughts:
* Some massive wins we got in the past, like the switch from HDDs to SSDs, have been eaten away and now SSDs are a requirement.
* Lag is less visible on macOS and Linux desktops as they still feature mostly-native apps (unscientific claim as well).
* The Surface Go 2 indeed isn't a very performant machine, but note that it ships with Windows 11 and the lag exists out of the box, which makes it fair game for the comparison. The specs I quoted were wrong because I misread them from whichever website returned them to me. It doesn't really matter though, because this is the experience I get on all reasonably modern machines.
* Yes, I had opened the apps in both computers before running the video, so they were all cached in memory (which puts the newer system in a worse light?).
* One specific thing that illustrates the problem is Notepad: the app was recently "rewritten" (can't recall exactly what the changes were). It used to open instantaneously on the Go 2, but not any more.
* NT 3.51 wasn't truly fair game because it was years-older than the machine. But if you scroll down the thread you'll see the same "test" rerun on Windows 2000 (released same year as the hardware).
I might come back to extend the list of random thoughts. A proper follow-up blog post would be nice, but I'm not going to have time to write one right away.
Also that threshold is an entire 400ms. We should expect significantly better than that these days.
On the other hand, I very much like the F-Droid store. There are so many useful and user-friendly apps. They work and stay like this for a long time.
I suspect the underlying topic here is money. Open source apps don't get much funding. So the developers need to focus on the essentials and get them right. On the other hand, subscription-based apps have a steady inflow of money. For some reason, they need to constantly work on these and add new features with marginal utility. I regret having updated some iOS apps. They were working perfectly fine in 2018 but have added bloat and bugs since then.
Go to login page
Solve captcha to get to login prompt
Enter user name, get sent to next page
Enter password
Enter MFA code
Failed, try resyncing MFA token
Repeat login process
Failed, try resyncing MFA token again
Failed, repeat login process, then go to the troubleshoot MFA link on the MFA page
Enter password again
Go to alternative factors link
Click link to send verification email
Didn't get email, click link again
Click link in verification email
Click link on login page to get a phone call with a code
Get a call but it doesn't give me a code, try again
Still no code, try again
Get a code this time from the call but the code fails verification, try again
Get a code and it gets verified, sends me to a login page
Solve a captcha
Enter username and password, get logged in
Fortunately, I have found a solution to ensure this series of issues does not reoccur.
I bought a newer PC because my older one from 2012 legit wasn't fast enough for what I wanted to do. It couldn't handle the VR applications I wanted to run, as its PCIe and RAM performance just wouldn't be up to the task to run the resolutions, texture qualities, and latencies I wanted. The newer one is miles ahead of the older hardware, and the applications I use are significantly better because of it.
But even then, from the other perspective of continuing to run similar-ish workloads using newer software, a lot of other things offer the same experience with slightly better features than when the software was new. When I first built that 2012 machine I installed the then brand-new Windows 8 on it. These days it's running Windows 10. From a UX perspective it definitely feels faster than the OS it shipped with. Things like the new Terminal app are way better functionally than the old cmd.exe that used to be on it. I do demand more now, using VS Code with more plugins and whatnot, where previously I used things like PyCharm more. I video chat and watch more streaming content on it than when I first had it, and it consumes far more animated GIFs than it used to.
But in the end, even with software supposedly getting more bloated, it's at least as snappy as it was when it was brand new in 2012, if not more, other than the fact that there's a whole new class of application I demand from my hardware.
So yeah, even today things are still getting better, doing more, and getting faster. It's not the extreme doubling or quadrupling of stuff like the 80s and 90s, where things literally went from text interfaces to GUIs to 3D apps, but there's still bleeding-edge stuff that legit just takes more oomph than a box from 2012.
Also, if you work on a website, the Google crawler seems to allocate a certain amount of wall time (not just CPU time) to crawling your page. If you can get your pages to respond extremely quickly, more of your pages will be indexed, and you're going to match for more keywords. So if for some reason people aren't convinced that speed is an important feature for users wanting to use your site, maybe SEO benefits will help make the case.
There is speed, then there is the perception of speed. Crystal got this right.
If bloat is useful, notepad in any form on any version of Windows is bloat by definition.
You've got CLI editors that are smaller.
But, more generally, the problem is that tech stacks are an awful amalgamation of uncountable pieces which, taken by themselves, "don't add much overhead". But when you add them all together, you end up with a terribly laggy affair; see the other commenter's description of a web page with 30 dishes taking forever to load.
Now, loading screens are everywhere. Like the Atari + cassette era.
Display scaling is very fast in GDI apps and has no impact on launch time, a tab bar is essentially just an array of buttons (minimal impact on launch time?), and transparency is a virtually cost-free feature coming from DWM. I wrote a WinUI lookalike using its underlying technology (Direct2D and DirectComposition) directly once, and that resulted in an application that starts up within ~10ms of CPU time on my laptop, quite unlike the 450ms I'm seeing for WinUI. That includes UIA, localization, and auto-layout support.
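For a sense of scale, here's a minimal sketch of what a bare Win32 + Direct2D window involves - not the commenter's actual code, and using a plain HWND render target rather than DirectComposition - just to show how little has to happen before first paint:

    // Minimal Win32 window with a Direct2D render target. Error handling
    // omitted for brevity; a real app would handle D2DERR_RECREATE_TARGET.
    #include <windows.h>
    #include <d2d1.h>
    #pragma comment(lib, "d2d1")

    static ID2D1Factory* g_factory;
    static ID2D1HwndRenderTarget* g_rt;

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
        switch (msg) {
        case WM_PAINT:
            ValidateRect(hwnd, nullptr);
            if (g_rt) {
                g_rt->BeginDraw();
                g_rt->Clear(D2D1::ColorF(D2D1::ColorF::White));
                g_rt->EndDraw();
            }
            return 0;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProcW(hwnd, msg, wp, lp);
    }

    int WINAPI wWinMain(HINSTANCE inst, HINSTANCE, PWSTR, int show) {
        WNDCLASSW wc = {};
        wc.lpfnWndProc = WndProc;
        wc.hInstance = inst;
        wc.lpszClassName = L"FastWindow";
        RegisterClassW(&wc);

        HWND hwnd = CreateWindowExW(0, L"FastWindow", L"Hello", WS_OVERLAPPEDWINDOW,
            CW_USEDEFAULT, CW_USEDEFAULT, 800, 600, nullptr, nullptr, inst, nullptr);

        D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &g_factory);
        RECT rc;
        GetClientRect(hwnd, &rc);
        g_factory->CreateHwndRenderTarget(
            D2D1::RenderTargetProperties(),
            D2D1::HwndRenderTargetProperties(hwnd, D2D1::SizeU(rc.right, rc.bottom)),
            &g_rt);

        ShowWindow(hwnd, show);
        MSG msg;
        while (GetMessageW(&msg, nullptr, 0, 0)) {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
        return 0;
    }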
Also, I never said Electron was the only way... I specifically mentioned Tauri in my comment as an example of a browser renderer. And it doesn't need to use a local web server either.
Snow Leopard with an Intel dual core, 4 gigs of RAM, and an SSD performs as well as a 2019 MacBook i7 running 11.x Monterey.
So many unnecessary processes in the background, IMO.
Keep it simple, stupid, whenever possible.
And another thing: security. Security on old Windows was poor. There are many things that need to be taken care of now that used to either not exist or be simply unsafe. This is another thing one needs to keep in mind if one wants to just install Windows 7 or XP and play games (as I did).
I don't dispute that very bad javascript can cause problems, but I don't think it's the virtualization layers or the specific language that are responsible for more than a sliver of that in the vast majority of use cases.
And the pile of build and distribution tools shouldn't hurt the user at all.
I also said, "Flutter is about as close as you can get" regarding coming close to what I was referring to.
AppKit is NOT cross-platform. Beyond this, you have other means of embedding a browser-UI application without all of Chrome included; see Tauri as one example.
What are the specs on the machine you're using?
And it doesn't need to use a local web server either.
Shipping an entire browser so someone can pop up a single window is not a positive. Again, if you want html as your interface, use html and let people use their own browser so that the entire program is 400KB instead of 400 MB
It's a standard meant for system A to authenticate a user with system B. Ever logged in to a website with your Google account, or seen those permission screens asking you if you want to allow a third party website to access your Google account? That's OAuth.
Now, as to why many websites do this even when you login with credentials for that system (and not third party auth) - my guess is the system has separate teams for each subsystem, each hosted on different subdomains. In order to transfer auth state from one subdomain to another, you need something like OAuth since cross-domain cookies are forbidden by the browsers.
O RLY?
Do you have a transparent terminal with a background image? (If so, well ... to each its own :^))
Do you transfer your JSON config files "all the time"?
>Copying text is a way better experience
It literally is Ctrl-C and it's been like that for ages. When did it become a "way better experience"? I missed that.
Modern UX is absolute trash performance-wise. And you're falling into the same pit as the geniuses of the story I mentioned before.
Isn't this the same with any SSO provider? The SSO provider must be reachable by the end-user's browser during any authentication operation.
(in the case of Kerberos it must be reachable by the target services too, but in a server-to-server environment that's less of an issue)
> Golden Ticket
Isn't this exactly the same as, say, a session cookie of a web-based IdP?
The IdP could apply policies on its backend that bind the cookie to a given IP address, user agent, or other indicators, but can't this also be done for Kerberos tickets using a server-side middleware on every service you wish to access (since Kerberos is internal-only, it shouldn't be that big of a deal)?
Two threads in !OpenAdatper12 spending ~1.5M cycles per second. Two threads in !recalloc spending ~256K cycles per second.
As shipped, this is no longer a single executable, it is a collection of 230 files, totaling 10.5MB, about half of which are bytewise duplicates, and another significant chunk have overlapping responsibilities, between the icon fonts and png icons.
"Software bloat is a process whereby successive versions of a computer program become perceptibly slower, use more memory, disk space or processing power, or have higher hardware requirements than the previous version, while making only dubious user-perceptible improvements or suffering from feature creep." -- https://en.wikipedia.org/wiki/Software_bloat
I'm not aware of any substantial new features. It uses a new renderer, but this does not produce a significant observable difference from the previous one beyond using more memory. It supports dark mode.
I was using a SAST program that used MS SQL Server to generate reports, and often found the reports took HOURS to generate, even when the report was only ~50 pages. One specific project took over a DAY to generate a report. I thought it was ludicrous, so I logged onto the SQL server to investigate and found that one query was taking 99% of the time. This query was searching through a table with tens of millions of rows, but not indexed on the specific columns it was filtering on, and many variations of the query were being used to generate the report. I added the index (it only took about an hour, IIRC), and what had taken hours now took a couple of minutes.
I was always surprised the software didn't create that index to begin with.
FLTK, no accessibility features
WxWidgets, really limited theming, not even close to html+css. Cross platform compatibility is hit and miss, usually requiring a lot of one-off platform corrections.
Also, as I said, you don't need to ship the entire browser... not once, but twice... try reading slower.
I'm partial to the Windows 95-style interface so I jumped ship on NT 3.51 as soon as I could for NT 4.0.
Jumping to Windows 2000 was, likewise, an easy decision if only for not having to reboot after making IP address changes, and having USB and plug 'n play support.
Moving from Windows 2000 to XP was less of a "no brainer". I continued to use Windows 2000 for quite a while after XP came out. I skipped Vista entirely, but Windows 7 was too nice not to jump on immediately. (It was the first MSFT OS I ran as a daily driver in beta, actually.)
It later turned out to be due to some Unicode handling built into a Windows API they were using, while the developer's version was also not completely feature-complete. But both sides were sort of right.
If anyone thinks you need more redirects than this, I'd really like to know what more you think is necessary.
If you have a dozen layers and they each add 5% slowdown, okay that makes a CPU half as fast. That amount of slowdown is nothing for UI responsiveness. A modern core clocked at 300MHz would blow the pants off the 600MHz core that's responding instantly in the video, and then it would clock 10x higher when you turn off the artificial limiter. Those slight slowdowns of layered abstractions are not the real problem.
(Edit: And note that's not for a dozen layers total but for a dozen additional layers on top of what the NT code had.)
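The arithmetic checks out: twelve compounding 5% layers come to roughly a 1.8x slowdown, so the machine still runs at over half speed. A quick sanity check:

    // Twelve layers, each adding 5% overhead, compound multiplicatively.
    #include <cmath>
    #include <cstdio>

    int main() {
        double slowdown = std::pow(1.05, 12);                  // ~1.80x
        std::printf("compounded slowdown: %.2fx\n", slowdown);
        std::printf("effective speed: %.0f%% of baseline\n", 100.0 / slowdown);  // ~56%
        return 0;
    }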
I think this is the key.
People fall in love with simple systems which solve a problem. Totally in the honeymoon phase, they want to use those systems for additional use cases. So they build stuff around them, on top of them, and abstract the original system to allow growth into more complexity.
Then the system becomes too complex, so people might come up with a new idea. So they create a new simple system just to fix a part of the whole stack and allow migration from the old one. The party starts all over again.
But it's like hard drive fragmentation - at some point there are so many layers that it's basically impossible to recombine them. Everybody fears the complexity already built.
For some reason even in fairly constrained subtrees it takes forever to find file names.
Yes, input latency back then was far better. But actually launching things came with appreciable pauses. Because disks back then _sucked_.
This was done also in Win 7. It didn't have such a performance hit, but it was the first thing to be disabled after installing windows.
Back then I would turn it off as I didn't find the search function that usable, and more than once I've had a "clean and build" process fail because some file was open and being indexed, and since Windows locks files on read, the build could not delete the file and just aborted. So, I turned it off.
Beyond truly "dumb" tasks like downloading a file, it's basically a guessing game how long anything will take anyway, right? Say you split the whole loading bar into percentages based on the number of subtasks; suddenly you end up with a progress bar stuck on 89% for 90% of the total loading time.
Obviously you could post-hoc measure things and adjust it so each task was roughly "worth" as much as the time it took, but people rarely did that back in the day and my boss would get mad at me for wasting time with it today. Hence, spinners.
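For what it's worth, that post-hoc weighting is only a few lines. A hedged sketch, with all names and numbers made up:

    // Advance a determinate progress bar by each subtask's previously
    // measured duration instead of by subtask count.
    #include <cstdio>
    #include <vector>

    struct Task {
        const char* name;
        double typical_seconds;  // measured once on a reference machine
        void (*run)();
    };

    static void report(double fraction) {
        std::printf("\rprogress: %3.0f%%", fraction * 100.0);
        std::fflush(stdout);
    }

    static void run_all(const std::vector<Task>& tasks) {
        double total = 0.0, done = 0.0;
        for (const auto& t : tasks) total += t.typical_seconds;
        for (const auto& t : tasks) {
            t.run();
            done += t.typical_seconds;
            report(done / total);  // the bar tracks expected time, not task count
        }
    }

    static void unpack() {}
    static void install() {}

    int main() {
        std::vector<Task> tasks = {
            {"unpack", 2.0, unpack},    // short task: small share of the bar
            {"install", 8.0, install},  // long task: large share of the bar
        };
        run_all(tasks);
        std::printf("\n");
        return 0;
    }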
Right now, an M1 is pretty fast, but wait until all developers use it and it starts to become barely adequate.
Rephrasing Wirth's law: It takes slow hardware to develop fast software.
It's LGPL, and lots of programs use it, like qBittorrent, VLC, and many more. You can make up criticisms, but it has been a backbone of GUIs for decades.
> FLTK, no accessibility features
What exactly do you need and do you need it for every GUI you make? If you want a web page, use a web page.
> WxWidgets, really limited theming,
Suddenly theming is your deal breaker.
> not even close to html+css
Thankfully, because that is often not a good way to make a GUI.
> as I said, you don't need to ship the entire browser...
No, you said "it doesn't need to use a local web server either." Also, entire web browser or not, Electron programs end up being hundreds of megabytes for a simple window and use hundreds of megabytes of RAM.
The bottom line here is not that electron is necessary. It is that you want to use javascript even though your users will hate it.
Windows 2000 is my all-time favorite Windows OS. NT-based kernel, literally nothing extra, fast, stable. Used it til the day support was finally shuttered.
You don't need fast hardware to have snappy apps. Most of the microcontroller-based devices you deal with day to day are clocked between 4 and 25MHz. Hard drives are only needed to load files into memory once, and microcontroller-based devices have something like 256KB-4MB of memory. The only reason apps aren't snappy is that programmers fail to make proper use of the hardware and OS.
Compare that to Mint which looks exactly the same as Windows and now runs 100% of Windows games and it’s just unfair. It’s like running candy crush in a barrel.
It also describes a very real thing.
While there are a lot of things that everyone would agree counts as "bloat", there are also areas of disagreement in the form of "one person's bloat is another's essential feature".
It’s going to be fascinating watching what happens when there’s nobody left capable of updating things like linux. It will be like a black box, people can maybe scrape together an electron app but they don’t go any deeper.
Especially due to the mechanical disk, everything seemed to take 2 to 10 times longer than it does today the first time it had to be done after a reboot, and if physical memory was exhausted and the OS started swapping, forget it, you might be staring at an hour glass for multiple minutes to open a web browser or something of the sort.
I love SSDs and multicore machines of today.
I now use a Surface Go3 i3 with 8GB. It's enough for just about everything I need. Web browser, running script language web apps, Java IDE, StarCraft 2. Disabling a bunch of stuff on Win11 makes a big difference. Whenever it felt slow I looked at Task Manager CPU and googled the process name, tried disabling it and only re-enable if necessary. Oh I also have a Peltier cooler+fan that cools the back of the unit when gaming to prevent throttling.
The PS/2 NT machine was top spec at the time. The Go3 is utilitarian now though should be like a supercomputer.
Don't forget what Notepad.exe truly is -- a testbed for new technologies. It's not "just a text editor" to Microsoft.
Windows 2000 (Server) was the best windows operating system ever made by far.
Yes, but OAuth has one major upside: HTTPS only.
No one wants to create site-to-site VPN networks to flow Kerberos.
The second is because authentication is per-device (and depending on the scenario, per-app). The token lifetime is configured by your IT department. Microsoft's default is 365 days, if I recall.
There's also Dtrace https://learn.microsoft.com/en-us/windows-hardware/drivers/d...
You can do that with Dev Drive [0][1] which is currently on the Win 11 dev branch.
You can't do this for your boot volume, but you can do it for a [dynamically expanding] VHDX, secondary partition, or secondary volume. It will use ReFS (oddly enough, with 4 KiB clusters by default -- though it makes sense for the target scenario, unlike past uses of ReFS).
[0] https://learn.microsoft.com/en-us/windows/dev-drive/
[1] https://blogs.windows.com/windowsdeveloper/2023/06/01/dev-dr...
> You can't do this for your boot volume
How would this help with firing up all the built-in OS apps (Explorer, Notepad, etc.) being tested in the video?
A GUI toolkit that has no support for screen readers, or other assistive technologies that require accessibility APIs, should be a non-starter for most applications IMO. We need more options that meet that criterion without going all the way to a web page.
Except...there's no model of precisely how text is handled or how to re-encode it for e.g. screen reading...the development model lacks the abstraction to do layout...and so on. So we added a longer pipeline with more things to configure, over and over.
But - the computing environment is also different now. We can say, "aha, but OCR exists, GPT exists" and pursue a completely different way of presenting many of those features, where you leverage a higher grade of computing power to make the architectural line between the presentation layer and the database extremely short and weighted towards "user in control of their data and presentation". That still takes engineering and design, but the order of magnitude goes down, allowing complexity to bottleneck elsewhere.
That's the kind of conceptual leap computing has made a few times over history - at first the idea of having the computer itself compile your program from a textual form to machine instructions ("auto-coding") was novel and complex. Nowadays we expect our compilers to make coffee for us.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentica...
Sure. But there, as a user, I got to see a glimpse of what's going on under the hood. Combined with other information, such as a log of installation steps (if you provide it), or the sounds made by the spinning rust drive, those old-school determinate progress bars were "leaking" a huge amount of information, giving users both greater confidence and the ability to solve their own problems. In many cases, you could guess the reason why that progress bar was stuck on 89% indefinitely just by ear, and then fix it.
Conversely, spinners and indeterminate progress bars deny users agency, and disenfranchise them. And it's just one case of many, which adds up to the sad irony of UI/UX field - it works hard to dumb down or hide everything about how computers work, and justifies it by claiming it's too difficult for people to understand. But how can they understand, how can they build a good mental model of computing, when the software does its best to hide or scramble anything that would reveal how the machine works?
And no, copying text hasn't always been Ctrl+C. In cmd, that sequence sends an interrupt to the process, not a copy request. To copy text, old cmd made you enter a mark mode, where you essentially drew a rectangle, and it would then insert newlines at the rectangle edges even when it should have just continued the line. The old cmd copying process was terrible.
You could argue that the common implementations are large piles of legacy C with questionable memory safety that could open them to exploitation by malicious actors, but that's an implementation detail rather than the protocol itself - and I believe there's at least one (mostly?) memory-safe implementation in Java called Apache Kerby.
There was Windows 3.2 in 1994 (not to be confused with "Win32", despite the name of the HTML file): http://toastytech.com/guis/win32.html
Yes, the Western versions of Windows at the time didn't include support for Chinese (and similar) languages. But there is really no reason why they should have - if a user's language can be represented in an 8-bit codepage, why should they pay any price in performance for something they will never use?
Conversely, would a Chinese-speaking user prefer an operating system designed to support all the other languages that exist, with an implementation that is likely not as specifically tailored to their requirements?
Just to make sure I'm not being one of those people: What AccessKit [1] has now, across Windows, macOS, and Linux, took roughly six person-months of work. We still need to support more widget types, especially list views, tables (closely related), and tree views, but we do already have text editing covered on Windows and macOS. Perhaps it helps that I'm an accessibility expert, especially on Windows. Anecdotally, it seems that implementing UIA from scratch is daunting for non-experts. But I guess in the big picture it's really not that hard.
If you get bad performance in a game nowadays it's a good idea to try proper fullscreen. Alt tabbing might be slow, but the game will run better.
My experiences with NixOS show me that they are. What do I mean by that? I am forced (MSFT Intune) to use Ubuntu for work, and was using MacOS prior to that. Both took a good heft of time to boot up, especially compared to the WinNT example. They are general purpose and come with everything under the sun installed in-case the user needs it. In the latter case (MacOS) your hands are also pretty tied when it comes to slimming it down (to be clear, apps are easy to remove, but not system cruft).
The slowest parts of bootup on my personal PC (NixOS) are POST and the GRUB timeout. NixOS takes less time than either (< 2 seconds). I chalk that up to NixOS installing very little more than I tell it to.
I agree that web apps make the situation significantly worse, but the OS itself is full of garbage that eats CPU cycles and IOPS.
Almost like some kind of... web OS?
To think we almost had it, but Palm made some bad decisions back in 2009 and the dream of app-as-browser + Node.JS + consistent application styling and syscalls through a provided JS framework (Enyo) is, sadly, probably dead forever.
For me, I use google so often that I can rapidly parse information without really needing to read much of the text, the link just sort of 'looks' right. I've observed my wife reading google results and she is much slower and more methodical, probably because she doesn't google things 20+ times a day every day like I do.
That's how I end up misclicking, because i'm not working at the speed of a normal googler.
It is really annoying though, there are some css tweaks you can make using browser extensions to make that disappear if you're so inclined.
On one project, I actually shifted quite recently from working on old-school, pre-Windows XP, DCOM-based protocols, to interfacing with REST APIs. Let me tell you this: compared to OpenAPI tooling, DCOM is a paradise.
I have no first clue how anyone does anything with OpenAPI. Just about every tool I tried to turn OpenAPI specs into C or C++ code, or documentation, is horribly broken. And this isn't me "holding it wrong" - in each case, I found GitHub issues about those same failures, submitted months or years ago, and still open, on supposedly widely-used and actively maintained projects...
"We" just assume that anyone who has already signed up will always be signed in.
Microsoft has been trying to migrate Windows development to a managed language for over 20 years; their first attempt at this was a complete disaster and NT 6.0 (Vista) would ultimately be developed the old way.
It's only really been in the last 5-7 years, with Windows 10 and 11, that MS has managed to get their wish as far as UI elements go, which is why the taskbar doesn't react immediately when you click on it any more and has weird bugs that it didn't have before.
I thought that the NT kernel was heavily based on VMS, from when Dave Cutler, their chief OS architect/guru, left for Microsoft and took a bunch of engineers with him. FTA:
"Why the Fastest Chip Didn't Win" (Business Week, April 28, 1997) states that when Digital engineers noticed the similarities between VMS and NT, they brought their observations to senior management. Rather than suing, Digital cut a deal with Microsoft. In the summer of 1995, Digital announced Affinity for OpenVMS, a program that required Microsoft to help train Digital NT technicians, help promote NT and Open-VMS as two pieces of a three-tiered client/server networking solution, and promise to maintain NT support for the Alpha processor. Microsoft also paid Digital between 65 million and 100 million dollars."
[0] https://www.itprotoday.com/windows-client/windows-nt-and-vms...
Unfortunately I cannot think of a single thing that has gotten better about Spotify since I started using it, and a lot which has gotten worse.
The early 00’s “open standard” of web forum + eMule + VLC would still be light years ahead of Netflix&co. if it weren’t for how hard it’s been gutted by governments, copyright lobbies, ISPs and device/platform vendors through the years. Heck, the modern equivalent often still is (despite all the extra hoops), unless you are trying to watch the latest popular show in English.
Another thing that can make file dialogs slow is that sometimes (maybe it has been fixed by now) Windows will try to query whether a networked drive on another computer is around. If it isn't, the call can block your file UI.
A third problem I've noticed with file selection dialogs and explorer is that the My Computer 'folder' that contains your disks takes a long time to load. Much longer than any sub-folders on any of the drives.
I think the problem is largely with explorer.exe. If I browse those folders in a web browser the experience is snappy.
The only exceptions are: 1) the actual build, which is faster on the modern machine, but only for a large number of source files, and 2) reading and writing files - a floppy disk cannot beat an NVMe drive, of course.
These are things we're unlikely to get back (and by themselves already make typical PC stacks unable to deliver smooth hand-writing experience - Microsoft Research had a good demo some time ago, showing that you need to get to single-digit milliseconds on the round-trip between touch event and display update, for it to feel like manipulating a physical thing vs. having it attached to your finger on a rubber band). Win2K on a hardware from that era is going to remain snappier than modern computers for that reason alone. But that only underscores the need to make the userspace software part leaner.
--
[0] - Source: I'll need to find that blog that's regularly on HN, whose author did measurements on this.
I recently spent a hair-pulling near-week encountering then trying to figure out the DLL search path in Win32 because a library was messing around with it. Documentation is straightforward enough, I suppose, until you get to SetDefaultDllDirectories, which has this in its discussion: "It is not possible to revert to the standard DLL search path or remove any directory specified with SetDefaultDllDirectories from the search path."
I guess my point is, environment variables can be tricky.
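For reference, here's a sketch of the calls in question, with that caveat spelled out in comments; the plugins path and DLL name are hypothetical:

    // Restricting the Win32 DLL search path. Per the docs, once
    // SetDefaultDllDirectories is called, the standard search path cannot
    // be restored for the life of the process.
    #define _WIN32_WINNT 0x0602  // SetDefaultDllDirectories needs Win8+ headers
    #include <windows.h>

    int main() {
        // Search only the app dir, System32, and explicitly added directories.
        SetDefaultDllDirectories(LOAD_LIBRARY_SEARCH_DEFAULT_DIRS);

        // Individual directories can be added and later removed...
        DLL_DIRECTORY_COOKIE cookie = AddDllDirectory(L"C:\\MyApp\\plugins");

        HMODULE mod = LoadLibraryW(L"plugin.dll");
        if (mod) FreeLibrary(mod);

        // ...but there is no call that undoes SetDefaultDllDirectories itself.
        RemoveDllDirectory(cookie);
        return 0;
    }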
Of course there are plenty of new features, but I don't use them.
It's common enough that there were a couple browser proposals to deal with this and would address the Ctrl+F issue. I believe this has been merged into the CSS Containment spec, but at the moment it doesn't make windowing obsolete in every situation.
Sidenote, look up the ThinkPad 760 series, it had some really cool features. Raise-able keyboard so you could have it at an angle, physical sliders for sound and backlight, easy access to the internals and hot-swappable battery.
> Building interactive programs based on HTML or Logo (if anybody does remember)?
Hold my beer: my Github Actions CI scripts use Logo to generate the bash build scripts as images that are then OCRed and executed by a special terminal that exploits the Turing complete nature of Typescript's type system.
Turtles all the way down!
Tools like that are basically hit-and-run and they don't need to stick around to do lasting damage.
Also impressive how our brain is sensitive to all of this.. no matter how impressive a current OS / web browser / jitted js is.. I dearly miss the past eras "behavior".
A month ago I realized that my RAM was topping out at 95% capacity when running the usual suspects (VS Code, Edge, etc). I bought an additional 16GB, and I'm already at 55%.
Now, RAM optimizations aside, what the hell...
I blame us, or the current generation of us. Performance (CPU and RAM) is so cheap that I feel we don't optimize anything anymore. Many of my colleagues don't have a clue on how to run a profiler, let alone actually optimize a system.
And we are always turning to more productive yet less performant platforms such as nodejs etc. It's a valuable trade off, but maybe we have gone all to an extreme...
NT might be more stable but it was also much slower. DOS applications on 9x actually ran in a VM with hardware passthrough, whereas NT emulated much of the hardware via NTVDM. Interacting with something as simple as the EDIT text editor in a window on 2K/XP is noticeably slower than on 9x.
That was about five years ago. :/
I have a 2015 MacBook Air I abandoned recently for being so painfully slow to use that I had barely touched it for months. I have an iPad Air 2 that is basically unusable at this point. Both are 2-3 orders of magnitude faster than those old computers that work instantly.
But Windows and web apps are super slow now, too.
Think of all the landfills, the wasted work hours earning the money needed to fill those landfills, and the heavy and rare metals.
If computers were 10x faster, software would just end up slower than ours today; we've seen it happen over and over. Software companies will keep making heavier and heavier programs and operating systems until we have gained nothing but a significant amount of CO2 emissions.
What a weird claim. If the new apps aren’t doing anything more, then just use the old apps.
Except you’ll quickly find that the old apps are quite simple and limited relative to what we have today.
Not at all what I expected from a news site. They're usually full of crap and dog slow.
This is the specific URL I experienced this with. Though the whole site seems mostly very quick. https://www.bbc.com/news/live/world-us-canada-65967464
And this is on a 2017 MBP, on which some sites are really slow. Nothing crazy like the new Apple silicon CPUs here, either.
I have a Windows 10 VM I use for some testing and such, and all these background things keep using up huge amounts of resources; no matter what knobs I turn and regedit levers I pull, I just can't get it to stop.
For comparison, I also have a macOS VM which certainly isn't fast, but nothing like the Windows one. And the BSD and illumos VMs work basically fine (although in fairness they also don't start X11; but I do just ssh in to all of these machines and never use the GUI for anything).
I think you're mixing up Windows 2000 and ME? ME was a rushed update of 98 because Microsoft felt "they should release something" for the Millennium. It was a dumpster fire. Windows 2000 was the continuation of Windows NT, and became the basis for XP and everything that followed.
As for performance, by the time Windows 2000 came out (Pentium 3 era machines) it didn't seem to matter that much any more, and it really was a lot more stable.
If you activate the start menu with a keypress it's going to grab focus. Before it grabs focus the previous window in focus will get events. The same applies with panels (drawers? I forget Windows' name for them) in the Start Menu. There's a non-zero time between activation and grabbing focus to receive keypress events.
Everything from animation delays to stupid enumeration bugs can affect a windows not grabbing focus to receive keypress events. Scripting a UI always has challenges with timing like this.
A mainframe terminal has a single input context. You can fire off a bunch of events quickly because there's no real opportunity for another process (on that terminal) to grab focus and receive those events.
Note the above doesn't absolve Windows of any stupid performance/UX problems with bad animation timings and general shittiness. Microsoft has been focusing on delivering ads and returning telemetry with Windows instead of fixing UX and performance issues.
I remember going to a friends house and using their computer, and it took several minutes to boot, and even after it reached the desktop it still took more time for things to become responsive. Opening any program took at least 10 seconds, possibly more.
Those old HDDs could only reach low double digit IOPS, so opening a program would cause the entire system to become unresponsive until it was loaded!
Modern SSDs are massively faster, and stay fast even when heavily used! Some of the modern SSDs are even faster when lots of operations are queued up!
Adobe, for example, has a ton of common libraries that load at startup.
I notice that I get completely different results on my home and work machines doing the "start button, type" search. For "Downloads", expecting C:\Users\Username\Downloads, the home machine figures it out after three characters. The work machine seems to have decided that "File Explorer, not any particular directory" and "Change how I download updates" (in spite of it being a corporate-managed box where I probably can't push that button without asking IT to remote in and do so) are what I want, even when I feed it the whole directory name.
Using Windows 10+ is infuriating with all the UI latency.
Once Windows 8.0 came out, everything went to shit. It takes several seconds to open anything now.
I recently fired up a Windows 7 VM to interface with some old hardware, and was shocked that everything within the VM opened instantly, exactly how I remembered it.
P.S. The lag is still present on my m1 mac in macOS compared with Windows 7 + SSD. It's a lot better on my Manjaro Linux desktop with KDE, but not quite as instantaneous as Windows 7 was.
Experimental behavior manipulation, without even telling the subject they are part of a manipulation experiment? You would be chased out of the room and your reputation destroyed! Utterly unacceptable. But in webdev universe this is somehow seen as a totally normal practice.
Instead we get apple, google, microsoft, and gnome/qt all doing their incompatible thing.
So until they do, expect the beatings to continue. /shrugs
Notepad2 is my all-time favorite though. It supports key features like line numbers and directionless search, but is much closer to stock than Notepad++. [0]
Then head to the Newswaffle link and input https://bbc.com or just scroll down the page to head to the converted site.
Since the aggregate GHz and RAM on offer are more than 25x the minimum spec for Windows 10.
Win10 min spec is 1GHz w/ 2GB of RAM - my machine is more than one hundred times faster, yet, everything TFA says is true.
Sure, you had splash screens, but the sheer fact that you could open a spreadsheet and make some calculations (often with automatic calculation disabled, pressing F9 manually to recalculate) was (IMHO) a miracle in Windows 3.x times.
This is a pet peeve of mine, but developers today should (only for testing their programs) be given the lowest-powered machines available, connected to the same (shitty) internet connection that a large part of their future users actually experience, to see directly why their programs/tools/websites/whatever are laggy and slow for their customers.
NEVER FORGET Wirth's Law:
https://en.wikipedia.org/wiki/Wirth's_law
> Wirth's law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster.
My main suspicion is that OS vendors have some kind of agreement with hardware vendors under the excuse of "it's new innovation which requires more CPU power". It's a hand-in-hand system where you need new software for the new hardware AND VICE VERSA.
It is veeeery hard to prove, but it's similar to house appliances using designs and parts that wear out easily. Capitalism and growth DICTATE that it must sell, so it's impossible for a company to survive if it produces durable stuff. It's economically impossible to compete.
It's how the whole industry works (part manufacturing, etc), so it's not possible for a company to go against it. Capitalism cannot allow it.
How many applications were tested?
I can run Windows 95 applications at better than era-appropriate speed, in an x86 emulator written in javascript running on a web browser. That's at least 3 layers of virtual machine abstraction and the applications are still faster.
So if you're saying "the comparison isn't fair because modern software is too shit to hold up", then I agree, but if you're trying to tell me there is something else inherent to modern computing that makes software so many orders of magnitude slower, then I request that you show data to support that claim.
I haven't used desktop Linux recently, but my expectation would be that e.g. gnome is a little better than Windows and macOS, but still not great compared to what we had 20 years ago.
So there's just something broadly bloated about modern software. And, sure, people largely don't notice because modern hardware is so damn fast, but we're also forcing users to buy hardware that is much faster than what they should need.
GP points out that faster CPUs allow us to do things that weren't possible in the old days, like real-time video transcoding, and I agree that's great! But if you just need to check your damn email, you should be able to save a great deal of money with a machine that's laughably underpowered by "modern" standards, while still checking your email at the speed of light.
What is all this software doing? I guess we're e.g. rendering at higher resolutions than we used to, but isn't the GPU supposed to take care of that?
You just described modern web development.
What’s Microsoft’s excuse? I'm tired of their incompetence being handwaved and a several-decades-old jank non-solution being held up as acceptable. MS needs more accountability in their teams.
Give it some time for the industry to finally mature.
I like websockets for the same reason. Each message has a two byte overhead compared to TCP. Two bytes. Unfortunately messages sent by the client have a whopping four additional bytes to help protect buggy middleboxes.
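Concretely, the framing looks something like this - a hedged sketch of RFC 6455 for small text frames (a real client must use a fresh, unpredictable masking key per frame, and payloads of 126+ bytes need the extended length forms omitted here):

    // A server-to-client text frame needs only 2 header bytes; a client
    // frame must also carry a 4-byte masking key and XOR-mask the payload.
    #include <cstdint>
    #include <string>
    #include <vector>

    std::vector<uint8_t> make_text_frame(const std::string& payload,
                                         bool from_client,
                                         uint32_t mask_key = 0x12345678) {
        std::vector<uint8_t> f;
        f.push_back(0x81);                              // FIN=1, opcode=1 (text)
        uint8_t len = static_cast<uint8_t>(payload.size());  // assumes < 126
        f.push_back(from_client ? (0x80 | len) : len);  // MASK bit set by clients
        if (from_client) {
            uint8_t key[4] = { uint8_t(mask_key >> 24), uint8_t(mask_key >> 16),
                               uint8_t(mask_key >> 8),  uint8_t(mask_key) };
            f.insert(f.end(), key, key + 4);            // the four extra bytes
            for (size_t i = 0; i < payload.size(); ++i)
                f.push_back(uint8_t(payload[i]) ^ key[i % 4]);
        } else {
            f.insert(f.end(), payload.begin(), payload.end());
        }
        return f;
    }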
It's extremely fast. Super duper fast. And a quick look at the network debugging tab shows why: it loads the shop's entire catalog data (about 3 megs) upfront, and the entire application runs locally with not a single request until you buy something. Now that's efficiency.
Really. Go to their website, click on KATALOG and click some random buttons, pick a product at random, add it to your cart, remove it from your cart.
The product images are the only things that aren't pre-loaded.
> 5. Hardware accelerated GUI initiation vs 'dump everything to frame buffer in kernel GUI32 library'
Even on my fast workstation, this seems to account for most of the perceived startup latency in modern, well-written GUI applications. The Nvidia drivers seem to do this stupid signature check every time a D3D11 device is created.
According to an external wall clock the keypress events happen at seconds 1, 2, and 3. The first press triggers a window to appear (menu panels are a type of window). It takes 0.5s to instantiate and register to receive keypress events from the shell. Wall clock time is 1.5s. Nice.
The second window (menu panel) receives a keypress event at wall clock 2s, which opens a third panel. That panel, because it has more complicated drawing and page faulted (so had to fetch a page from disk swap), unfortunately took 1.2s to register for focus. A keypress was triggered at a wall clock time of 3s, but our third panel didn't register focus until wall clock 3.2s, so that keypress went to panel 2, which had focus when the event fired. (All times greatly exaggerated.)
The shell needs to add events to processes' event queues but it can't just arbitrarily add them to every process. It also can't know any individual window wants events until the process tells it so. Unlike mouse events a keypress event doesn't have coordinates so a process can't really figure out the intended target of an event.
A model that prevents preemption means you're back to Win16 cooperative multitasking: a process can't be interrupted until it gives up the CPU willingly. That, however, means background processes can't do work while a foreground process holds the CPU. And if you make just your shell and GUI apps cooperative, the responsiveness of the system will end up awful.
Thankfully we’re largely past that.
That said, I rarely see malware on most of the machines I touch. I get more calls about automatically fullscreened browser windows with scare text and a phone number to call than any actual software problems.
Defender does work well enough for any average person and I’m happy if only because the vast majority of AV software is sold in the most disgusting way. Just as bad as the malware scare tactics honestly.
1. Send in plain text with HTTP basic auth. Over HTTPS this isn't a problem, but HTTPS was expensive. This is sent on every request.
2. Use digest. This is also sent on every request, and also requires actual processing, at which point you might as well go for 4 so it looks nice.
3. Use certificates. Nobody does this on the public web. The only website I've ever used client certificates with was whatever certificate site predated Let's Encrypt, can't remember the name at the moment, and as someone who doesn't use client certificates it was a huge pain (blame that on the browsers though).
4. Use a form on the website with a session token, and you get control over the UI, including error messages and styling. Much more user-friendly. You can trivially prevent the user from (easily) sending requests with plain text passwords by only showing sensitive pages like login over HTTPS. The user can't bookmark or share a URL with a password embedded in it. You can request more information than just username and password (bank: do you want to see your checking account or savings account? Forum: go back to the previous page or to the homepage? SSO-ish (DayForce): what's the name of the org you're signing into?)
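For the curious, option 1 is literally just a base64-encoded header - encoded, not encrypted, which is why sending it over plain HTTP on every request was a problem. A small sketch with placeholder credentials:

    // Builds the HTTP Basic Authorization header for "user:pass".
    #include <cstdio>
    #include <string>

    std::string base64(const std::string& in) {
        static const char tbl[] =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
        std::string out;
        for (size_t i = 0; i < in.size(); i += 3) {
            unsigned v = (unsigned char)in[i] << 16;
            if (i + 1 < in.size()) v |= (unsigned char)in[i + 1] << 8;
            if (i + 2 < in.size()) v |= (unsigned char)in[i + 2];
            out += tbl[(v >> 18) & 63];
            out += tbl[(v >> 12) & 63];
            out += (i + 1 < in.size()) ? tbl[(v >> 6) & 63] : '=';
            out += (i + 2 < in.size()) ? tbl[v & 63] : '=';
        }
        return out;
    }

    int main() {
        // "user:pass" are placeholder credentials, not anything real.
        std::printf("Authorization: Basic %s\n", base64("user:pass").c_str());
        return 0;
    }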
Not everyone wants to stay logged in, and not everyone uses a single browser; I occasionally use the wrong browser profile for something because I cbf loading up the correct one; in these cases I usually load the website in a private browsing tab to avoid container/addon settings interfering. When I can't log in easily, I get quite annoyed.
What is worse than that: not being able to predict if inputs will be buffered or dropped during unresponsiveness. I kind of look like an idiot when I keep clicking/typing away on someone else’s computer while things are frozen, thinking “oh, it’ll catch up in a bit”, and then 5 seconds later I have to work harder to fix the chaos: some keystrokes at the beginning made it through, then only every other one, then the next couple hundred got dropped, but the next 100 came through fine, and interspersed everywhere there are bizarre runs of duplicated keys, as if I had held the letter aaaaaaaaaaaaaa down continuously.
Everyone’s mix of hardware, OS, text editor, text editor plugins, etc makes this behavior highly variable, and hard to guess if it makes sense to keep typing or just wait out the frequent 1-5 second lockups.
And I suppose the margin is too narrow to contain it?
(Care to share?)
1GB programs are rarely instant but that's usually just the price for very complex functionality if it's too interconnected to load parts of it on demand.
At one point back in school a friend said to me "hey, I can't figure out how to install and boot JVM on Virtual Box. I need to use it for homework in another class. Help me?"
I wish I had been able to explain it as succinctly as you. Instead I sat there laughing in the guy's face for a good minute, eventually realizing from his expression that he was being serious, which only made me laugh even harder.
Seeing as they were posting the backlash on Reddit, I'm guessing a lot of people downloaded the app to log in and Reddit said "Big Success!" when they checked the stats.
In such an OS the APIs would allow you to atomically transfer focus as part of other operations, for example, starting a new program or opening a new window could simply transfer focus atomically to the pending new program/window such that the OS buffers keystrokes until the recipient is ready to receive them. Also, taking focus would require you to advertise what keys you're willing to receive, allowing a focus cascade such that there's never any situation in which keystrokes get delivered to a UI that isn't able to do anything with them. At the top level of the shell there'd be a kind of global command line or palette that is the default receiver of all other unhandled keystrokes. Because focus transfer is always deterministic under this scheme, people can learn where keystrokes will go without timing playing a part.
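A toy sketch of the buffering idea (purely hypothetical API; no real windowing system exposes these names):

    // Keystrokes arriving before the new window is ready are buffered,
    // then replayed in order the moment focus is transferred atomically.
    #include <cstdio>
    #include <functional>
    #include <queue>

    using KeyHandler = std::function<void(char)>;

    class Shell {
        std::queue<char> buffered_;  // keys typed before a target exists
        KeyHandler focus_;           // current focus target, if any

    public:
        void key_press(char c) {
            if (focus_) focus_(c);
            else buffered_.push(c);  // never dropped, never misdelivered
        }

        // Opening a window and taking focus is one atomic step: as soon as
        // the handler is installed, buffered keys are replayed into it.
        void open_window_with_focus(KeyHandler handler) {
            focus_ = std::move(handler);
            while (!buffered_.empty()) {
                focus_(buffered_.front());
                buffered_.pop();
            }
        }
    };

    int main() {
        Shell shell;
        shell.key_press('l');  // user types before the window exists
        shell.key_press('s');
        shell.open_window_with_focus([](char c) { std::printf("got '%c'\n", c); });
        return 0;
    }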
The GDPR's notion of informed consent really needs to be applied pervasively to all kinds of consumer contracts. If it's hidden in walls of text that the average user doesn't read it shouldn't count as consent.
This isn't an argument for not trying though.
This is not an inherent limitation of CPUs but a part of Windows' exclusive fullscreen concept. Just another thing that was simply accepted as the way things are instead of being improved (until exclusive fullscreen went out of style).
So, you've failed to meet the requirements from the start.
I've also said, many times now, that you can use browser tech without an entire browser and the answer doesn't need to be electron.
In my use case, impact on inserts was not noticed. I did notice higher disk space usage, but it absolutely was worth it. Spending $200 on a larger disk was absolutely worth saving literally days on report generation.
Again, not that I'm advocating for Electron specifically, and haven't been. I've specifically mentioned Tauri and others as alternatives that use the system's browser engine, which you have repeatedly ignored.
I'm quite impressed by both .NET and OpenJDK on some metrics, but they're often not resource efficient, which is something I do value.
One example of an application that works as I would expect others to is MuPDF: it can open 20MB+ PDFs in a tenth of a second on a 10+ year old laptop.
By the way, does anyone know why Debian launches LibreOffice so much quicker than Ubuntu, Fedora, or Arch Linux (or any other distro I've tested)? In Debian it's 1-2 seconds; in the others, 5-10 seconds. It could be the included extensions or how they are configured, but I'm honestly interested.
What does this mean? What specifically do you think is being prevented?
The only other significant advantage is they are easier to jail/isolate for use with the likes of appImage, Snap and Flatpak/Flathub.
How is a program with hundreds of megabytes of dependencies easier than a single small statically compiled binary?
Again, not that I'm advocating for Electron specifically,
This thread was about people using electron even though users hate it.
I am interested in your fast loading techniques in general. I also am considering making a bunch of personal / pro websites where I'll use a static generator. Just looking for some inspiration and ideas to steal I suppose.
Since it's kinda fresh pursuit for me, I am still looking to gather some links and do proper research. I wasn't looking to deanonymize you, my apologies.
As far as my inspiration, I use Craigslist and google as my inspiration. I try to get a sleek and simple look like google pages, but maintain the “old school” functionality and layout ideas of Craigslist .
As far as actual development is concerned, I use Oracle ARM servers that are grossly overpowered for a web host, Cloudflare nameservers or CDN, and I keep as much of the work as possible server-side, with as little JavaScript as I can. An example is a simple blogging system I made. The entire system is a MariaDB table with "title, date, image url, and content" as the data bits, and everything works through one of two pages: a backend using PHP sessions that exposes all of the functions via GET requests, and a front end that serves all of its content on a page with POST requests. There is no JavaScript involved on either page, which means less is transmitted over the internet, less is done on the client computer, and there are fewer outside calls. This does make it "less responsive", but does a blog really need image zoom on hover and stuff like that?
I have found the best way to develop for speed and simplicity is to curb the enthusiasm of the client from “looks as good as possible” to “simple, cheap, fast, and robust, while still looking better than average”
The final suggestion I have is to develop with security AND accessibility in mind first. If you put off adding ARIA labels to all of your stuff, it is much harder to go back later and 1) determine what each link does, and 2) write an ARIA label for it, than it is to just include them in the first place. Always follow proper form for mitigating risks like SQL injection and XSS, and do as much as possible on the server before you resort to JS.
If you are looking for a couple of sites that I didn’t build, but get the point of what I am trying to do across, check out
- Smashingmagazine.com
- Hacker News (doesn't look the best, but it follows the logic set forth)
- Openai.com (this one surprised me, because if you remove a lot of the slightly more interactive elements it is fast as hell)
If you have any specific questions, ask away and I’ll do my best
Figma contributes by enabling UI designers to easily author interfaces which look allegedly beautiful but are complex to build, test and maintain.
And the resources burned on building such aesthetically pleasant piles of barely usable software could find better use in making it simpler, faster, and more focused on users' actual functional and non-functional requirements (much of which lives server-side), instead of sugaring their eyes by throwing tons of code at their clients.
And it seems to be the startup process that differs: putting them all on a RAM disk does not alleviate the issue, and restarting the app cuts the time roughly in half, but equally for each distro.
My guess, as I said at first, is which default libraries are loaded, and possibly how they are configured. I do however find it strange that this hasn't been mentioned elsewhere, as I've been struck by this difference for years whenever I happen to load a pure Debian install (not what I usually use).
> There was a time when a virtual function call was a lot of overhead
Even having a VMT is overhead.
Sometimes the COM interface is implemented as an actual interface, where the implementing class derives from another class plus the interface. (In C++ the interface is just another class used via multiple inheritance, but other languages have interfaces as a designed-in feature.) Then the class even needs two VMTs.
Multiple VMTs have even more overhead, and with multiple VMTs it's no longer just a method call. Inside the member functions, "this" always points at the subobject with the first VMT. But when a method is called through a pointer to a later VMT, the caller's pointer points at that VMT's subobject instead. So the compiler generates a small wrapper function (a "non-virtual thunk") that adjusts "this" and then calls the actual function.
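A small illustration of that layout; compilers typically emit exactly this kind of adjustor/non-virtual thunk for overrides reached through the second base:

    // A class deriving from a base plus a COM-style interface carries two
    // vtable pointers; calls through the interface pointer go via a thunk
    // that adjusts "this" back to the start of the object.
    #include <cstdio>

    struct IStream_like {                 // stand-in for a COM interface
        virtual int Read() = 0;
    };

    struct Base {
        virtual void Tick() {}
        int base_state = 42;
    };

    struct Impl : Base, IStream_like {    // two vtable pointers in the object
        int Read() override { return base_state; }
    };

    int main() {
        Impl obj;
        IStream_like* itf = &obj;         // pointer adjusted past the Base subobject
        std::printf("Impl at %p, interface at %p\n",
                    static_cast<void*>(&obj), static_cast<void*>(itf));
        // Dispatches through the second VMT; a compiler-generated thunk
        // restores "this" to &obj before running Impl::Read.
        std::printf("Read() = %d\n", itf->Read());
        return 0;
    }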