The fact remains that proprietary software seems to be a license to print money, and with the ability to print money comes the ability to entrench said software, for example through heavy investment in further development.
This is an uphill battle for open source software, but it seems that open source software tends to keep improving in the long run.
It seems like proprietary software has a big advantage when there's still lots of room for improvement, so that investment in further development pays off.
However, in some cases there's little room for real improvement, and the thing becomes a "solved problem" for the most part. At this point, the proprietary companies start enshittifying the software (look at Photoshop, now a cloud-based program) to try to extract more money from users and to keep them on the upgrade treadmill. Part of this probably comes from the way corporate politics work: when you're a manager with a team in place, you need something to keep that team busy, to justify your team's continued employment to upper management, so when you run out of useful improvements to make, you invent less-than-useful "improvements" (e.g., ugly new UIs).
This is where FOSS seems to be able to do well: it doesn't need to waste resources on useless improvements (we'll ignore GNOME for a moment as an exception), and can just focus on delivering the necessary functionality. Once that's done, it can just go into maintenance mode and people can move on to other projects. Until then, it can continue to improve and attract users to switch from the enshittifying proprietary stuff.
FOSS only does well when it isn't the main source of income for contributors.
GNOME I'll heartily agree on. But I've never liked GNOME myself (v1 seemed OK, though) and have always criticized that project. I've always been a KDE fan, though admittedly they've had some similar problems and have handled some major transitions rather poorly (esp. 4.0). Honestly, desktop environments are one place where FOSS hasn't done that well, frequently reinventing the wheel for questionable reasons or adopting questionable new design philosophies. At least there's a fair amount of competition in the space, so if one project rubs you the wrong way, you can try another; the sheer number of competitors also shows just how much disagreement there is in the community on this particular matter.
I really don't know enough about the audio stack changes; I can only guess that they found big problems with older solutions there which required major architectural changes, esp. for certain high-performance applications.
A proliferation of different UIs is to be expected whenever an OS makes it possible to run different UIs at all. The reason is that everybody has a different workflow, different aesthetic preferences, and different ways of doing things (e.g., leaning more toward keyboard, mouse, touchpad, or touchscreen). The best thing I remember in this space was the dozen or so X11 window managers of the 90s. The worst is the single UI on Windows and Macs. There are some ways to customize those, but not nearly as heavily as is possible with Linux.
Wayland seems to restrict what's possible to customize, but I might be wrong because I have no direct experience. I'm still on X11 because of an old NVIDIA card, and frankly I could stay on X11 forever. I know I'll have to switch sooner or later because of software compatibility; I'd hate to have to buy a new laptop just to be able to run Wayland. The current one is still good enough for the work I do for my customers.
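
For what it's worth, if you ever want to check which one a given session is actually running, the session environment variables are a quick tell. Here's a minimal C sketch under the usual conventions (WAYLAND_DISPLAY, DISPLAY, and XDG_SESSION_TYPE are the standard variables; note that XWayland sets DISPLAY even inside a Wayland session, so the Wayland check has to come first):

    #include <stdio.h>
    #include <stdlib.h>

    /* Guess the display server of the current session from
     * well-known environment variables: Wayland compositors set
     * WAYLAND_DISPLAY, X11 sessions set DISPLAY, and login
     * managers usually set XDG_SESSION_TYPE to "x11" or "wayland". */
    int main(void) {
        const char *type = getenv("XDG_SESSION_TYPE");
        if (type != NULL)
            printf("XDG_SESSION_TYPE: %s\n", type);

        /* Check WAYLAND_DISPLAY before DISPLAY: under XWayland,
         * DISPLAY is set even inside a Wayland session. */
        const char *wl = getenv("WAYLAND_DISPLAY");
        const char *x  = getenv("DISPLAY");
        if (wl != NULL)
            printf("Wayland session (WAYLAND_DISPLAY=%s)\n", wl);
        else if (x != NULL)
            printf("X11 session (DISPLAY=%s)\n", x);
        else
            printf("No graphical session detected.\n");
        return 0;
    }
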
I don't know much about Wayland's architecture, but I think graphics driver handling changed with Wayland too.