Years ago, before the Internet, software was expected to be rather buggy, and bugs took a long time to research and fix. Releasing software was expensive: you had to put it on physical media and ship it. QA was a critical role, because it was far less expensive to test the hell out of the software and fix it than it was to ship new code. The idea that computers could be trusted to perform tasks well was easily shattered when things went wrong, and the added risk of losing customer trust went a long way toward driving investment in QA.
Fast forward to today, and it's far cheaper to ship buggy code to prove an idea works than it is to spend even a few minutes writing test cases and doing a modicum of QA.
Ultimately, it's not just a combination of the 3 things you've mentioned, though they're all contributing factors; the real problem is that any level of QA before an idea is proven is seen as a waste of time and money. As someone who started in tech support & QA 30 years ago, it's really tough to see.