Nobody I've worked with has ever been able to quantify the ROI of elaborate fake test environments, but somebody made it an OKR, so there you go. Far be it from us to follow the actual research done on modern software... http://dora.dev
From a QA perspective, I greatly regret that the world of infrequent releases is mostly gone. A few kinds of products still hold onto the old strategy, but it is a dying art.
I see the world of services with DevOps, push-on-green, etc. strategies as a kind of fast food of software development: a way of doing things that lets you borrow from your future self by promising to improve quality eventually, while charging for that future, improved quality today.
There are products where speeding up the rollout is a bad idea. Anything that requires high reliability is in that category, and the reason is that highly reliable systems need to collect mileage before being released. In storage products, for example, it's typical to have systems run for a few months before they are "cleared for release". Of course, it's possible to continue development during this time, but it's a dangerous time for development, because at any moment the system can be sent back to the developers, who then have to incorporate their more recent changes into the patched system when the development process restarts. In other words, a lot of development effort can be wasted between the hand-off to QA and the actual release, and to amortize this waste, it's better to release less frequently. It's also better to hand the system to QA already well-tested, as this minimizes the back-and-forth between QA and development -- and that's the problem shift-left was intended to solve.
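To make the amortization point concrete, here's a toy back-of-the-envelope model in Python. The numbers (soak time, rejection probability) are invented for illustration, not taken from any real product; the point is only that the expected rework from a rejected build is roughly fixed per release, so a longer release interval spreads it over more shipped work.

    # Toy model of the amortization argument above. All numbers are
    # invented for illustration.
    def rework_overhead(release_interval_months, soak_months=3,
                        reject_probability=0.3):
        """Expected months of rework per month of shipped work."""
        # Work written while a build soaks is at risk of a rebase if
        # QA rejects that build; this expected cost is fixed per release.
        expected_rework = reject_probability * soak_months
        # A longer interval ships more work against the same fixed
        # risk, so the per-month overhead drops.
        return expected_rework / release_interval_months

    for interval in (3, 6, 12):
        print(f"release every {interval} months: "
              f"{rework_overhead(interval):.2f} months rework per shipped month")

In this toy setup, going from quarterly to yearly releases cuts the overhead by a factor of four: the fixed rework risk is simply spread over more shipped work.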
NB. Here's another, perhaps novel, thought for the "push on green" people. It was once considered a bad idea for QA to be aware of implementation details. Testing was seen as an experiment in which the QA engineers were the test subjects, which meant that exposing them to the internals of the system, or to the rationale that went into building it, would "spoil" the results. In such a world, letting QA see a half-baked system would be equivalent to exposing them to the details of the system's implementation, thus undermining their testing effort. QA were supposed to receive the technical documentation for the system and work from it, trying to use the system as documented.
Start paying "QA" more than their dev partners consistently and with better promotion opportunities and you can get better testing, but everybody seems to be making plenty of money without it.