[1] https://en.wikipedia.org/wiki/Shift-left_testing [2] https://www.dynatrace.com/news/blog/what-is-shift-left-and-w...
Other examples:
* Replacing automated tests with (quicker) type checking and running it in a git commit hook instead of CI (see the hook sketch after this list).
* Replacing slower tests with faster tests.
* Running tests before merging a PR instead of after.
* Replacing a suite of manual tests with automated tests.
etc.
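For the commit-hook example above, here is a minimal sketch of what that could look like, assuming a Python project type-checked with mypy; the hook path and tool choice are illustrative, not prescriptive:

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/pre-commit: run a fast type check before the commit
# is created, instead of waiting for CI. Assumes a Python project and mypy;
# substitute tsc, pyright, etc. for other stacks.
import subprocess
import sys


def main() -> int:
    # Only look at files actually staged for this commit.
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    py_files = [f for f in staged if f.endswith(".py")]
    if not py_files:
        return 0  # nothing to type-check

    result = subprocess.run(["mypy", *py_files])
    if result.returncode != 0:
        print("Type check failed; commit aborted.", file=sys.stderr)
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
```

Installed as an executable .git/hooks/pre-commit, the check runs in seconds on every commit, long before CI ever sees the change.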
- implementing security features earlier (DevSecOps)
- implementing tracing and metrics/analysis tools earlier, and using them to test and debug apps earlier (as opposed to laptop-based solutions)
- building the reliable production model earlier (don't start with a toy model on your laptop if you're going to end up with an RDS instance in AWS; build the big production thing first, and use it early on)
- adding synthetic end-to-end tests early on (see the sketch below)
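As a concrete illustration of the last point, here is a rough sketch of a synthetic end-to-end check; the staging URL and the /health and /orders endpoints are hypothetical placeholders, not a real API:

```python
# Hypothetical synthetic end-to-end probe: exercise one critical user flow
# against a deployed environment on a schedule, not only at release time.
# BASE_URL and the /health and /orders endpoints are placeholders.
import json
import urllib.request

BASE_URL = "https://staging.example.com"  # assumed environment, not a real URL


def check_health() -> None:
    with urllib.request.urlopen(f"{BASE_URL}/health", timeout=5) as resp:
        assert resp.status == 200, f"health endpoint returned {resp.status}"


def check_order_flow() -> None:
    # Create a throwaway order, then confirm it can be read back.
    payload = json.dumps({"sku": "TEST-SKU", "qty": 1}).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/orders",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        order = json.load(resp)
    with urllib.request.urlopen(f"{BASE_URL}/orders/{order['id']}", timeout=5) as resp:
        assert resp.status == 200, "order not readable after creation"


if __name__ == "__main__":
    check_health()
    check_order_flow()
    print("synthetic e2e checks passed")
```

Run on a schedule (cron, CI, or a probe runner), a check like this exercises the real deployed path early and continuously rather than only at release time.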
The linked article is talking about Shift Left in the context of developing semiconductors, so you can see how it can be applied to anything. Just do the needed thing earlier, in order to iterate faster, improve quality, reduce cost, and ship faster.
Nobody I've worked with can ever quantify the ROI for elaborate test environments, but somebody made an OKR, so there you go. Far be it from us to follow actual research done on modern software... http://dora.dev
I'm firmly of the opinion that if a test can't be run completely locally then it shouldn't be run. These test environments can be super fragile. They often rely on a symphony of teams ensuring everything is in a good state all the time. But what happens more often than not is that one team somewhere deploys a broken version of their software to the test environment (because of course they do) in order to run their fleet of e2e tests. That invariably ends up blowing up everything else in the org that depends on that broken software, and heaven help you if the person who deployed did it at 5pm and is gone on vacation.
This rippling failure mode happens because it's easier to write e2e tests that depend on a functional environment than it is to write and maintain mock services and mock data. Yet the mock services and data are precisely what you need to ensure someone else's screw-up in the test environment doesn't take your tests down with it (a sketch of the idea follows below).
Obviously this is for large-scale systems and not small teams.
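To make the mock-services point concrete, here is a small sketch of a locally runnable test against an in-process fake; OrderService, FakeInventoryClient, and the reserve() contract are hypothetical names invented for illustration:

```python
# Locally runnable test against an in-process fake instead of a shared
# test environment. FakeInventoryClient, OrderService and reserve() are
# hypothetical names; the point is that the downstream contract is pinned
# in a fake you own, so another team's broken deploy can't fail your tests.
from dataclasses import dataclass


class FakeInventoryClient:
    """In-memory stand-in for the real inventory service's client."""

    def __init__(self, stock: dict[str, int]):
        self._stock = stock

    def reserve(self, sku: str, qty: int) -> bool:
        if self._stock.get(sku, 0) >= qty:
            self._stock[sku] -= qty
            return True
        return False


@dataclass
class OrderService:
    inventory: FakeInventoryClient  # in production this would be the real client

    def place_order(self, sku: str, qty: int) -> str:
        return "accepted" if self.inventory.reserve(sku, qty) else "rejected"


def test_order_rejected_when_out_of_stock():
    service = OrderService(inventory=FakeInventoryClient(stock={"WIDGET": 1}))
    assert service.place_order("WIDGET", 2) == "rejected"
    assert service.place_order("WIDGET", 1) == "accepted"
```

The trade-off described above is real: someone has to keep the fake's behavior in sync with the actual service's contract, but in exchange the test runs entirely on a laptop and can't be broken by another team's deploy.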
Personally I think the real issue is not the testing strategy but the system itself. Many organizations make their systems overly complex. A well-structured monolith with a few supporting services is usually easy to test, while microservice/SOA hell is not.
I heard a story decades ago about a software team that got a new member transferred in from the IC design department. The new engineer checked in essentially zero bugs. The manager asked what the secret was, and the new engineer said “wait, we’re allowed to have bugs?”
It baffles me that anyone would continue to promulgate the Pressman numbers (which claim roughly exponential growth in cost) based on... it's not entirely clear what data, as opposed to Boehm's paper, which only claims a linear relative cost increase but is far more credible.
From a QA perspective, I greatly regret that the world of infrequent releases is mostly gone. There are a few kinds of products that still hold onto the old strategy, but it is a dying art.
I see the world of services with DevOps, push-on-green, etc. as a kind of fast food of software development: a way of doing things that lets you borrow from your future self by promising to improve quality eventually, while charging for that future improved quality today.
There are products where speeding up the rollout is a bad idea. Anything that requires high reliability is in that category, because highly reliable systems need to collect mileage before being released. For example, in storage products it's typical to have systems run for a few months before they are "cleared for release". Of course, it's possible to continue development during this time, but it's a dangerous time for development, because at any moment the system can be sent back to the developers, who would then have to incorporate their more recent changes into the patched system when they restart the development process. I.e. a lot of development effort can potentially be wasted between the system being sent out to QA and the actual release. And, to amortize this waste, it's better to release less frequently. It's also better to approach the QA handoff with a system that is already well tested, as this minimizes the back-and-forth between QA and development -- and that's the problem shift-left was intended to solve.
NB. Here's another, perhaps novel, thought for the "push on green" people. It was once considered a bad idea for QA to be aware of implementation details. Testing was seen as an experiment in which QA were the test subjects. This also meant that exposing QA to the internal details of the system, or to the rationale that went into building it, would "spoil" the results. In such a world, allowing QA to see a half-baked system would be equivalent to exposing them to the details of the system's implementation, thus undermining their testing effort. QA were supposed to receive the technical documentation for the system and work from it, trying to use the system as documented.
There's no need to write tests up front for you to shift left. All shift left means is that testing happens during development. Whether you start by writing tests and then write the actual program, or the other way around, doesn't matter.
So you make a tool that prevents errors and speeds up a process; now people use it four times as much, and they wonder why they're only seeing half as many faults instead of an order of magnitude fewer.
We are humans. We cannot eliminate errors, we can only replace them with a smaller number of different errors. Or as in your case, a larger number of them.
The classic example is driver development. No one, even today, sits down and writes e.g. a Linux driver until after first silicon has reached the software developers. Early software is all test stuff, and tends to be written to hardware specs and doesn't understand the way it'll be integrated. So inevitably your driver team can't deliver on time because they need to work around nonsense.
And it feeds back. The nonsense currently being discovered by the driver folks doesn't get back to the hardware team in time to fix it, because they're already taping out the next version. So nonsense persists in version after version.
Shift left is about trying to fix that by bringing the later stages of development on board earlier.
One nice side effect of tying the mnemonic to reading direction rather than homonyms is that it carries over across languages better (though still imperfectly).
In a waterfall, single-deliverable model it wouldn't surprise me that there is some increase in cost the later a bug is discovered, but if you're in that world you have more obvious problems to tackle.
People still use the Pressman numbers. So much for "data-driven decision making"...
Start paying "QA" more than their dev partners consistently and with better promotion opportunities and you can get better testing, but everybody seems to be making plenty of money without it.
But with chip design, they can’t iterate that fast and performance is more important, so they are doing more design and testing before the expensive part, using increasingly elaborate simulations.
Maybe everything about those concepts is just wrong? I mean, you can have people like Uncle Bob who'll tell you that "you just got it wrong". He's also always correct in this, but if so many teams "get it wrong", then maybe things like Clean Code, XP and so on simply suck?
Isn't ignoring the early steps that could save time later also known as false economy?
We used simulators for the hardware, using parts of existing hardware when possible, and hacked Microsoft software emulation to behave as our hardware would.
When the hardware came back, it was less than a day to get the driver running.
It's only a false economy if they are the correct steps. If it turns out that they are wrong, it's a very real one.
It's worth noting that left-leaning systems were widely used once too, and they had about half of the overall success rate of the right-leaning ones on those criteria.
In my anecdotal experience, YAGNI and making sure it's easy to delete everything are the only way to build lasting maintainability in software. All those fancy methods like SOLID, DRY, and XP are basically just invitations to build complexity you'll never actually need. Not that you can really say that something like XP is all wrong; nothing about it is bad. It's just that nothing about it is good without extreme moderation either.
I guess it depends on where you work. If it’s for customers or a business, then I think you should just get things out there. Running into scalability issues is good, it means you’ve made it further than 95% of all software projects.
I have always called this "front loading" and it's a concept that's been around for decades. Front loading almost always reduces development time and increases quality, but to many devs, it feels like time-wasting.
From the perspective of a software user, I greatly regret the same thing. I really think that rapid/continuous release has done more harm than good.