
172 points | yatrios | 3 comments
0xbadcafebee
For those not aware, Shift Left[1] is (at this point) an old term that was coined for a specific use case, but now refers to a general concept. The concept is that, if you do needed things earlier in a product cycle, it will end up reducing your expense and time in the long run, even if it seems like it's taking longer for you to "get somewhere" earlier on. I think this[2] article is a good no-nonsense explainer for "Why Shift Left?".

[1] https://en.wikipedia.org/wiki/Shift-left_testing [2] https://www.dynatrace.com/news/blog/what-is-shift-left-and-w...

coryrc
There's no evidence that most of these activities actually save money with modern ways of delivering software (or even with ancient ways of delivering software; I looked back, and the IBM study showing increasing costs for finding bugs later in the pipeline was actually based on made-up data!)
coryrc
To be more specific: let's say I can write an e2e test against an actual pre-prod environment, or I can invest significant development and ongoing maintenance in stub responses so that the test can run pre-submit against a partial system. How much is "shifting left" worth versus investing in speeding up the deployment pipeline, fast flag rollouts, and monitoring?
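To make the trade-off concrete, here's a rough sketch (the names CheckoutService and InventoryClient are invented for illustration, not from any real codebase): the stubbed version runs pre-submit on a laptop with no environment at all, while the e2e flavour of the same check would need a deployed pre-prod stack.

    # Hypothetical example: a checkout flow that calls a downstream inventory service.
    from unittest import mock
    import unittest

    class InventoryClient:
        """In production this would make a network call to the inventory service."""
        def reserve(self, sku: str, qty: int) -> bool:
            raise NotImplementedError("real client talks to the network")

    class CheckoutService:
        def __init__(self, inventory: InventoryClient):
            self.inventory = inventory

        def place_order(self, sku: str, qty: int) -> str:
            return "CONFIRMED" if self.inventory.reserve(sku, qty) else "BACKORDERED"

    class CheckoutPreSubmitTest(unittest.TestCase):
        """Shift-left style: stub the dependency so the test runs before submit.
        The ongoing cost is writing and maintaining the stubbed behaviour."""
        def test_backorders_when_inventory_unavailable(self):
            inventory = mock.create_autospec(InventoryClient, instance=True)
            inventory.reserve.return_value = False
            svc = CheckoutService(inventory)
            self.assertEqual(svc.place_order("sku-1", 3), "BACKORDERED")

    if __name__ == "__main__":
        unittest.main()

The question is whether writing and maintaining stubs like that beats spending the same effort on a faster pipeline, flags, and monitoring.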

Nobody I've worked with can ever quantify the ROI of these elaborate test environments, but somebody made an OKR, so there you go. Far be it from us to follow actual research done on modern software... http://dora.dev

1. cogman10
In fact, in my experience, these elaborate test environments and procedures cripple products.

I'm firmly of the opinion that if a test can't be run completely locally, then it shouldn't be run. These test environments can be super fragile. They often rely on a symphony of teams keeping everything in a good state all the time. But what happens more often than not is that one team somewhere deploys a broken version of their software to the test environment (because of course they do) in order to run their fleet of e2e tests. That invariably blows up the rest of the org depending on that broken software, and heaven help you if the person who deployed it did so at 5pm and is now gone on vacation.

This rippling failure mode happens because it's easier to write e2e tests which depend on a functional environment than it is to write and maintain mock services and mock data. Yet the mock services and data are precisely what you need to ensure someone doesn't screw up the test environment in the first place.
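To be clear about what I mean by mock services and mock data, here's a bare-bones sketch (the service name and payload are invented): an in-process fake of another team's HTTP service that the test talks to instead of the shared environment.

    # Illustrative only: a tiny fake of another team's HTTP service, kept next to
    # the tests, so the test does not depend on a shared environment being healthy.
    import json
    import threading
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # "Mock data" that has to be kept in sync with the real service's contract.
    CANNED_USERS = {"42": {"id": "42", "name": "Ada"}}

    class FakeUserService(BaseHTTPRequestHandler):
        def do_GET(self):
            user = CANNED_USERS.get(self.path.rsplit("/", 1)[-1])
            body = json.dumps(user or {}).encode()
            self.send_response(200 if user else 404)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # keep test output quiet
            pass

    def test_lookup_against_fake():
        server = HTTPServer(("127.0.0.1", 0), FakeUserService)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        try:
            url = f"http://127.0.0.1:{server.server_port}/users/42"
            with urllib.request.urlopen(url) as resp:
                assert json.load(resp)["name"] == "Ada"
        finally:
            server.shutdown()

    if __name__ == "__main__":
        test_lookup_against_fake()
        print("ok")

It's more upfront work than pointing the test at the shared environment, which is exactly why people skip it.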

2. coryrc
There are many reasons you want to be able to turn up your whole stack quickly; disaster recovery is just one of them. And if you can turn up your environment quickly, then why not have multiple staging environments? You start with the most recent version of your own service alongside everyone else's prod versions, then try other combinations from there.
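Roughly what I have in mind, as a purely illustrative sketch (service names and version numbers are invented): enumerate candidate staging environments starting from "my newest build plus everyone else's prod" and fan out from there.

    # Sketch: derive candidate staging environments from current prod versions.
    from itertools import product

    PROD_VERSIONS = {"checkout": "1.8.2", "inventory": "3.1.0", "payments": "2.4.7"}
    MY_SERVICE, MY_CANDIDATE = "checkout", "1.9.0-rc1"

    def staging_environments(other_candidates):
        """Yield environment specs: first the baseline (my candidate plus everyone
        else's prod versions), then other version combinations worth standing up."""
        baseline = dict(PROD_VERSIONS, **{MY_SERVICE: MY_CANDIDATE})
        yield baseline
        services = sorted(other_candidates)
        for combo in product(*(other_candidates[s] for s in services)):
            env = dict(baseline)
            env.update(zip(services, combo))
            if env != baseline:
                yield env

    if __name__ == "__main__":
        for env in staging_environments({"inventory": ["3.1.0", "3.2.0-rc2"]}):
            print(env)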

Obviously this is for large-scale systems and not small teams.

3. jeltz
You are not wrong but I have had many experiences where mock services resulted in totally broken systems since they were incorrectly mocked. In complex systems it is very hard to accurately mock interactions.
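As a toy example (names and payload invented): the hand-written mock returns a number where the real service returns a string, so the test passes against the mock while the integrated system falls over.

    # Toy illustration only: an incorrect mock lets a broken interaction pass.
    # Suppose the real payments API returns {"balance": "10.00"} (a string),
    # but the hand-written mock returns a float.
    from unittest import mock

    def remaining_credit(payments_client, account_id: str) -> float:
        data = payments_client.get_account(account_id)
        return data["balance"] - 5.0  # fine against the mock, TypeError against the real service

    def test_passes_against_the_wrong_mock():
        payments = mock.Mock()
        payments.get_account.return_value = {"balance": 10.0}  # drifted from the real contract
        assert remaining_credit(payments, "acct-1") == 5.0

    if __name__ == "__main__":
        test_passes_against_the_wrong_mock()
        print("passes, but only because the mock is wrong")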

Personally, I think the real issue is not the testing strategy but the system itself. Many organizations make systems overly complex. A well-structured monolith with a few supporting services is usually easy to test, while microservice/SOA hell is not.