492 points by storf45 | 2 comments
softwaredoug:
The way to deal with this is to constantly do live events and actually build organizational muscle, not run massive one-off events in an area the tech team has no experience in.
mbrumlow:
I have this argument a lot in tech.

We should always be doing (the thing we want to do)

Some examples that always get me in trouble (or at least into big heated conversations):

1. Always be building: It does not matter if the code has not changed, or there have been no PRs, or whatever; build it anyway. Something in your org or infra has likely changed. My argument is: "I would rather have a build failure on software that is already released than on software I need to release."

2. Always be releasing: As before, it does not matter if nothing changed; push out a release anyway. Stress the system and make it go through the motions. I can't tell you how many times I have seen things fail to deploy simply because nobody had attempted a deploy in a long time. (A rough sketch of a scheduled job along these lines is below.)

There are more, but I don't have time to go into them. The point is: if you did it, and will ever need to do it again in the future, then you need to keep doing it continuously.
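
As an illustration of the "always be releasing" point, here is a minimal sketch of a job that rebuilds and redeploys on a schedule even when nothing has changed, so the release path itself gets exercised. The make targets and tag format are placeholders, not any particular pipeline's commands.

    import datetime
    import subprocess

    def run(cmd):
        # Fail loudly so a scheduled run that breaks actually alerts someone.
        print("$ " + " ".join(cmd))
        subprocess.run(cmd, check=True)

    def main():
        # Rebuild, retest, and redeploy even with zero diffs since the last run.
        now = datetime.datetime.now(datetime.timezone.utc)
        tag = now.strftime("scheduled-%Y%m%d-%H%M")
        run(["make", "build"])
        run(["make", "test"])
        run(["make", "deploy", "TAG=" + tag])

    if __name__ == "__main__":
        main()

Run from cron or a CI scheduler, a job like this surfaces expired credentials, drifted dependencies, and broken infra while the last release is still known-good, rather than during an urgent deploy.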

skybrian:
Doing dry runs regularly makes sense, but whether actually shipping makes sense seems context-dependent: it depends on how much you can minimize the side effects of shipping a release.

Consider publishing a new version of a library: you'd be bumping the version number all the time, invalidating caches and causing downstream rebuilds for little reason. And if clients are lazy about updating, any two clients would be unlikely to have the same version.

Or consider the case when shipping results in a software update: millions of customer client boxes wasting bandwidth downloading new releases and restarting for no reason.

Even for a web app, you are probably invalidating caches, resulting in slow page loads.

With enough work, you could probably minimize these side effects, so that releasing a new version that doesn't actually change anything is a non-event. But if you don't invalidate the caches, you're not really doing a full rebuild.

So it seems like there's a tension between doing more end-to-end testing and performance? Implementing a bunch of cache levels and then not using them seems counterproductive.
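
One rough sketch of the "non-event release" idea above, assuming you can keep the previous artifact's digest around: only bump the version and purge caches when the content actually changed, and treat an identical build as a successful no-op. The file names here are illustrative, not any particular tool's convention.

    import hashlib
    import pathlib

    def digest(path):
        return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

    def should_publish(artifact, state_file="last_release.sha256"):
        new = digest(artifact)
        state = pathlib.Path(state_file)
        old = state.read_text().strip() if state.exists() else ""
        if new == old:
            # Identical bits: the build path still got exercised, but skip the
            # version bump, cache purge, and client update churn.
            return False
        state.write_text(new)
        return True

This keeps the "always be building" discipline while avoiding most of the downstream cost, though as noted above it also means the cache-invalidation path itself goes untested on no-op runs.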

lxgr:
It's very hard to do a representative dry run when the most likely potential points of failure are highly load-dependent.

You can try to predict everything that'll happen in production, but if you have nothing to extrapolate from, e.g. because this is your very first large live event, the chances of getting it right are almost zero.

And you can't easily import that knowledge either, because your system might have very different points of failure than the ones external experts might be used to.

leptons:
They could have done a dry run. They could have spun up a million virtual machines somewhere, and tested their video delivery for 30 minutes. Even my small team spins up 10,000 EC2 instances on the regular. Netflix has the money to do much more. I'm sure there are a dozen ways they could have stress-tested this beforehand. It's not like someone sprang this on them last week and they had to scramble to put together a system to do it.
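
For the flavor of stress test being described, a minimal sketch of the per-worker client might look like the following: each machine hammers a test stream endpoint with many concurrent clients and reports its error count, and something else aggregates results across the fleet. The URL and the numbers are placeholders, not anyone's real setup.

    import concurrent.futures
    import urllib.request

    TEST_URL = "https://example.com/live/test-event/manifest.m3u8"  # placeholder
    CLIENTS = 500              # concurrent clients simulated per worker machine
    REQUESTS_PER_CLIENT = 60   # requests each simulated client makes during the test

    def client(_):
        errors = 0
        for _ in range(REQUESTS_PER_CLIENT):
            try:
                with urllib.request.urlopen(TEST_URL, timeout=10) as resp:
                    resp.read()
            except Exception:
                errors += 1
        return errors

    with concurrent.futures.ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        failed = sum(pool.map(client, range(CLIENTS)))

    print("failed requests:", failed, "of", CLIENTS * REQUESTS_PER_CLIENT)

Multiplied across thousands of worker instances, this is the crude shape of a load dry run; a realistic one would also mimic segment downloads, client retry behavior, and regional distribution.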
throwaway2037:

    > Even my small team spins up 10,000 EC2 instances on the regular.
Woah, this sounds very cool. Can you share more details?
leptons:
I manage ~3000 customized websites based on the same template code. Sometimes we make changes to the template code that could affect the customizations, and it is practically impossible to predict what might cause a problem due to the nature of the customizations. We'll take before and after screenshots of every page on every site, so it can get into the hundreds of thousands of screenshots. We'll then run a diff on the screenshots to see what changed, reviewing the screenshots with the most significant changes. Then we'll address the problems we find and deploy the fixed release.
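
The per-pair diff step could be as simple as the sketch below, which assumes same-sized PNG screenshots and scores each before/after pair by the fraction of pixels that changed, so the biggest movers bubble up for review. It uses Pillow; the actual tooling here isn't specified.

    from PIL import Image, ImageChops

    def diff_score(before_path, after_path):
        before = Image.open(before_path).convert("RGB")
        after = Image.open(after_path).convert("RGB")
        if before.size != after.size:
            # A page whose rendered size changed is already worth a human look.
            return 1.0
        diff = ImageChops.difference(before, after)
        changed = sum(1 for px in diff.getdata() if px != (0, 0, 0))
        return changed / float(diff.width * diff.height)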

When we do these large screenshot operations, the EC2 instances are running for maybe 15 or 20 minutes total. It's not exactly cheap, but losing clients because we broke their site is something we want to avoid. The sites are hosted on a 3rd party service, and we're rate-limited by IP address, so to get this done in a reasonable amount of time we need to spin up 10,000 EC2 instances to distribute the work. We have our own software to manage the EC2 instances. It's honestly pretty simple, but effective.
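
And the fan-out itself, roughly, assuming a prebaked worker AMI that boots, claims a chunk of URLs, takes its screenshots, uploads the results, and terminates; the AMI ID, instance type, and batch size below are placeholders, not the setup described above.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def launch_workers(total, batch=500):
        instance_ids = []
        while len(instance_ids) < total:
            n = min(batch, total - len(instance_ids))
            resp = ec2.run_instances(
                ImageId="ami-0123456789abcdef0",   # placeholder worker AMI
                InstanceType="t3.micro",
                MinCount=n,
                MaxCount=n,
                TagSpecifications=[{
                    "ResourceType": "instance",
                    "Tags": [{"Key": "job", "Value": "screenshot-diff"}],
                }],
            )
            instance_ids += [i["InstanceId"] for i in resp["Instances"]]
        return instance_ids

    # e.g. launch_workers(10000) for a full run, then terminate the instances
    # once the screenshot jobs report done.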