    264 points davidgomes | 17 comments
    1. justin_oaks ◴[] No.41875268[source]
    My upgrade policy for everything:

    Significant security vulnerability? Upgrade

    Feature you need? Upgrade

    All other reasons: Don't upgrade.

    Upgrading takes effort and it is risky. The benefits must be worth the risks.

    replies(5): >>41875370 #>>41875465 #>>41876163 #>>41876254 #>>41876707 #
    2. natmaka ◴[] No.41875370[source]
    Suggestion: add "End of life (no more maintenance for this version)? Upgrade"
    replies(1): >>41876193 #
    3. throwaway918299 ◴[] No.41875465[source]
    Here’s another reason to upgrade: your version is end of life and your cloud provider forced it.

    Thank you Amazon!

    replies(1): >>41876897 #
    4. hinkley ◴[] No.41876163[source]
    Once your version doesn’t receive security fixes you’re one CERT advisory away from having your whole week pre-empted by an emergency upgrade.

    I’ve been there with products that were still internal at the time. I can only imagine how much fun that is with a public product. But then I do have a pretty vivid imagination. We changed to periodic upgrades after that to avoid the obvious problem staring us in the face.

    5. Gormo ◴[] No.41876193[source]
    Why? If the implemented featureset meets your needs, and there are no unresolved bugs or security vulnerabilities relevant to your use cases, what further "maintenance" do you need?
    replies(2): >>41876247 #>>41876460 #
    6. abraham ◴[] No.41876247{3}[source]
    When a critical security patch comes out, you don't want to have to do a major version upgrade to get it.
    7. Gigachad ◴[] No.41876254[source]
    Eventually you get forced to update it when the other stuff you use starts having minimum version requirements.
    8. FearNotDaniel ◴[] No.41876460{3}[source]
    Because when the maintainers have stopped patching that version against all known security vulnerabilities, that doesn't stop the bad guys from looking for more vulnerabilities. When they find one, it will get exploited. So you either wake up to an email from Have I Been Pwned to say all your customer data has been exfiltrated [0], or (if you're lucky) you have a mad scramble to do that update before they get you.

    [0] Probably including those passwords you didn't hash, and those credit card numbers you shouldn't be storing in the first place because, what the heck, it meets your needs.

    9. occz ◴[] No.41876707[source]
    Upgrading when multiple versions behind is significantly more risky than doing it when the update is relatively fresh.

    Additionally, actions done frequently are less risky than actions done rarely, since you develop skills in performing that action as an organization - see high deployment frequency as a strategy of managing deployment risk.

    This adds up to continuous upgrading being the least risky option in aggregate.

    replies(2): >>41877368 #>>41880107 #
    10. mkesper ◴[] No.41876897[source]
    Yes, this is actually a good thing and comes with warnings beforehand.
    replies(1): >>41878309 #
    11. kortilla ◴[] No.41877368[source]
    Not if software regressions are the main concern.
    12. throwaway918299 ◴[] No.41878309{3}[source]
    I agree. It helped me completely bypass any discussion from management about “not high enough priority”. Amazon definitely did me a favour in many ways.
    13. ttfkam ◴[] No.41880107[source]
    In Postgres, upgrading from v11 to v16 is not materially different from upgrading from v14 to v16. Same tools. Same strategies.
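
    A rough sketch of that point (the paths, version numbers, and helper function below are illustrative assumptions, not a prescribed layout): the pg_upgrade --check invocation keeps the same shape whether the old cluster is v11 or v14; only the old binary and data directories change.

        # Sketch only: the same compatibility check runs regardless of how far
        # behind the source cluster is. Paths follow a Debian-style layout and
        # are placeholders.
        import subprocess

        def check_upgrade(old: int, new: int = 16) -> None:
            """Run pg_upgrade --check between two locally installed clusters."""
            subprocess.run(
                [
                    f"/usr/lib/postgresql/{new}/bin/pg_upgrade",
                    "--old-bindir", f"/usr/lib/postgresql/{old}/bin",
                    "--new-bindir", f"/usr/lib/postgresql/{new}/bin",
                    "--old-datadir", f"/var/lib/postgresql/{old}/main",
                    "--new-datadir", f"/var/lib/postgresql/{new}/main",
                    "--check",  # report incompatibilities, change no data
                ],
                check=True,
            )

        check_upgrade(11)  # v11 -> v16
        check_upgrade(14)  # v14 -> v16: same call, different source paths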
    replies(1): >>41880280 #
    14. enraged_camel ◴[] No.41880280{3}[source]
    We are planning to upgrade from 11 to 17 soon. Even thinking about it is giving me ulcers. Our infra provider said we actually need to upgrade to 13 first, and then to 17. They did not provide a reason.
    replies(2): >>41880673 #>>41895330 #
    15. Tostino ◴[] No.41880673{4}[source]
    I went through a postgres 10 > 16 upgrade recently. What made it easier was just doing a test run of the upgrade process.

    Did a restore to a staging environment, worked on my upgrade scripts until I was happy (deployed to VMs with Ansible, so manual work to write the upgrade process for me), restored again and ran the upgrade process fresh, and then tested my application, backup scripts, restores, etc. Had everything working entirely smoothly multiple times before pulling the trigger in production.

    No stress at all when we did it in prod.
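
    A rough sketch of that rehearsal loop, under stated assumptions: the paths, cluster layout, dump file, and smoke_tests.sh command are hypothetical placeholders standing in for the actual Ansible-driven process, not the commenter's real setup.

        # Sketch of the staging rehearsal: restore a production backup, dry-run
        # the upgrade, run it for real, then exercise the application.
        import subprocess

        OLD_BIN = "/usr/lib/postgresql/10/bin"
        NEW_BIN = "/usr/lib/postgresql/16/bin"
        OLD_DATA = "/var/lib/postgresql/10/main"
        NEW_DATA = "/var/lib/postgresql/16/main"

        def run(cmd: list[str]) -> None:
            """Echo and execute a command, failing loudly on any error."""
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        def rehearse() -> None:
            # 1. Restore the latest production dump into the old-version staging cluster.
            run([f"{OLD_BIN}/pg_restore", "--clean", "--if-exists",
                 "--dbname", "app", "/backups/prod_latest.dump"])

            # Assumes a freshly initdb'd, stopped v16 cluster already exists at NEW_DATA.
            upgrade = [f"{NEW_BIN}/pg_upgrade",
                       "--old-bindir", OLD_BIN, "--new-bindir", NEW_BIN,
                       "--old-datadir", OLD_DATA, "--new-datadir", NEW_DATA]

            # 2. Dry run: --check reports incompatibilities without touching data.
            run(upgrade + ["--check"])

            # 3. Stop the old cluster, then run the real upgrade on the staging copy
            #    (pg_ctlcluster is the Debian/Ubuntu wrapper; adjust to your packaging).
            run(["pg_ctlcluster", "10", "main", "stop"])
            run(upgrade)

            # 4. Exercise the application, backup scripts, and restores against the result.
            run(["./smoke_tests.sh"])

        if __name__ == "__main__":
            rehearse()

    Keeping the --check pass separate from the real upgrade makes the dry run cheap to repeat while the scripts are still being iterated on.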

    replies(1): >>41880884 #
    16. ttfkam ◴[] No.41880884{5}[source]
    Yep, that was our strategy as well: just keep iterating until the scripts ran cleanly from start to finish without errors.
    17. fillest ◴[] No.41895330{4}[source]
    A personal warning about 17.0 if you use streaming replication: the secondary replica leaks memory quite actively. 16.4 is OK.