    134 points samuel246 | 11 comments
    ckdot2 ◴[] No.44458190[source]
    "I think now caching is probably best understood as a tool for making software simpler" - that's cute. Caching might be beneficial for many cases, but if it doesn't do one thing then this is simplifying software. There's that famous quote "There are only two hard things in Computer Science: cache invalidation and naming things.", and, sure, it's a bit ironical, but there's some truth in there.
    replies(11): >>44458265 #>>44458365 #>>44458502 #>>44459091 #>>44459123 #>>44459372 #>>44459490 #>>44459654 #>>44459905 #>>44460039 #>>44460321 #
    1. Traubenfuchs ◴[] No.44459372[source]
    I never understood this meme.

    We use caching a lot; anything that gets cached can only be written by one service. The writing services emit cache invalidation messages via SNS, which cache users must listen to via SQS in order to clear or update their caches.
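
    Roughly, the pattern looks like the sketch below (Python/boto3; the topic and queue names, the db handle, and the single-key message format are made up for illustration, and the SNS->SQS envelope handling is simplified):

        import json
        import boto3

        sns = boto3.client("sns")
        sqs = boto3.client("sqs")

        TOPIC_ARN = "arn:aws:sns:eu-west-1:123456789012:cache-invalidation"    # hypothetical
        QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/my-svc"  # hypothetical

        local_cache = {}

        def write_and_invalidate(db, key, value):
            # Only the owning service writes; it then announces the change to everyone.
            db.put(key, value)  # stand-in for whatever the real data store is
            sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({"invalidate": key}))

        def drain_invalidations():
            # Every cache user polls its own SQS subscription and evicts stale keys.
            resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                       MaxNumberOfMessages=10, WaitTimeSeconds=20)
            for msg in resp.get("Messages", []):
                body = json.loads(msg["Body"])
                # With the default SNS->SQS envelope the payload sits under "Message".
                payload = json.loads(body["Message"]) if "Message" in body else body
                local_cache.pop(payload["invalidate"], None)
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])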

    Alternatively, we cache stuff with just a TTL when immediate cache invalidation is not important.

    Where's the struggle?

    replies(8): >>44459400 #>>44459529 #>>44459632 #>>44459774 #>>44461198 #>>44463192 #>>44464161 #>>44465957 #
    2. porridgeraisin ◴[] No.44459400[source]
    Here's one: everybody invalidating and refreshing their cache at the same time can cause a thundering herd problem.
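
    One common mitigation (minimal sketch below; single process, in-memory, names are illustrative) is to add jitter to expiry times and let only one caller per key do the refresh while the others wait:

        import random
        import threading
        import time

        _cache = {}            # key -> (value, expires_at)
        _refresh_locks = {}    # key -> Lock, so only one caller recomputes a given key
        _locks_guard = threading.Lock()

        def get(key, loader, ttl=60):
            entry = _cache.get(key)
            if entry and entry[1] > time.time():
                return entry[0]
            with _locks_guard:
                lock = _refresh_locks.setdefault(key, threading.Lock())
            with lock:  # other callers for the same key wait instead of stampeding
                entry = _cache.get(key)
                if entry and entry[1] > time.time():
                    return entry[0]  # someone else refreshed it while we waited
                value = loader(key)  # hits the origin once per expiry, not once per caller
                jitter = random.uniform(0, ttl * 0.1)  # spread out future expiries
                _cache[key] = (value, time.time() + ttl + jitter)
                return value
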
    3. hmottestad ◴[] No.44459529[source]
    Does SQS guarantee delivery to all clients? If it does then that’s doing a lot of heavy lifting for you.

    If it doesn’t guarantee delivery, then I believe you will at some point have a client that reads a cached value thinking it’s still valid because the invalidation message got lost in the network.

    replies(1): >>44459789 #
    4. williamdclt ◴[] No.44459632[source]
    You don’t support read-your-own-writes, and your cache data might be stale for an arbitrarily long time. These relaxed consistency constraints make caching a lot easier. If that’s acceptable for your use cases then you’re in a great place! If not… well, at scale you often need to find a way to make it acceptable anyway
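
    For contrast, a minimal write-through sketch (illustrative only, single process; db is a stand-in) is roughly what it takes for the writing service to at least read its own writes from its local cache:

        _cache = {}

        def write(db, key, value):
            db.put(key, value)   # persist first
            _cache[key] = value  # then update the local cache in the same call,
                                 # so a follow-up read here sees the new value

        def read(db, key):
            if key in _cache:
                return _cache[key]
            value = db.get(key)
            _cache[key] = value
            return value

    Other replicas still won't see the write until their invalidation message arrives, which is exactly the staleness gap being described.
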
    5. pton_xd ◴[] No.44459774[source]
    > Where's the struggle?

    If there are no real consequences for reading stale data, and your writes are infrequent enough, then indeed you're lucky and have a relatively simple problem.

    6. maccard ◴[] No.44459789[source]
    Eventually. The problem is that delivering that message only eventually means clients will assume the cached data is still the same when it's not.
    7. motorest ◴[] No.44461198[source]
    > I never understood this meme.

    If you don't understand how and why and when eventual consistency is a problem, you will never understand why cache invalidation is hard.

    By the sound of your example, you only handle scenarios where naive approaches to cache invalidation serve your needs, and you don't even have to deal with problems caused by spikes to origin servers. That's perfectly fine.

    Others do. They understand the meme. You can too if you invest a few minutes reading up on the topic.

    8. graealex ◴[] No.44463192[source]
    That's because relying on a TTL simplifies the concept of caching and makes invalidation trivial, but also inflexible.

    It's used in DNS, which was already given as an example here. There is no way to be sure clients see an updated value before the TTL expires. As a result, you have to use very conservative TTLs, which is very inefficient.
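
    The trade-off shows up even in a minimal TTL cache sketch (illustrative, single process, names made up): after the origin changes, readers can keep getting the old value for up to a full TTL, so the only lever is shrinking the TTL, which sends more traffic back to the origin.

        import time

        class TTLCache:
            def __init__(self, ttl_seconds):
                self.ttl = ttl_seconds
                self._store = {}  # key -> (value, fetched_at)

            def get(self, key, fetch_from_origin):
                entry = self._store.get(key)
                if entry and time.time() - entry[1] < self.ttl:
                    return entry[0]  # may be up to `ttl` seconds out of date
                value = fetch_from_origin(key)  # conservative TTL = more of these calls
                self._store[key] = (value, time.time())
                return value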

    replies(1): >>44467115 #
    9. Cthulhu_ ◴[] No.44464161[source]
    > Where's the struggle?

    > anything that gets cached can only be written by one service each

    How do you guarantee it's only written by one service each? Sounds like locking across network boundaries, which is not easy.

    > The writing services emit cache invalidation messages via SNS that cache users must listen to via SQS

    SNS and SQS are both nontrivial services (at least you don't have to build / maintain them, I suppose) that require training to use effectively and to avoid the various footguns.

    I think you're underestimating the complexity in your own solution, and you're probably lucky that some of the harder problems have already been solved for you.

    10. tengbretson ◴[] No.44465957[source]
    I've never really understood it either. In my experience, in order for a cache to be a possible solution to a given problem at all, you must either:

    1. Be content with/resilient to the possibility of stale data.

    2. Gatekeep all reads and writes (for some subset of the key space) through a single thread.

    That's basically it.
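
    Option 2 is essentially an actor: a minimal sketch (illustrative, in-process; a real version would need shutdown handling and error propagation) funnels every read and write for the cache through one owning thread:

        import queue
        import threading

        class SingleWriterCache:
            # All access goes through one thread, so readers never race a writer.

            def __init__(self):
                self._requests = queue.Queue()
                self._store = {}
                threading.Thread(target=self._run, daemon=True).start()

            def _run(self):
                while True:
                    op, key, value, reply = self._requests.get()
                    if op == "get":
                        reply.put(self._store.get(key))
                    else:  # "set"
                        self._store[key] = value
                        reply.put(None)

            def get(self, key):
                reply = queue.Queue()
                self._requests.put(("get", key, None, reply))
                return reply.get()

            def set(self, key, value):
                reply = queue.Queue()
                self._requests.put(("set", key, value, reply))
                reply.get()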

    11. ahoka ◴[] No.44467115[source]
    You can’t be sure even after the TTL to be fair.