I’ve been using it since it was in beta. Simple, clear, fast.
The company I’m working for now keeps trying to build more and more functionality on top of Redis that doesn’t belong there, and then complains about Redis scaling issues.
Sure, there's persistence, but it has always seemed like an afterthought. It's also unavailable in most hosted Redis services, or very expensive when it is available.
There are also HA and clustering, which make data loss less likely, but that might not be good enough.
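For reference, persistence is opt-in and configured in redis.conf; these directives are real, the values are just illustrative:

    # RDB: snapshot to disk if at least 1 key changed in the last 900s
    save 900 1
    # AOF: log every write and replay the log on restart
    appendonly yes
    appendfsync everysec

AOF with appendfsync everysec bounds the loss window to roughly a second, which is also about what the hosted offerings charge a premium for.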
For the people wondering who would ever use Redis this way, check out Sidekiq! https://sidekiq.org/ "Ephemeral" jobs are a big trade-off that many Rails teams aren't really aware of until it's too late. The Sidekiq docs didn't mention this last time I checked, so I can't really blame people when they go for the "standard"/"best" job system and are then surprised when hosting it gets super expensive.
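To make the trade-off concrete: a Sidekiq-style queue is essentially a Redis list of serialized jobs, so if Redis loses its data, every pending job is gone. A minimal sketch of that pattern in Python with redis-py (the queue name and job shape here are illustrative, not Sidekiq's actual schema):

    import json
    import redis

    r = redis.Redis()  # assumes a local Redis; adjust host/port as needed

    def enqueue(queue, job_class, args):
        # The whole "queue" is just a Redis list of JSON payloads.
        r.lpush(f"queue:{queue}", json.dumps({"class": job_class, "args": args}))

    def work_one(queue):
        # Block until a job arrives, then run it. If Redis is wiped
        # before this pop happens, the job simply no longer exists.
        _key, payload = r.brpop(f"queue:{queue}")
        job = json.loads(payload)
        print("running", job["class"], job["args"])

With RDB/AOF off, a restart empties that list; that's what "ephemeral" means here.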
What you say is good in theory, but doesn’t hold in practice.
We use memcached instead of Redis. We cache different layers in different instances, so one going down hurts but doesn’t kill. Or at least it didn’t when I was there. They’ve been trying to squeeze cluster sizes, and I guarantee that’s no longer sufficient and that multiple open circuit breakers happen if more than one cache goes tits up.
Both run in memory and speed up an application, but you can survive both being nuked (minus potentially logging everyone out).
It's nice if the stuff stays there, because my application will be faster. If it goes down, I need a few seconds to re-populate it and we're back.
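That re-population is just cache-aside; a sketch in Python with pymemcache (load_profile_from_db is a hypothetical stand-in for whatever the source of truth is):

    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))  # assumes a local memcached

    def get_profile(user_id):
        key = f"profile:{user_id}"
        value = cache.get(key)
        if value is None:
            # Miss, or the cache was nuked: rebuild from the source of
            # truth and re-populate. Slow once, fast afterwards.
            value = load_profile_from_db(user_id)  # hypothetical DB call
            cache.set(key, value, expire=300)
        return value

The cache is an optimization, not the system of record, which is the whole point being made above.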
If you're not redistributing, then you're using it wrong. Only once redistribution has successfully occurred (i.e. you can reboot the redis process and recover) is the goal of redis fulfilled.
Remember how I mentioned circuit breakers?
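For anyone unfamiliar: the circuit breaker here means that after N consecutive cache failures you stop calling the cache for a cool-off period and serve from the fallback, instead of piling timeouts onto a dying instance. A toy sketch (thresholds are made up):

    import time

    class CircuitBreaker:
        def __init__(self, max_failures=5, reset_after=30.0):
            self.max_failures = max_failures
            self.reset_after = reset_after
            self.failures = 0
            self.opened_at = None

        def call(self, fn, fallback):
            # While open, skip the cache entirely until the cool-off passes.
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_after:
                    return fallback()
                self.opened_at = None  # half-open: give the cache another try
                self.failures = 0
            try:
                result = fn()
                self.failures = 0
                return result
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()
                return fallback()

"Multiple open circuit breakers" above is exactly the case where several cache instances have tripped this at once and everything falls through to the database.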
The only time we had trouble with memcached was when we set the max memory a little too high and it restarted due to lack of memory, which of course likes to happen during high traffic.
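For context, memcached's item-memory cap is its -m flag (in megabytes); the lesson was to set it with headroom below what the box actually has, since memcached's own overhead sits on top of that cap (the numbers here are illustrative):

    # Cap item memory at 1 GB on a box with ~1.5 GB free,
    # rather than setting -m right at the machine's limit.
    memcached -m 1024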
Not fixing those would have resulted in a metastable situation.