82 points lsferreira42 | 3 comments

marklubi ◴[] No.42200044[source]
This sort of makes me sad. Redis has strayed from its original goal and purpose.

I’ve been using it since it was in beta. Simple, clear, fast.

The company I'm working for now keeps building more and more functionality on Redis that doesn't belong there, and then complains about Redis scaling issues.

replies(4): >>42201722 #>>42201795 #>>42202030 #>>42202451 #
reissbaker ◴[] No.42201795[source]
What do you think doesn't belong in Redis? I've always viewed Redis as basically "generic data structures in a database", as opposed to, say, Memcached, which is a very simple in-memory-only key/value store (and has always been much faster than Redis). It's hard for me to point to specific features and say "that doesn't belong in Redis!", because Redis has generally felt (to me) like a grab bag of data structures and algorithms that are meant to be fairly low-latency but not maximally so, where your dataset has to fit in RAM (but is regularly flushed to disk so you avoid cold-start issues).
replies(5): >>42202143 #>>42202153 #>>42202379 #>>42202623 #>>42207143 #
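
To make the "generic data structures in a database" framing concrete, here is a minimal sketch using redis-py, assuming a local Redis on the default port; the key names are invented. The plain string key is roughly the Memcached-style use case, while the sorted set and list are the kind of richer structures Redis adds:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Plain key/value: roughly what Memcached also gives you.
r.set("greeting", "hello")
print(r.get("greeting"))

# Sorted set: scores kept ordered server-side.
r.zadd("leaderboard", {"alice": 120, "bob": 95})
print(r.zrange("leaderboard", 0, -1, withscores=True))

# List used as a simple FIFO queue.
r.lpush("jobs", "job-1", "job-2")
print(r.rpop("jobs"))
```
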
ChocolateGod ◴[] No.42202153[source]
If your application can't survive the Redis server being wiped without issues, you're using Redis wrong.
replies(5): >>42202525 #>>42202734 #>>42202747 #>>42202843 #>>42203450 #
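
If "using Redis wrong" means treating it as the source of truth, the fix is the usual cache-aside shape: Redis only ever holds a copy, so a flush or an outage costs latency, not data. A minimal sketch, assuming redis-py and a hypothetical load_user_from_db helper standing in for the real database:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_user_from_db(user_id):
    # Hypothetical stand-in for the real source of truth.
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = f"user:{user_id}"
    try:
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
    except redis.RedisError:
        pass  # cache unreachable: fall through to the database
    user = load_user_from_db(user_id)
    try:
        r.setex(key, 300, json.dumps(user))  # repopulate with a 5-minute TTL
    except redis.RedisError:
        pass  # a failed write only costs a future cache hit
    return user
```

An empty or unreachable Redis just means every call falls through to the database; nothing is lost except cache hits.
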
hinkley ◴[] No.42202747[source]
If your application is happy with an empty Redis, then why run Redis in the first place?

What you say is good in theory, but doesn’t hold in practice.

We use memcached instead of Redis, caching different layers in different instances so that one going down hurts but doesn't kill. Or at least it didn't when I was there. They've been trying to squeeze cluster sizes, and I guarantee that's no longer sufficient: multiple circuit breakers will open if more than one cache goes tits up.

replies(1): >>42202790 #
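
The layer-per-instance setup described above could look roughly like this with pymemcache; the hostnames and layer names are hypothetical, and the point is only that each layer fails independently:

```python
from pymemcache.client.base import Client

# Hypothetical layout: each cache layer talks to its own memcached instance,
# so losing one degrades that layer without taking out the rest.
fragment_cache = Client(("cache-fragments.internal", 11211))
query_cache = Client(("cache-queries.internal", 11211))

def cached_fragment(key, render):
    try:
        value = fragment_cache.get(key)
        if value is not None:
            return value
    except Exception:
        pass  # this layer is down; fall back to rendering from scratch
    value = render()
    try:
        fragment_cache.set(key, value, expire=300)
    except Exception:
        pass  # writes are best-effort
    return value
```
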
1. ChocolateGod ◴[] No.42202790{3}[source]
Cache and Sessions

Both run in memory to speed up an application, but you can survive both being nuked (minus potentially logging everyone out).

replies(2): >>42206755 #>>42208887 #
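
For the sessions half of that, "survivable" can look like the sketch below, assuming redis-py; the key scheme and TTL are invented for illustration. A flushed Redis just means the token lookup misses and the user signs in again:

```python
import json
import secrets
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
SESSION_TTL = 3600  # one hour, purely illustrative

def create_session(user_id):
    token = secrets.token_urlsafe(32)
    r.setex(f"session:{token}", SESSION_TTL, json.dumps({"user_id": user_id}))
    return token

def get_session(token):
    data = r.get(f"session:{token}")
    # After a flush this returns None: the user is logged out,
    # nothing is corrupted, and they can simply sign in again.
    return json.loads(data) if data is not None else None
```
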
2. hinkley ◴[] No.42206755[source]
No. Cache protects your other services from peak traffic, which often leads to sizing those services smaller to reap efficiency gains; when the cache disappears, autoscaling can't necessarily keep up with the load that suddenly hits them.

Remember how I mentioned circuit breakers?

The only time we had trouble with memcached was when we set the max memory a little too high and it got restarted for running out of memory, which of course likes to happen during high traffic.

Not fixing those would have resulted in a metastable situation.
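
For reference, the circuit breakers mentioned here are the usual pattern of skipping a dependency after repeated failures and probing it again after a cooldown. A toy sketch with invented names and thresholds, meant to wrap whatever the cache call is:

```python
import time

class CircuitBreaker:
    """Illustrative only: open after N consecutive failures, probe after a cooldown."""

    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()  # open: skip the dependency entirely
            self.opened_at = None  # cooldown elapsed: let one call probe it
        try:
            result = fn()
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # (re)open the breaker
            return fallback()
```

Wrapping the cache lookup as breaker.call(lambda: cache.get(key), lambda: None) keeps a dead cache from stalling every request on connection timeouts.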

3. marklubi ◴[] No.42208887[source]
Pub/Sub is a huge use case for me.
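
The Pub/Sub case fits the same "survivable" framing: messages are fire-and-forget and nothing is persisted, so a restart loses nothing durable. A minimal redis-py sketch, assuming a local Redis and an invented channel name:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Subscriber side.
p = r.pubsub()
p.subscribe("events")

# Publisher side (normally a different process).
r.publish("events", "user-42 signed up")

# Poll for messages; the first one delivered is the subscribe confirmation.
for _ in range(5):
    message = p.get_message(timeout=1.0)
    if message and message["type"] == "message":
        print(message["channel"], message["data"])
```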