I’ve been using it since it was in beta. Simple, clear, fast.
The company I’m working for now keeps trying to add more and more functionality using Redis, that doesn’t belong in Redis, and then complains about Redis scaling issues.
That often happens when the engineers who pushed the tool get promoted a few times and build their careers on it; that's where I've seen this kind of thing being pushed down from. But I think it's important to remember that in most cases they are still engineers.
This was available for a long time as an extension as part of Redis Stack, but most hosted Redis providers don't make extensions available (I'm assuming due to nuances in Redis's not-quite-open licensing).
If cloud providers which include Redis are now going to include this, it opens up a lot of potential for my use case.
This doesn't sound like a Redis issue; you're just not using the right tool for the job.
Doing everything is a recipe for bloat. In a database, in a distributed cache, in a programming language, in anything.
I think it wouldn't be unfair to compare it to Golang, which in my opinion has a quite unbloated stdlib that lets you do almost anything without external libraries!
I just gave valkey-container its 100th star https://github.com/valkey-io/valkey-container
There are a few things about this discussion that I find interesting, related to complexity and use cases outside the original scope.
1. You can still download Redis and type "make", and it builds without any dependencies whatsoever, just like in the past. That's it.
2. Then you run it and use just the subset of Redis that you like. The additional features are not imposed on the user, nor do they impact the rest of the user experience. This is, actually, a lot like how it was when I maintained the project: I added Pub/Sub, Lua scripting, geo indexing, streams, all things that, at first, people felt were out of scope, and yet many turned out to be among the best features. Now it is perfectly clear that Pub/Sub belonged in Redis very well, for instance.
3. This release has improvements to the foundations, in terms of latency, for example. This means that even if you just use your small subset, you can benefit from the continued developments. Sometimes systems take the bad path of becoming less efficient over time.
So I understand the sentiment, but I still see Redis remaining quite lean, at least in version 8, which I just downloaded and am inspecting right now.
Of course, if what you need is a traditional DB, then go with a traditional DB.
But Redis offers those data structures and other features that few competitors have (or that they offer in a quirkier way).
Sure, there's persistence, but it always seemed like an afterthought. It's also unavailable in most hosted Redis services, or very expensive when it is available.
There's also HA and clustering, which make data loss less likely, but that might not be good enough.
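For reference, a rough sketch of what turning persistence on looks like when you do control the server (this assumes the redis-py client and a self-hosted instance; hosted services typically don't expose these settings):

    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Append-only file: log every write, fsync roughly once per second.
    r.config_set("appendonly", "yes")
    r.config_set("appendfsync", "everysec")

    # RDB snapshots: also dump to disk if >= 1000 keys changed within 60s.
    r.config_set("save", "60 1000")

    # Write the changed settings back to redis.conf so they survive a restart.
    r.config_rewrite()

Even then, AOF trades some write throughput for durability and RDB can lose whatever changed since the last snapshot, so neither turns Redis into a primary store by itself.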
For the people wondering who would ever use Redis this way, check out Sidekiq! https://sidekiq.org/ "Ephemeral" jobs are a big trade-off that many Rails teams aren't really aware of until it's too late. The Sidekiq docs don't mention this (last time I checked), so I can't really blame people when they go for the "standard"/"best" job system and are surprised when it gets super expensive to host.
What you say is good in theory, but doesn’t hold in practice.
We use memcached instead of Redis, and cache different layers in different instances, so one going down hurts but doesn't kill. Or at least it didn't when I was there. They've been trying to squeeze cluster sizes, and I guarantee that's no longer sufficient and that multiple open circuit breakers happen if more than one cache goes tits up.
Both running in-memory speed up an application, but you can survive both being nuked (minus potentially logging everyone out).
It was so slow and terrible.
It's nice if the stuff stays there, because my application will be faster. If it goes down I need a few seconds to re-populate it and we're back.
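That re-populate step is just plain cache-aside; a minimal sketch of the idea (Python with redis-py; load_from_db is a made-up stand-in for whatever the source of truth is):

    import json
    import redis

    r = redis.Redis()

    def get_profile(user_id: int) -> dict:
        """Cache-aside read: try Redis first, fall back to the real store."""
        key = f"profile:{user_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
        profile = load_from_db(user_id)          # hypothetical source-of-truth lookup
        r.setex(key, 300, json.dumps(profile))   # repopulate with a 5-minute TTL
        return profile

If the cache is wiped, the first requests are slow while entries fill back in, and then you're back to normal.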
Redis Enterprise has started to lean into being able to do this too.
If you're not redistributing, then you're using it wrong. Only once redistribution has successfully occurred (i.e. you can reboot the Redis process and recover) is the goal of Redis fulfilled.
Also, Redis TimeSeries offers the ability to downsample to some defined period, which is really handy (and AFAIK isn't really provided by other time-series databases), as well as to set a retention policy.
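Roughly how that looks with the TS.* commands (a sketch via redis-py's generic execute_command; the key names and periods are made up):

    import redis

    r = redis.Redis()

    # Raw series keeps 24 hours of samples (retention is in milliseconds).
    r.execute_command("TS.CREATE", "temp:raw", "RETENTION", 86400000)

    # Downsampled series; the destination key must exist before the rule.
    r.execute_command("TS.CREATE", "temp:hourly", "RETENTION", 0)  # 0 = keep forever
    r.execute_command("TS.CREATERULE", "temp:raw", "temp:hourly",
                      "AGGREGATION", "avg", 3600000)  # 1-hour average buckets

    # Writes go to the raw series; the hourly rollup is maintained automatically.
    r.execute_command("TS.ADD", "temp:raw", "*", 21.5)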
Remember how I mentioned circuit breakers?
The only time we had trouble with memcached was when we set the max memory a little too high and it restarted due to lack of memory. Which of course likes to happen during high traffic.
Not fixing those would have resulted in a metastable situation.
https://cloud.google.com/blog/products/databases/announcing-... https://upcloud.com/blog/now-supporting-valkey https://aiven.io/blog/introducing-aiven-for-valkey https://www.instaclustr.com/blog/valkey-now-available/ https://elest.io/open-source/valkey
Lots of Lua scripting and calculations are being done on Redis that have nothing to do with the data that's local to Redis. It's infuriating.
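For contrast, the case where Lua does belong in Redis is when the script only touches keys that live on that server and the atomicity is the whole point; a rough sketch with redis-py (the key name is made up):

    import redis

    r = redis.Redis()

    # Atomically keep the maximum value ever observed for a key. This is
    # reasonable server-side Lua: it reads and writes only data local to
    # Redis, and running it there is what makes read-compare-write atomic.
    KEEP_MAX = """
    local current = tonumber(redis.call('GET', KEYS[1]))
    if current == nil or tonumber(ARGV[1]) > current then
        redis.call('SET', KEYS[1], ARGV[1])
        return 1
    end
    return 0
    """

    r.eval(KEEP_MAX, 1, "max:observed", 42)

Using EVAL as a general-purpose compute engine over data that isn't even in Redis is the part that doesn't scale.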
Here are the docs: https://www.dragonflydb.io/docs/command-reference/hashes/hex...