
Tower Defense: Cache Control

(www.jasonthorsness.com)
57 points jasonthorsness | 6 comments
1. grep_it ◴[] No.44008029[source]
> Why Not Redis? I have a single VPS, so I can get by with a simple SQLite database. If I had many instances of my API on separate servers

Just to push against this a bit. Redis can have a very low memory cost and is very easy to run (I give it 5mb). I have a small single server with a few instances of my API that lets me cache pretty much everything I need.

replies(3): >>44008331 #>>44008825 #>>44010089 #
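The low-memory setup described above could look something like this minimal `redis.conf` fragment. The 5 MB cap comes from the comment; the eviction policy is an assumption, since the commenter doesn't say which one they use:

```conf
# Cap Redis memory at 5 MB (figure from the comment above).
maxmemory 5mb

# When the cap is reached, evict least-recently-used keys across the
# whole keyspace -- a common choice for a pure cache (assumed here).
maxmemory-policy allkeys-lru
```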
2. jasonthorsness ◴[] No.44008331[source]
Huh, I never thought of running it on the same node. I guess that would better prepare for a scale-up later.
replies(1): >>44008786 #
3. johnmaguire ◴[] No.44008786[source]
You may even be able to scale horizontally with many nginx+redis nodes, if the cache does not need to be shared.
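A sketch of what one such node might look like, assuming each machine runs its own nginx in front of a local API instance that talks only to the Redis on the same box (names and ports are illustrative, not from the thread):

```conf
# Hypothetical per-node nginx config. Traffic is spread across nodes by
# DNS or an external load balancer; each node's cache is independent.
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;  # local API instance,
                                           # which uses the local Redis
    }
}
```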
4. hangonhn ◴[] No.44008825[source]
It's so obvious once someone else says it, but wow, that's really clever actually.
replies(1): >>44011016 #
5. arp242 ◴[] No.44010089[source]
When I last benchmarked Redis vs. PostgreSQL for a simple k/v cache it was about ~1ms for PostgreSQL to fetch a key, and ~0.5ms for Redis. Faster, but not really noticeably so. I haven't benchmarked SQLite, but I would be surprised if the numbers are substantially different.

Of course Redis can do other things than just a k/v cache, and at scale you just want to offload some load from your main SQL server. But for "small" use cases my conclusion was that Redis doesn't really add anything. OTOH it's also not especially difficult to run, so it's also not a big problem to use it, but by and large it seems superfluous.
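The kind of k/v-fetch micro-benchmark described above could be sketched as follows for the SQLite case, using only the standard library. The ~1ms/~0.5ms PostgreSQL/Redis numbers are the commenter's; this script doesn't reproduce them, and the table shape and key count are assumptions:

```python
import sqlite3
import time

# Build a simple key/value cache table with 10k rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache (key TEXT PRIMARY KEY, value BLOB)")
conn.executemany(
    "INSERT INTO cache VALUES (?, ?)",
    ((f"key:{i}", f"value:{i}".encode()) for i in range(10_000)),
)
conn.commit()

# Time N point lookups by primary key and report the average.
N = 10_000
start = time.perf_counter()
for i in range(N):
    row = conn.execute(
        "SELECT value FROM cache WHERE key = ?", (f"key:{i % 10_000}",)
    ).fetchone()
elapsed = time.perf_counter() - start
print(f"avg fetch: {elapsed / N * 1e6:.1f} µs")
```

An on-disk database and a client/server round trip (as with PostgreSQL or Redis) would shift the numbers, so this only illustrates the shape of the comparison, not the results.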

6. thr0waway2i ◴[] No.44011016[source]
Before "The Cloud" taught people to split everything into dedicated VMs, it was common to run multiple services on the same machine.