
134 points samuel246 | 1 comment
taeric ◴[] No.44459350[source]
This reminds me of the use of materialized views as both a cache strategy and as an abstraction helper.
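A minimal sketch of that pattern, assuming PostgreSQL and the psycopg2 driver (the "orders" table, the "daily_revenue" view, and the connection string are all made up for illustration):

    # Hypothetical example: a materialized view used as a database-managed cache.
    import psycopg2

    conn = psycopg2.connect("dbname=shop")  # made-up DSN
    cur = conn.cursor()

    # Precompute an expensive aggregate once and store the result like a table.
    cur.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS daily_revenue AS
        SELECT order_date, SUM(amount) AS revenue
        FROM orders
        GROUP BY order_date
    """)

    # Readers hit the precomputed rows instead of re-running the aggregate.
    cur.execute("SELECT revenue FROM daily_revenue WHERE order_date = %s",
                ("2024-01-01",))
    print(cur.fetchone())

    # Unlike a TTL cache, the view never expires on its own; it is refreshed explicitly.
    cur.execute("REFRESH MATERIALIZED VIEW daily_revenue")
    conn.commit()
    conn.close()

Callers query daily_revenue like any other relation, and the abstraction only leaks when someone has to decide how often REFRESH runs.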
replies(1): >>44460004 #
bravesoul2 ◴[] No.44460004[source]
And they, too, can slow things down. Like all caches can. Like Redis can. A cache is a leaky abstraction.

(Although a materialised view is more like an index than a cache: the view won't expire and force you to rebuild it.)

replies(1): >>44460976 #
necovek ◴[] No.44460976[source]
I believe this same use of language is what makes the article confusing: Redis is not a cache, it is a key-value store. Caching is usually implemented with key-value stores, but caching itself is not an abstraction (leaky or not).
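A minimal sketch of that distinction, assuming a local Redis server and the redis-py client (the key scheme, the 5-minute TTL, and load_user_from_db are all hypothetical):

    # Redis itself is just a key-value store; the caching policy lives in our code.
    import json
    import redis

    r = redis.Redis()  # assumes a Redis server on localhost:6379

    def load_user_from_db(user_id: int) -> dict:
        return {"id": user_id, "name": "example"}  # stand-in for the slow source of truth

    def get_user(user_id: int) -> dict:
        # Key naming, TTL, and miss handling are the cache; Redis only stores bytes.
        key = f"user:{user_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
        user = load_user_from_db(user_id)
        r.setex(key, 300, json.dumps(user))  # expire after 5 minutes
        return user

    print(get_user(42))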

In RDBMS contexts, an index really is a caching mechanism (a cache) managed by the database system (the query planner has to decide when it's best to use one index or another).
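A minimal sketch of the planner making that choice, using the stdlib sqlite3 module (table and index names are made up):

    # The index is a redundant, pre-sorted copy of the data that the planner may use.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    cur.execute("CREATE INDEX idx_users_email ON users (email)")

    # Ask the planner how it would run the query; it typically reports a SEARCH
    # using idx_users_email rather than a full table scan.
    for row in cur.execute("EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?",
                           ("a@b.c",)):
        print(row)

    conn.close()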

But as you note yourself, even in these cases where cache management is bundled with the database, having too many indexes can slow down (or even deadlock) writes as the database tries to ensure consistency between these redundant copies of the data.
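A minimal sketch of that write cost, again with stdlib sqlite3 (the row count and index definitions are arbitrary; absolute numbers will vary, the relative slowdown is the point):

    # Every extra index is another structure the database must keep consistent on write.
    import sqlite3
    import time

    def insert_rows(extra_indexes: int) -> float:
        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()
        cur.execute("CREATE TABLE t (a INTEGER, b TEXT, c REAL)")
        for i in range(extra_indexes):
            cur.execute(f"CREATE INDEX idx_{i} ON t (a, b, c)")
        start = time.perf_counter()
        cur.executemany("INSERT INTO t VALUES (?, ?, ?)",
                        ((i, str(i), i * 0.5) for i in range(100_000)))
        conn.commit()
        conn.close()
        return time.perf_counter() - start

    print("no extra indexes:  ", insert_rows(0))
    print("five extra indexes:", insert_rows(5))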

replies(1): >>44462823 #
bravesoul2 ◴[] No.44462823[source]
I thought Redis grew up as a KV cache and persistent storage came later.

In some sense, though: if it ain't L1, it's storage :)

replies(2): >>44463693 #>>44467634 #
1. ◴[] No.44463693{3}[source]