379 points by Sirupsen | 1 comment
eknkc No.40921379
Is there a good general-purpose solution where I can store a large read-only database in S3 or something and do lookups directly on it?

DuckDB can open Parquet files over HTTP and query them, but I found it triggers a lot of small requests, reading from a bunch of places in the files. I mean a lot.

I mostly need key/value lookups and could potentially store each key in a separate object in S3, but with a couple hundred million objects it would be a lot more manageable to have a single file and maybe a cacheable index.
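Roughly what I'm doing today, as a sketch (made-up bucket, table and column names, using DuckDB's httpfs extension from Python; credentials setup omitted):

    import duckdb

    con = duckdb.connect()
    con.execute("INSTALL httpfs;")
    con.execute("LOAD httpfs;")
    con.execute("SET s3_region = 'us-east-1';")  # plus credentials, omitted here

    # Point lookup against a Parquet file in S3. DuckDB fetches the footer
    # metadata, then range-reads the row groups whose min/max stats may
    # contain the key -- which is where all the small GET requests come from.
    rows = con.execute(
        "SELECT value FROM read_parquet('s3://my-bucket/kv.parquet') WHERE key = ?",
        ["some-key"],
    ).fetchall()
    print(rows)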

replies(5): >>40922137 >>40922166 >>40922842 >>40923712 >>40927099
jiggawatts No.40922137
> triggers a lot of small requests, reading from a bunch of places in the files. I mean a lot.

That’s… the whole point. That’s how Parquet files are supposed to be used. They’re an improvement over CSV or JSON because clients can read small subsets of them efficiently!

For comparison, I’ve tried a few other client products that don’t use Parquet files properly and just read the whole file every time, no matter how trivial the query is.
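For illustration, a rough sketch with pyarrow (hypothetical bucket/object, not your exact setup): a well-behaved reader grabs the footer, then range-reads only the column chunks it actually needs.

    import pyarrow.parquet as pq
    import s3fs  # assumes S3 credentials are configured in the environment

    fs = s3fs.S3FileSystem()
    with fs.open("my-bucket/kv.parquet") as f:  # hypothetical object
        pf = pq.ParquetFile(f)
        # Inspecting metadata only reads the footer.
        print(pf.metadata.num_row_groups, pf.metadata.num_rows)
        # Reading one column of one row group fetches just those column
        # chunks via range requests, not the whole file.
        table = pf.read_row_group(0, columns=["value"])
        print(table.num_rows)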

replies(1): >>40924681
eknkc No.40924681
This makes sense, but the problem I had with DuckDB + Parquet is that there appears to be no metadata caching, so each and every query triggers a lot of requests.

DuckDB can query a remote DuckDB database too; in that case there does appear to be caching, which might be better.

I wonder if anyone has actually worked on a file format specifically for this use case (relatively high-latency random access) to minimize reads to as few blocks as possible.
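For reference, this is roughly how I tried the remote-DuckDB route (hypothetical URL and table name); attaching the database read-only over HTTP seems to let it reuse cached metadata across queries on the same connection:

    import duckdb

    con = duckdb.connect()
    con.execute("INSTALL httpfs;")
    con.execute("LOAD httpfs;")
    # Hypothetical remote database file and table.
    con.execute("ATTACH 'https://example.com/kv.duckdb' AS kv (READ_ONLY);")
    rows = con.execute(
        "SELECT value FROM kv.lookup WHERE key = ?", ["some-key"]
    ).fetchall()
    print(rows)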

replies(1): >>40924698
jiggawatts No.40924698
Sounds like a bug or missing feature in DuckDB more than an issue with the format