
379 points by Sirupsen | 4 comments
eknkc ◴[] No.40921379[source]
Is there a good general purpose solution where I can store a large read only database in s3 or something and do lookups directly on it?

DuckDB can open Parquet files over HTTP and query them, but I found it triggers a lot of small requests, reading from a bunch of places in the files. I mean a lot.

I mostly need key / value lookups and could potentially store each key in a separate object in S3, but for a couple hundred million objects... It would be a lot more manageable to have a single file and maybe a cacheable index.

replies(5): >>40922137 #>>40922166 #>>40922842 #>>40923712 #>>40927099 #
tionis ◴[] No.40922842[source]
You could use an SQLite database and do the lookups with HTTP range requests, using something like this: https://github.com/psanford/sqlite3vfshttp or https://github.com/phiresky/sql.js-httpvfs

Simon Willison wrote about it: https://simonwillison.net/2022/Aug/10/sqlite-http/

replies(2): >>40924060 #>>40924633 #
1. eknkc ◴[] No.40924633[source]
Yep, this is the reason I thought about doing it in the first place. I tried DuckDB, which has built-in support for range requests over HTTP.

The whole idea makes sense, but I feel like the file format should be specifically tuned for this use case. Otherwise you end up with a lot of range requests, because the format was designed for disk access. I wondered if anything was actually designed for that.
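
For context, what I'm doing is roughly this (made-up URL and column names), using DuckDB's httpfs extension; each predicate turns into HTTP range requests for the Parquet footer plus whichever row groups look relevant:

    INSTALL httpfs;
    LOAD httpfs;

    -- Point lookup against a remote Parquet file; DuckDB fetches the
    -- footer/metadata and the matching row groups via range requests
    -- instead of downloading the whole file.
    SELECT value
    FROM 'https://my-bucket.s3.amazonaws.com/lookup.parquet'
    WHERE key = 'some-key';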

replies(1): >>40925697 #
2. hobofan ◴[] No.40925697[source]
Parquet and other columnar storage formats are essentially already tuned for that.

A lot of requests in themselves shouldn't be that bad with CloudFront nowadays, since you get both low latency and, with HTTP/2, a low-overhead RPC channel.

There are some potential remedies, but each comes with significant architectural impact:

- Bigger range queries: for smallish tables, instead of doing point-based access for individual rows, retrieve bigger chunks at once and scan through them locally -> fewer requests, but likely also more wasted bandwidth (crude sketch after this list)

- Compute the specific view live with a remote DuckDB -> has the downside of introducing a DuckDB instance that you have to manage between the browser and S3

- Precompute the data you are interested in into new Parquet files -> only works if you can anticipate the query patterns well enough
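
A crude sketch of the first option (made-up URL and names): pull the whole smallish table down once, then serve the point lookups from the local copy:

    INSTALL httpfs;
    LOAD httpfs;

    -- One big read instead of many small range requests: materialize the
    -- remote Parquet file (or a partition of it) into a local table...
    CREATE TABLE kv_local AS
    SELECT * FROM 'https://my-bucket.s3.amazonaws.com/lookup.parquet';

    -- ...then do the individual key lookups locally, with no further HTTP traffic.
    SELECT value FROM kv_local WHERE key = 'some-key';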

I read in the sibling comment that your main issue seems to be re-reading of metadata. DuckDB is AFAIK able to cache the metadata, but won't do so across instances. I've seen someone with the same issue, and the problem was that they only created short-lived in-memory DuckDB instances (one every time they wanted to run a query), so each fresh DB had to retrieve the metadata again.

replies(1): >>40925766 #
3. eknkc ◴[] No.40925766[source]
Thanks for the insights. Precomputing is not really suitable for this; the thing is, I'm mostly using it as a lookup table for key / value queries. I know DuckDB is mostly meant for aggregation, but the HTTP range request support was too attractive to pass up.

I did some tests, querying "where col = 'x'". If the database was a remote native DuckDB file, it would issue a bunch of HTTP range requests, and the second identical call would not trigger any new requests. Also, querying for col = 'foo' and then col = 'foob' would yield fewer and fewer requests, as I assume it already has the necessary data on hand.

Doing it on Parquet, with a single long-running DuckDB CLI instance, I get the same requests over and over again. The difference, though, is that I'd "attach" the native DuckDB database under a schema name, whereas I'd query the Parquet file using the "select from 'http://.../x.parquet'" syntax. Maybe that's what makes it ephemeral for each query. I'll see if the attach syntax also works for Parquet.
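
The two shapes I'm comparing look roughly like this (made-up URLs and table names; behaviour may differ between DuckDB versions):

    -- Remote native DuckDB database, attached under a schema name (read-only).
    ATTACH 'https://example.com/lookup.duckdb' AS remote (READ_ONLY);
    SELECT * FROM remote.kv WHERE col = 'x';

    -- Remote Parquet file, referenced by URL in each query...
    SELECT * FROM 'https://example.com/x.parquet' WHERE col = 'x';

    -- ...or wrapped in a view, so at least the queries share one definition.
    CREATE VIEW kv AS
    SELECT * FROM read_parquet('https://example.com/x.parquet');
    SELECT * FROM kv WHERE col = 'x';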

replies(1): >>40925941 #
4. hobofan ◴[] No.40925941{3}[source]
I think both should work, but you have to set the object cache pragma IIRC: https://duckdb.org/docs/configuration/pragmas.html#object-ca...
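
Something along these lines, assuming a reasonably recent DuckDB (the setting name comes from that pragmas page and may change between versions; URL and column are made up):

    -- Enable DuckDB's object cache so Parquet metadata is kept around and
    -- reused across queries within the same long-lived session.
    SET enable_object_cache = true;  -- or the equivalent PRAGMA from the linked page

    -- With the cache on, repeating this against the same remote file should
    -- no longer re-fetch the footer/metadata every time.
    SELECT * FROM 'https://example.com/x.parquet' WHERE col = 'x';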