
285 points by ajhit406 | 4 comments
bluehatbrit:
This is probably a really stupid question, but how would one handle schema migrations with this kind of setup? My understanding is that it's aimed at having a database per tenant (or even more granular than that). Is there a sane way of handling schema migrations, or is the expectation that these databases are short-lived, so you support multiple versions of the db (DO) until it's deleted?

In my head, this would be a fun way to build a bookmark service with a DO per user. But as soon as you want to add a new field to an existing table, you meet a pretty tricky problem of getting that change to each individual DO. Perhaps that example is too long lived though, and this is designed for more ephemeral usage.

If anyone has any experience with this, I'd be really interested to know what you're doing.

simonw:
You'd need to roll your own migrations.

I have a version of that for SQLite written in Python, but I'm not sure if you could run it in Durable Objects - maybe via WASM and Pyodide? Otherwise you'd have to port it to JavaScript.

https://github.com/simonw/sqlite-migrate
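
The core of it is simple enough to hand-roll, though. Here's a rough TypeScript sketch against the Durable Objects SQLite API - the _migrations table name and the example migrations are invented for illustration, so treat it as a sketch rather than as how sqlite-migrate actually works:

    // Minimal hand-rolled migration runner for a SQLite-backed
    // Durable Object. Each migration runs at most once per database;
    // the _migrations table records which ones have been applied.
    interface Migration {
      id: number;
      sql: string;
    }

    // Illustrative migrations for the bookmark-service example upthread.
    const MIGRATIONS: Migration[] = [
      { id: 1, sql: "CREATE TABLE bookmarks (id INTEGER PRIMARY KEY, url TEXT NOT NULL)" },
      { id: 2, sql: "ALTER TABLE bookmarks ADD COLUMN title TEXT" },
    ];

    // `sql` is the object's SqlStorage handle (ctx.storage.sql).
    export function runMigrations(sql: SqlStorage): void {
      sql.exec("CREATE TABLE IF NOT EXISTS _migrations (id INTEGER PRIMARY KEY)");
      const applied = new Set(
        sql.exec("SELECT id FROM _migrations").toArray().map((row) => row.id as number)
      );
      for (const migration of MIGRATIONS) {
        if (applied.has(migration.id)) continue;
        sql.exec(migration.sql);
        sql.exec("INSERT INTO _migrations (id) VALUES (?)", migration.id);
      }
    }

Since every migration is recorded once applied, calling this repeatedly is safe - which is what makes it suitable for running on every cold start.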

bluehatbrit:
Appreciate the response (and the blog post itself)! I probably worded my question poorly - I'm more wondering about executing schema migrations against a large number of DOs as part of a deployment (such as one per customer).

I suppose the answer is "it's easier to have 1 central database/DO", but it feels like this approach to data storage really shines when you can have a DO per tenant.

simonw:
A pattern where you check for and then execute any necessary migrations on initialization of a Durable Object would actually work pretty well, I think - presumably you can update the code for these things without erasing the existing database?
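
Something like this sketch, using blockConcurrencyWhile so no request ever sees a half-migrated schema (the class name and the Env bindings type are hypothetical, and runMigrations is the helper sketched above):

    import { DurableObject } from "cloudflare:workers";

    // Hypothetical per-tenant Durable Object; Env is the usual
    // generated bindings type for the Worker.
    export class TenantStore extends DurableObject<Env> {
      constructor(ctx: DurableObjectState, env: Env) {
        super(ctx, env);
        // Hold all incoming requests until the schema is current, so
        // nothing ever observes a half-migrated database.
        ctx.blockConcurrencyWhile(async () => {
          runMigrations(ctx.storage.sql);
        });
      }
    }

Because each DO migrates itself lazily the first time it wakes up after a deploy, there's no need to fan a migration job out to every tenant.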
bluehatbrit:
Ah yes, that would work pretty well! You'd have to be able to guarantee any migrations can run within the timeouts, but at a per-tenant level that should be very doable for most cases. Not sure why I didn't think of that approach - great idea.

I might have to try this out now.

simonw:
I think those SQLite databases are capped at 1GB right now, so even complex migrations (that work by creating a new temporary table, copying old data to it and then atomically renaming it) should run in well under a second.
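
A rebuild like that would just be one more entry in the migration list - roughly this, with invented column names, and assuming exec accepts multiple semicolon-separated statements in one call:

    // Illustrative rebuild migration: create the new shape, copy the
    // rows across, drop the old table, then rename. The final RENAME
    // is atomic in SQLite.
    const rebuildBookmarks: Migration = {
      id: 3,
      sql: `
        CREATE TABLE bookmarks_new (
          id INTEGER PRIMARY KEY,
          url TEXT NOT NULL,
          title TEXT,
          created_at TEXT DEFAULT CURRENT_TIMESTAMP
        );
        INSERT INTO bookmarks_new (id, url, title)
          SELECT id, url, title FROM bookmarks;
        DROP TABLE bookmarks;
        ALTER TABLE bookmarks_new RENAME TO bookmarks;
      `,
    };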