159 points by jbredeche | 4 comments

1. CBLT No.45484252
Git worktrees are global mutable state; every container on your laptop ends up contending for the same git object database. This has a couple of rough edges, but you can work around them.

I prefer instead to make shallow clones for my LXC containers; my main repo can then just pull from those. This works exactly as you'd expect, without the weird worktree issues. The container here actually provides a security boundary. With a worktree, you have to mount the main repo's .git directory into the container, and a malicious process could easily install a git hook there to escape.
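Roughly what that looks like, with made-up paths (adjust for wherever your container's filesystem lives on the host):

    # the agent works in its own shallow clone; the main repo's .git is
    # never mounted into the container, so a hook installed by the agent
    # can't reach it
    git clone --depth 1 file:///home/me/project \
        /var/lib/lxc/agent1/rootfs/home/agent/project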

2. chrisweekly No.45531310
good point
3. threecheese No.45533758
Cool. Operationally, are you using some host-resident, non-shallow repo as your point of centralization for the containers, or a central network-hosted repo (like GitHub)?

If the former, how are you getting the shallow clones into the container/mount before you start the containerized agent? And when the agent is done, are you then adding its updated shallow clone as a remote of that “central” local repository and fetching/merging?

If the latter, I guess you are just shallow-cloning into each container from the network remote and then pushing completed branches back up that way.

4. CBLT No.45542557
The former. I clone from file:// URIs.

I just use the host-side file path into my LXC container's filesystem. If you're using Docker you can just bind-mount a directory instead. I only need that path twice: once for the clone, and once to add a git remote in the main repo. After that I just use plain git against that remote for everything.
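Spelled out with an illustrative rootfs path (agent1 and feature-x are placeholder names):

    # 1) seed the container with a shallow clone of the main repo
    CONTAINER_REPO=/var/lib/lxc/agent1/rootfs/home/agent/project
    git clone --depth 1 file:///home/me/project "$CONTAINER_REPO"

    # 2) register the container's clone as a remote of the main repo
    cd /home/me/project
    git remote add agent1 "$CONTAINER_REPO"

    # from here on it's plain git against that remote
    git fetch agent1
    git log main..agent1/feature-x   # review what the agent did
    git merge agent1/feature-x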

I probably don't have the perfect workflow here, especially if you're spinning Docker containers up and down constantly. I'm basically performing a Torvalds role play, with lieutenant AI agents asking me to pull their trees.