
240 points yusufaytas | 2 comments
eknkc No.41895577
I tend to use postgresql for distributed locking. As in, even if the job is not db related, I start a transaction and obtain an advisory lock, which stays held until the transaction ends. Either the app commits or rolls back, or the lock is dropped because of a crash or connection loss.
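A minimal sketch of that pattern (the lock key 42 is an arbitrary example):

```sql
BEGIN;

-- Transaction-level advisory lock: blocks until acquired, and is
-- released automatically on COMMIT, ROLLBACK, or connection loss.
SELECT pg_advisory_xact_lock(42);

-- ... do the job while holding the lock ...

COMMIT;  -- lock released here
```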

Felt pretty safe about it so far, but I just realised I never check whether the db connection is still ok. If it's a db-related job and I need to touch the db, fine: some query will fail on the dead connection and my job will fail anyway. Otherwise I might have already lost the lock without being aware of it.

Without fencing tokens, atomic ops and such, I guess one needs a two-phase commit on everything for absolute correctness?
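For reference, the fencing-token idea can be sketched at the storage layer. This is a hypothetical example (the `resources` table, `last_token` column, and token value 42 are all made up): each write carries the token issued with the lock, and the store atomically rejects writes from stale holders.

```sql
-- Assume our lock came with fencing token 42.
UPDATE resources
SET    data       = 'new value',
       last_token = 42           -- record the token we hold
WHERE  id = 1
  AND  last_token < 42;          -- reject if a newer holder already wrote
-- If this updates 0 rows, our lock was stale: abort the job.
```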

replies(2): >>41895763 #>>41899374 #
candiddevmike No.41895763
One gotcha with these locks is that they are connection-specific AFAIK, and in most client libraries you're typically using a connection pool. So you need a dedicated connection for locks, and you have to make sure you use that same connection when doing periodic lock checks.
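The connection affinity is easy to see with two psql sessions (the lock key 123 is arbitrary):

```sql
-- Session A:
SELECT pg_advisory_lock(123);      -- acquired, tied to this connection

-- Session B (a different pooled connection):
SELECT pg_try_advisory_lock(123);  -- returns false: A's session holds it

-- Session B can inspect the holder:
SELECT pid, granted FROM pg_locks WHERE locktype = 'advisory';

-- Only session A, on that same connection, can release it:
SELECT pg_advisory_unlock(123);
```

If the pool hands your "periodic lock check" query to a different connection, the check says nothing about the lock you actually hold.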
replies(2): >>41899383 #>>41901248 #
1. Quekid5 No.41899383
Why would locks be connection-specific? ... considering that only one operation can be in flight at a time on a single connection. (Usually, at least.)
replies(1): >>41899585 #
2. joatmon-snoo No.41899585
Different DBs implement locks differently.

Postgres allows obtaining advisory locks at either the session _or_ transaction level. If it's session-level, that is effectively a connection-level lock.
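The two scopes side by side (lock key 1 is arbitrary):

```sql
-- Session level: survives COMMIT; released only by an explicit
-- unlock on the same connection, or when the connection closes.
SELECT pg_advisory_lock(1);
SELECT pg_advisory_unlock(1);

-- Transaction level: no explicit unlock function exists;
-- released automatically at COMMIT or ROLLBACK.
BEGIN;
SELECT pg_advisory_xact_lock(1);
COMMIT;
```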

https://www.postgresql.org/docs/current/explicit-locking.htm...