They don't have the full suite of GCS's capabilities (https://cloud.google.com/storage/docs/request-preconditions#...) but it's something.
I'm curious to hear if you have examples of any database using only object storage as a backend, because back when I started, I couldn't find any.
My approach on S3 would be to make sure the ETag of an object changes whenever other transactions looking at it must be blocked. That way you can use conditional reads (https://docs.aws.amazon.com/AmazonS3/latest/userguide/condit...) on COPY or GET operations to detect the change.
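Roughly, a conditional read with boto3 might look like this (the bucket/key names and the helper are made up for illustration):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def read_if_etag_unchanged(bucket, key, expected_etag):
        # GET the object only if its ETag still matches what this transaction
        # observed earlier; a 412 PreconditionFailed means someone changed it.
        try:
            resp = s3.get_object(Bucket=bucket, Key=key, IfMatch=expected_etag)
            return resp["Body"].read()
        except ClientError as e:
            if e.response["Error"]["Code"] == "PreconditionFailed":
                return None  # object changed; abort or retry the transaction
            raise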
For writes, I would PUT to a temporary staging area and then do a conditional COPY + DELETE afterward. This is certainly slower than on GCS, but I think it should work.
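A sketch of that write path, with the caveat that the HEAD-then-COPY check below is not atomic; a real compare-and-swap would need S3 to enforce the precondition on the COPY itself (names are placeholders again):

    import uuid
    import boto3

    s3 = boto3.client("s3")

    def staged_write(bucket, key, data, expected_etag):
        # Stage the new version under a unique key first.
        staging_key = f"staging/{key}/{uuid.uuid4()}"
        s3.put_object(Bucket=bucket, Key=staging_key, Body=data)

        # Check the destination still carries the ETag this transaction read.
        # Note: this check and the COPY below are separate requests, so a
        # concurrent writer could still sneak in between them.
        head = s3.head_object(Bucket=bucket, Key=key)
        if head["ETag"] != expected_etag:
            s3.delete_object(Bucket=bucket, Key=staging_key)
            return False  # conflict: someone else committed in between

        # Promote the staged object, then clean up the staging copy.
        s3.copy_object(Bucket=bucket, Key=key,
                       CopySource={"Bucket": bucket, "Key": staging_key})
        s3.delete_object(Bucket=bucket, Key=staging_key)
        return True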
Locking without modifying the object is the part that needs some optimization though.
https://docs.datomic.com/operation/architecture.html
(However they cheat with dynamo lol)
There are also some listed here:
https://davidgomes.com/separation-of-storage-and-compute-and...
And as you mention, Datomic uses DynamoDB as well (so, not a pure S3 solution). What I'm proposing is to use object storage alone for everything, pay the price in latency, but not give up on throughput, cost, or consistency. The differentiator is that this comes with strict serializability guarantees, so it is not an eventually consistent system (https://jepsen.io/consistency/models/strong-serializable).
No matter how sophisticated the caching is, if you want to retain strict serializability, writes must be confirmed by S3 and reads must be validated against S3 before returning, which puts a lower bound on latency.
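To make that concrete: even a cache hit still costs at least one round trip to validate. A toy sketch (the cache and names are made up; a real design would batch or piggyback these checks):

    import boto3

    s3 = boto3.client("s3")
    cache = {}  # key -> (etag, body); purely illustrative

    def strict_read(bucket, key):
        # The unavoidable round trip: confirm the cached ETag is still current.
        head = s3.head_object(Bucket=bucket, Key=key)
        etag = head["ETag"]
        hit = cache.get(key)
        if hit and hit[0] == etag:
            return hit[1]  # cache is still valid for this ETag
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        cache[key] = (etag, body)
        return body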
I focused a lot on throughput, since that's the dimension we can actually optimize.
Hopefully that's clear from the blog, though.
Basically an in-memory database which uses S3 as cold storage. Definitely an interesting approach, but no transactions AFAICT.