Can we use `blockConcurrencyWhile` inside `fetch` method of a Durable Object?
Scenario:
There is a single instance of a durable object responsible only for registering users. This durable object puts user data into a KV namespace. Only this durable object writes to this KV namespace.
First, it reads KV to check whether the user already exists, then writes. To avoid a race, can I use `blockConcurrencyWhile` inside `fetch`?
I know that durable objects already have transactional storage, but I need to write the data to a KV namespace that other Workers use as read-only.
I've only seen `blockConcurrencyWhile` used in durable object initialization; that's why I ask.
Yes, like this:
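(The original snippet isn't preserved in this log. Below is a minimal standalone sketch of the idea: the Durable Object runtime pieces are stubbed in-memory so it runs on its own, and all names here — `StubState`, `UserRegistrar`, `memoryKV` — are assumptions for illustration, not the real API.)

```typescript
// Stand-in for a Workers KV binding: just the two methods used below.
type KV = {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
};

function memoryKV(): KV {
  const m = new Map<string, string>();
  return {
    async get(key) { return m.get(key) ?? null; },
    async put(key, value) { m.set(key, value); },
  };
}

// Stand-in for DurableObjectState: runs callbacks one at a time,
// approximating blockConcurrencyWhile's guarantee that no other events
// are delivered to the object while the callback is running.
class StubState {
  private tail: Promise<unknown> = Promise.resolve();
  blockConcurrencyWhile<T>(cb: () => Promise<T>): Promise<T> {
    const next = this.tail.then(cb);
    this.tail = next.catch(() => undefined); // keep the queue alive on errors
    return next;
  }
}

class UserRegistrar {
  constructor(
    private state = new StubState(), // real: DurableObjectState
    private users: KV = memoryKV(),  // real: a KV namespace binding
  ) {}

  // Read-then-write inside blockConcurrencyWhile, so two concurrent
  // registrations for the same name can't interleave between the
  // existence check and the put.
  register(name: string): Promise<boolean> {
    return this.state.blockConcurrencyWhile(async () => {
      if ((await this.users.get(name)) !== null) return false; // taken
      await this.users.put(name, JSON.stringify({ name }));
      return true;
    });
  }
}
```

In a real Worker, the DO's `fetch` handler would parse the request and run the same `blockConcurrencyWhile`-wrapped logic on `this.state`.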
Thank you for your quick answer and code snippet 🙂 . So we can have consistent writes to KV. The only downside is that it's not good for high-traffic routes, but for user registration I think it's fine.
Keep in mind KV is not strongly consistent. Even if you insert non-concurrently (with `blockConcurrencyWhile`), it's still possible that other Workers will read the old value for up to 60 seconds after the insert.
The only thing `blockConcurrencyWhile` does is ensure that two writes don't overwrite one another; it doesn't ensure reads are always up to date. And it will only prevent overwriting if you aren't reading from KV too.
AFAIK reads won't ever overwrite a write?
Like, if you read a value it might be stale, but the write will still win eventually.
No, I mean that if your operation depends on mutating the current state of a value (like addition), then a read-then-write model with KV won't be consistent, `blockConcurrencyWhile` or no.
Ah, I see. Yeah, that's true.
If you're just replacing a value, it doesn't matter; but then you wouldn't need a DO for it at all.
Yeah. The only way around that would be to store the counter in the DO's consistent key-value storage and then update KV every time you write it.
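A sketch of that counter pattern, with both the DO storage and the KV mirror stubbed as in-memory maps (all names are assumptions, not real bindings):

```typescript
// DO storage is strongly consistent and events to a single object run
// serially, so the read-modify-write below is safe; KV only mirrors
// the result for cheap global reads and is never read back here.
class CounterDO {
  private storage = new Map<string, string>();   // stand-in for state.storage
  readonly kvMirror = new Map<string, string>(); // stand-in for a KV binding

  increment(): number {
    const n = Number(this.storage.get("count") ?? "0") + 1;
    this.storage.set("count", String(n)); // authoritative copy
    // Other Workers may see this mirror up to ~60s late, but since the
    // DO never reads KV, no increment is ever lost.
    this.kvMirror.set("count", String(n));
    return n;
  }
}
```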
I see. For reads, my thinking is that Workers will first try to get the user from KV, and on a miss they'll fall back to asking this durable object.
KV is read-only for login purposes.
So yeah, that would work well as long as you don't ever update a user based on the user's last value. The reason the write-after-read won't work well is because:
- DO does a write, which might not be visible for a while
- DO does a read and then another write based on that read, but the read might have been stale
- so you just wrote a value based on an incorrect read
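That failure mode can be reproduced with a toy store whose reads lag behind its writes (purely illustrative; this is not the real KV API):

```typescript
// A store where writes land immediately but reads see a stale snapshot
// until propagate() runs, mimicking eventual consistency.
class LaggyKV {
  private latest = new Map<string, string>();
  private visible = new Map<string, string>();
  put(key: string, value: string) { this.latest.set(key, value); }
  get(key: string): string | null { return this.visible.get(key) ?? null; }
  propagate() { this.visible = new Map(this.latest); } // caches catch up
}

// Read-then-write increment: correct on consistent storage, lossy here.
function increment(kv: LaggyKV) {
  const n = Number(kv.get("count") ?? "0");
  kv.put("count", String(n + 1));
}

const kv = new LaggyKV();
increment(kv); // writes 1
increment(kv); // stale read still sees nothing, so this writes 1 again
kv.propagate();
// Two increments ran, but the first update was lost.
```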
Should be fine, as long as your DO never reads from KV, only writes
If you are always doing a write, with no read, then you should be good.
What I understand is that, after a shutdown, the DO may respawn in another location where KV is not synced yet. So for my purposes this actually won't work 😬
Yeah, that's possible but very unlikely. Could you perhaps use the key-value API DO provides?
That is consistent.
The reason DO migration is unlikely is that DOs are pinned to the datacenter that created them unless there is some sort of shutdown or disaster in that DC, in which case they migrate to a nearby one (still very close, but with a different KV cache indeed).
The best solution looks like using the DO's storage, as it is designed for this. My concern is that using a single DO instance for user authentication may make logins slow around the world, but I think that's not a big problem, since for later requests I can use session data written to KV.
Thank you all again for going into detail about the subject. Love you ❤️
I've never thought of this and it's a good idea.
My biggest concern is registering two users with the same username. This may solve that issue: I can spawn a durable object from the username, and if the object has no stored data, it means the username is available.
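A sketch of that per-username idea: derive the object id from the username, so the same name always routes to the same instance and "no stored data" means the name is free. Everything below is a stub for illustration; the real call would be `idFromName` on a Durable Object namespace binding, and real ids are opaque.

```typescript
// One object per username; the first claim wins.
class UsernameDO {
  private taken = false; // stand-in for "object has stored data"
  claim(): boolean {
    if (this.taken) return false;
    this.taken = true;
    return true;
  }
}

// Stand-in for a namespace binding: idFromName maps a name
// deterministically to an id, so every Worker reaches the same object.
class StubNamespace {
  private objects = new Map<string, UsernameDO>();
  idFromName(name: string): string { return name; }
  get(id: string): UsernameDO {
    let obj = this.objects.get(id);
    if (!obj) { obj = new UsernameDO(); this.objects.set(id, obj); }
    return obj;
  }
}
```

If usernames are case-insensitive, normalize them (e.g. lowercase) before calling `idFromName`, since distinct strings map to distinct objects.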
Thank you for the idea