Is there a way to deal with the graveyard that's created when unread items are deleted by the TTL? We had about 5 GB of unread data expire, and now we can't get to the valid data with list because it's stuck churning through the graveyard. We don't want to delete and recreate the KV store, since multiple APIs write data into it and workers read from it.
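For context, a minimal sketch of the kind of write path that produces these expiring keys: in a Worker, a per-key TTL is set with `expirationTtl` on `put`. The binding name `DATA_KV`, the key scheme, and the 7-day TTL are illustrative assumptions, not details from the thread.

```ts
export interface Env {
  DATA_KV: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const body = await request.text();
    // Each item expires after 7 days; when large volumes of keys expire
    // around the same time, they form the "graveyard" described above.
    await env.DATA_KV.put(`item:${crypto.randomUUID()}`, body, {
      expirationTtl: 60 * 60 * 24 * 7, // seconds
    });
    return new Response("stored", { status: 201 });
  },
};
```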
thomasgauvin · 7mo ago
Are you indicating that list is listing through expired items? Are you using this via the Worker binding, Wrangler, or the dashboard?
AKStriker98 (OP) · 7mo ago
@thomasgauvin, sorry for the late response. The list is listing through expired items and returning a bunch of empty key lists, each with a new cursor. I've had this problem before with a logging system and had to key the logs with inverse time labels so that new logs appear before any expired ones. And this is on both the Worker binding and the dashboard. The loop over the list never ends because of the sheer number of expired key-value pairs.
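To make the failure mode concrete, here's a hedged sketch of the cursor loop being described, plus the inverse-time-labeling workaround mentioned above. The loop shape follows the documented `list()` pagination contract (`keys`, `list_complete`, `cursor`); the function names and the key layout are illustrative assumptions.

```ts
// Sketch of the list loop that churns: each page can come back with an
// empty `keys` array but a fresh cursor, so the loop keeps paging until
// the graveyard of expired entries has been walked through.
async function listAllKeys(kv: KVNamespace, prefix: string): Promise<string[]> {
  const names: string[] = [];
  let cursor: string | undefined;
  do {
    const page = await kv.list({ prefix, limit: 1000, cursor });
    names.push(...page.keys.map((k) => k.name)); // often empty pages here
    cursor = page.list_complete ? undefined : page.cursor;
  } while (cursor);
  return names;
}

// The inverse-time-labeling workaround: list returns keys in
// lexicographic order, so subtracting the timestamp from a fixed
// maximum makes newer entries sort ahead of the expired backlog.
function inverseTimeKey(prefix: string): string {
  const MAX_MS = 9_999_999_999_999; // comfortably above any current epoch-ms value
  const inverted = (MAX_MS - Date.now()).toString().padStart(13, "0");
  return `${prefix}:${inverted}`;
}
```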
thomasgauvin · 7mo ago
Ok thanks, I'll look into this
AKStriker98 (OP) · 7mo ago
Thanks for looking into this. For our use case it's fine, since we do direct reads, but we wanted to put in a monitoring system so that if API keys expire again we get alerts on misalignment between KV and our local databases; this is where listing would be helpful. Just wanted to check whether you found anything on this, as the size of the "graveyard" makes filtering by prefix in the dashboard fairly slow, and many of the 30-item list pages contain only a couple of entries.
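For that monitoring idea, a minimal sketch of a scheduled Worker that does a bounded scan, so the graveyard can't make the check run unbounded, and flags a mismatch against an expected count. The binding `DATA_KV`, the `apikey:` prefix, the expected-count placeholder, and the `ALERT_WEBHOOK` variable are all assumptions for illustration.

```ts
export interface Env {
  DATA_KV: KVNamespace;
  ALERT_WEBHOOK: string; // assumed var/secret holding an alerting URL
}

export default {
  // Scheduled handler: compare the live KV key count to the local
  // database's expectation and alert on drift, without listing forever.
  async scheduled(_controller: ScheduledController, env: Env): Promise<void> {
    const MAX_PAGES = 50; // hard cap so expired-key churn can't stall the check
    let cursor: string | undefined;
    let liveKeys = 0;

    for (let page = 0; page < MAX_PAGES; page++) {
      const res = await env.DATA_KV.list({ prefix: "apikey:", limit: 1000, cursor });
      liveKeys += res.keys.length;
      if (res.list_complete) break;
      cursor = res.cursor;
    }

    const expected = 42; // placeholder: in practice, read from the local DB
    if (liveKeys !== expected) {
      await fetch(env.ALERT_WEBHOOK, {
        method: "POST",
        body: JSON.stringify({ liveKeys, expected, note: "KV/local DB misalignment" }),
      });
    }
  },
};
```

The page cap trades completeness for a predictable runtime: if the scan hits the cap before `list_complete`, the count is a lower bound, which is still enough to detect a sudden mass expiry.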