in this case, I think that you need to update the `vitest-pool-workers` package
Savior, that helped. This thing has been moving forward fast!
Okay, thanks @Matt Silverlock
"You can set your own instance ID" - OK that is not clear from the docs and examples. I think all the examples I have seen use crypto to create a unique instance. Also, I like the idea of the workflow triggering the actual order. Didn't think of that. Thanks!
What we tend to do is use a shared UUID for various items and prefix it with what the consumer is, e.g. `workflow-${uuid}`, so you can debug more easily too if/when things go south
Where can we make this clearer? Documented here: https://developers.cloudflare.com/workflows/build/workers-api/#create
An ID is automatically generated, but a user-provided ID can be specified (up to 64 characters). This can be useful when mapping Workflows to users, merchants or other identifiers in your system.
We kept the tutorial simpler (because otherwise we get feedback that we’re introducing too much!)
100% this. Same as you would with Temporal or other async workflow systems.
Yeah, I see it there now. I'm not sure how you can make this clearer. It's easy to miss that. And also, I generally read examples first and learn from that, and don't go back to the docs unless I need something new or have a bug. And all the examples either don't create the instance and let it be done automatically or they use crypto. So that's how I've been doing it. Maybe have an example with an instance ID that is user-provided? Not really sure.
Yeah - maybe we can link out from the guide so it’s mentioned but doesn’t pollute the tutorial / add too much up front
Or a code comment (or both)
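For reference, a minimal sketch of the user-provided-ID version — the `ORDER_WORKFLOW` binding name and the `orderId` query param are made up for illustration, not from the docs or the thread:

```ts
export interface Env {
  ORDER_WORKFLOW: Workflow;
}

export default {
  async fetch(req: Request, env: Env): Promise<Response> {
    // orderId would normally come from your own system; a query param is
    // used here purely for illustration.
    const orderId = new URL(req.url).searchParams.get("orderId") ?? crypto.randomUUID();

    const instance = await env.ORDER_WORKFLOW.create({
      // User-provided instance ID (max 64 chars), prefixed with the consumer
      // so it's easy to find when debugging — the `workflow-${uuid}` pattern above.
      id: `order-${orderId}`,
      params: { orderId },
    });

    return Response.json({ id: instance.id, status: await instance.status() });
  },
} satisfies ExportedHandler<Env>;
```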
Both the workers were using 4.9.1. I just tried updating to the latest (4.10.0) but I still get the same error message
Is there a way to use the chrome debugger inside the workflow steps? I'm unable to hook into it from the main pool, but the worker and DOs work there. Trying to figure out why I'm not getting error logs in a dedicated class that's being called in the step.
The way I'm getting around the limit is popping messages into a queue and then batching up to 100 messages (or whatever arrives within 10 seconds) into a workflow. When we hit the rate limit, the queue redelivers them in the next batch.
Although my use case isn't time sensitive, it's handling user registrations
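Roughly, that pattern could look like the consumer below — a sketch only, with the `REGISTRATION_WORKFLOW` binding and message body shape assumed rather than taken from the thread (batch size and timeout would be set on the queue consumer in the wrangler config):

```ts
export interface Env {
  REGISTRATION_WORKFLOW: Workflow;
}

export default {
  async queue(batch: MessageBatch<{ email: string }>, env: Env): Promise<void> {
    try {
      // One Workflow instance handles the whole batch of registrations.
      await env.REGISTRATION_WORKFLOW.create({
        params: { registrations: batch.messages.map((m) => m.body) },
      });
      batch.ackAll();
    } catch (err) {
      // e.g. hitting the Workflows creation rate limit: let the queue
      // redeliver the whole batch on a later attempt.
      batch.retryAll();
    }
  },
} satisfies ExportedHandler<Env>;
```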
We can retry them, of course — but workflows are meant to be about durable execution. What I was expecting was a way to trigger multiple distinct workflows transactionally with a single request. That way, if the Cloudflare Workflows API goes down or we hit a quota/limit issue, either all workflows start or none do — instead of ending up in a half-started, inconsistent state.
By having to manage retries ourselves, we’re essentially recreating the very problem that workflows are supposed to solve, aren’t we?
How are you solving this today/elsewhere? What requires these Workflows to be tightly coupled in a way that can’t be solved by using one Workflow vs trying to spread across many?
(To be clear, this doesn’t mean we wouldn’t consider a transactional API, but it’s non-trivial across multiple unrelated Workflows, because that has a performance + complexity cost)
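Not speaking for anyone in the thread, but the "one Workflow" suggestion could look something like a parent Workflow that fans out to the others in durable steps, so the retries live in the engine rather than in your own code. All names here are invented for the sketch:

```ts
import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from "cloudflare:workers";

interface Env {
  CHILD_WORKFLOW: Workflow;
}

type Params = { orderIds: string[] };

// A parent Workflow whose steps start the child instances: if a create call
// fails (rate limit, API blip), the step is retried by durable execution
// instead of hand-rolled retry logic.
export class FanOutWorkflow extends WorkflowEntrypoint<Env, Params> {
  async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
    for (const orderId of event.payload.orderIds) {
      await step.do(`start child workflow for ${orderId}`, async () => {
        // A deterministic ID keeps re-runs predictable; a retry after a
        // successful create would still need to tolerate an "already exists" error.
        await this.env.CHILD_WORKFLOW.create({
          id: `child-${orderId}`,
          params: { orderId },
        });
      });
    }
  }
}
```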
What's the best way to use Queues to limit the concurrency of Workflows? Or would it be just polling the workflow status and using setTimeout to sleep on the consumer before ack'ing the message?
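Not authoritative, but the "poll status, then sleep before acking" idea could look roughly like this in a consumer — `MY_WORKFLOW`, the message shape, and the poll interval are all assumptions:

```ts
export interface Env {
  MY_WORKFLOW: Workflow;
}

const POLL_MS = 5_000;
const TERMINAL = new Set(["complete", "errored", "terminated"]);

export default {
  async queue(batch: MessageBatch<{ jobId: string }>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      const instance = await env.MY_WORKFLOW.create({ params: msg.body });

      // Wait for the instance to finish before acking, so in-flight Workflows
      // are roughly capped by the consumer's max_concurrency × batch size.
      // (Very long polls will eventually hit consumer duration limits, so
      // retrying the message with a delay is the more robust variant.)
      while (!TERMINAL.has((await instance.status()).status)) {
        await new Promise((resolve) => setTimeout(resolve, POLL_MS));
      }
      msg.ack();
    }
  },
} satisfies ExportedHandler<Env>;
```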