If I try and use cross-script calls like in the documentation... WORKER A wrangler.toml
```toml
[[workflows]]
name = "workflow-a"
binding = "WORKFLOW_A"
class_name = "WorkflowA"

[[workflows]]
name = "workflow-b"
binding = "WORKFLOW_B"
class_name = "WorkflowB"
script_name = "workflow-b"
```
WORKER B wrangler.toml
```toml
[[workflows]]
name = "workflow-b"
binding = "WORKFLOW_B"
class_name = "WorkflowB"
```
I get the following error (like other users are reporting)...
```
✘ [ERROR] Worker "workflows:workflow-b"'s binding "USER_WORKFLOW" refers to a service "core:user:workflow-b", but no such service is defined.
```
It's worth noting that USER_WORKFLOW is not a string that exists in my codebase, at all.
NOTE: Found a reference to USER_WORKFLOW in the workers-sdk codebase: https://github.com/cloudflare/workers-sdk/blob/99f802591e65a84f509a03a4071030eb0c6c11ba/packages/miniflare/src/plugins/workflows/index.ts#L77
14 Replies
Murder Chicken (OP) · 3w ago
Both workflows work just fine if invoked on their own. It's when I try to configure one to be called from the other where I run into issues.
ajgeiss0702 · 3w ago
is an option for limiting workflow concurrency on the roadmap? I would love to move a bunch of stuff over to workflows, but I don't want to overwhelm some small external apis i call, which is what I currently use queues for
kchro3 · 3w ago
Hey folks, we continue to see error logs for our workflow that just say "run". If I filter by request ID, I can see that the workflow successfully executed other steps, so I don't know where the "run" error log is coming from.
kchro3 · 3w ago
Are they supposed to be shown as errors?
Chaika · 3w ago
They're not supposed to be. The workflow team, per that message, wants to fix it, but currently they all behave the same as above ^. As said in the thread: "for now I'd just look at the workflows page to see how your instances are doing."
Khafra · 3w ago
Hi, is it possible to set when a step retries after it fails? For example, one of my workflow steps calls a third-party API where I don't know the rate limits. I want to fail the step if I'm rate limited, but then retry at a specific date (the API gives us this date if we're rate limited). I can think of some ideas like
```js
for (const param of params) {
  while (true) {
    const response = await step.do('<random name>', { /* options to not retry */ }, ...)

    if (response.status === 429) {
      await step.sleep('ratelimit expires', Number(response.headers.get('X-RateLimit-Reset')) * 1000)
    } else {
      break
    }
  }
}
```
but I'm wondering if there's anything better? Maybe a DurableObject is better in this case.
Matt Silverlock
Call step.sleep within the step when you get a rate-limiting error? With the argument as the future Date?
Khafra · 3w ago
it never even crossed my mind to have nested steps lol this changes everything
Matt Silverlock
Workflows: it's just code 😉
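A minimal sketch of the nested pattern Matt describes: handle the 429 *inside* a single step by sleeping until the reset time the API reports, then retrying. The `StepLike` shape, the function name, and the step names are assumptions for illustration, not the exact Workflows API surface; in a real Workflow, `step` comes from the `run(event, step)` entrypoint and `fetchFn` would simply be `fetch`.

```typescript
// Sketch only: a minimal assumed shape of the Workflows step object.
type StepLike = {
  do<T>(name: string, fn: () => Promise<T>): Promise<T>;
  sleepUntil(name: string, until: Date): Promise<void>;
};

// Minimal shape of a fetch-like function, so the sketch is self-contained.
type FetchLike = (url: string) => Promise<{
  status: number;
  headers: { get(name: string): string | null };
}>;

// Keep retrying inside ONE step: on 429, durably sleep until the
// reset time the API gave us, then try the call again.
async function callUntilNotRateLimited(
  step: StepLike,
  fetchFn: FetchLike,
  url: string,
): Promise<number> {
  return step.do("call third-party api", async () => {
    while (true) {
      const response = await fetchFn(url);
      if (response.status !== 429) return response.status;
      // The API tells us when the rate limit resets; park until then.
      const resetSec = Number(response.headers.get("X-RateLimit-Reset"));
      await step.sleepUntil("ratelimit expires", new Date(resetSec * 1000));
    }
  });
}
```

The upside over per-step retry options: a durable sleep parks the workflow instance until the stated reset time instead of burning blind retries.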
ajay1495 · 3w ago
I have a workflow run that appears to be stuck in "RUNNING" state. Here's what the dashboard shows for the run. Any idea on what might be up? As you can see there's both a timeout and retry for this step, both of which appear to be ignored.
Matt Silverlock
Timeout is per step... with 20160 total retries before it will terminally fail. 2 minutes is not the total time for all retries. And by "stuck": have you attempted to call .stop on the instance? What error do you get (if any) when you call terminate in the dash?
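To make the distinction concrete, here is an illustrative per-step config object. The shape mirrors Workflows' step options (`retries` with `limit`/`delay`/`backoff`, plus `timeout`); the numbers echo the dashboard screenshot under discussion and are not a recommendation, and the `delay` value is an assumption.

```typescript
// Illustrative sketch: the timeout bounds EACH attempt, not the sum.
// With a retry limit of 20160, the step can keep retrying long after
// the first 2 minutes elapse before it terminally fails.
const stepConfig = {
  retries: {
    limit: 20160,        // attempts before the step terminally fails
    delay: "1 minute",   // wait between attempts (assumed value)
    backoff: "constant", // or "linear" / "exponential"
  },
  timeout: "2 minutes",  // per-attempt bound, not total across retries
};
```

So a step showing both a timeout and retries in the dashboard is behaving as configured: each attempt is individually capped, and the instance keeps running until the retry budget is exhausted.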
ajgeiss0702 · 3w ago
wouldn't the dashboard show if there were other retries though? it looked like that one step's first try had been running for at least an hour based on that timestamp