Workflows + Hono Workers question
Yes, you can add them as bindings. You can also create an endpoint to manage your workflow by reaching its instance: you can start, pause, or resume your workflow via that Hono endpoint.
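For example, a minimal Hono sketch of such endpoints (the `MY_WORKFLOW` binding name and the routes are placeholders, not from this thread; the `Workflow` type is assumed to come from your generated Worker types):

```ts
import { Hono } from 'hono';

// Placeholder binding name; the Workflow type comes from @cloudflare/workers-types
// (or `wrangler types`).
type Bindings = { MY_WORKFLOW: Workflow };

const app = new Hono<{ Bindings: Bindings }>();

// Start a new workflow instance with the request body as params
app.post('/workflows', async (c) => {
  const params = await c.req.json();
  const instance = await c.env.MY_WORKFLOW.create({ params });
  return c.json({ id: instance.id });
});

// Pause an existing instance by id
app.post('/workflows/:id/pause', async (c) => {
  const instance = await c.env.MY_WORKFLOW.get(c.req.param('id'));
  await instance.pause();
  return c.json({ ok: true });
});

// Resume a paused instance
app.post('/workflows/:id/resume', async (c) => {
  const instance = await c.env.MY_WORKFLOW.get(c.req.param('id'));
  await instance.resume();
  return c.json({ ok: true });
});

export default app;
```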
Sorry if I'm misunderstanding you, but I did gather from the docs that I can call it (to instantiate a new invocation of the workflow) via a binding in my existing Worker.
But I'm wondering if I can define the actual workflow logic (all the step.do stuff) in the same project as my Hono stuff?
Yes, create your workflow class and export it where you export your Hono app.
Just to clarify so I'm understanding you correctly, as unfortunately I couldn't find any examples in the Hono docs yet like I could for queues/crons.
This is a simplified version of my current export in my Hono worker.
Would I just add something like
async workflow(params) {}
No, you just need to export your workflow class as a named export.
Default export > your app, or an object with a fetch key if you need to add a scheduled handler or any other service.
Named export > your Workflow class (you can create this in another file and re-export it from index.ts).
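A sketch of that export shape, assuming a hypothetical MyWorkflow class living in its own file (the Workflow also has to be declared under [[workflows]] in your wrangler config, pointing at that class name):

```ts
// index.ts: sketch of the export shape (file and class names are illustrative)
import { Hono } from 'hono';
import { MyWorkflow } from './my-workflow';

const app = new Hono();
app.get('/', (c) => c.text('ok'));

// Named export: the Workflow class, re-exported so the runtime can find it
export { MyWorkflow };

// Default export: the Hono app (or an object with fetch/queue/scheduled keys
// if you also consume queues or crons)
export default app;
```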
Ok cool! I'll try that. So in essence:
write a workflow class in another file, import it into my index.ts, and deploy my Worker as normal.
Out of curiosity, any idea why it's different to the default-export approach of the other services with Hono?
Thanks for the advice. Maybe a skill issue, but it was a bit unclear from the docs at the current time 🙂.
It’s a new product, so getting confused is totally normal. I had the same issues, that’s why I wanted to share with you 😂
Workflows isn’t a totally new product, actually, it’s basically a wrapper around Workers. It’s also not fully developed at the moment; I’m sure it will have better tooling and docs in the near future.
That makes sense yeah I appreciate the guidance
Side question
I currently have a queue workflow for my AI nutrition app.
it's a bit like this:
- User uploads photo
- Queue the job (from the Worker) + instant feedback on the client (success, now it's queueing)
- About a 1 second delay from my CF Queues producer until my consumer gets it
- Write to Postgres for job-status start (250ms latency, because CF Queues doesn't have a job id built in)
- Client polls via WebSockets, getting updates on job status
- Processes image (~5.5-10s, mostly AI API stuff)
- Updates Postgres (250ms latency)
- User gets this via the WebSocket connection on the frontend and it redirects to the results with the nutrition data for that meal
Current bottlenecks:
- 500ms for DB updates
- 1s queue latency
- 4-8.5s AI processing
My theory is that Workflows could make my code a bit more reliable and potentially improve latency, as it seems to have an instance/job id and status built in, so I wouldn't need to go to my Postgres DB.
Do you know if Workflows is suitable for these types of flows where I need to minimise latency (as the user is expecting nutrition data for their meal quickly), or if it's more suited to background tasks?
From reading the blog posts it seems to combine queues/scheduling/Durable Objects/Workers in one product, basically!
Yeah, this flow is the definition of a workflow, actually 😀 You will have retry logic for each step as well as pausing/resuming the flow. You can also build "human in the loop" logic: run your flow until a certain step and wait for human interaction, etc.
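As a rough sketch only, the photo → nutrition pipeline could look something like this as a Workflow. The bindings and helpers here (IMAGES, callVisionModel, saveNutritionResult) are placeholders, not anything from this thread; step.do, step configs, and event.payload are the actual Workflows step API:

```ts
import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers';

type Env = {
  IMAGES: R2Bucket; // assumed R2 bucket holding the uploaded photo
};

type Params = { imageKey: string; userId: string };

export class NutritionWorkflow extends WorkflowEntrypoint<Env, Params> {
  async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
    // Each step is retried independently on failure.
    const image = await step.do(
      'load uploaded photo',
      { retries: { limit: 3, delay: '5 seconds', backoff: 'exponential' } },
      async () => {
        const obj = await this.env.IMAGES.get(event.payload.imageKey);
        if (!obj) throw new Error('image not found');
        return { key: event.payload.imageKey, size: obj.size };
      }
    );

    // The slow AI call gets its own step, so a transient failure here
    // doesn't redo the earlier work.
    const nutrition = await step.do('analyse with vision model', async () => {
      return await callVisionModel(image.key);
    });

    // Persist the result; the built-in instance id/status can replace a
    // separate job-status row if the client polls the workflow instead.
    await step.do('store result', async () => {
      await saveNutritionResult(event.payload.userId, nutrition);
    });

    return nutrition;
  }
}

// Placeholders so the sketch type-checks; swap in your real implementations.
declare function callVisionModel(key: string): Promise<unknown>;
declare function saveNutritionResult(userId: string, data: unknown): Promise<void>;
```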
Yeah, that's great to hear, just wasn't sure!
I guess I won't be able to subscribe to the job status, so my best method would be just to poll it.
I don’t know your stack, but if you use something like Supabase/Convex or any other sync engine you can subscribe to DB changes. Other than that, yeah, you need to poll periodically.
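For the polling route, a minimal sketch using the instance's built-in status() (binding name and route are placeholders):

```ts
import { Hono } from 'hono';

// Placeholder binding name; the Workflow type comes from your generated Worker types.
const app = new Hono<{ Bindings: { MY_WORKFLOW: Workflow } }>();

// The client polls this route; status() returns the built-in state
// (queued/running/paused/errored/terminated/complete, plus output or error
// when finished), so no separate job-status table is needed just to poll.
app.get('/workflows/:id/status', async (c) => {
  const instance = await c.env.MY_WORKFLOW.get(c.req.param('id'));
  return c.json(await instance.status());
});

export default app;
```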
Yeah, that's what I do now actually, but then I have this hop of going to my Supabase DB.
I use Hyperdrive, but it's still adding that extra 200-500ms of latency or so, as I still have to write the updates to the table and then the user needs to get them on the client.
And the vision models are just kind of slow generally 😓 so really I'm just trying to shave off whatever I can, where I can.
But it does seem like these Workflows have built-in statuses/ids
(which Queues do not).
Yeah, I get it. Just make sure you are optimizing the right thing 😀 200ms is not too much to complain about as long as you get fast results from those APIs. Maybe you can look into better solutions for image processing. Anyway, I gotta go right now. You can always reach out to me on Discord if you wanna chat. All the best 🤜🤜
Yeah, you're 100% correct. I am exploring how to reduce the AI time with various techniques whilst retaining accuracy, which is the main thing. AI Gateway has been super helpful for me to test all the models.
Nice chatting, thanks for the advice!