"internal error" calling .fetch on bound Durable Object
I'm getting this thrown when trying to call DO fetch. I think my worker is configured incorrectly.
My worker code is very simple:
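(The original snippet wasn't preserved in the thread; a minimal sketch of this kind of worker, assuming a binding named DO_NAMESPACE and an object name of "singleton", both placeholders, would look like this:)

```ts
// Hypothetical reconstruction -- binding name (DO_NAMESPACE) and object name
// ("singleton") are placeholders, not taken from the original post.
export interface Env {
  DO_NAMESPACE: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Look up (or create) a Durable Object instance by name and forward the request.
    const id = env.DO_NAMESPACE.idFromName("singleton");
    const stub = env.DO_NAMESPACE.get(id);
    return stub.fetch(request); // this call is what returns "internal error"
  },
};
```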
What are some things to check in my worker settings to debug this?
I am manually deploying this worker via the API, not wrangler.
here are my worker settings as reported from the API:
gotten from
curl https://api.cloudflare.com/client/v4/accounts/8d093faf5772cff838a12d1c9bc87afd/workers/scripts/agent-worker-afb1bb7b-233a-4d2d-9999-35914f25f673/settings
does this have something to do with namespace_id on the DO? that's the only difference I see between this and a working worker with DO binding
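(For comparison, a Durable Object binding in the /settings response is normally roughly this shape; the names here are placeholders and the exact fields can vary depending on how the binding was created:)

```json
{
  "bindings": [
    {
      "type": "durable_object_namespace",
      "name": "DO_NAMESPACE",
      "namespace_id": "<id assigned by Cloudflare>"
    }
  ]
}
```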
@Walshy @Hello, I'm Allie! any ideas here? sorry to ping you but you seem knowledgeable about the stack internals
what's the account id?
8d093faf5772cff838a12d1c9bc87afd
and Zone ID is a1f842607bf1d6725b1054e4185a14ff
I have a few workers deployed there, but all of the recent ones suffer from this configuration problem
interestingly it seems to not be a durable object
I've never seen that error before
i'm gonna escalate this so we can track it internally
thanks, please keep me updated, it's a blocker to shipping
of course!
hi there - could you provide any more details about the metadata in the script upload? specifically how you configured the binding or any migrations in the upload?
any details on how you configured the worker, through PUT uploads or PATCH updates would be helpful
This is what I have in my list of migrations in wrangler.toml:
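(The actual file wasn't captured; a sketch of the sections being described, with placeholder binding and class names, would be:)

```toml
# Sketch of the relevant wrangler.toml sections -- names are placeholders.
[durable_objects]
bindings = [
  { name = "DO_NAMESPACE", class_name = "MyDurableObject" }
]

[[migrations]]
tag = "v1"
new_classes = ["MyDurableObject"]
```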
I tried both via the API and wrangler. The support team says the problem is that this is not configured as a durable object -- but my impression is that the above is the correct way to configure a durable object.
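(For the API path, the script-upload metadata would have been roughly of this shape; treat the field values as placeholders and the exact layout as an assumption, since the actual upload wasn't shown:)

```json
{
  "main_module": "index.mjs",
  "compatibility_date": "2024-01-01",
  "bindings": [
    { "type": "durable_object_namespace", "name": "DO_NAMESPACE", "class_name": "MyDurableObject" }
  ],
  "migrations": { "new_tag": "v1", "new_classes": ["MyDurableObject"] }
}
```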
There are no errors other than the "internal error".
the class is exported:
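(The real implementation wasn't included in the thread; an illustrative stub with the expected shape of the exported class would be:)

```ts
// Illustrative stub only -- the actual class body from the original post is unknown.
export class MyDurableObject {
  constructor(private state: DurableObjectState, private env: unknown) {}

  // The Workers runtime routes stub.fetch() calls to this handler.
  async fetch(request: Request): Promise<Response> {
    return new Response("ok");
  }
}
```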
Note that this all works in wrangler dev
Also, I know my migrations are taking effect because if I put the wrong class name in there it will cause an error:
gotcha, taking a harder look at this today
looks to me like this is a bug specifically triggered when you add a tail consumer of the same worker - can you confirm that this goes away if you remove the tail consumer?
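(If the tail consumer was set up in wrangler.toml, the suspected trigger would look something like this; the worker name is a placeholder:)

```toml
name = "agent-worker"

# The worker is registered as a tail consumer of itself -- the configuration
# suspected above to trigger the internal error.
tail_consumers = [
  { service = "agent-worker" }
]
```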