Workers platform broken for requests with cache mode no-store
Hi, an update a few hours ago completely broke my app with the following exception: Error sending request: Unsupported cache mode: no-store. It uses an external library (@azure/cosmos) which apparently specifies this cache option when creating fetch requests. I know caching is not supported by Workers, but please make sure the option does not break requests anymore.
Is it calling the Cache API itself? Just tried adding a no-store header to a request, and I do not see an error at all. Can you share a repro?
Thanks for the phenomenal response time. Have you added the header, or the option like this:
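A minimal sketch of the two variants in question, with example.com standing in for the real endpoint; this shows the shape of the two forms rather than the exact snippet from the message:

```ts
export default {
  async fetch(): Promise<Response> {
    // Variant A: a Cache-Control request header -- this is what was tried
    // above and did not produce an error on Workers.
    await fetch("https://example.com/", {
      headers: { "Cache-Control": "no-store" },
    });

    // Variant B: the RequestInit.cache option -- the form @azure/cosmos
    // apparently uses, and the one that fails with
    // "Unsupported cache mode: no-store".
    await fetch("https://example.com/", { cache: "no-store" });

    return new Response("done");
  },
};
```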
Oh, you want the cache field directly in the fetch options. That isn't supported on Workers. Can you tell Cosmos not to set a cache directive at all?
Interestingly though, it used to work just fine until today; the same seems to be the case here: https://github.com/tidbcloud/serverless-js/issues/63
Hm... Let me check...
So funnily enough, this appears to have occurred because they are adding support for the RequestInit.cache option, but since it currently doesn't support no-store, it errors. Before, it would just ignore the no-store entirely...
Going to make an issue for it now. Do you have an issue I can point to for your Cosmos thing? Otherwise I can link to this thread.
Thanks, do you mean at https://github.com/cloudflare/workerd ? No, they don't support any platforms besides Node, unfortunately.
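One possible stopgap while a fix lands, sketched here as an assumption rather than something suggested in the thread: patch the global fetch so the cache field is stripped before the request reaches the runtime. Whether @azure/cosmos picks up the patched fetch depends on how it resolves fetch internally, which isn't confirmed here.

```ts
// Untested sketch: drop RequestInit.cache before it reaches the Workers
// runtime, which currently rejects unsupported modes like "no-store".
const originalFetch = globalThis.fetch;

globalThis.fetch = ((input: RequestInfo | URL, init?: RequestInit) => {
  if (init && "cache" in init) {
    // Shallow-copy the options and strip the unsupported field.
    const patched: RequestInit = { ...init };
    delete patched.cache;
    return originalFetch(input, patched);
  }
  return originalFetch(input, init);
}) as typeof fetch;
```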
Filing an issue for a fix now on workerd
GitHub
🐛 [bug] cache no-store errors break existing setups · Issue #2116 · cloudflare/workerd
Followup to #2073: The change to support cache, while nice, has introduced some issues with libraries that already used RequestInit.cache, like @tidbcloud/serverless and @azure/cosmos...
Not sure if you’ve seen the update: it turns out it has never worked correctly; the other library just checked whether or not it worked before using it.
That check broke in the latest update, causing their issues.
Reply from the runtime architect:
Oh man. This is like our worst nightmare for trying to maintain compatibility... apps which dynamically probe for the existence of an API and then attempt to use it if it's there. Code paths that were never tested in any capacity suddenly executing in production due to a runtime change. Not blaming the application code, but ugh... I guess this means we have to hide this whole API behind a compat flag.
- #2116
We are going to roll back the runtime to the version before this change. It will take a few hours.
in the meantime, can you tell me... what is the impact on affected apps? Are they serving 1101 error pages? Or are they catching the exception and doing something else?
I ask because I'm trying to figure out how we'd detect this in an automated way. We have some code in place to monitor total number of exceptions thrown by a worker... but in this case, even when the code is working correctly it throws an exception and catches it, so I'm wondering what signal we could use to detect when it is broken.
OP in this thread was just receiving the errors directly from the Worker, as far as I can tell. The other package, @tidbcloud/serverless, would attempt to build a Request with cache, and then if it threw an error, send a fetch without cache (roughly the pattern sketched below). Not sure how much there is to detect in that case, other than making the Request constructor throw the same error as fetch, until support for the different directives lands?
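For reference, a rough sketch of that feature-detection pattern (not the actual @tidbcloud/serverless source; example.com is a placeholder):

```ts
// Probe: build a Request with the cache option; if that throws, the option
// is omitted from the real request.
function cacheOptionAccepted(): boolean {
  try {
    new Request("https://example.com/", { cache: "no-store" });
    return true;
  } catch {
    return false;
  }
}

async function send(url: string): Promise<Response> {
  const init: RequestInit = cacheOptionAccepted() ? { cache: "no-store" } : {};
  return fetch(url, init);
}

// Per the discussion above, this is where the check falls apart: the Request
// constructor accepts the option, while fetch() itself then rejects it with
// "Unsupported cache mode: no-store".
```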
I understand the core bug. I'm asking what the behavior of the affected apps is after the error occurs -- because I'm looking for signals we could be monitoring for in the future, to detect when we've broken someone.
does anyone have a link to an actual app that is currently failing in production?
Oh, sorry, misunderstood. I don't have any Workers using affected packages, though I can ask if anyone is willing to share in the TiDB issue, if you would like?
the rollback is ~80% complete, should be done in the next hour or so.
@kenton I can confirm it's 1101. In our case the app is only used internally and didn't serve any requests for a few days.