Murder Chicken
Cloudflare Developers
Created by Murder Chicken on 10/29/2024 in #workers-help
Static Assets HTML 304 local but 200 only when deployed
I'm experimenting with the advantages of building out a static site through Static Assets and pre-compiled HTML. Locally, when I serve a page, I see an initial 200 response and then, as expected, a 304 with an ETag on reload. However, when I deploy this bare-bones worker and load it up, I only ever see 200 responses; it never sets an ETag to take advantage of 304 responses.

Also, when developing locally with assets enabled in wrangler, it seems that no console logs get executed... at all. There seems to be some kind of aggressive caching happening locally. In fact, if I delete the entire fetch handler and restart the worker, the page still loads from cache even on a hard reload. Is this normal?
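For reference, the revalidation handshake that should produce those 304s can be sketched by hand in a plain fetch handler. This is illustrative logic only, not the Static Assets internals — the function name and ETag value are made up:

```javascript
// Sketch of the ETag / If-None-Match handshake behind 304 responses.
// Serve a body with a strong ETag; if the client sends the same ETag
// back in If-None-Match, answer with an empty 304 instead of a 200.
function serveWithEtag(request, body, etag) {
  const ifNoneMatch = request.headers.get("If-None-Match");
  if (ifNoneMatch === etag) {
    // Client already holds this version: no body, same ETag.
    return new Response(null, { status: 304, headers: { ETag: etag } });
  }
  // First visit (or stale copy): full 200, with the ETag for next time.
  return new Response(body, { status: 200, headers: { ETag: etag } });
}
```

If the deployed worker never emits an ETag in the first place, the browser has nothing to send in If-None-Match, so every reload falls through to the 200 branch.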
10 replies
CDCloudflare Developers
Created by Murder Chicken on 9/27/2024 in #workers-help
RPC, `using`, Experimental Wrangler and Observability
When making RPC calls through a Service Binding, it's recommended (https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle/#how-to-use-the-using-declaration-in-your-worker) that we employ the `using` keyword to enable automatic cleanup. To do this, we're forced onto the wrangler@using-keyword-experimental version so that the `using` keyword is transpiled properly on deployment. Unfortunately, that version of wrangler seems to be incompatible with the new observability setting available in wrangler 3.78.6. Is there another way to handle RPC call cleanup without the `using` keyword, so that we don't run into incompatibilities with newly released features?
9 replies
CDCloudflare Developers
Created by Murder Chicken on 9/24/2024 in #workers-help
Serialized RPC arguments or return values are limited to 1MiB
While moving to a Service Binding oriented architecture between workers, I ran into the following error:

Serialized RPC arguments or return values are limited to 1MiB

This happens when transmitting a large dataset through a service binding to the worker that handles DB operations. The array of data (trimmed as much as possible, down to an array of arrays) is just over 1 MB in size. It feels strange that I may need to break this data up into multiple RPC calls to get past this limitation when doing it over HTTP worked just fine.

1. Is it possible to raise this limit for a worker?
2. Do I have other options?

Thanks!
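If the limit can't be raised, one workaround is to batch the rows so each call's serialized payload stays safely under the cap. A sketch, using JSON byte length as a rough stand-in for the actual serialized size (leave generous headroom; the helper and RPC method names are illustrative):

```javascript
// Split an array of rows into batches whose JSON-serialized size stays
// under a byte budget, so each RPC call fits inside the 1 MiB cap.
// JSON length only approximates the real serialization, hence the headroom.
function chunkByBytes(rows, maxBytes = 900_000) {
  const batches = [];
  let current = [];
  let currentBytes = 2; // the enclosing "[]"
  for (const row of rows) {
    const rowBytes = new TextEncoder().encode(JSON.stringify(row)).length + 1; // +1 comma
    if (current.length > 0 && currentBytes + rowBytes > maxBytes) {
      batches.push(current); // budget reached: start a new batch
      current = [];
      currentBytes = 2;
    }
    current.push(row);
    currentBytes += rowBytes;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}

// Usage sketch (illustrative binding/method names):
// for (const batch of chunkByBytes(allRows)) {
//   await env.DB_WORKER.insertRows(batch);
// }
```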
5 replies
CDCloudflare Developers
Created by Murder Chicken on 9/9/2024 in #workers-help
Tail Workers and Additional Event Data
Is it possible to attach additional data to the events passed to a Tail Worker? I have multiple workers stitched together by a unique UUID: a scheduled worker kicks off, generates a UUID, and sends messages to a queue consumer with the UUID as part of the message body. That queue consumer also generates console.log messages. So I've got two workers dealing with a single UUID, and I'd like to use that value in my Tail Worker. Is there a way to attach the UUID to something so that it bubbles up into the TailItems object? Otherwise I need to embed the UUID in the console.log messages and parse it back out. It would be nice if extra metadata could be added to the TailItems event node. Is this possible in any way?
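Absent first-class metadata, the parse-it-out-of-the-logs workaround described above is less fragile if every log line is structured JSON rather than free text. A sketch of that convention — the field names (`traceId`, `msg`) are made up, not a Cloudflare API:

```javascript
// Workaround sketch: carry the correlation UUID inside structured JSON
// log lines, then recover it in the Tail Worker by parsing each message.
// "traceId" / "msg" are illustrative field-name conventions, not an API.

// In the producing workers (scheduled worker, queue consumer):
function logWithTrace(traceId, msg, extra = {}) {
  console.log(JSON.stringify({ traceId, msg, ...extra }));
}

// In the Tail Worker: pull the UUID back out of a console.log message.
function extractTraceId(logMessage) {
  try {
    return JSON.parse(logMessage).traceId ?? null;
  } catch {
    return null; // not one of our structured lines
  }
}
```

Since every producer uses the same helper, the Tail Worker can group TailItems log entries by `traceId` without regex parsing.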
1 reply