At this stage I'm not looking for anything particularly exotic - a couple of hello worlds plus a bit of branched logic using a KV store would suffice for now... just to understand how the Cap'n Proto config maps to workerd. Also, I'm unclear how workerd routes incoming requests to specific workers. The Cap'n Proto example configs in the GitHub repo are all about binding to an ip:port, whereas inbound requests to workerd are pretty much only going to be distinguishable by their URI or URI path. I did not see anything about routing or pathnames in the schema - however, I may have missed something.
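For reference, a minimal config along the lines of the hello-world sample in the workerd repo looks roughly like this (the names, port, and compatibility date are illustrative). Note that a socket maps an address to a service, not to a path - so URI-path routing has to happen inside a worker's own fetch handler, e.g. a front "router" worker that dispatches to other services via service bindings:

```capnp
using Workerd = import "/workerd/workerd.capnp";

const config :Workerd.Config = (
  services = [
    # Each named service is a worker (or an external/network service).
    (name = "main", worker = .helloWorld),
  ],
  sockets = [
    # A socket binds an address to one service; there is no
    # path-based routing at this level.
    (name = "http", address = "*:8080", http = (), service = "main"),
  ],
);

const helloWorld :Workerd.Worker = (
  modules = [
    # worker.js is a placeholder path; `embed` inlines the file at parse time.
    (name = "worker.js", esModule = embed "worker.js"),
  ],
  compatibilityDate = "2023-02-28",
);
```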
In an ideal world - wrangler could be coaxed into generating some output code + a Cap'n Proto config file. I do not want to use wrangler dev for production, as I'm not sure exactly which features it enables/disables to facilitate a good developer experience. Perhaps miniflare is the right way to go, but my understanding is that it has now been superseded by workerd.
Apologies if my question is dumb - but while so much of CF's documentation is pretty decent - the workerd (admittedly in beta) stuff is opaque - to me at least.
I'm really happy to see some actual workerd activity in this channel now :) Is there any way someone could share a bit more about how workerd is actually best used for a production setup? As far as I understand, the best practice is to use something like haproxy for SSL termination and load balancing, and one workerd process per processor core (all identical, containing all the existing workers). Each worker is locked down without network access, receives inbound traffic over unix sockets connected via haproxy, and sends outbound traffic via an egress worker that can do whitelisting etc.
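That setup can be sketched in config, assuming workerd's support for `unix:` socket addresses and the built-in `Network` service's allow/deny classes (the paths and service names here are placeholders, not a recommended layout):

```capnp
using Workerd = import "/workerd/workerd.capnp";

const config :Workerd.Config = (
  services = [
    (name = "main", worker = .mainWorker),
    # Egress restricted via the network service's allow/deny lists.
    (name = "internet", network = (allow = ["public"], deny = ["private"])),
  ],
  sockets = [
    # haproxy terminates TLS and forwards plain HTTP to this unix socket.
    (name = "http", address = "unix:/var/run/workerd.sock", http = (), service = "main"),
  ],
);

const mainWorker :Workerd.Worker = (
  modules = [(name = "main.js", esModule = embed "main.js")],
  compatibilityDate = "2023-02-28",
  # All outbound fetch() calls from this worker go through the
  # restricted "internet" service.
  globalOutbound = "internet",
);
```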
What I don't understand is how you would be able to deploy a single new worker into this setup. Would it always have to go through a central entity that knows about all the other workers and rebuilds the workerd config with the existing ones + the new one, or is there a capnp RPC system that allows publishing a new worker into a running workerd instance?
You'd have to restart workerd with the new capnp config
It'll terminate gracefully (i.e. let in-flight requests finish), so you launch a new process, move new traffic to that one and kill the old one
thanks, ok so I will always need something like a supervisor daemon that knows about all projects, assembles the workers together, regenerates the config and then switches over the workerd processes.
You can create the capnp config programmatically with the Cap'n Proto libraries for a given language, which should make it easier
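As a lighter-weight alternative to the Cap'n Proto libraries: since workerd also accepts its config in Cap'n Proto *text* format, a supervisor can simply template the text. A hypothetical sketch (the `WorkerDef` shape, service naming, and compatibility date are all made up for illustration):

```typescript
// Sketch: generate a workerd text config by templating strings.
// This assumes one worker module per project and one socket per worker.
interface WorkerDef {
  name: string;       // service name, also used to derive the const name
  modulePath: string; // path to the worker's ES module (embedded by workerd)
  address: string;    // socket address, e.g. "*:8080" or "unix:/run/w.sock"
}

function makeConfig(workers: WorkerDef[]): string {
  const services = workers
    .map(w => `    (name = "${w.name}", worker = .${w.name}Worker),`)
    .join("\n");
  const sockets = workers
    .map(w => `    (name = "${w.name}", address = "${w.address}", http = (), service = "${w.name}"),`)
    .join("\n");
  const consts = workers
    .map(w => `const ${w.name}Worker :Workerd.Worker = (
  modules = [(name = "${w.modulePath}", esModule = embed "${w.modulePath}")],
  compatibilityDate = "2023-02-28",
);`)
    .join("\n\n");

  return `using Workerd = import "/workerd/workerd.capnp";

const config :Workerd.Config = (
  services = [
${services}
  ],
  sockets = [
${sockets}
  ],
);

${consts}
`;
}
```

The supervisor would write this string to disk, then start a fresh workerd process against it and drain the old one, as described above.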
yeah, and I can also have a main capnp file and then import all the "projects" as their own capnp files, which makes things much cleaner
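That layout might look something like this (the file names and constants are hypothetical; each project file would define its own `Workerd.Worker` constant named `worker`):

```capnp
using Workerd = import "/workerd/workerd.capnp";
# Hypothetical per-project config files, each exporting a `worker` const.
using ProjectA = import "projects/a.capnp";
using ProjectB = import "projects/b.capnp";

const config :Workerd.Config = (
  services = [
    (name = "project-a", worker = ProjectA.worker),
    (name = "project-b", worker = ProjectB.worker),
  ],
  sockets = [
    (name = "a", address = "unix:/var/run/project-a.sock", http = (), service = "project-a"),
    (name = "b", address = "unix:/var/run/project-b.sock", http = (), service = "project-b"),
  ],
);
```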
There’s a TS example at https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare/src/runtime/config and Rust at https://github.com/KianNH/capnproto-rust-workerd-configs/blob/master/src/main.rs
Miniflare should have a lot of good references for running workerd
can you use the Cache API as key-value storage? if so, what are the limits/pricing?
#workers-discussions would be a better fit but not really.
I mean, can you? Yes - but there is no guarantee you'll hit the same colo, or that the colo still has that file in cache.
so cache is not consistent
Cache is consistent, but is not globally consistent (if you hit another cloudflare datacenter there will be a different cache instance) and may have data evicted at any time if it needs to make room for other content.
So not a good idea for key-value storage.
ok sure, but are there no limits on the cache?