๐Ÿ‘‹๐Ÿป Hey all. I am a PM on the Workers

๐Ÿ‘‹๐Ÿป Hey all. I am a PM on the Workers team. I'm doing some exploration into ways that people might want to use container-based workloads alongside Workers. If you are somebody who has wanted to use a container with Workers in some way, or has strong opinions on the topic, I'd love to chat! (Again, this is just an early exploration, so please don't get hopes up around something imminent!) - If you want to chat, feel free to book a time: https://calendar.app.google/qS8Dr4L2L2bWQ6726, or if you don't have time to chat, but have some use case for this that you'd love to see, feel free to drop it in a thread on this message. Thanks!
30 Replies
meakr
meakr•5mo ago
Starting the thread 🧵
johtso
johtso•5mo ago
Not sure if I'm fully understanding what you mean by container based workloads, but maybe it would enable deploying a restate.dev server alongside services.. or is the idea it wouldn't be long running?
meakr
meakr•5mo ago
@johtso re "what you mean by container based workloads" -> I purposely kept it a bit vague as I want to get as broad a sense as possible of what people would want. I'd say really it would be anything where you bring your own image and we run it on your behalf. That could mean short or long running, stateless/stateful, one-off containers or autoscaling groups, etc, etc. But probably the main consistent behavior is bringing one's own image. @peshakoo is the main issue with using FFMPEG now the bandwidth costs, or latency, or the pain to set up an external system, or something completely different? - This seems like a good canonical use case of why container workloads might be nice. - Part of the reason I'm asking is I could see an FFMPEG container being spun up right next to a worker on the same machine, or I could see something like a transcoding service with long-lived containers. Wondering if either would be viable.
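To make the two shapes meakr describes a bit more concrete, here is a minimal TypeScript sketch of the "FFMPEG container next to the Worker" variant. Everything container-side is hypothetical: the FFMPEG_CONTAINER binding and the /transcode endpoint are invented for illustration; only the Worker fetch-handler shape is standard.

```ts
// Hypothetical sketch only: Cloudflare has not announced a container product or API here.
// Assumes an imagined binding (env.FFMPEG_CONTAINER) that speaks plain HTTP to a
// container running an FFmpeg HTTP wrapper on the same machine as the Worker.

export interface Env {
  FFMPEG_CONTAINER: Fetcher; // imagined service-binding-style handle to the container
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("POST a video body to transcode", { status: 405 });
    }
    // Stream the uploaded video straight into the (hypothetical) container,
    // which would transcode it and stream the result back.
    const transcoded = await env.FFMPEG_CONTAINER.fetch("http://container/transcode?format=mp4", {
      method: "POST",
      body: request.body,
    });
    return new Response(transcoded.body, {
      headers: { "Content-Type": "video/mp4" },
    });
  },
};
```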
johtso
johtso•5mo ago
Well, if I could deploy a long running docker image that exposed a few ports that would be pretty rad
meakr
meakr•5mo ago
noted! 🙂
meakr
meakr•5mo ago
Yeah, posting on a Friday afternoon probably didn't help either. I'll definitely bump this tomorrow (and probably a few more times) in a more public spot
MissS
MissS•5mo ago
If we're able to use containers, then we could spin up fully supported Node environments without needing to worry about compat support. The same could be said for other languages like Python, which are technically supported by Cloudflare but which I can't honestly recommend people use, for similar reasons. Using Cloudflare for non-trivial projects requires always having a second cloud because inevitably there will be something that just doesn't fit "inside" the worker ecosystem
meakr
meakr•5mo ago
@MissS yeah that would be an option. Workers are going to be inherently more efficient than full containers, so my hope is that the compat story gets a lot better there and for JS/Python the container wouldn't be needed (better for us cost-wise, better for you price-wise). But of course, if it's BYO image, you could put whatever you want in there. BTW, I feel compelled to shill for node_compat_v2, which is in experimental mode now and should at least make our current compat story better. 🤞🏻 Totally hear you on having a second cloud though! We hear that a ton
Ottomated
Ottomated•5mo ago
Being able to write a TCP or WebSocket server using native code, with the same autoscaling experience as workers would be the dream use case
RZ
RZ•5mo ago
Would like to have custom CLI tools (executable binaries) installed and available on Workers
hegdedarsh
hegdedarsh•4mo ago
Use Case: Edge-Powered Personalized Content with Containers
Scenario:
You have a blog or news website that serves content to a global audience. You want to personalize content based on the user's location and serve dynamic data from a containerized backend service.
Key Components:
Cloudflare Workers: Execute logic at the edge.
Cloudflare Pages: Host static content of the website.
Cloudflare KV Storage: Store and retrieve user-specific or region-specific data.
Cloudflare Containers: Run backend services.
Cloudflare CDN: Cache and deliver static assets globally.
Containerized Backend Service (Hypothetical Cloudflare Containers):

Run a containerized service that handles more complex personalized data, such as user profiles or dynamic recommendations.
Assume Cloudflare provides a managed container service that integrates seamlessly with Workers.

Deploy and Configure the Containerized Service:

Deploy the containerized backend service using the hypothetical Cloudflare Containers.
The service could be an API that provides user-specific or location-specific data, such as recommendations, recent activity, or personalized offers.
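A rough TypeScript sketch of the scenario above, seen from the Worker's side. The KV lookup and fetch handler are standard Workers patterns; the PERSONALIZATION_API binding to a containerized backend is purely hypothetical, as are its endpoint path and headers.

```ts
// Sketch of the edge-personalization flow described above. The CONTENT_KV binding and
// Worker shape are real Workers concepts; the PERSONALIZATION_API binding pointing at a
// containerized backend is hypothetical, since no such Cloudflare container product exists yet.

export interface Env {
  CONTENT_KV: KVNamespace;      // region-specific content cached in Workers KV
  PERSONALIZATION_API: Fetcher; // imagined binding to the containerized backend
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const country = (request.cf?.country as string) ?? "US";

    // 1. Try cheap, edge-local data first (region-specific content in KV).
    const regional = await env.CONTENT_KV.get(`front-page:${country}`, "json");

    // 2. Enrich with the hypothetical containerized backend, which would hold
    //    heavier personalization logic such as user profiles and recommendations.
    const userId = request.headers.get("X-User-Id");
    let recommendations: unknown = null;
    if (userId) {
      const resp = await env.PERSONALIZATION_API.fetch(
        `http://backend/recommendations?user=${encodeURIComponent(userId)}&country=${country}`
      );
      if (resp.ok) recommendations = await resp.json();
    }

    return Response.json({ country, regional, recommendations });
  },
};
```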
I would prefer k3s, with more use cases at the edge. If CF decides to bring in a container-based platform, I would focus primarily on the use cases below:
Edge
Homelab
Internet of Things (IoT)
Continuous Integration (CI)
Development (ARM)
Mozzy
Mozzy•4mo ago
I have long dreamed of hosting Node-based apps on Cloudflare. The most notable package I've been missing is some sort of canvas package. I know Browser Rendering exists
meakr
meakr•4mo ago
Hey @Mozzy, I'm curious, what are you hitting with the node app that doesn't work on Workers? We just merged an initial node_compat_v2 that closes some of the gaps - https://github.com/cloudflare/workerd/pull/2147 Note, you'll need to enable it with a couple of flags for now
Mozzy
Mozzy•4mo ago
Specifically a canvas package: https://github.com/Automattic/node-canvas https://www.npmjs.com/package/@napi-rs/canvas I am not personally familiar with what exactly the limits are to have that work on Workers
Jikyu
Jikyu•4mo ago
We currently run all ETL related tasks through FLY.IO, so would love something like this on Cloudflare platform. (ETL tasks can take a ton of memory / disk space)
Chris Wilson
Chris Wilson•4mo ago
Would love to be able to run Elixir/Erlang/BEAM workloads on Cloudflare, so if you can come up with something that's comparable to what fly.io offer in terms of cost and ease of use, but with straightforward connectivity to D1 etc. without having to leave the CF network, that'd be awesome.
jjjrmy
jjjrmy•4mo ago
what flags need to be used to get this to work?
johtso
johtso•4mo ago
+1 to this, Erlang on cloudflare would be lovely
Yassin unnug.com
Yassin unnug.com•4mo ago
We are building analytics and streaming on top of Cloudflare, and it would be super awesome to finally have containers available. I think my biggest question is how would autoscaling work? Is that going to be left for us to decide?
meakr
meakr•4mo ago
@Yassin unnug.com, that's one of my open questions. In an ideal world, we could just scale for you and you aren't even specifying a scaling strategy. It could "just work", but I'm not sure if 1) that's easily doable and 2) end-users actually want that. I've seen similar products let you target a CPU or memory usage %, or RPS or concurrent requests. That seems like a viable option too. - Do you have a specific strategy you would want?
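Purely as illustration of the strategies listed above (no such API exists), the options could be enumerated like this:

```ts
// Illustrative only: this is not a real Cloudflare API, just the scaling strategies
// meakr lists above, written out as a TypeScript type to make the options concrete.

type ScalingStrategy =
  | { kind: "automatic" }                                   // "it could just work"
  | { kind: "target-cpu"; utilizationPercent: number }      // e.g. keep CPU around 70%
  | { kind: "target-memory"; utilizationPercent: number }
  | { kind: "target-rps"; requestsPerSecond: number }
  | { kind: "target-concurrency"; concurrentRequests: number };

// What a user might declare for a hypothetical container service:
const strategy: ScalingStrategy = { kind: "target-concurrency", concurrentRequests: 50 };
```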
meakr
meakr•4mo ago
Use "experimental:nodejs_compat_v2" in compatibility_flags and remove any other node compat references in wrangler.toml Docs PR is up but not merged. This link has info for now: https://bib-unenv.cloudflare-docs-7ou.pages.dev/workers/runtime-apis/nodejs/ Want to DM me if that solves/doesnt solve the issue? - That way we can keep this thread a bit cleaner?
Cloudflare Docs
Node.js compatibility · Cloudflare Workers docs
Node.js APIs available in Cloudflare Workers
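A minimal wrangler.toml sketch of the setup described above; the flag name is quoted from meakr's message, while the worker name, entry point, and compatibility date are placeholders:

```toml
# Minimal sketch: enable the experimental Node.js compat layer meakr mentions.
# "my-worker", the entry point, and the compatibility_date are placeholders.
# Remove any other node compat settings (e.g. node_compat = true) from this file.
name = "my-worker"
main = "src/index.ts"
compatibility_date = "2024-07-01"
compatibility_flags = ["experimental:nodejs_compat_v2"]
```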
jjjrmy
jjjrmy•4mo ago
@meakr messaged you
Yassin unnug.com
Yassin unnug.com•4mo ago
1) I would love to not have to worry about autoscaling. Anything similar to what you have now for non-containers would be super nice.
2) As long as the pricing is per-request as it is now (assuming the container only receives HTTP requests), I think that would be cool. Is the pricing going to be significantly different? Can you consider a different type of pricing for workers that only handle HTTP requests, are stateless, and are limited in time and resources, as opposed to workers that do long processing like video encoders and whatnot? This would allow us to run Go/Rust-based workers that are still priced the same as the TS/JS workers... hopefully 🙂
an
an•4mo ago
I'm doing image rendering with things like librsvg/resvg to render PNGs, but it isn't available as wasm / it consumes more than the Worker limit allows (128mb, but it requires ~200mb). Would love to use an all-in-one Cloudflare stack, not an additional server that requires a roundtrip
Erwin
Erwin•3mo ago
There is probably a ton you could do with the Browser Rendering API and canvas for those kinds of use cases. For my (pre-Cloudflare) startup I had to write a ton of code to create PDF invoices. If I had to do that now I would just use the Browser Rendering API for that. @meakr, apologies for resurrecting the thread, but another use case I had in my (pre-Cloudflare) startup was running customer builds. A long(er) running task that required access to a large-ish filesystem (1-2 GB). But the end results of the task were moved off disk, so the entire task was ephemeral. Fun fact about that setup is that we may have been one of the first DOs in production, because we wrote one that could stream the output of that build via a DO to all listeners.
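As a rough sketch of the PDF-invoice idea, assuming a Browser Rendering binding (here called MYBROWSER) is configured for the Worker and using @cloudflare/puppeteer; the invoice HTML is a stand-in:

```ts
// Sketch of rendering an invoice PDF at the edge with Browser Rendering.
// Assumes a browser binding configured in wrangler.toml (here called MYBROWSER);
// the HTML content is a placeholder for a real invoice template.
import puppeteer from "@cloudflare/puppeteer";

export interface Env {
  MYBROWSER: Fetcher; // Browser Rendering binding
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    const browser = await puppeteer.launch(env.MYBROWSER);
    const page = await browser.newPage();
    await page.setContent("<h1>Invoice #0001</h1><p>Total: $42.00</p>");
    const pdf = await page.pdf(); // render the current page to a PDF buffer
    await browser.close();
    return new Response(pdf, {
      headers: { "Content-Type": "application/pdf" },
    });
  },
};
```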