marko.333
Cloudflare Developers
Created by marko.333 on 9/8/2024 in #workers-help
Best practice for local development with multiple workers using RPC bindings?
Hello. I'm having a hard time testing multiple Workers locally that collaborate via RPC bindings (and Queues, Durable Objects, etc.). Can I use wrangler, or do I need to use miniflare? Or am I forced to deploy remotely? Is there any doc or good example that details how to accomplish this? Thank you.
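To make the setup concrete, here is a minimal sketch of what I mean by two Workers collaborating over an RPC binding; the file paths, the BACKEND binding, and the add() method are made-up placeholders:
// worker-b/src/index.ts -- the callee, exposing an RPC method
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async add(a: number, b: number): Promise<number> {
    return a + b;
  }
}

// worker-a/src/index.ts -- the caller, reaching worker-b through a service
// binding named BACKEND declared in worker-a's wrangler config
interface Env {
  BACKEND: { add(a: number, b: number): Promise<number> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const sum = await env.BACKEND.add(1, 2); // plain method call over RPC
    return new Response(`sum: ${sum}`);
  },
};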
2 replies
Cloudflare Developers
Created by marko.333 on 9/7/2024 in #workers-help
Can an external HTTP client make an RPC call to a Worker?
I would like to make RPC calls to Workers from an external HTTP client (mostly for testing), using something like Postman or curl. I'm hoping a general gateway like this exists...
curl -X POST https://worker-url.com \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "someMethod", "params": {"key1": "value1"}, "id": 1}'
My current hack is to implement a wrapper Worker HTTP endpoint for each RPC call. It works, but it takes time and feels ugly/redundant. Any suggestions? Thx.
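Roughly, each wrapper looks something like this sketch (the BACKEND binding and someMethod are placeholders matching the curl example above):
// Hypothetical wrapper Worker: turns the HTTP POST above into an RPC call
// on a service binding named BACKEND (binding and method names are placeholders).
interface Env {
  BACKEND: { someMethod(params: unknown): Promise<unknown> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("POST only", { status: 405 });
    }
    const body = (await request.json()) as { method?: string; params?: unknown; id?: number };
    if (body.method !== "someMethod") {
      return Response.json({ jsonrpc: "2.0", id: body.id, error: { code: -32601, message: "Method not found" } });
    }
    const result = await env.BACKEND.someMethod(body.params);
    return Response.json({ jsonrpc: "2.0", id: body.id, result });
  },
};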
3 replies
RunPod
Created by marko.333 on 9/1/2024 in #⛅|pods
How do I deploy a Worker with a Pod?
I have deployed a worker as a Serverless deployment. I expected to be able to deploy the exact same image to a Pod and get an endpoint URL to make a similar Worker request, but I'm not having success. I am currently using the following as the initial entrypoint for handler.py...
runpod.serverless.start({"handler": handler})
Is there any doc that discusses how to get a Serverless Worker deployed to a Pod? Thx.
16 replies
RunPod
Created by marko.333 on 8/29/2024 in #⛅|pods
Should I be able to launch a pod using nvidia/cuda Docker images?
I am trying to start a pod using nvidia/cuda:12.6.0-cudnn-runtime-ubuntu24.04 (to get both CUDA and cuDNN). I'm not a Docker expert, but should that work? The pod appears to start, but the licensing message keeps looping in the logs, and I can't SSH into the pod. Any ideas? Thx.
5 replies