Nlea
Cloudflare Developers
Created by Nlea on 11/14/2024 in #workers-help
How to access environment variables in Cloudflare Workers when using RPC binding
I'm working with two separate Cloudflare Workers, where Worker A calls a function in Worker B via a binding (RPC). Worker B requires an environment variable to send an email, and this variable should be scoped to Worker B itself, so it doesn't depend on Worker A's environment. Here's the relevant code for Worker B:
import { WorkerEntrypoint } from "cloudflare:workers";
import { Resend } from "resend";

export class WorkerEmail extends WorkerEntrypoint {
  // Currently, entrypoints without a named handler are not supported

  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/send" && request.method === "POST") {
      const { email, firstName }: { email: string; firstName: string } =
        await request.json();
      return this.send(email, firstName, this.env as Env); // Attempting to pass env here
    }
    return new Response(null, { status: 404 });
  }

  async send(email: string, firstName: string, env: Env): Promise<Response> {
    const resend = new Resend(env.RESEND_API); // env.RESEND_API should come from Worker B's environment
    // ...
  }
}
When I use HTTP with fetch in Worker A, it works fine:
await c.env.WORKER_EMAIL.fetch(
  new Request("https://worker-email/send", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email, firstName }),
  })
);
However, when I call Worker B directly via an RPC-style binding (like await c.env.WORKER_EMAIL.send(email, firstName)), I run into issues accessing the environment variable env.RESEND_API in Worker B. The environment does not seem to be properly scoped when using RPC, and it stays undefined.
Question: How can I pass the correct environment to Worker B's send function when using RPC? Or is there a better way to structure this so that Worker B can access its own environment variables independently of Worker A? What am I missing here? Any guidance would be greatly appreciated!
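With RPC over a service binding, the runtime constructs each Worker's entrypoint with that Worker's own env, so one pattern worth trying is to drop the env parameter entirely and read this.env inside send. Below is a minimal sketch of that pattern using plain TypeScript classes in place of the Workers runtime (WorkerEntrypoint only exists inside Cloudflare's runtime; the EnvB type, the key value, and the constructor wiring are all stand-ins for illustration):

```typescript
// Plain-class sketch: the entrypoint owns its env; RPC methods read
// this.env instead of accepting env from the caller.

type EnvB = { RESEND_API: string }; // hypothetical shape of Worker B's env

class WorkerEmail {
  // In the real runtime, WorkerEntrypoint wires up this.env automatically;
  // this constructor only simulates that wiring for the sketch.
  constructor(private env: EnvB) {}

  send(email: string, firstName: string): string {
    // Worker B reads its own binding; the caller never supplies it.
    return `sending to ${email} (${firstName}) with key ${this.env.RESEND_API}`;
  }
}

// "Worker A" just invokes the method; it has no access to EnvB.
const workerB = new WorkerEmail({ RESEND_API: "re_test_key" });
const result = workerB.send("a@b.com", "Ada");
```

In the real Worker B this would mean calling new Resend(this.env.RESEND_API) inside send and removing the env parameter from the signature, so Worker A never needs to know about Worker B's bindings.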
2 replies
Railway
Created by Nlea on 7/19/2023 in #✋|help
Connecting prometheus instance with python app via internal network
Hey everyone, I am new to Railway. I deployed a Python app to Railway via a GitHub repo and a Docker image, and did the same for a Prometheus instance in a separate repo. Both apps are up and running. Within the Python app I set the variable PORT = 80, because without this setting I was not able to access the app. The Python app exposes an endpoint /metrics, from which I want the Prometheus instance to poll the metrics. To do so, one can configure a prometheus.yml. I got Prometheus to connect to my app via the public domain, but I would like it to connect via the internal network instead. I tried multiple things, but it is still failing. My prometheus.yml looks like this:
scrape_configs:
  - job_name: internal-endpoint
    metrics_path: /metrics
    static_configs:
      # Replace the port with the port your /metrics endpoint is running on
      - targets: ['fastapi-on-railway.railway.internal']
    # For a real deployment, you would want the scrape interval to be
    # longer but for testing, you want the data to show up quickly
    scrape_interval: 200ms

  - job_name: external-endpoint
    metrics_path: /metrics
    static_configs:
      # Replace the port with the port your /metrics endpoint is running on
      - targets: ['fastapi-on-railway-production.up.railway.app:443']
    scheme: https
    # For a real deployment, you would want the scrape interval to be
    # longer but for testing, you want the data to show up quickly
    scrape_interval: 200ms
I tried different target options for the internal one, including 0.0.0.0:80, fastapi-on-railway.railway.internal:80, and plain fastapi-on-railway. The Prometheus frontend accepts the endpoint but reports that the service is down. Does anyone have an idea how to make this connection work within the Railway network? That would be amazing! Thank you and cheers, Nele
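Two things worth checking here (assumptions to verify against Railway's docs, not something confirmed in this thread): Railway's private network is IPv6-only, so the app has to listen on host ::, and the *.railway.internal hostname carries no implicit port, so the target must name it explicitly. A sketch of the internal job under those assumptions, with the service name and port taken from the post above:

```yaml
scrape_configs:
  - job_name: internal-endpoint
    metrics_path: /metrics
    static_configs:
      # hostname from the post; :80 matches the PORT=80 the app listens on
      - targets: ['fastapi-on-railway.railway.internal:80']
    scrape_interval: 200ms
```

The app side would then need to bind to IPv6 as well, e.g. uvicorn main:app --host :: --port 80 (the exact start command depends on how the Docker image launches the app).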
26 replies