RunPod • 5mo ago
Taa97

Understanding RunPod Serverless Pods: Job Execution and Resource Allocation

I'm new to RunPod and need clarification on how serverless pods work. Here's my understanding:
- RunPod serverless pods allow code to run when triggered, eliminating idle costs.
- Code is executed as a job by a worker, accessed through an endpoint.
- I can specify the number of jobs a worker can run.
- On endpoint setup, I can specify the number of workers for it.

Now I have questions about job execution and resource allocation. If I have three jobs to run simultaneously and the endpoint has resources allocated in its configuration, will these jobs share the endpoint's resources (do workers share the same environment), or will each job run on separate resources?

My current belief is that it depends on the endpoint's setup: RunPod serverless can scale dynamically based on the workload, so when I submit multiple jobs, the system automatically allocates the required number of workers to execute the jobs sent to the endpoint, and each job runs on its own dedicated resources without sharing with other jobs.

Please correct my understanding and provide insight into how RunPod serverless pods manage job execution and resource allocation.
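For reference, this is roughly how I plan to submit the three jobs concurrently (a minimal sketch only; the endpoint ID, API key, and payloads are placeholders, and I'm assuming the standard serverless `/run` REST endpoint):

```python
import requests
from concurrent.futures import ThreadPoolExecutor

# Placeholders -- substitute your own endpoint ID and API key.
ENDPOINT_ID = "YOUR_ENDPOINT_ID"
API_KEY = "YOUR_RUNPOD_API_KEY"
RUN_URL = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/run"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

def submit_job(payload):
    """Queue one job on the serverless endpoint (async /run, returns a job ID)."""
    resp = requests.post(RUN_URL, headers=HEADERS, json={"input": payload}, timeout=30)
    resp.raise_for_status()
    return resp.json()  # e.g. {"id": "...", "status": "IN_QUEUE"}

# Three jobs submitted at the same time; the endpoint's queue decides how many
# workers (up to the configured max) pick them up.
payloads = [{"task": i} for i in range(3)]
with ThreadPoolExecutor(max_workers=3) as pool:
    for job in pool.map(submit_job, payloads):
        print(job)
```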
2 Replies
Encyrption
Encyrption • 5mo ago
Each worker is in its own environment. The only thing you can share between them is a network volume.
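A rough sketch of what that looks like inside a worker, assuming the usual runpod handler pattern and that the attached network volume is mounted at /runpod-volume (check your endpoint's volume settings, as the path is an assumption here):

```python
import json
import os

import runpod

# Assumed mount path for an attached network volume on serverless workers.
VOLUME_PATH = "/runpod-volume"

def handler(job):
    """Each invocation runs in an isolated worker; only files written under the
    network volume persist and are visible to other workers."""
    job_input = job["input"]

    # Example: append a record to a shared file on the network volume.
    os.makedirs(VOLUME_PATH, exist_ok=True)
    with open(os.path.join(VOLUME_PATH, "shared_log.jsonl"), "a") as f:
        f.write(json.dumps({"job_id": job["id"], "input": job_input}) + "\n")

    return {"ok": True}

runpod.serverless.start({"handler": handler})
```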
Taa97
Taa97 (OP) • 5mo ago
Thank you