Taa97
RunPod
•Created by Taa97 on 8/23/2024 in #⚡|serverless
Understanding RunPod Serverless Pods: Job Execution and Resource Allocation
I'm new to RunPod and need clarification on how serverless pods work. Here's my understanding:
- RunPod serverless pods allow code to run when triggered, eliminating idle costs.
- Code is executed as a job by a worker, accessed through an endpoint.
- I can specify the number of jobs a worker can run.
- When setting up an endpoint, I can specify the number of workers available to it (a minimal handler sketch follows this list).
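
To make my mental model concrete, here is roughly what I think the worker-side code looks like, based on the RunPod Python SDK's handler pattern. The payload field (`prompt`) and the returned dictionary are just placeholders I made up, not anything specific to my workload:

```python
import runpod

def handler(job):
    # Each queued request arrives here as a "job"; the payload sent to the
    # endpoint is available under job["input"].
    prompt = job["input"].get("prompt", "")

    # Do the actual work here (model inference, processing, etc.).
    result = f"processed: {prompt}"

    # Whatever is returned becomes the job's output in the status response.
    return {"output": result}

# Hand the handler to the RunPod serverless runtime; the worker then pulls
# jobs from the endpoint's queue and runs them through this function.
runpod.serverless.start({"handler": handler})
```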
Now, I have questions regarding job execution and resource allocation:
If I have three jobs to run simultaneously on an endpoint whose resources are defined in its configuration, will these jobs share the endpoint's resources (i.e., do workers share the same environment), or will each job run on separate resources?
I believe it depends on the endpoint's setup. RunPod's serverless pods can scale dynamically based on the workload: when you submit multiple jobs, the system automatically allocates the required number of workers to execute the jobs sent to the endpoint, and each job runs on its own dedicated resources without sharing them with other jobs.
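
For reference, this is roughly how I'd submit the three jobs and watch them complete, assuming the standard RunPod REST API routes (`/run` and `/status`); the endpoint ID, API key, and payload below are placeholders:

```python
import os
import time
import requests

ENDPOINT_ID = os.environ["RUNPOD_ENDPOINT_ID"]  # placeholder: your endpoint's ID
API_KEY = os.environ["RUNPOD_API_KEY"]          # placeholder: your RunPod API key
BASE = f"https://api.runpod.ai/v2/{ENDPOINT_ID}"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Queue three jobs asynchronously; each /run call returns a job id right away,
# and workers pick the jobs up from the endpoint's queue.
job_ids = []
for i in range(3):
    resp = requests.post(
        f"{BASE}/run",
        headers=HEADERS,
        json={"input": {"prompt": f"job {i}"}},  # placeholder payload
    )
    job_ids.append(resp.json()["id"])

# Poll each job until it reaches a terminal state.
for job_id in job_ids:
    while True:
        status = requests.get(f"{BASE}/status/{job_id}", headers=HEADERS).json()
        if status["status"] in ("COMPLETED", "FAILED", "CANCELLED", "TIMED_OUT"):
            print(job_id, status["status"])
            break
        time.sleep(2)
```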
Please correct my understanding and provide insight into how RunPod serverless pods manage job execution and resource allocation.