agentpietrucha
RunPod
Created by EMPZ on 9/12/2024 in #⚡|serverless
Very slow upload speeds from serverless workers
The only idea I have right now would be to check whether your workers have a slow network for all requests or only for your buckets. Or maybe your docker image has some issues? I am using "nvidia/cuda:11.7.1-cudnn8-runtime-ubuntu22.04" with no issues
13 replies
RunPod
Created by EMPZ on 9/12/2024 in #⚡|serverless
Very slow upload speeds from serverless workers
As far as I know, the volumes from runpod aren't the best solution for such a case. They work best for storing "static" files like checkpoints or bigger models. I am personally using storj to store processing results for a short period of time
13 replies
RunPod
Created by EMPZ on 9/12/2024 in #⚡|serverless
Very slow upload speeds from serverless workers
How are you downloading the files? I am downloading & uploading images using presigned urls with plain requests, and I am getting around 2 secs for both
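A minimal sketch of that pattern, assuming the presigned GET/PUT urls were already generated elsewhere (all names here are illustrative, not the original code):

    import requests

    def download_image(presigned_get_url: str) -> bytes:
        # a presigned GET needs no extra auth headers; the signature is in the url
        resp = requests.get(presigned_get_url, timeout=30)
        resp.raise_for_status()
        return resp.content

    def upload_image(presigned_put_url: str, data: bytes) -> None:
        # PUT the raw bytes; headers must match whatever the url was signed for
        resp = requests.put(presigned_put_url, data=data, timeout=30)
        resp.raise_for_status()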
13 replies
RunPod
Created by agentpietrucha on 6/12/2024 in #⚡|serverless
Video processing
Got it, thanks
13 replies
RunPod
Created by agentpietrucha on 6/12/2024 in #⚡|serverless
Video processing
For example, cv2.VideoCapture requires the file to be on the filesystem to read it. Is downloading the video, saving it to the filesystem, and then reading it frame by frame a good approach? E.g. imageio (https://pypi.org/project/imageio/) allows reading videos from bytes (actually I don't know its implementation, maybe it is using the filesystem along the way). So I'd probably ask whether I should keep videos on the filesystem or in memory
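For reference, the temp-file approach being asked about would look roughly like this (a sketch assuming a Linux worker and a placeholder video url):

    import tempfile

    import cv2
    import requests

    def iter_frames(video_url: str):
        # cv2.VideoCapture wants a path, so spill the download to a temp file first
        with tempfile.NamedTemporaryFile(suffix=".mp4") as tmp:
            tmp.write(requests.get(video_url, timeout=60).content)
            tmp.flush()
            cap = cv2.VideoCapture(tmp.name)
            try:
                while True:
                    ok, frame = cap.read()
                    if not ok:
                        break
                    yield frame
            finally:
                cap.release()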
13 replies
RunPod
Created by Raqqa on 5/1/2024 in #⚡|serverless
Efficient way to load the model
As Papa mentioned, it looks like your worker is downloading/loading some files. @nayandhabarde your logs (start container, stop container...) are typical logs of starting a container. When I gave your screenshot a second look, it seemed to me that you may have a bug in your implementation. Share your code, or better a fragment if possible. Only then will I or someone else be able to help you better
10 replies
RunPod
Created by agentpietrucha on 5/12/2024 in #⚡|serverless
Errors while downloading image from s3 using presigned urls
Thanks, I am checking it out!
9 replies
RunPod
Created by agentpietrucha on 5/12/2024 in #⚡|serverless
Errors while downloading image from s3 using presigned urls
what is rp_download? I couldn't find anything related to python
9 replies
RunPod
Created by agentpietrucha on 5/12/2024 in #⚡|serverless
Errors while downloading image from s3 using presigned urls
And most of the time it works. But sometimes it doesn't
9 replies
RunPod
Created by agentpietrucha on 5/12/2024 in #⚡|serverless
Errors while downloading image from s3 using presigned urls
what do you mean exactly? I am using the following to get a presigned url:

    boto_client.generate_presigned_url(
        "get_object",
        Params={
            "Bucket": bucket,
            "Key": key,
        },
        ExpiresIn=36000,  # 10 hours
    )
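Since the failures are intermittent, one common guess is transient network errors on the worker side; a simple retry wrapper around the presigned GET (illustrative only, not the original code) would be:

    import time

    import requests

    def get_with_retries(url: str, attempts: int = 3) -> bytes:
        for i in range(attempts):
            try:
                resp = requests.get(url, timeout=30)
                resp.raise_for_status()
                return resp.content
            except requests.RequestException:
                if i == attempts - 1:
                    raise
                time.sleep(2 ** i)  # simple exponential backoff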
9 replies
RunPod
Created by houmie on 5/1/2024 in #⚡|serverless
When using vLLM on OpenAI endpoint, what is the point of runsync/run?
I haven’t used the openai endpoint yet, unfortunately :(. Let’s wait for somebody else to jump in here
12 replies
RunPod
Created by houmie on 5/1/2024 in #⚡|serverless
When using vLLM on OpenAI endpoint, what is the point of runsync/run?
Runsync is a synchronous way of hitting your endpoint. The http request will wait (as long as it doesn't time out first) until your worker returns a response. E.g. if you call the worker from your api, you can just wait for the response:

    const response = await fetch('runpod/worker/url' /* runsync */)
    const result = await response.json()

When you use the /run endpoint, runpod returns you a job status and id. Having the id, you'd have to set up some kind of a worker to periodically check for the results of your job. Both endpoints serve different purposes. Hope that veeery high level overview will help you a little
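To make the /run side concrete, here is a rough polling sketch against the serverless REST api (the endpoint id and api key are placeholders; the /status route is the usual counterpart of /run):

    import time

    import requests

    API_BASE = "https://api.runpod.ai/v2/<endpoint_id>"  # placeholder endpoint id
    HEADERS = {"Authorization": "Bearer <api_key>"}      # placeholder api key

    def run_and_wait(payload: dict, poll_secs: float = 2.0) -> dict:
        # /run queues the job and returns its id immediately
        job = requests.post(f"{API_BASE}/run", json={"input": payload}, headers=HEADERS).json()
        while True:
            # poll /status/<id> until the job reaches a terminal state
            status = requests.get(f"{API_BASE}/status/{job['id']}", headers=HEADERS).json()
            if status["status"] in ("COMPLETED", "FAILED", "CANCELLED", "TIMED_OUT"):
                return status
            time.sleep(poll_secs)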
12 replies
RunPod
Created by Raqqa on 5/1/2024 in #⚡|serverless
Efficient way to load the model
You should also load your model outside of your handler function. It is mentioned here: https://arc.net/l/quote/jvkbeogj. Then you won't be loading your model again and again on every new request; you will only load the model once, when the worker starts. Doing this helped me speed up my worker a looot. Something like this:

    model = Your_Model()  # loaded once at startup, not per request

    def handler(job):
        ...  # the handler closes over the already-loaded model

    runpod.serverless.start({"handler": handler})
10 replies
RunPod
Created by Harish Natarajan on 4/30/2024 in #⚡|serverless
Runpod doesn't work with GCP artifact registry
Maybe you made it private? I had a similar "mistake"
7 replies
RunPod
Created by agentpietrucha on 4/30/2024 in #⚡|serverless
Using network volume with serverless
You’re probably right. You reminded me about the webp conversion. I will give it a second thought. Thanks!
45 replies
RunPod
Created by agentpietrucha on 4/30/2024 in #⚡|serverless
Using network volume with serverless
Yeah, I’ve been thinking about webp conversion, but I haven’t done anything about it yet. The problem is in the details. The main app that makes use of runpod is a desktop app, which may be run on very different pc specs. So file conversion could be even worse than uploading the extra MB
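For reference, the conversion being weighed would be roughly this (a sketch assuming Pillow with webp support; the quality value is arbitrary):

    from io import BytesIO

    from PIL import Image

    def to_webp(png_or_jpeg_bytes: bytes, quality: int = 80) -> bytes:
        # re-encode in memory; on a weak desktop cpu this step is the cost in question
        buf = BytesIO()
        Image.open(BytesIO(png_or_jpeg_bytes)).save(buf, format="WEBP", quality=quality)
        return buf.getvalue()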
45 replies
RunPod
Created by agentpietrucha on 4/30/2024 in #⚡|serverless
Using network volume with serverless
Input is either jpeg or png. Output is png
45 replies
RunPod
Created by agentpietrucha on 4/30/2024 in #⚡|serverless
Using network volume with serverless
Thanks guys for your help, suggestions, and opinions on runpod network storage ✌🏻. I guess I will stick with standard s3 for my case. I’m handling images, so a db won’t be suitable for me
45 replies
RunPod
Created by agentpietrucha on 4/30/2024 in #⚡|serverless
Using network volume with serverless
I send a lot of small files (from 1 to 5 MB each), and I send them frequently, let's say 1000/hour. In your opinion, would parallel download/upload be suitable for such a case? What do you think? Or is it maybe a little overkill?
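For scale, 1000 files/hour at a few MB each is fairly modest; the parallel idea would look something like this thread-pool sketch (all names are illustrative):

    from concurrent.futures import ThreadPoolExecutor

    import requests

    def upload_one(presigned_put_url: str, data: bytes) -> int:
        resp = requests.put(presigned_put_url, data=data, timeout=30)
        resp.raise_for_status()
        return resp.status_code

    def upload_all(jobs: list[tuple[str, bytes]], workers: int = 8) -> list[int]:
        # the work is network-bound, so threads are enough; 8 at a time is a guess
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(lambda j: upload_one(*j), jobs))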
45 replies
RunPod
Created by ribbit on 4/30/2024 in #⚡|serverless
How do I handle both streaming and non-streaming request in a serverless pod?
Nice way to run the same handler in both pod and serverless!
67 replies