Newbie question
Hello, I would like to try building something like an on-demand video transcoding server for some projects.
I would like to point it at a video link, have the server fetch the file, transcode it with ffmpeg using hardware-accelerated compression, and push the result to S3-compatible storage (Cloudflare R2).
Would this be possible? Any advice on where to start?
Thanks a lot in advance.
16 Replies
This is possible, but you probably don't need a GPU for ffmpeg transcoding; a normal CPU server should be fine for that.
Yes. I agree with ashleyk though. Something like fly.io is decent for that. Or once they push out CPU servers
unless you need more advanced GPU-accelerated transcoding
Where to start: begin with the PyTorch template
install the dependencies
test out the flow end to end
and then package it into a Docker container
https://discord.com/channels/912829806415085598/1194695853026328626
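The flow described in the original question (fetch a video by URL, transcode with GPU-accelerated HEVC, push to R2) could be sketched roughly like this. This is a minimal sketch, not a complete implementation: it assumes the image has ffmpeg built with NVENC support and boto3 installed, and the endpoint URL, bucket, and file paths are placeholders.

```python
import subprocess
import urllib.request

def build_ffmpeg_cmd(src, dst):
    """ffmpeg command using NVIDIA's hardware HEVC encoder (hevc_nvenc)."""
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-c:v", "hevc_nvenc",  # GPU hardware HEVC encoding
        "-c:a", "copy",        # pass audio through untouched
        dst,
    ]

def transcode(video_url, out_path="/tmp/out.mp4"):
    """Fetch the source file, then shell out to ffmpeg."""
    src = "/tmp/in.mp4"
    urllib.request.urlretrieve(video_url, src)
    subprocess.run(build_ffmpeg_cmd(src, out_path), check=True)
    return out_path

def upload_to_r2(path, bucket, key):
    """Push the result to Cloudflare R2 via its S3-compatible API.
    The endpoint URL here is a placeholder; credentials come from the
    usual AWS environment variables."""
    import boto3  # assumed available in the image
    s3 = boto3.client(
        "s3", endpoint_url="https://<account-id>.r2.cloudflarestorage.com"
    )
    s3.upload_file(path, bucket, key)
```

Testing this end to end on a Pod first, as suggested above, makes it easier to debug the ffmpeg flags before wrapping it in a container.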
Thank you both for your answers, that's a good start for me. To @ashleyk: the thing is, GPUs have HEVC hardware transcoding acceleration that speeds up the process a lot (70-100x), and that is key for my applications. I don't know if you can choose the hardware used in serverless (for example, the RTX 4000 has hardware transcoding acceleration). Where I'm a little confused: I know how to create Docker images, but is it mandatory to use Python / PyTorch to work with RunPod? For example, can I have a Docker image based on Alpine with ffmpeg installed, running shell commands? Thanks
You can sort of choose hardware, e.g. as in the screenshot.
You don't need PyTorch, but you do need Python for serverless, since it uses a Python file to execute your stuff. You can call ffmpeg via subprocess, or use a Python ffmpeg binding, which is what I do for my ffmpeg things.
Or you can use Python to trigger a bash script.
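A minimal sketch of the subprocess approach as a serverless handler. Assumptions: the `runpod` SDK, `wget`, and an NVENC-capable ffmpeg are present in the image, and the input field name `video_url` is made up for illustration.

```python
import subprocess

def handler(job):
    """RunPod serverless handler: expects {"input": {"video_url": ...}}."""
    url = job.get("input", {}).get("video_url")
    if not url:
        return {"error": "missing video_url"}
    src, dst = "/tmp/in.mp4", "/tmp/out.mp4"
    # Download the source, then subprocess out to ffmpeg with the GPU encoder.
    subprocess.run(["wget", "-q", "-O", src, url], check=True)
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "hevc_nvenc", "-c:a", "copy", dst],
        check=True,
    )
    return {"output_path": dst}

# Inside the worker image you would then register the handler:
# import runpod
# runpod.serverless.start({"handler": handler})
```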
Hi @justin, I already have a Pod with a Docker image that I configured for my ComfyUI workflow. I want to convert my existing Pod into serverless. Is that possible? How?
You can't convert your existing Pod into serverless; you need a Docker image for serverless, and it needs a RunPod handler.
Here are some resources to get started with RunPod serverless:
https://blog.runpod.io/serverless-create-a-basic-api/
https://www.youtube.com/@generativelabs/videos
https://trapdoor.cloud/getting-started-with-runpod-serverless/
If you have a GPU Pod working fully programmatically, i.e. you can make things work with a Python handler file as ashleyk showed, then all you need is to rebuild the Dockerfile to call the handler.py.
https://discord.com/channels/912829806415085598/1194695853026328626/1194695853026328626
The link above shows how you can use a GPU Pod image as a basis, and then just build another Docker image that changes the CMD and calls a new handler.py.
Hi, I have a fine-tuned Whisper model .bin file and I want to deploy it as serverless. Can anyone please guide me through the process? It is working in my local Python environment. I also don't know how to receive the audio file from the client, so any tutorial (video / article) would be helpful.
You can read the resources I linked above on how to construct a Dockerfile.
You can also refer to how I did my WhisperX setup. It's not exactly Whisper, but you can use it as a reference. I didn't follow my own Dockerfile setup resources for that repo, because when I made it I was very new to RunPod and was just trying to build a serverless template from scratch.
https://discord.com/channels/912829806415085598/1194700289123549245
Hi justin
I want to report an error.
I am creating a Pod with the community Docker image ghcr.io/ai-dock/comfyui:latest-jupyter.
It installs, but when I connect it continuously gives the error:
mamba sync..
Can you please guide me on how to fix it? It was working previously, but today when I started my Pod it was not working; I created a new one and it is not working either.
1. This is not Serverless question.
2. Please don't hijack other people's threads to ask a new question.
3. @RobBalla made this template, not @justin.
Ok
Sorry and thanks