WebRTC to HLS: where does the pipeline happen?
Looking at the demo, I can't see a "normal" pipeline. I want to process video frames in a pipeline — run ML inference on them, and so on. The video comes from the browser, and WebRTC seems like a reasonable way to stream it. I want to apply a series of processing steps between the input and the final HLS output.
Is there a straightforward way to do that, or does the engine mostly just repackage the video and audio streams?
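For illustration, the kind of per-frame processing the question describes — custom stages between WebRTC ingest and HLS output — can be sketched generically. This is a minimal sketch, not the demo's actual API: the names `run_pipeline` and `grayscale` are hypothetical, and in a real setup the frames would come from a WebRTC decoder (e.g. via a library like aiortc) and the processed frames would feed an HLS encoder/muxer (e.g. FFmpeg).

```python
import numpy as np

def grayscale(frame: np.ndarray) -> np.ndarray:
    # Toy "ML" stage: a stand-in for real per-frame inference
    # (e.g. detection or segmentation on the decoded frame).
    gray = frame.mean(axis=2, keepdims=True).astype(frame.dtype)
    return np.repeat(gray, 3, axis=2)  # keep 3 channels for the encoder

def run_pipeline(frames, stages):
    """Apply each processing stage to every decoded frame, in order.

    In a real deployment, `frames` would be decoded WebRTC video frames,
    and the yielded frames would be handed to the HLS encoder/muxer.
    """
    for frame in frames:
        for stage in stages:
            frame = stage(frame)
        yield frame

# Demo with a dummy 4x4 RGB frame standing in for a decoded video frame.
dummy = np.full((4, 4, 3), 100, dtype=np.uint8)
processed = list(run_pipeline([dummy], [grayscale]))
```

The point of the sketch is the shape of the architecture: decode, run an ordered list of frame transforms, re-encode — the question is whether the demo's engine exposes such a hook or only repackages the streams.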
1 Reply