Custom RTC Endpoint
Sorry, more random questions!
What's the best place to start when trying to get a custom endpoint to stream MPEG-TS video in real time to the browser?
I have a working pipeline for MPEG-TS in a UDP stream rendering in the browser via HLS - basically UDP Source -> MPEG-TS Demuxer -> H264 Parser -> HttpAdaptiveStream. My application is real-time, so the HLS lag is too much and WebRTC seems like the way to go.
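For illustration, that chain as a Membrane pipeline might look roughly like the sketch below. This is only a sketch: the demuxer module name and the SinkBin pad options are assumptions based on the public Membrane plugins (not taken from this thread), and the demuxer's PMT/dynamic-pad handling is elided.

```elixir
defmodule MyApp.HlsPipeline do
  use Membrane.Pipeline

  alias Membrane.Pad

  @impl true
  def handle_init(_ctx, _opts) do
    # Demuxer module name is an assumption -- use whichever MPEG-TS
    # plugin you already depend on. Its dynamic output pads normally
    # get linked only after the PMT notification; elided here.
    spec =
      child(:udp_source, %Membrane.UDP.Source{local_port_no: 5000})
      |> child(:demuxer, Membrane.MPEG.TS.Demuxer)
      |> child(:parser, Membrane.H264.Parser)
      # SinkBin pad options vary by version (e.g. segment_duration
      # may be required).
      |> via_in(Pad.ref(:input, :video), options: [encoding: :H264])
      |> child(:hls_sink, %Membrane.HTTPAdaptiveStream.SinkBin{
        manifest_module: Membrane.HTTPAdaptiveStream.HLS,
        storage: %Membrane.HTTPAdaptiveStream.Storages.FileStorage{directory: "hls_output"}
      })

    {[spec: spec], %{}}
  end
end
```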
I've had a look at the RTSP Endpoint and am trying to create an equivalent for MPEG-TS received via UDP, but I've hit the limit of my understanding. I add my endpoint to the RTC Engine and it initialises, connects to the UDP stream, and gets the MPEG-TS PMT, and I can happily demux and parse the H264 video and wrap it up in RTP. I'm guessing that I need to wait for handle_pad_added before finalising the pipeline to feed that out via a TrackSender - roughly as sketched below. My question is: does that handle_pad_added callback only get called once I connect the browser to the associated peer?
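For context, the engine-facing half of such an endpoint might look roughly like this. Again a sketch only: the demuxer module, its PMT notification name/shape, and the find_h264_pid helper are placeholders, and the {:publish, {:new_tracks, ...}} and {:track_ready, ...} notifications follow the track-lifecycle contract covered in the docs linked in the reply below - exact shapes vary by engine version.

```elixir
defmodule MyApp.MpegTsEndpoint do
  use Membrane.Bin

  alias Membrane.RTC.Engine.Track

  # The engine requests this pad once the track is marked as ready;
  # the pad ref is {track_id, variant}.
  def_output_pad :output,
    accepted_format: _any,
    availability: :on_request

  def_options port: [spec: pos_integer()],
              track: [spec: Track.t()]

  @impl true
  def handle_init(_ctx, opts) do
    # Start ingesting straight away; the demuxer's dynamic output pads
    # stay unlinked until handle_pad_added (see the reply below).
    spec =
      child(:udp_source, %Membrane.UDP.Source{local_port_no: opts.port})
      |> child(:demuxer, Membrane.MPEG.TS.Demuxer)

    {[spec: spec], %{track: opts.track, video_pid: nil}}
  end

  @impl true
  def handle_playing(_ctx, state) do
    # Advertise the track to the engine and the other endpoints.
    {[notify_parent: {:publish, {:new_tracks, [state.track]}}], state}
  end

  @impl true
  def handle_child_notification({:mpeg_ts_pmt, pmt}, :demuxer, _ctx, state) do
    # Placeholder notification name. Once the PMT is known the media is
    # usable, so remember the H264 PID and mark the track as ready.
    state = %{state | video_pid: find_h264_pid(pmt)}
    {[notify_parent: {:track_ready, state.track.id, :high, state.track.encoding}], state}
  end

  # Hypothetical helper -- the PMT struct shape depends on the demuxer.
  defp find_h264_pid(pmt) do
    {pid, _info} = Enum.find(pmt.streams, fn {_pid, info} -> info.stream_type == :H264 end)
    pid
  end
end
```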
Hi @Al
Take a look at https://hexdocs.pm/membrane_rtc_engine/track_lifecycle.html and https://github.com/jellyfish-dev/membrane_rtc_engine/blob/aab2226db15592f7bb5eea51b2205bdc6d8efd0a/file/lib/file_source_endpoint.ex
handle_pad_added for the :output pad is called when you mark the track as ready
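Concretely, continuing the endpoint sketch from the question above, the completion step might look something like this. It is modeled loosely on the FileSource endpoint linked above; the demuxer pad ref, the payloader module, and the TrackSender options are assumptions to check against your engine version.

```elixir
# Add to MyApp.MpegTsEndpoint from the sketch above:
alias Membrane.Pad
alias Membrane.RTC.Engine.Endpoint.WebRTC.TrackSender

@impl true
def handle_pad_added(Pad.ref(:output, {track_id, variant}) = pad, _ctx, state) do
  # The engine has requested our output pad, so finish the chain:
  # demuxer -> parser -> RTP payloader -> TrackSender -> bin output.
  spec =
    get_child(:demuxer)
    # Placeholder pad ref -- select the H264 PID learned from the PMT.
    |> via_out(Pad.ref(:output, {:stream_id, state.video_pid}))
    |> child(:parser, Membrane.H264.Parser)
    |> child(:payloader, Membrane.RTP.H264.Payloader)
    |> via_in(Pad.ref(:input, {track_id, variant}))
    # TrackSender may need more options (e.g. variant bitrates)
    # depending on the engine version.
    |> child(:track_sender, %TrackSender{track: state.track})
    |> via_out(pad)
    |> bin_output(pad)

  {[spec: spec], state}
end
```

In other words, per the reply above, it's the track_ready notification rather than a browser connecting that triggers the pad request.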