WebRTC stream not working
I've hit an issue trying to get an MPEGTS stream displaying in the browser via WebRTC.
It all seems to work: I can add the WebRTC endpoint for the stream to the RTC Engine, the web client successfully connects, negotiates tracks, handles the offer and SDP answer etc., and I can add the transceiver for the stream to the RTCPeerConnection and get the MediaStream. But when I add the MediaStream to the video element's srcObject I get a blank screen with a spinner and see a continuous stream of
Didn't receive keyframe for variant:
messages on the server side:
Any clues as to what I'm missing would be gratefully received.
For context, I followed the RTC Engine file example and have a main pipeline taking the video output from the MPEG-TS demuxer and delivering it to a WebRTC endpoint bin.
The first thing I would check is which H264 profile you have in the MPEG-TS file. Our implementation and most browsers support only the baseline profile.
@Radosław, thanks - looks like that may be the problem. ffprobe shows it's Main profile:
Ok, so try transcoding this file to the baseline profile and check if it works in that scenario. If it does, you have two options:
a) guarantee that your MPEG-TS files will always carry H264 in the baseline profile
or
b) add transcoding to your pipeline after the MPEG-TS demuxer (you can use this plugin for that: https://github.com/membraneframework/membrane_h264_ffmpeg_plugin - a sketch follows below), but remember that this will significantly increase CPU usage.
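To illustrate option (b), here is a rough sketch of what the transcoding step could look like; the child names are made up, the chain is meant to slot in where the demuxer's video output currently links towards the WebRTC endpoint, and the exact encoder options should be checked against the plugin docs:

```elixir
# Sketch only: decode the demuxed H264 and re-encode it to baseline profile.
# Assumes membrane_h264_plugin and membrane_h264_ffmpeg_plugin are in deps;
# all child names here are hypothetical. This fragment goes into the
# pipeline's spec, between the demuxer's video output and the WebRTC endpoint.
child(:in_parser, %Membrane.H264.Parser{output_alignment: :au})
|> child(:decoder, Membrane.H264.FFmpeg.Decoder)
|> child(:encoder, %Membrane.H264.FFmpeg.Encoder{profile: :baseline})
|> child(:out_parser, %Membrane.H264.Parser{output_alignment: :nalu})
```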
I transcoded with ffmpeg so that ffprobe now shows this:
I still get
Didn't receive keyframe for variant: high in 500. Retrying.
Running ffprobe on the test fixture video.h264 file that's part of the file source endpoint test (https://github.com/jellyfish-dev/membrane_rtc_engine/tree/master/file/test/fixtures) shows that its profile is High:
So I wonder whether it's really the profile that's the issue.
I guess I must be doing something dumb with my pipeline. Trying to use Membrane.Debug.Filter to inspect the output from the demuxer, I get nothing:
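(For reference, this is roughly how I'm tapping the stream - a simplified sketch with hypothetical child names, sitting in the pipeline spec with Membrane.ChildrenSpec imported:)

```elixir
# Sketch: a pass-through tap that logs each buffer coming out of the demuxer.
# :demuxer and :parser are hypothetical child names; adjust the demuxer's
# output pad reference to whatever the plugin actually exposes.
get_child(:demuxer)
|> child(:debug, %Membrane.Debug.Filter{
  handle_buffer: fn buffer ->
    IO.inspect(byte_size(buffer.payload), label: "demuxer video out (bytes)")
  end
})
|> child(:parser, %Membrane.H264.Parser{output_alignment: :nalu})
```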
This worked fine when the output was to HLS:
Ok, another thing that could be a problem is the output alignment; if I remember correctly, WebRTC requires :nalu alignment instead of :au.
You can also see it here:
https://github.com/jellyfish-dev/membrane_rtc_engine/blob/07341b4d6c73f41c0101fcd27bb66f0c4abb84db/file/lib/file_source_endpoint.ex#L296
I'm using :nalu alignment:
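(Concretely, the parser child is configured like this - a minimal sketch, the child name is mine:)

```elixir
# Sketch: the H264 parser feeding the WebRTC endpoint, with NAL-unit alignment.
child(:parser, %Membrane.H264.Parser{output_alignment: :nalu})
```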
So I don't think it's that. I can only think that my pipeline isn't getting hooked up end to end, since the Membrane.Debug.Filter I've added on the video output of the demuxer isn't showing any data - so I guess there's nothing generating demand (at least that's what my limited understanding of Membrane suggests!).
Ok, I assumed that based on the pipeline you provided. Also, looking once again at this pipeline, I want to ask: why do you link it inside membrane_webrtc_plugin / WebRTCEndpoint instead of using FileEndpoint?
https://github.com/jellyfish-dev/membrane_rtc_engine/blob/dff42897293d88e0d70c34658473ae7ca31eee6b/file/lib/file_source_endpoint.ex#L66-L82
Here, in after_source_transformation, you could specify all the required elements that you need.
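To make that concrete, the idea is to hand the extra processing to the endpoint as a function - a sketch only: apart from after_source_transformation itself, the fields and the function's exact signature are my assumptions, so check the FileEndpoint moduledoc:

```elixir
# Sketch: extra elements between the file source and the engine go into
# after_source_transformation. The function is assumed to take and return a
# ChildrenSpec link builder - verify against the endpoint's docs.
%Membrane.RTC.Engine.Endpoint.File{
  # ...required fields (engine pid, file path, track config) omitted here...
  after_source_transformation: fn link_builder ->
    link_builder
    |> child(:parser, %Membrane.H264.Parser{output_alignment: :nalu})
  end
}
```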
And it could work better, as maybe your current approach with only WebRTCEndpoint or EndpointBin lacks some messages required by one of these elements.
Sorry, I should have been clearer - yes, my original H264.Parser was configured with :au alignment. But following your comment I changed it to :nalu and it had no impact.
The application is aimed at interfacing with a UAV camera, with the video payload delivered as an MPEG-TS stream over UDP. I've been using ex_nvr as the model - in particular the main pipeline https://github.com/evercam/ex_nvr/blob/master/apps/ex_nvr/lib/ex_nvr/pipelines/main.ex, the WebRTC bin element https://github.com/evercam/ex_nvr/blob/master/apps/ex_nvr/lib/ex_nvr/pipeline/output/web_rtc.ex and the associated stream endpoint https://github.com/evercam/ex_nvr/blob/master/apps/ex_nvr/lib/ex_nvr/pipeline/output/webrtc/stream_endpoint.ex.
It does seem quite complicated. Perhaps, as you suggest, it might be easier to instantiate a room GenServer and add a single stream endpoint.
Thanks for your help btw, very much appreciated!
Ok, so maybe we should clarify something: what protocols do you want to use? This example from evercam is pretty complex. If I understand correctly, your input is a file containing an MPEG-TS stream and you would like to send it to the browser through WebRTC. In that case, IMO the best option for you would be to use membrane_rtc_engine, where the source would be a FileEndpoint (https://github.com/jellyfish-dev/membrane_rtc_engine/tree/master/file) and the second endpoint added to the engine would be a WebRTCEndpoint (https://github.com/jellyfish-dev/membrane_rtc_engine/tree/master/webrtc).
If your input is instead an RTSP stream, I think you could simply swap the FileEndpoint for the RTSPEndpoint (https://github.com/jellyfish-dev/membrane_rtc_engine/tree/master/rtsp).
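The overall shape of that suggestion, as I understand it - a sketch only, with the endpoint structs elided because their exact fields depend on the endpoint versions:

```elixir
# Sketch: one engine, a file endpoint as the source, and a WebRTC endpoint per
# browser peer. The endpoint structs are elided - see each endpoint's docs for
# the required fields - and the :id option key may differ between versions.
alias Membrane.RTC.Engine

{:ok, engine} = Engine.start_link([id: "room"], [])

# source side: publishes the track read from the file
Engine.add_endpoint(engine, file_endpoint, id: "file-source")

# one per connected browser, driven by the signalling channel
Engine.add_endpoint(engine, webrtc_endpoint, id: peer_id)
```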
Ok, I've decided to simplify everything for my own benefit. I've cloned the videoroom example that comes with the RTC engine. I'd now like to add a file endpoint when the room is instantiated, so that when I join a room in the browser I get a video feed from the file endpoint. My possibly dumb question - where do I add the file endpoint? I've tried to add it at the end of the room GenServer init:
with create_file_endpoint defined as:
But I clearly still misunderstand the WebRTC process, because I then get an error on the client side: it can't find the endpoint when handling the tracksAdded mediaEvent, because there isn't an entry in the this.idToEndpoint map for the new endpointId.
Here I modified our simple example so that video from a file is streamed to the browser.
https://github.com/jellyfish-dev/membrane_rtc_engine/tree/al_example/examples/webrtc_videoroom
Check if this helps you with anything, and whether you can adjust it to your project.
That's great, thank you. I'd almost got there, and I think this will be a great help. The next step is to try to modify it to take a UDP stream rather than a file - I'll let you know how I get on 🙂
Hi @Radosław, I've only just managed to get some time to look at this and am still struggling to get it to work with a TS file. I've stripped it back to a simple variation on the SDL file player example; everything works if I play your example file through it.
When I try the TS file, it all works up to the point where I get the :mpeg_ts_pmt child notification. Then it stops. I think the issue is that the output pad of the demuxer is flow_control: :manual, so I'm missing something to trigger the flow once the video output is hooked up to the demuxer in the pipeline. This worked fine with the http_adaptive_stream plugin, but I guess that's because the SinkBin triggered the flow. Is my intuition here correct?
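(For what it's worth, this is the shape of what I think should happen once the PMT arrives - a hedged sketch, since the pad reference format and the PMT field names are my assumptions and need checking against the demuxer's docs:)

```elixir
# Sketch: link the demuxer's H264 stream to the rest of the pipeline when the
# :mpeg_ts_pmt notification arrives; once linked, the downstream elements drive
# demand on the demuxer's manual-flow-control output pad. Lives in a module
# that does `use Membrane.Pipeline`. The pad reference ({:stream_id, pid}) and
# the PMT field names are assumptions.
@impl true
def handle_child_notification({:mpeg_ts_pmt, pmt}, :demuxer, _ctx, state) do
  {video_pid, _info} =
    Enum.find(pmt.streams, fn {_pid, stream} -> stream.stream_type == :H264 end)

  spec =
    get_child(:demuxer)
    |> via_out(Pad.ref(:output, {:stream_id, video_pid}))
    |> child(:parser, %Membrane.H264.Parser{output_alignment: :nalu})
    |> get_child(:webrtc_endpoint)

  {[spec: spec], state}
end
```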
Hi Al
I pushed a new commit to this branch (https://github.com/jellyfish-dev/membrane_rtc_engine/tree/al_example/examples/webrtc_videoroom). Now the endpoint reads from an MPEG-TS file, but there are some glitches/some frames are dropped in the stream. I'm not sure why - check whether it works with your file, maybe it will.
There are a lot of
Not all buffers have been processed
logs, which come from Membrane.MPEG.TS.Demuxer, and I don't know why. You could ask the creators of the plugin what these logs mean. Maybe @varsill will have some idea why the frames are arriving so unevenly.
Hi @Radosław, once again thank you!
Your working version helped me figure out why my approach wasn't working - I'd cloned the demuxer repo back in December and tried to upgrade it to core 1.0, not entirely successfully it would appear! I should have checked back with the original and picked up the uplift to core 1.0.