membrane_webrtc_plugin: %Membrane.Buffer with pts: nil, dts: nil received from audio track.

Hello, and thank you for the great ecosystem of libraries! 🙏 I'm currently working on an SFU that will use membrane_webrtc_plugin to connect the streamer and the viewers. Everything works fine with the video track; however, once I add the audio one I start receiving a bunch of errors, namely:

1) An ArgumentError from membrane_realtimer_plugin, in the handle_buffer/4 function, where it essentially tries to subtract from nil:
interval = Buffer.get_dts_or_pts(buffer) - state.previous_timestamp
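For context, the nil itself is easy to see in isolation. A minimal iex sketch of my own (not the plugin's code):

iex> Membrane.Buffer.get_dts_or_pts(%Membrane.Buffer{payload: <<>>, pts: nil, dts: nil})
nil

Subtracting state.previous_timestamp from that nil is what raises.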
2) handle_buffer in membrane_webrtc_plugin's own sink.ex fails for the exact same reason when both dts and pts are nil.

So my question is: is this a bug, or am I doing something wrong with the pipeline? When I apply fixes to both problems, the pipeline seems to work fine. Here is the audio part of the spec:
[
  child(:webrtc_sink, %WebRTC.Sink{
    signaling: %Membrane.WebRTC.SignalingChannel{pid: signaling_channel},
    tracks: [:audio, :video],
    video_codec: :h264
  }),
  get_child(:source_bin_audio_tee)
  |> child(:audio_parser, %Membrane.AAC.Parser{
    out_encapsulation: :ADTS
  })
  |> child(:aac_decoder, Membrane.AAC.FDK.Decoder)
  |> child(:converter, %Membrane.FFmpeg.SWResample.Converter{
    output_stream_format: %Membrane.RawAudio{
      sample_format: @audio_sample_format,
      sample_rate: @audio_sample_rate,
      channels: 2
    }
  })
  |> child(:encoder, %Membrane.Opus.Encoder{
    application: :audio,
    input_stream_format: %Membrane.RawAudio{
      channels: 2,
      sample_format: @audio_sample_format,
      sample_rate: @audio_sample_rate
    }
  })
  |> child(:parser, %Membrane.Opus.Parser{delimitation: :keep})
  |> get_child(:audio_tee)
]

# Then eventually
spec ++ [
  get_child(:audio_tee)
  |> via_in(Pad.ref(:input, :audio_track), options: [kind: :audio])
  |> get_child(:webrtc_sink)
]
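A side note for anyone debugging something similar: a pass-through probe spliced between children shows which element first drops the timestamps. A minimal sketch, assuming membrane_core's Membrane.Debug.Filter (the :ts_probe name is just mine):

get_child(:encoder)
|> child(:ts_probe, %Membrane.Debug.Filter{
  handle_buffer: &IO.inspect({&1.pts, &1.dts}, label: "encoder out")
})
|> child(:parser, %Membrane.Opus.Parser{delimitation: :keep})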
I'd be happy to submit my local fixes if that's relevant. Otherwise, I hope to hear from you on the matter!
odingrail (OP) · 6mo ago
The proposed solution to the problem (membrane_webrtc_plugin):
odingrail (OP) · 6mo ago
[image attachment]
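Roughly, the shape of the change is a fallback for when both timestamps are missing. This is an illustration only, not the actual diff (which is in the attachment); previous_pts stands for a hypothetical value tracked in the sink's state:

pts = buffer.pts || buffer.dts || previous_pts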
odingrail (OP) · 6mo ago
membrane_realtimer_plugin:
[image attachment]
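Here the shape is making the quoted subtraction tolerate nil. Again an illustration only, not the actual diff:

interval =
  case Buffer.get_dts_or_pts(buffer) do
    nil -> 0
    timestamp -> timestamp - state.previous_timestamp
  end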
Feliks · 6mo ago
Change
|> child(:parser, %Membrane.Opus.Parser{delimitation: :keep})
to
|> child(:parser, %Membrane.Opus.Parser{delimitation: :keep, generate_best_effort_timestamps?: true})
It should solve your problem.
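With that flag set, the parser generates best-effort pts on its own, derived from each Opus packet's duration, instead of forwarding the missing timestamps downstream.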
odingrail (OP) · 6mo ago
It does seem to resolve the issue. Thank you a lot, @Feliks!