Pipeline with RTMP source and AAC decoder
Hi everyone 👋
(this message may be redundant with the one I posted on Slack, sorry about that)
I'm currently building a POC for live transcription using the Whisper model.
I've already checked Lawik's example, which uses either the mic or a file as the source.
In my case, I have an RTMP source in FLV format with AAC-encoded audio.
I've built a pipeline which seems to be "ok":
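Roughly, the pipeline looks like this (a reconstructed sketch, not the exact original code; it assumes the membrane_core ~0.11 ChildrenSpec API, and the SourceBin :socket option, the AAC FDK decoder, and the SWResample converter modules are assumptions for illustration):

```elixir
# Reconstructed sketch of the pipeline (module and option names are
# illustrative assumptions, not the exact code from the thread).
defmodule TranscriptionPipeline do
  use Membrane.Pipeline

  @impl true
  def handle_init(_ctx, socket) do
    spec =
      child(:source, %Membrane.RTMP.SourceBin{socket: socket})
      # SourceBin exposes :audio and :video output pads
      |> via_out(:audio)
      |> child(:parser, Membrane.AAC.Parser)
      |> child(:decoder, Membrane.AAC.FDK.Decoder)
      |> child(:converter, %Membrane.FFmpeg.SWResample.Converter{
        # Whisper expects 16 kHz mono float PCM
        output_stream_format: %Membrane.RawAudio{
          sample_format: :f32le,
          sample_rate: 16_000,
          channels: 1
        }
      })
      |> child(:sink, Membrane.Fake.Sink.Buffers)

    {[spec: spec], %{}}
  end
end
```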
When I start it, it initialises without issues, but it doesn't do anything (from what I can see).
(I do see debug logs showing the elements received the play request.)
I'd be happy to get advice on this one :D, thanks!
9 Replies
You are using the fake sink at the end of your pipeline. This sink just makes demand on its input (the converter) and drops the buffers. You need to use a sink which actually makes some output -- consider the file sink.
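For example, the last link of the pipeline could be swapped like this (a sketch; the output path is illustrative):

```elixir
# Instead of dropping buffers in a fake sink, write them out so the
# pipeline's progress is observable (path is illustrative):
|> child(:sink, %Membrane.File.Sink{location: "/tmp/audio.raw"})
```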
Got the same result with this:
It seems as if the buffers weren't flowing through the pipeline at all. Are there any logs indicating that the Source Bin is processing the packets?
I think I found my issue: I got it working by not using Membrane.RTMP.SourceBin, but Membrane.RTMP.Source instead, doing it this way:
Somehow the extra part:
|> bin_output(:audio),
that is part of the SourceBin was causing issues. Not sure exactly why, still a noob with Membrane 🙂
It's because of the implicit naming of the pads. Typically, elements take input on a pad named :input and produce output on a pad named :output. When this is the case, you don't need to specify a via_{in,out} between elements when building the pipeline.
We can see this is the case for the RTMP source and the FLV demuxer:
https://github.com/membraneframework/membrane_rtmp_plugin/blob/v0.11.2/lib/membrane_rtmp_plugin/rtmp/source/source.ex#L20
https://github.com/membraneframework/membrane_flv_plugin/blob/master/lib/membrane_flv_plugin/demuxer.ex#L40
However, notice that the FLV demuxer defines two output pads: :audio and :video. This is to be expected, since demuxing is the process of splitting the stream into its audio and video components.
This is why you must have the via_out between the demuxer and the AAC parser -- you need to specify that the parser will get its input from the :audio pad of the previous element.
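The pad-naming rule above can be sketched like this (illustrative child names; depending on the plugin version the demuxer's output pads may be dynamic, i.e. Pad.ref(:audio, 0) rather than plain :audio):

```elixir
# Explicit via_out between the demuxer and the parser, since the demuxer
# has two output pads and the link must say which one feeds the parser:
child(:demuxer, Membrane.FLV.Demuxer)
|> via_out(:audio)
|> child(:parser, Membrane.AAC.Parser)
```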
Apparently there is a small bug in Membrane Core. In your case with SourceBin, you haven't linked the static video output pad of the SourceBin, and Core should raise an informative exception saying that all static pads must be linked -- which does not happen. Instead, the buffer flow simply doesn't start. The following children specification should work, though:
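Something along these lines (a sketch of such a spec, not the exact snippet from the thread; it assumes the membrane_core ~0.11 ChildrenSpec API, and the video branch is terminated with a fake sink just to satisfy the static-pad requirement):

```elixir
# Sketch of a children spec linking BOTH static pads of the SourceBin;
# the video branch ends in a fake sink so that every static pad is linked.
spec = [
  child(:source, %Membrane.RTMP.SourceBin{socket: socket})
  |> via_out(:audio)
  |> child(:parser, Membrane.AAC.Parser)
  |> child(:decoder, Membrane.AAC.FDK.Decoder)
  |> child(:sink, %Membrane.File.Sink{location: "/tmp/audio.raw"}),
  get_child(:source)
  |> via_out(:video)
  |> child(:video_sink, Membrane.Fake.Sink.Buffers)
]
```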
Ah I see, interesting, that makes more sense now, thanks!
You are welcome! In case you are still experimenting with that, please just let us know if my proposed solution works, so we can mark the thread as "closed" 🙂
Hey, I tried it and it does work 🎉