varsill
Software Mansion
•Created by rohan on 4/9/2025 in #membrane-help
Creating a Phoenix channel source
Hi! It depends on whether you have any control over the pace at which the audio stream data is coming through the channel.
In case you don't, I would start with the simplest scenario, meaning that I would build a Source with the :output pad in :push flow control mode. In the handle_info/3 (https://hexdocs.pm/membrane_core/1.2.2/Membrane.Element.Base.html#c:handle_info/3) implementation I would return a buffer action with the received data, and that's it.
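A minimal sketch of such a push-mode source (the module name, the assumed stream format and the message shape are illustrative assumptions, not code from this thread):

```elixir
defmodule ChannelSource do
  use Membrane.Source

  def_output_pad :output, accepted_format: _any, flow_control: :push

  @impl true
  def handle_playing(_ctx, state) do
    # A stream format has to be sent before the first buffer; RemoteStream is
    # just an assumption here - use whatever matches your actual data.
    {[stream_format: {:output, %Membrane.RemoteStream{}}], state}
  end

  @impl true
  def handle_info({:audio_chunk, data}, _ctx, state) do
    # Wrap the payload received from the Phoenix channel in a buffer and push it downstream.
    {[buffer: {:output, %Membrane.Buffer{payload: data}}], state}
  end
end
```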
Of course, if the audio data is sent faster than the rest of the pipeline can cope with, it might cause trouble, which could then be mitigated either with some of Membrane Core's buffering capabilities or by implementing the source element with a :manual pad and custom buffering inside the element's logic.
If you can send some "demand" messages through your socket and have a guarantee that an audio data chunk won't be sent until it's explicitly requested, then implementing a :manual pad is the way to go for you. A rough sketch of the demand handling is shown below.
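For the :manual variant, the demand callback could look roughly like this (the {:demand, size} message and the channel_pid field in the state are assumptions about how you would talk to the channel process):

```elixir
@impl true
def handle_demand(:output, size, :buffers, _ctx, state) do
  # Ask the channel process for `size` more chunks; the buffers are then
  # returned from handle_info once the data actually arrives.
  send(state.channel_pid, {:demand, size})
  {[], state}
end
```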
Best wishes! 😉
Software Mansion
•Created by nickdichev_fw on 5/9/2023 in #membrane-help
Intermittent Failures with RTMP Sink
Hi! Do you have both the :audio and the :video track connected to your RTMP sink?
Software Mansion
•Created by odingrail on 3/28/2025 in #membrane-help
High WebRTC CPU consumption
Hello, great that you got it figured out!
Am I right that it was somewhere in your custom source element where the TCP socket was operating in active mode?
Software Mansion
•Created by odingrail on 3/28/2025 in #membrane-help
High WebRTC CPU consumption
Hello! Oh, I'd forgotten about that unsafely_name_processes_for_observer config option. It has one main drawback: it works only for a single node, but it's great for debugging. Also, thanks for spotting the bug in the documentation!
Concerning the results from top, it's definitely odd that the main thread is using that much CPU. At first sight I would say that it might have something to do with BEAM spending too much time on thread synchronization (which used to be the case when the desired number of schedulers was improperly resolved in environments with cgroups limits assigned), but since you are using a droplet I don't think that's the issue.
Could you try to gather microstate accounting statistics for, let's say, a minute of the system running? (https://www.erlang.org/doc/apps/erts/erlang.html#statistics_microstate_accounting)
We should then be able to see more precisely what BEAM is busy with.
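A quick sketch of collecting those statistics from an attached IEx shell using the :msacc helper from Erlang's runtime_tools (the one-minute duration is just an example):

```elixir
# Resets the counters, enables microstate accounting for 60 s, then stops it.
:msacc.start(60_000)

# Prints a per-thread breakdown of where the time was spent
# (emulator, GC, NIFs, port I/O, sleep, ...).
:msacc.print()
```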
If it doesn't tell us much, I'm afraid we might need to use perf to see the particular calls that use most of the CPU.
Concerning the erts_sched_* CPU usage, you can try experimenting with disabling busy waiting for the schedulers with the +sbwt none option (https://www.erlang.org/doc/apps/erts/erl_cmd.html#+sbt) to see (more or less) how much CPU is indeed spent on code execution.
Software Mansion
•Created by odingrail on 3/28/2025 in #membrane-help
High WebRTC CPU consumption
Concerning telemetry events, I think it might have something to do with a recent bug report. It turns out that, by mistake, the second element of the "suggested format" you mentioned is the component's module instead of the "component type" (by "component type" I mean :element, :bin or :pipeline). We fixed it here: https://github.com/membraneframework/membrane_core/pull/958 and we will soon release v1.2.3 with that change. For now, could you check if telemetry is working for you with membrane_core pinned to the 957-bugfix-component-type-v-component-module branch?
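In case it helps, pinning membrane_core to that branch in mix.exs could look roughly like this (override: true is only needed if other dependencies also require membrane_core):

```elixir
defp deps do
  [
    {:membrane_core,
     github: "membraneframework/membrane_core",
     branch: "957-bugfix-component-type-v-component-module",
     override: true}
  ]
end
```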
Software Mansion
•Created by odingrail on 3/28/2025 in #membrane-help
High WebRTC CPU consumption
Hello! Something is definitely suspicious with that high CPU consumption for 10 streams and it requires some inspection. I assume that you don't do any transcoding of the video track, do you?
Concerning the CPU usage monitoring, there are 2 places I would inspect:
1. Use :observer, find a process with a suspiciously high number of reductions, double-click on that process, visit the Dictionary tab and read the membrane_path entry. (Side note: we plan to set the label (https://hexdocs.pm/elixir/Process.html#set_label/1) of element processes to their membrane_path, but it's not available yet, so for now I'm afraid you need to read membrane_path manually.) See the sketch after this list for a programmatic alternative.
2. See top -H -p <pid> (where <pid> is the OS process of BEAM) and check whether some OS threads are using an extraordinarily high amount of CPU. There are two options here: either these are some "worker" threads spawned in NIFs (then you should see high CPU usage for some custom threads) or the NIFs themselves are using a lot of CPU (then you should see that the erts_dcpus_* or erts_sched_* threads are using a lot of CPU).
Software Mansion
•Created by Sameer on 1/5/2025 in #membrane-help
Logs are overrun with `Sending stream format through pad` messages. Am I doing something wrong ?
Great to hear that it no longer produces those stream_format logs!
The reason why you observed this strange behaviour was that, with this additional "bypass" link from filter3 to filter1, the stream format was duplicated in filter3. The default implementation of handle_stream_format returns the :forward action, which sends the stream format on ALL available output pads. In your case, it was sending it both to the subsequent filter (as expected) and to filter1, through your "bypass" link. Once received by filter1, it was once again sent to filter2 and filter3, therefore starting to "loop" 😉
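For reference, overriding handle_stream_format lets you forward the format only to selected pads instead of all of them. A minimal sketch (the pad names here are assumptions):

```elixir
@impl true
def handle_stream_format(:input, stream_format, _ctx, state) do
  # Send the stream format only to the "main" output pad, not to the bypass pad,
  # so it doesn't start circulating around the loop.
  {[stream_format: {:main_output, stream_format}], state}
end
```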
Software Mansion
•Created by Sameer on 1/5/2025 in #membrane-help
Logs are overrun with `Sending stream format through pad` messages. Am I doing something wrong ?
Hello!
1) Sending the stream format that frequently is almost certainly not expected behaviour. I suspect that the first element in the pipeline behind the source somehow "duplicates" the stream format (by returning it as an action in one of its callbacks that frequently gets called). Later on, the stream formats are just passed through the other elements, and since there are already a lot of them, you see a lot of logs from the other elements.
Could you tell me what the element right behind your source element is?
Generally speaking, the stream format is expected to be sent once, before the first buffer (and possibly once in a while later on, if the format changes, for instance when the resolution of your video changes).
2) In the first place we should figure out why the stream formats are sent that frequently, but there are a couple of things you can do if you want to preserve your logs:
* you can play around with the logger limits: https://www.erlang.org/doc/apps/kernel/logger_chapter.html#message-queue-length, set them high enough for the logs not to be washed out and then redirect the output to some file (a config sketch follows below)
* the simplest solution would be to remove the problematic log (https://github.com/membraneframework/membrane_core/blob/ac6b793e2ea46316af569735c63dade1d83f6ddf/lib/membrane/core/element/action_handler.ex#L366 ) in the membrane_core dependency and recompile it with MIX_ENV=<your env> mix deps.compile membrane_core 😉
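Assuming a recent Elixir (1.15+), where Logger delegates to Erlang's :logger default handler, raising the overload-protection limits could look roughly like this (the values are arbitrary assumptions; see the linked logger chapter for what each threshold does):

```elixir
# config/config.exs
import Config

config :logger, :default_handler,
  config: %{
    sync_mode_qlen: 10_000,
    drop_mode_qlen: 50_000,
    flush_qlen: 100_000
  }
```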
Software Mansion
•Created by Damirados on 11/14/2024 in #membrane-help
SDL plugin fails to initialize
Concerning more verbose logging, I don't think it is currently possible without modifying the C code in the native part of the sdl_plugin.
Software Mansion
•Created by Damirados on 11/14/2024 in #membrane-help
SDL plugin fails to initialize
Sure, you can turn off the precompiled dependency with the following configuration:
config :bundlex, disable_precompiled_os_deps: [:membrane_sdl_plugin]
(bundlex will then look for SDL2 with pkg-config).
Software Mansion
•Created by Damirados on 11/14/2024 in #membrane-help
SDL plugin fails to initialize
Hello @Damirados! It seems that SDL_init() in the native part of the membrane_sdl_plugin code fails. It indeed looks like something related to Wayland - could you try setting the SDL_VIDEODRIVER=wayland environment variable and rerunning your script?
Software Mansion
•Created by oleg.okunevych on 9/25/2024 in #membrane-help
stream RTMP to VLC Network stream
Hello! Indeed, currently the RTMP server only allows clients to publish their streams (and the RTMP server's user can get data from the published stream via the handle_data callback in the ClientHandler behaviour). To make the RTMP server feature-complete, it would need to handle the play command as well - with that feature, you could use it in your scenario. It would require some work; in particular we would need to parse play commands and some other messages etc.
Currently, our development plan is more focused on getting rid of the FFmpeg dependency and rewriting RTMP.Sink (which provides the sender client). Once that is done, adding handling of the play command in the server should be relatively easy.
Software Mansion
•Created by andr-ec on 7/9/2024 in #membrane-help
how to create mp4 file chunks with File.Sink.Multi and ISOM
Hello! Could you share some code with us? What's especially interesting is how you have modified the ISOM muxer.
What I suspect might be happening is that the metadata (the :moov atom of an .mp4 file) is present only in the first file.
As for "Is there a plugin I can use for this?" - it depends on what you are trying to achieve. There is a plugin: https://github.com/membraneframework/membrane_http_adaptive_stream_plugin that is capable of creating an HLS playlist with fragmented MP4 chunks, which seems to be a scenario similar to yours. The main difference is that it generates fMP4 chunks, and it also generates the playlist's manifest along the way.
Software Mansion
•Created by andr-ec on 7/3/2024 in #membrane-help
Error removing children group and starting new spec in a Bin
Sure, the Process.send_after was just a suggestion to work around the problem (or rather to check whether the problem is indeed caused by the child not yet being removed). For a proper solution, we need something else 😉
If I get it right, you would like to always use a given group of the external bin's input pads, and at the same time dynamically change the links of the internal bin's input pads - I don't think that :on_request pads are designed for such a purpose. When you internally remove the dynamic link of a bin, the external link is also removed and cannot be "reused".
What I can suggest is to:
1. Create a helper element with input pads that are NOT expected to be unlinked (they can either be static pads if you know in advance how many of them are needed, or dynamic pads otherwise - the crucial thing is that you will never remove links ending in these pads) and output pads which you expect might be removed. The element should simply forward the whole incoming stream on each input pad to the corresponding output pad or pads - a rough sketch of such an element follows below.
2. Similarly to the helper element's inputs, make the bin's pads NON-unlinkable.
3. Connect the bin's input pads to the helper element's inputs.
4. Create a new link to the helper element's output pads each time you "switch" the muxers.
It might sound complicated, but if you show me how you create your bin (i.e. the spec action where you spawn the bin), I could provide you with some code draft.
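A rough sketch of such a forwarding helper element (the module name, the single static input pad and the :on_request output pads are assumptions for illustration, not the exact solution for your bin):

```elixir
defmodule ForwardingElement do
  use Membrane.Filter

  # The input link is never removed, so a static pad is enough here.
  def_input_pad :input, accepted_format: _any
  # The output links are the ones created/removed when "switching" the muxers.
  def_output_pad :output, accepted_format: _any, availability: :on_request

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    # Forward every incoming buffer to all currently linked output pads.
    {[forward: buffer], state}
  end
end
```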
Software Mansion
•Created by andr-ec on 7/3/2024 in #membrane-help
Error removing children group and starting new spec in a Bin
Hello! Could you show me the spec action you are returning after you remove the children group?
From what I can see now, it seems to be caused by the fact that child removal is unfortunately not synchronous - returning the action doesn't mean that the children are already removed. As a workaround, you could try to postpone the children's recreation (for instance, use Process.send_after and add a handle_info clause where you return the spec). When it comes to a proper solution, I believe we would need to allow synchronizing on the moment when the children are removed in membrane_core.
Software Mansion
•Created by tintin on 6/25/2024 in #membrane-help
Modifying pipeline after it has been started
Concerning Tee, each output will get a copy of the same buffer; a buffer's payload is just subject to the regular Erlang binary handling mechanism (large binaries are reference-counted and shared between processes rather than copied).
Software Mansion
•Created by tintin on 6/25/2024 in #membrane-help
Modifying pipeline after it has been started
Hello! It's completely fine to spawn children on request, for instance in response to handle_info. We follow a similar scenario in many plugins: for instance, you can spawn the MP4 demuxer, wait until it sends the new_tracks notification to the pipeline and then add a new spec that will handle the tracks resolved from the MP4 container. It shouldn't have any negative impact on the performance.
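A rough sketch of reacting to that notification in the pipeline (the notification shape, the pad references and the file sinks are assumptions for illustration):

```elixir
@impl true
def handle_child_notification({:new_tracks, tracks}, :demuxer, _ctx, state) do
  # For every track reported by the demuxer, link its output pad to a new child.
  spec =
    Enum.map(tracks, fn {track_id, _format} ->
      get_child(:demuxer)
      |> via_out(Membrane.Pad.ref(:output, track_id))
      |> child({:sink, track_id}, %Membrane.File.Sink{location: "track_#{track_id}.raw"})
    end)

  {[spec: spec], state}
end
```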
Software Mansion
•Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Hello, well, it depends on what your service expects to receive. However, in most cases, raw (uncompressed) audio is represented in the following PCM format:
1. First, you specify some fixed sampling rate (for instance 44100 Hz), and then your binary audio representation contains 44100 samples per second.
2. Each sample then represents the value of the audio measured at that given point in time, written in some agreed format (for instance s16le, meaning a signed integer written on 16 bits with little-endian byte order), for a given number of channels (the sample values for the individual channels are interleaved, one after the other) - see the sketch below.
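A tiny sketch of what that layout looks like in practice for s16le stereo at 44.1 kHz (purely illustrative - one second of silence):

```elixir
sample_rate = 44_100
channels = 2
bytes_per_sample = 2

# One interleaved frame: a 16-bit little-endian signed sample per channel.
silence_frame = <<0::16-little-signed, 0::16-little-signed>>

one_second = :binary.copy(silence_frame, sample_rate)

byte_size(one_second) == sample_rate * channels * bytes_per_sample
# => true (176_400 bytes for one second of audio)
```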
Software Mansion
•Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Well, it should work fine with your service then 😉 However, I believe it's not that common to use channels for such a purpose - normally the channels are used to describe, for instance, the sound in the background. That's why I think it might be difficult to "reuse" the audio stream that you send to the service without "deinterleaving" it first - afterwards you could work on multiple audio streams, one per user.
Software Mansion
•Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Ok, I see the picture now 😉 Indeed, I believe that setting a fixed maximum number of channels and filling the ones that are not yet "occupied" with silence seems to be a reasonable solution - and what further processing needs to be performed on that audio data? Do you need to encode it before sending it to the transcription service?