varsill
Software Mansion
•Created by Damirados on 11/14/2024 in #membrane-help
SDL plugin fails to initialize
Concerning more verbose logging: I don't think it is currently possible without modifying the C code in the native part of the `membrane_sdl_plugin`.
6 replies
•Created by Damirados on 11/14/2024 in #membrane-help
SDL plugin fails to initialize
Sure, you can turn off the precompiled dependency with the use of the following configuration:
```elixir
config :bundlex, disable_precompiled_os_deps: [:membrane_sdl_plugin]
```
(Bundlex will then look for SDL2 with pkg-config.)
6 replies
•Created by Damirados on 11/14/2024 in #membrane-help
SDL plugin fails to initialize
Hello @Damirados! It seems that `SDL_init()` in the native part of the `membrane_sdl_plugin` code fails. It indeed looks like something related to Wayland - could you try setting the `SDL_VIDEODRIVER=wayland` environment variable and rerunning your script?
6 replies
•Created by oleg.okunevych on 9/25/2024 in #membrane-help
stream RTMP to VLC Network stream
Hello! Indeed, currently the RTMP server only allows clients to publish their streams (and the RTMP server's user can get data from the published stream via the `handle_data` callback in the `ClientHandler` behaviour). To make the RTMP server feature-complete, it would need to handle the `play` command as well - with that feature, you could use it in your scenario. It would require some work; in particular, we would need to parse `play` commands and some other messages.
Currently, our development plan is more focused on getting rid of the FFmpeg dependency and rewriting `RTMP.Sink` (which provides the sender client). Once that is done, adding handling of the `play` command in the server should be relatively easy.
9 replies
•Created by andr-ec on 7/9/2024 in #membrane-help
how to create mp4 file chunks with File.Sink.Multi and ISOM
Hello! Could you share some code with us? What's especially interesting is how you have modified the ISOM muxer. What I suspect might be happening is that the metadata (the `:moov` atom of an .mp4 file) is present only in the first file.
Is there a plugin I can use for this?
It depends on what you're trying to achieve. There is a plugin: https://github.com/membraneframework/membrane_http_adaptive_stream_plugin that is capable of creating an HLS playlist with fragmented MP4 chunks, which seems to be a scenario similar to yours. The main difference is that it generates fMP4 chunks, and it also generates the playlist's manifest along the way.
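As a rough illustration of how that plugin might be used (a sketch only: the source element, child names, pad options, and output directory are my assumptions - check the hexdocs of `Membrane.HTTPAdaptiveStream.SinkBin` for the exact pad options in your version):

```elixir
# Sketch: feeding an H264 stream into the HLS sink bin.
defmodule HLSPipeline do
  use Membrane.Pipeline

  @impl true
  def handle_init(_ctx, _opts) do
    spec =
      # SomeH264Source is a placeholder for whatever produces your video.
      child(:source, %SomeH264Source{})
      |> via_in(Pad.ref(:input, :video),
        options: [encoding: :H264, segment_duration: Membrane.Time.seconds(6)]
      )
      |> child(:hls_sink, %Membrane.HTTPAdaptiveStream.SinkBin{
        manifest_module: Membrane.HTTPAdaptiveStream.HLS,
        storage: %Membrane.HTTPAdaptiveStream.Storages.FileStorage{directory: "output"}
      })

    {[spec: spec], %{}}
  end
end
```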
2 replies
•Created by andr-ec on 7/3/2024 in #membrane-help
Error removing children group and starting new spec in a Bin
Sure, the `Process.send_after` was just a suggestion to work around the problem (or rather to check whether the problem is indeed caused by the child not yet being removed). For a proper solution, we need something else 😉
If I get it right, you would like to always use a given group of the external bin's input pads, and at the same time dynamically change the internal bin's input pad links - I don't think that `:on_request` pads are designed for such a purpose. When you internally remove the dynamic link of a bin, the external link is also removed and cannot be "reused".
What I can suggest is to:
1. Create a helper element with input pads that are NOT expected to be unlinked (they can either be static pads if you know in advance how many of them are needed, or dynamic pads otherwise - the crucial thing is that you never remove links ending in these pads) and output pads which you expect might be removed. The element should simply forward all the incoming stream on each input pad to the corresponding output pad or pads.
2. Similarly to the helper element's inputs, make the bin's pads NON-unlinkable.
3. Connect the bin's input pads to the helper element's inputs.
4. Create a new link to the helper element's output pads each time you "switch" the muxers.
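The helper element from step 1 could be sketched roughly like this (the module name is mine, and it assumes a recent membrane_core where filters implement `handle_buffer/4`; a real element would likely also need per-pad stream-format and end-of-stream handling):

```elixir
defmodule PadForwarder do
  use Membrane.Filter

  # Inputs are dynamic but never unlinked; outputs may be relinked freely.
  def_input_pad :input, accepted_format: _any, availability: :on_request
  def_output_pad :output, accepted_format: _any, availability: :on_request

  @impl true
  def handle_buffer(Pad.ref(:input, id), buffer, _ctx, state) do
    # Forward each buffer to the output pad sharing the same id.
    {[buffer: {Pad.ref(:output, id), buffer}], state}
  end
end
```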
It might sound complicated, but if you showed me how you create your bin (i.e. the `spec` action where you spawn the bin), I could provide you with a code draft.
4 replies
•Created by andr-ec on 7/3/2024 in #membrane-help
Error removing children group and starting new spec in a Bin
Hello! Could you show me the `spec` action you are returning after you remove the children group?
From what I can see now, it seems to be caused by the fact that child removal is unfortunately not synchronous - returning the action doesn't mean that the children are already removed. As a workaround, you could try to postpone the children's recreation (for instance, use `Process.send_after` and add a `handle_info` clause where you would return a `spec` action). As for a proper solution, I believe we would need to allow synchronizing on the moment when the children are removed in `membrane_core`.
4 replies
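The `Process.send_after` workaround described above could be sketched as follows (the `:recreate` message, the 100 ms delay, the group name, and `build_new_spec/1` are placeholders of mine, not from the thread):

```elixir
# Sketch: callbacks inside your pipeline/bin module.
@impl true
def handle_info({:rebuild, group}, _ctx, state) do
  # Ask for removal now, and retry spawning the new spec a moment later.
  Process.send_after(self(), :recreate, 100)
  {[remove_children: group], state}
end

@impl true
def handle_info(:recreate, _ctx, state) do
  # By the time this fires, the old children should already be gone.
  {[spec: build_new_spec(state)], state}
end
```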
•Created by tintin on 6/25/2024 in #membrane-help
Modifying pipeline after it has been started
Concerning `Tee`: each output pad will get a copy of the same buffer, and a buffer's payload is just subject to the regular Erlang binary-handling mechanism (large binaries are reference-counted, so they are shared rather than physically copied).
4 replies
•Created by tintin on 6/25/2024 in #membrane-help
Modifying pipeline after it has been started
Hello! It's completely fine to spawn children on request, for instance in response to `handle_info`. We follow a similar scenario in many plugins: for instance, you can spawn the MP4 demuxer, wait until it sends the `new_tracks` notification to the pipeline, and then add a new spec that will handle the tracks resolved from the MP4 container. It shouldn't have any negative impact on performance.
4 replies
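The MP4 demuxer scenario mentioned above could be sketched like this (the `:demuxer` child name and the per-track file sink are my assumptions; check the membrane_mp4_plugin docs for the exact notification shape in your version):

```elixir
# Sketch: spawn per-track children only once the demuxer reports its tracks.
@impl true
def handle_child_notification({:new_tracks, tracks}, :demuxer, _ctx, state) do
  spec =
    for {track_id, _format} <- tracks do
      get_child(:demuxer)
      |> via_out(Pad.ref(:output, track_id))
      |> child({:sink, track_id}, %Membrane.File.Sink{location: "track_#{track_id}.raw"})
    end

  {[spec: spec], state}
end
```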
•Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Hello, well, it depends on what your service expects to receive. However, in most cases raw (uncompressed) audio is represented in the following PCM format:
1. First, you specify some fixed sampling rate (for instance 44100 Hz); then, in your binary audio representation, there are 44100 samples per second.
2. Each sample represents the value of the audio measured at that given point in time, written in some agreed format (for instance `s16le`, meaning a signed integer written on 16 bits with little-endian byte order), for a given number of channels (the audio's value for each channel in a particular sample is put one after the other, i.e. the channels are interleaved).
53 replies
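To make the layout described above concrete, here is a small sketch of packing two-channel `s16le` PCM in Elixir (the sample values are arbitrary):

```elixir
# Interleaved stereo s16le: L0 R0 L1 R1 ...
samples = [{1000, -1000}, {2000, -2000}]

pcm =
  for {left, right} <- samples, into: <<>> do
    <<left::16-little-signed, right::16-little-signed>>
  end

# Each sample frame takes 2 channels * 2 bytes = 4 bytes, so 2 frames = 8 bytes.
8 = byte_size(pcm)
```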
•Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Well, it should work fine with your service then 😉 However, I believe it's not that common to use channels for such a purpose - normally the channels are used to describe, for instance, the sound in the background. That's why I think it might be difficult to "reuse" the audio stream that you send to the service without "deinterleaving" it first - afterwards you could work on multiple audio streams, one per user.
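A minimal sketch of such "deinterleaving" for two-channel `s16le` audio (a helper of my own, not an existing Membrane element):

```elixir
defmodule Deinterleave do
  # Splits interleaved stereo s16le PCM into two per-channel sample lists.
  def stereo_s16le(pcm) do
    pairs =
      for <<left::16-little-signed, right::16-little-signed <- pcm>>, do: {left, right}

    {Enum.map(pairs, &elem(&1, 0)), Enum.map(pairs, &elem(&1, 1))}
  end
end
```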
53 replies
•Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Ok, I see the picture now 😉 Indeed, I believe that setting a fixed maximum number of channels and filling the ones that are not yet "occupied" with silence seems to be a reasonable solution - and what further processing needs to be performed on that audio data? Do you need to encode it before sending it to the transcription service?
53 replies
•Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
BTW - what would you like to achieve by putting each user's audio in a separate channel? I am slightly afraid that even if we solved the problem of the number of channels changing in time in the `Membrane.AudioInterleaver`, we could encounter a similar problem while trying to encode the audio.
53 replies
•Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Hi @Jdyn! As far as I know, the `Membrane.AudioInterleaver` element available in the `membrane_audio_mix_plugin` (https://github.com/membraneframework/membrane_audio_mix_plugin/) should be capable of doing so - the only problem is that it expects a fixed number of output channels to be provided at startup of the element (if I get it correctly, in your scenario the number of channels might change in time, with users joining and leaving the room).
53 replies
•Created by spscream on 4/14/2024 in #membrane-help
terminate part of pipeline children
Hello! If I get it correctly, in the pipeline you have multiple `vr_*` elements, one per participant - is that right? If so, it seems you need the `remove_children` action (https://hexdocs.pm/membrane_core/Membrane.Pipeline.Action.html#t:remove_children/0). Depending on the structure of the bins (and whether or not they use dynamic pads), it might require some further actions. Could you share with us a sketch of your pipeline and the definitions of the bins' pads?
2 replies
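A hedged sketch of using that action (the trigger message and the `{:vr_source, id}`/`{:vr_sink, id}` child names are assumptions about your naming scheme, not from the thread):

```elixir
# Sketch: inside the pipeline module, tearing down one participant's branch.
@impl true
def handle_info({:participant_left, id}, _ctx, state) do
  {[remove_children: [{:vr_source, id}, {:vr_sink, id}]], state}
end
```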
•Created by noozo on 4/12/2024 in #membrane-help
ex_dtls NIF crash when starting server
Hello! Sorry for the late response - indeed, I believe the appropriate bugfix was introduced here: https://github.com/membraneframework/bundlex/releases/tag/v1.4.3 so starting from Bundlex v1.4.3 it shouldn't be a problem anymore.
20 replies
•Created by granite9069 on 3/16/2024 in #membrane-help
Split audio file into 20mb chunks
Hello! If I get it correctly, you want to chunk a given .mp3 file into several .mp3 files of a desired size, and then provide them as input to an OpenAI service with an API similar to this one: https://platform.openai.com/docs/api-reference/audio/createTranscription. If so, you would need to first read the .mp3 via some kind of HTTP client, then parse the input bytestream to split it into MP3 frames, then accumulate the frames into larger chunks of ~20 MB, and finally send HTTP requests to the service. You could do it with Membrane, but you would need to write some custom elements. Then you could create a pipeline of the following form:
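(The pipeline diagram from the original message did not survive the export; based on the element descriptions below, it would look roughly like this - only `Membrane.Hackney.Source` exists today, the other three elements would be custom, and the URL is a placeholder.)

```elixir
# Sketch of the described pipeline shape.
spec =
  child(:source, %Membrane.Hackney.Source{location: "https://example.com/audio.mp3"})
  |> child(:parser, MP3.Parser)
  |> child(:aggregator, Aggregator)
  |> child(:sink, HTTP.Sink)
```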
where:
* `Membrane.Hackney.Source` is already available in the `:membrane_hackney_plugin` package,
* the `MP3.Parser` would split the bytestream into MP3 frames based on the MP3 header (it shouldn't be difficult, and we could surely help you write that element; for some information about MP3, see: https://www.codeproject.com/Articles/8295/MPEG-Audio-Frame-Header),
* the `Aggregator` would accumulate MP3 frames into buffers of ~20 MB,
* the `HTTP.Sink` would prepare HTTP requests compliant with the OpenAI API and send them (you could use https://github.com/benoitc/hackney for that purpose).
2 replies
•Created by Wojciech Orzechowski on 1/17/2024 in #membrane-help
RTSP authentication problem?
Hello! Indeed, we have already seen that problem 😉 I can suggest a solution similar to the one I proposed when we first encountered it - we could add an option to the `ExSDP` parser that would allow running the parser in a special mode. In that mode, the parser wouldn't raise when it's unable to parse an SDP field - instead, it would continue parsing further fields, even if the given field is marked as "required" by the RFC. That is similar to the behaviour we have observed in FFmpeg, and it should be applicable as long as you don't need information from the origin (`o=`) field of your SDP (as far as I know, the RTSP plugin does not make use of that information). Alternatively, we could add a special case for parsing the `o=` field with an additional "1", as the problem with TP-Link cameras seems to be quite common and I wouldn't expect it to stop appearing.
13 replies