Software Mansion


Error removing children group and starting new spec in a Bin

I’m trying to remove a children group and start a new spec from a handle_child_notification callback in a Bin. All the children are defined with refs, like {:muxer, make_ref()}, and they are created in a group. The problem is that if, in that handle_child_notification, I remove the group and add the new spec at the same time, it fails with an error:...

How do I invoke Pipeline.handle_info from a Phoenix.WebChannel?

I'm trying to pass audio from a Phoenix Channel to a Pipeline. Also, are there any notify_child usage examples? I don't understand how to use Membrane.Pipeline.notify_child. Do I need to define it first? If so, what should the definition look like? Update: I was able to pass a message to the pipeline by making the following changes to my Channel handle_in, like so:...
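A minimal sketch of one way this wiring can look, assuming the channel stores the pipeline's pid in `socket.assigns.pipeline` (module, event, and child names here are all hypothetical): the channel `send/2`s a plain message, the pipeline's `handle_info/3` picks it up and converts it into a `notify_child` action, and the targeted child receives it in `handle_parent_notification/3` — no extra definition of `notify_child` is needed.

```elixir
defmodule MyAppWeb.AudioChannel do
  use Phoenix.Channel

  # Forward each incoming chunk to the pipeline process.
  def handle_in("audio_chunk", %{"data" => data}, socket) do
    send(socket.assigns.pipeline, {:audio_chunk, data})
    {:noreply, socket}
  end
end

defmodule MyApp.Pipeline do
  use Membrane.Pipeline

  # Plain messages sent to the pipeline pid arrive here; turn them
  # into a notification for the :audio_source child.
  @impl true
  def handle_info({:audio_chunk, data}, _ctx, state) do
    {[notify_child: {:audio_source, {:chunk, data}}], state}
  end
end
```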

Modifying pipeline after it has been started

I would like to create/remove additional children in the pipeline after it has been started and is :playing. I was able to create children after the pipeline started, but I was wondering if it's fundamentally wrong to do so. Let's say I have this simple Membrane pipeline, which reads from a file, passes it to a tee, and has a sink that writes to a file. ```elixir...
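Spawning children at runtime is a supported pattern: any pipeline callback can return a `spec` action, not just `handle_init`. A hedged sketch, assuming the tee from the question is a child named `:tee` (the message shape and sink name are illustrative):

```elixir
# Attach an extra file sink to the running :tee on demand.
@impl true
def handle_info({:add_output, path}, _ctx, state) do
  spec =
    get_child(:tee)
    |> child({:file_sink, make_ref()}, %Membrane.File.Sink{location: path})

  {[spec: spec], state}
end
```

Removing such a branch later works the same way, with a `remove_children` action.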

WebRTC no audio in incoming audio tracks

We are implementing a video room, similar to the examples in web_rtc_engine (the main difference is using a LiveView to broker the connection between client and room (GenServer)), and for some reason we get no audio (on Chrome at least) for the other endpoints (local audio, when unmuted in JS, works fine). Possibly related, we see that the volume controls of the other endpoints are greyed out, as seen in the screenshot. Follow-up question: is the audio HTML tag needed at all, since the video tag can also play audio, or does that affect the tracks that arrive at the RTC engine endpoints?...

Distributing a pipeline in an Erlang cluster

I started working with Membrane a few days ago and am implementing this scenario using RTP to send streams between machines, but I am wondering if it would be possible to simplify it by using Erlang distribution to send data between machines. Scenario: Machine 1: produces an h264 stream from a camera and sends it to Machine 2...

Issue upgrading Membrane to 1.1 (from 0.12.9)

I'm trying to go from Membrane 0.12.9 to 1.1 and hitting a wall. I followed the upgrade guide[1], and everything compiles successfully. However, I'm getting the following error in my pipeline: ``` 19:50:34.066 [error] <0.4709.0>/{:endpoint, "conversation_endpoint"}/:opus_payloader/:header_generator Error occured in Membrane Element: %UndefinedFunctionError{ module: Coerce.Implementations.Atom.Integer,...

WebRTC Endpoint + Mixing Multiple Tracks into a single mp4

I have a working app that allows a user to "talk" to an LLM. I'm using Membrane to help coordinate the audio. For QA purposes, we record the tracks (one for each endpoint). I'm trying to set up a bin that mixes the two tracks using the Membrane.LiveAudioMixer so I can have a single file. There are no errors thrown, but the resulting file is only 40 bytes, so I suspect I have something misconfigured. Each time a pad is added, I try piping it into the LiveAudioMixer and then take that output, encode it, and write it to the file. ```...

Loop Audio File

I have a little Membrane pipeline with a video and some audio tracks. I want to add some background music to it: a short track that just loops over and over until the video is done. I've looked at doing something like: ```elixir child(:mp3_source_bg, %Membrane.File.Source{ location: state.background_audio.path,...
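One hedged way to loop (a sketch, untested): instead of restarting `Membrane.File.Source` on every end-of-stream, use a tiny custom source that re-emits the file's bytes indefinitely, so the downstream decoder just sees an endless MP3 stream. The module name and option shape are illustrative.

```elixir
defmodule LoopingFileSource do
  use Membrane.Source

  def_options location: [spec: Path.t()]

  def_output_pad :output, accepted_format: _any, flow_control: :manual

  @impl true
  def handle_init(_ctx, opts) do
    # Read the whole (short) track once and keep it in memory.
    {[], %{data: File.read!(opts.location)}}
  end

  @impl true
  def handle_demand(:output, _size, :buffers, _ctx, state) do
    # Emit the file again on every demand cycle — an endless loop.
    buffer = %Membrane.Buffer{payload: state.data}
    {[buffer: {:output, buffer}, redemand: :output], state}
  end
end
```

The pipeline still has to cut this branch off when the video ends, e.g. by removing the child.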

Running Docker image with Membrane RTMP Plugin

Hello, I am trying to run a Docker container with the membrane_rtmp_plugin library. It builds successfully; however, I am getting the following error: ``` 2024-04-26 16:11:04 =CRASH REPORT==== 26-Apr-2024::13:11:04.943773 === 2024-04-26 16:11:04 crasher: 2024-04-26 16:11:04 initial call: kernel:init/1...

ex_dtls won't compile

I'm sure this is a me issue, but I'm stumped. I've got a Membrane project that worked on a different computer; both are Macs. Running mix deps.compile throws an error: `ld: library 'ssl' not found...

Terminating part of a pipeline's children

Hi, I have the following: a ParticipantPipeline with multiple children: - :vr_publisher - :vr_subscriber - :vr_screen_subscriber...

ex_dtls NIF crash when starting server

``` root@908001db526468:/app/bin# ./passion_fruit start =ERROR REPORT==== 12-Apr-2024::09:51:36.565861 === Error in process <0.6069.0> with exit value: {undef,...

Dynamically starting children to demux MP4 tracks

I want to convert this to accept arbitrary user-uploaded MP4 files, where the tracks can have different indexes: ``` structure = [ child(:video_source, %Membrane.File.Source{ location: @input_file...
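A hedged sketch of the usual pattern: start with only the source and demuxer, then link one branch per track once the demuxer reports what it found. The `{:new_tracks, ...}` notification shape is assumed from membrane_mp4_plugin's ISOM demuxer; the sink and child names are illustrative.

```elixir
@impl true
def handle_init(_ctx, opts) do
  spec =
    child(:video_source, %Membrane.File.Source{location: opts.input_file})
    |> child(:demuxer, Membrane.MP4.Demuxer.ISOM)

  {[spec: spec], %{}}
end

@impl true
def handle_child_notification({:new_tracks, tracks}, :demuxer, _ctx, state) do
  # One spec per discovered track, keyed by its id rather than a
  # hard-coded index.
  specs =
    for {track_id, _format} <- tracks do
      get_child(:demuxer)
      |> via_out(Pad.ref(:output, track_id))
      |> child({:track_sink, track_id}, %Membrane.File.Sink{
        location: "track_#{track_id}.raw"
      })
    end

  {[spec: specs], state}
end
```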

H264.FFmpeg.Decoded frames to MP4.Muxer

I'm attempting to open a local mp4, demux it, and write it back to an mp4, just to get started. I want to do stuff with overlay images and add sound clips once this basic thing is working. This is my spec: ``` structure = [ child(:video_source, %Membrane.File.Source{ location: "example_data/example.mp4"...

Wiring up Javascript FE Using membrane-webrtc-js

Sorry if this is obvious. I'm looking through the example in the membrane_rtc_engine (link below). It's not obvious to me how audio playback for remote endpoints is managed. Does membrane-webrtc-js take care of that magically? I see addVideoElement, but that just seems to add an HTMLVideo element and doesn't actually connect it to anything from the endpoint / tracks. https://github.com/jellyfish-dev/membrane_rtc_engine/blob/master/examples/webrtc_videoroom/assets/src/room.ts...

Filter with `push` flow_control

Hello, I have a filter that transcribes audio as it receives it, by sending it to a transcription service (via a websocket). I also have a VAD filter (applied before the audio data arrives at the Membrane pipeline). I'm seeing that the audio data only gets sent once the buffer is full (when there is enough voice audio). I was trying to change the flow_control to :push for the transcription filter to address this. (Is that the right solution?)...
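For reference, `:push` is declared per pad in Membrane 1.x. A sketch (the websocket module and `state.ws` handle are hypothetical placeholders): with `:push`, the filter receives buffers as soon as upstream produces them, so it won't help if the batching happens before the data reaches this filter.

```elixir
defmodule TranscriptionFilter do
  use Membrane.Filter

  def_input_pad :input, accepted_format: _any, flow_control: :push
  def_output_pad :output, accepted_format: _any, flow_control: :push

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    # Hypothetical call: ship the chunk to the transcription websocket
    # immediately, then pass the buffer through unchanged.
    :ok = TranscriptionSocket.send_audio(state.ws, buffer.payload)
    {[buffer: {:output, buffer}], state}
  end
end
```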

LL-HLS broadcasting

Hello everyone! I am trying to make LL-HLS broadcasting work. I used the demo from webrtc_to_hls and set partial_segment_duration to 500 ms,...

Pipeline children started twice

Hello, I'm seeing children in a Membrane pipeline get started twice. I think this might be an issue with how I'm starting the pipeline (every time a WebSocket connection is created), but I can't figure out exactly why this is happening....

Writing a `Bin` queuing content from multiple remote files

@skillet wrote in https://discord.com/channels/464786597288738816/1007192081107791902/1224491418626560121
Hello all. I'm new to the framework (and Elixir) and still a little fuzzy on how to implement my idea. Basically, I want to stitch together a bunch of WAV and/or MP3 files and stream them indefinitely — like a queue where I can keep adding files and the pipeline grabs them as needed, FIFO style. The files will be downloaded via HTTP. So what I'm currently envisioning is a Bin that uses a Hackney source element to grab a file and push it on down; when it's done, it gets replaced by a new Hackney source pointing to the next file. ...

Split audio file into 20mb chunks

I'm trying to figure out how to take the file at this URL and send it to OpenAI in chunks of 20 MB: https://www.podtrac.com/pts/redirect.mp3/pdst.fm/e/chrt.fm/track/3F7F74/traffic.megaphone.fm/SCIM6504498504.mp3?updated=1710126905 Any help would be amazing!...
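Once the file is downloaded locally (e.g. with Req or :httpc), the chunking itself doesn't need Membrane at all: `File.stream!/3` can read fixed-size binary chunks. A sketch; the filename and the upload step are placeholders:

```elixir
chunk_size = 20 * 1024 * 1024

"episode.mp3"
|> File.stream!([], chunk_size)
|> Stream.with_index()
|> Enum.each(fn {chunk, i} ->
  # Replace this with the actual OpenAI upload call.
  IO.puts("chunk #{i}: #{byte_size(chunk)} bytes")
end)
```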