Software Mansion


WebRTC Engine and Erlang clustering / load balancing

We are currently experiencing a problem when we deploy our video room to production, which is a two-node cluster behind a load balancer on fly.io: calls drop for some participants when another one joins, seemingly at random. We know it's related to clustering, because if we scale down to a single instance the calls work as expected. Any ideas what could be causing this?
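
Not an answer from the thread, but a common direction for this on Fly.io is to make every request for a given room land on the node that owns the room process, for example with Fly's `fly-replay` response header. A minimal sketch, where `MyAppWeb.ReplayToRoomOwner` and `RoomRegistry.instance_of/1` are hypothetical names:

```elixir
defmodule MyAppWeb.ReplayToRoomOwner do
  @moduledoc """
  Plug sketch: if the room process lives on another Fly.io instance,
  ask Fly's proxy to replay the request there instead of serving it here.
  """
  import Plug.Conn

  def init(opts), do: opts

  # Assumes params are already fetched (e.g. the plug runs after the router
  # has matched a route containing :room_id).
  def call(%Plug.Conn{params: %{"room_id" => room_id}} = conn, _opts) do
    my_instance = System.get_env("FLY_ALLOC_ID")

    # RoomRegistry.instance_of/1 is hypothetical: it should return the
    # Fly instance id of the node hosting the room's GenServer.
    case RoomRegistry.instance_of(room_id) do
      {:ok, instance} when instance != my_instance ->
        conn
        |> put_resp_header("fly-replay", "instance=#{instance}")
        |> send_resp(307, "")
        |> halt()

      _ ->
        conn
    end
  end

  def call(conn, _opts), do: conn
end
```

The same idea applies to the WebSocket upgrade request, which is usually the one that needs to stick to the node running the RTC engine.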

WebRTC TURN TCP/TLS configuration issue

Hey! I'm loving the framework - using it in production with great success. We've got some users who (I think) are having trouble making the TURN connection to our server over UDP - typically these users are on restrictive corporate VPNs. My thought is that setting up TCP/TLS TURN properly might help. I've set it up similarly to this: https://github.com/fishjam-dev/membrane_rtc_engine/blob/eb8f97d254f5925139cdc1e76df0ecd0ac4977e9/examples/webrtc_to_hls/lib/webrtc_to_hls/stream.ex#L150...
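
For reference, the TCP/TLS part of that era of membrane_rtc_engine boils down to launching extra integrated TURN listeners next to the UDP one. A sketch based on the linked revision; the `TURNManager` calls and option names come from that period and may differ in the version you run, so treat them as assumptions to verify:

```elixir
alias Membrane.RTC.Engine.Endpoint.WebRTC.TURNManager

# Placeholder values; in the linked example these come from app config.
integrated_turn_options = [
  ip: {0, 0, 0, 0},
  mock_ip: {203, 0, 113, 10},      # the public IP clients should dial
  ports_range: {50_000, 59_999},
  cert_file: "/etc/certs/turn.pem" # PEM bundle with key, needed for TLS TURN
]

# Launch a TCP TURN listener alongside UDP, then a TLS one (often on 443,
# which is what restrictive corporate networks are most likely to allow).
TURNManager.ensure_tcp_turn_launched(integrated_turn_options)
TURNManager.ensure_tls_turn_launched(integrated_turn_options, port: 443)
```

If the TLS listener is up but clients still fail, it's worth confirming the load balancer forwards raw TCP on that port rather than terminating TLS itself.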

Syncing two streams from HLS source

Hi, I have two streams (coming from an unmuxed HLS stream): a video-only stream and an audio stream. The screenshot shows the beginning of my pipeline. From the kino_membrane graphs, it looks like they are producing buffers at very different rates (the screenshots are of the output pads). demuxer2 is the audio stream, which seems to produce buffers at a much lower rate. ...

Lowest latency h264 UDP video stream possible.

Hello! To give some context: I am trying to replicate parts of what https://openhd.gitbook.io/open-hd is doing, using Elixir, Nerves, and Membrane. ...
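
Not from the post: as a starting point, a receive-and-display pipeline for a raw Annex B H264 bytestream over UDP can be quite small. A sketch assuming membrane_udp_plugin, membrane_h264_plugin, membrane_h264_ffmpeg_plugin, and membrane_sdl_plugin; the port and framerate are placeholders:

```elixir
defmodule LowLatency.Pipeline do
  use Membrane.Pipeline

  @impl true
  def handle_init(_ctx, _opts) do
    spec =
      # membrane_udp_plugin: receive the raw H264 bytestream
      child(:source, %Membrane.UDP.Source{local_port_no: 5000})
      # membrane_h264_plugin: split the bytestream into access units;
      # the stream has no timestamps, so generate them from a framerate
      |> child(:parser, %Membrane.H264.Parser{
        generate_best_effort_timestamps: %{framerate: {30, 1}}
      })
      # membrane_h264_ffmpeg_plugin: decode to raw video
      |> child(:decoder, Membrane.H264.FFmpeg.Decoder)
      # membrane_sdl_plugin: render frames in an SDL window
      |> child(:player, Membrane.SDL.Player)

    {[spec: spec], %{}}
  end
end
```

Temporarily replacing the decode/display tail with a sink is a useful way to isolate where latency accumulates along the path.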

Using Google meet as a source

Hi, I'm new to Membrane and have a question about its capabilities. I have a Google Meet URL that I can access using a headless browser, bot, or similar method. Is it possible to use this as a data source for Membrane? Does Membrane support reading from such dynamic sources? #googlemeet...

Background loop with sound effect playback on event

Hello, I'm using Membrane to play background music locally (portaudio), and I want to be able to play sound effects at will (triggered by events such as a key press, not by any timer). Would it make sense to have multiple pipelines, one for each sound effect? I think it would make more sense to have my background-loop pipeline and a separate sound-effect pipeline that can play any of the various sound effects, but I'm not sure how that would work. Any help would be great, thanks!...
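
Not from the thread itself: one way this is often structured is a long-running pipeline for the background loop plus a short-lived, one-shot pipeline per sound effect, started on each event. A rough sketch, assuming MP3 effects and PortAudio output (the file path is a placeholder):

```elixir
defmodule Sfx.Pipeline do
  use Membrane.Pipeline

  @impl true
  def handle_init(_ctx, path) do
    spec =
      child(:src, %Membrane.File.Source{location: path})
      # membrane_mp3_mad_plugin: decode MP3 to raw audio
      |> child(:decoder, Membrane.MP3.MAD.Decoder)
      |> child(:sink, Membrane.PortAudio.Sink)

    {[spec: spec], %{}}
  end

  # Tear the whole pipeline down once the effect finishes playing
  @impl true
  def handle_element_end_of_stream(:sink, :input, _ctx, state) do
    {[terminate: :normal], state}
  end

  def handle_element_end_of_stream(_child, _pad, _ctx, state), do: {[], state}
end

# On a key press event:
{:ok, _supervisor, _pipeline} = Membrane.Pipeline.start_link(Sfx.Pipeline, "sfx/click.mp3")
```

Whether two pipelines can open PortAudio simultaneously depends on the OS audio stack; if that fails, the fallback is a single pipeline with an audio mixer fed by both branches.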

Retransmit received RTP packets in a secure way

Hello! We are working on an SFU at the moment: we want to receive RTP packets from one peer and broadcast them to multiple "listeners". We are doing the following in the code (we do not match on track_id because video is the only thing we are experimenting with):
```elixir
@impl true
def handle_info(...
```
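
As a general note, not from the snippet above: the usual fan-out primitive for this in Membrane is a tee, with one branch added per listener. A rough sketch using membrane_tee_plugin; the notification shape and `MyApp.ListenerSink` are placeholders:

```elixir
# When the publisher's track pad shows up, park it behind a tee
# (Membrane.Tee.Parallel copies every buffer to all of its output pads):
@impl true
def handle_pad_added(Pad.ref(:input, track_id) = pad, _ctx, state) do
  {[spec: bin_input(pad) |> child({:tee, track_id}, Membrane.Tee.Parallel)], state}
end

# When a listener subscribes, hang one more branch off the same tee;
# MyApp.ListenerSink stands in for whatever forwards packets to that peer:
@impl true
def handle_parent_notification({:subscribe, track_id, peer_id, peer_pid}, _ctx, state) do
  spec =
    get_child({:tee, track_id})
    |> child({:listener, peer_id}, %MyApp.ListenerSink{peer: peer_pid})

  {[spec: spec], state}
end
```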

membrane_webrtc_plugin: %Membrane.Buffer with pts: nil, dts: nil received from audio track.

Hello, and thank you for the great ecosystem of libraries! 🙏 Currently, I'm working on an SFU that will use membrane_webrtc_plugin to connect the streamer and the viewers. Everything works fine with the video track. However, when I add the audio track I start to receive a bunch of errors, namely: 1) an ArgumentError from membrane_realtimer_plugin's handle_buffer/4, where it essentially tries to subtract from nil:...
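
Not from the original message: Membrane.Realtimer needs buffers with timestamps, so besides checking whether the parser/depayloader in the audio branch can generate them itself, one workaround is a tiny filter that stamps them. A minimal sketch assuming a constant frame duration, such as the common 20 ms Opus packet:

```elixir
defmodule FillTimestamps do
  @moduledoc "Stamps pts/dts on buffers that arrive with nil timestamps."
  use Membrane.Filter

  def_input_pad :input, accepted_format: _any
  def_output_pad :output, accepted_format: _any

  # 20 ms per packet is the common Opus frame duration; adjust as needed.
  @frame_duration Membrane.Time.milliseconds(20)

  @impl true
  def handle_init(_ctx, _opts), do: {[], %{next_ts: 0}}

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    buffer = %{buffer | pts: buffer.pts || state.next_ts, dts: buffer.dts || state.next_ts}
    {[buffer: {:output, buffer}], %{state | next_ts: state.next_ts + @frame_duration}}
  end
end
```

Placed right before the realtimer in the audio branch, this keeps the subtraction in handle_buffer/4 from ever seeing nil.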

Pipeline stuck at MP4 Demuxer

I'm trying to add fMP4 support to https://github.com/kim-company/membrane_hls_plugin. I've managed to send MP4 segments from that plugin and have this pipeline in my app: https://gist.github.com/samrat/055fcba6adf231dfa93930a1141c7d2a I can see that the mp4 segments are indeed coming through. If I write them to a file sink before demuxing, the files are written. But when connected to the ISOM demuxer, the pipeline doesn't seem to process any buffers. ...

How to create MP4 file chunks with File.Sink.Multi and ISOM

My goal is to create a file every few seconds or every few buffers. My approach was to modify File.Sink.Multi and ISOM, and I think I'm close, but I'm seeing issues in all but the first file: all the secondary files have the wrong duration and are empty for the first portion. Has someone implemented this before? Is there a plugin I can use for this? Otherwise, could someone give me some pointers on how to finish it? I believe I modified Multi correctly to handle the Seek Sink events. With ISOM, I finalize the mp4 whenever I get enough buffers and send the actions. I believe my remaining issue is figuring out how to reset some of the state in the pad tracks so that the timing is correct, without restarting the tracks completely. Could I get some pointers on this, and possibly on the actions to send?...
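
An editor's note rather than a thread reply: membrane_mp4_plugin also ships a CMAF muxer that already cuts independently playable fragments at a requested cadence, which may avoid patching ISOM and File.Sink.Multi at all. A sketch; the option name is from recent membrane_mp4_plugin releases (older ones call it segment_duration), and the sink is hypothetical:

```elixir
spec =
  get_child(:h264_parser)
  |> child(:cmaf, %Membrane.MP4.Muxer.CMAF{
    # minimum duration of each emitted fragment
    segment_min_duration: Membrane.Time.seconds(4)
  })
  # hypothetical custom sink that writes one file per incoming buffer
  |> child(:sink, %MyApp.SegmentFileSink{directory: "out/"})
```

Each buffer the CMAF muxer emits is a complete segment (with the header travelling in the output stream format), so a small custom sink can write one file per buffer without touching track state.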

Does Overlay plugin support 30 fps?

I'm interested in using the Membrane Overlay Plugin to do real-time video processing. However, in this discussion the maintainer mentioned that the Image library is not suitable for real-time frame processing. I assume that rules out 30 fps: at ~40 ms per frame you get at most ~25 fps, while 30 fps leaves a budget of only ~33 ms per frame.
```
Full encode: 43ms
...
```

Live video effects in fishjam

Hey, I am curious how feasible it is to apply live effects to fishjam WebRTC streams from the server, using a custom endpoint with something like video-compositor in real time. How feasible is this with Membrane currently? Excuse the video, but here is an example of the kind of effect I am talking about: https://www.youtube.com/shorts/4YaE5u1mjlc...

H265 choppy playback in QuickTime on Mac

I have issues with h265 files and I've reduced the bug to just demuxing and muxing an mp4. The input file plays fine in QuickTime, but the output plays with stutters. It does play fine in another player like IINA. See here for a simple demux/mux pipeline: https://gist.github.com/Doerge/903243abde51bda3468f20ca27fc966f QuickTime is a piece of garbage, but it's the default player for Mac users, so I can't get around it. Anyone experienced this, and resolved it somehow? Files below are input and output file respectively....

Error removing children group and starting new spec in a Bin

I'm trying to remove a child group and start a new spec within a handle_child_notification in a bin. All the children are defined with refs like {:muxer, make_ref()}, and they are also created in a group. The problem is that if, in that handle_child_notification, I remove the group and add the new spec at the same time, it fails with an error:...
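
A pattern worth noting here (an editor's sketch, not from the thread): give the replacement children fresh refs so the new spec can never collide with children that are still being torn down, and return both actions together. Callback shape per Membrane 1.x; the `:rotate` notification and `MyApp.*` modules are placeholders:

```elixir
@impl true
def handle_child_notification(:rotate, _child, _ctx, state) do
  # Fresh refs for the replacements, so their names cannot clash with the
  # children currently being removed.
  muxer = {:muxer, make_ref()}
  sink = {:sink, make_ref()}

  spec =
    child(muxer, MyApp.Muxer)
    |> child(sink, MyApp.Sink)

  # state.children holds the refs of the previous generation
  {[remove_children: state.children, spec: spec],
   %{state | children: [muxer, sink]}}
end
```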

How do I invoke Pipeline.handle_info from a Phoenix.WebChannel?

I'm trying to pass audio from a Phoenix Web Channel to a Pipeline. Also, are there any notify_child usage examples? I don't understand how to use Membrane.Pipeline.notify_child. Do I need to define it first? If so, what should the definition look like? Update: I was able to pass a message to the pipeline by making the following changes to my Channel handle_in, like so:...
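
A sketch of that flow, not taken from the thread: in Membrane 1.x, notify_child is not something you define but an action a pipeline callback returns, and the child receives it in handle_parent_notification/3. Names like `:source` and `"audio_chunk"` are placeholders:

```elixir
# In the channel: forward the payload to the pipeline process.
def handle_in("audio_chunk", %{"data" => data}, socket) do
  send(socket.assigns.pipeline, {:audio_chunk, Base.decode64!(data)})
  {:noreply, socket}
end

# In the pipeline: turn the message into a child notification.
@impl true
def handle_info({:audio_chunk, payload}, _ctx, state) do
  {[notify_child: {:source, {:audio_chunk, payload}}], state}
end

# In the source element: consume the notification.
@impl true
def handle_parent_notification({:audio_chunk, payload}, _ctx, state) do
  # e.g. queue the payload until it can be emitted as a buffer
  {[], update_in(state.queue, &:queue.in(payload, &1))}
end
```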

Modifying pipeline after it has been started

I would like to create/remove additional children in the pipeline after it has been started and is :playing. I was able to create children after the pipeline started, but I was wondering if it's fundamentally wrong to do so. Let's say I have this simple Membrane pipeline that reads from a file, passes the data to a tee, and has a sink that writes to a file:
```elixir
...
```
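
For reference (not from the post): returning another spec action from any callback is the supported way to grow a running pipeline, linking new children to existing ones with get_child/1. A sketch, assuming a :tee child already exists:

```elixir
@impl true
def handle_info({:add_output, path}, _ctx, state) do
  # A unique ref lets several sinks coexist and be removed individually
  sink = {:file_sink, make_ref()}

  spec =
    get_child(:tee)
    |> child(sink, %Membrane.File.Sink{location: path})

  {[spec: spec], state}
end

# Removing the branch later works the same way:
# {[remove_children: sink], state}
```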

WebRTC no audio in incoming audio tracks

We are implementing a video room, similar to the examples in web_rtc_engine (the main difference is using a LiveView to broker the connection between client and room (GenServer)), and for some reason we get no audio (on Chrome at least) from the other endpoints (local audio, when unmuted in JS, works fine). Possibly related: we see that the volume controls of the other endpoints are greyed out, as seen in the screenshot. Follow-up question: is the audio HTML tag needed at all, since the video tag can also play audio, or does that affect the tracks that arrive at the RTC engine endpoints?...

Distributing a pipeline in an Erlang cluster.

I started working with Membrane a few days ago and am implementing this scenario using RTP to send streams between machines, but I am wondering if it could be simplified by using Erlang distribution to send the data instead. Scenario: Machine 1: produces an h264 stream from a camera and sends it to Machine 2...
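
Not from the thread: since Membrane elements are plain processes, a minimal bridge over distribution is a sink that sends buffers to a remote pid feeding a source on the other node. A sketch of the sending half (the receiving source would mirror it; backpressure is ignored here, which matters for video):

```elixir
defmodule Dist.Sink do
  @moduledoc "Forwards every buffer to a process, possibly on another node."
  use Membrane.Sink

  def_input_pad :input, accepted_format: _any
  def_options receiver: [spec: pid(), description: "Remote process fed with buffers"]

  @impl true
  def handle_init(_ctx, opts), do: {[], %{receiver: opts.receiver}}

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    # Erlang distribution handles serialization; no manual encoding needed.
    send(state.receiver, {:buffer, buffer})
    {[], state}
  end
end
```

One caveat: raw or high-bitrate video on the distribution channel competes with everything else sent over it (head-of-line blocking), which is a big part of why RTP over plain UDP stays attractive for this.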

Issue Membrane Upgrade to 1.1 (from 0.12.9)

I'm trying to go from Membrane 0.12.9 to 1.1 and hitting a wall. I followed the upgrade guide[1], and everything compiles successfully. However, I'm getting the following error in my pipeline:
```
19:50:34.066 [error] <0.4709.0>/{:endpoint, "conversation_endpoint"}/:opus_payloader/:header_generator Error occured in Membrane Element:
%UndefinedFunctionError{
  module: Coerce.Implementations.Atom.Integer,
...
```

WebRTC Endpoint + Mixing Multiple Tracks into a single mp4

I have a working app that allows a user to "talk" to an LLM. I'm using Membrane to help coordinate the audio. For QA purposes, we record the tracks (one for each endpoint). I'm trying to set up a bin that mixes the two tracks using Membrane.LiveAudioMixer so I can have a single file. There are no errors thrown, but the resulting file is only 40 bytes, so I suspect I have something misconfigured. Each time a pad is added, I pipe it into the LiveAudioMixer, then take that output, encode it, and write it to the file....
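
Not from the post: a 40-byte file is typically just a header, meaning no audio buffers ever reached the sink. For comparison, the per-track wiring into the mixer inside a bin usually looks roughly like this (callback shape per Membrane 1.x; the pad and child names are placeholders):

```elixir
@impl true
def handle_pad_added(Pad.ref(:input, _id) = pad, _ctx, state) do
  spec =
    bin_input(pad)
    # each incoming track gets its own dynamic input pad on the mixer
    |> get_child(:mixer)

  {[spec: spec], state}
end
```

If the wiring already looks like this, the next things to check are whether decoded raw audio (rather than still-encoded Opus) is reaching the mixer, and whether the pipeline is terminated gracefully so the MP4 gets finalized; on a recent membrane_core, dropping a Membrane.Debug.Filter with `handle_buffer: &IO.inspect/1` into the branch will show whether buffers flow at all.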