Software Mansion


Supporting interruptions in OpenAI Realtime demo

I'm trying to add interruptions support in https://github.com/membraneframework/membrane_demo/blob/master/livebooks/openai_realtime_with_membrane_webrtc/openai_realtime_with_membrane_webrtc.livemd It seems the Realtime API already sends an event for when the audio needs to stop playing (status = cancelled): ```...

membrane_rtc_engine/membrane_rtc_engine_ex_webrtc error

Hi, while trying to use membrane_rtc_engine with the membrane_rtc_engine_ex_webrtc package, I'm seeing a dependency mismatch, i.e. in the example at the bottom of this page (https://github.com/fishjam-cloud/membrane_rtc_engine/tree/master?tab=readme-ov-file#repository-structure): ```...

Demuxing Safari MP4

Hi! I'm trying to use a MediaRecorder to record audio/mp4 on Safari, and then handle it using Membrane. Membrane.MP4.Demuxer.ISOM gives me an error: ``` Error parsing MP4 box: moof / traf / tfhd...

SDL plugin fails to initialize

Hi. I am trying to play a UDP stream via the SDL sink, but it fails to initialize. I am on Arch Linux using Hyprland (Wayland), which may be the cause of the problem. I have attached the error and pipeline....

HTTP adaptive stream continuous segments

Hey there! I have a question regarding https://github.com/membraneframework/membrane_http_adaptive_stream_plugin library. Is there a way to configure the starting number for the segment, partial_segment or header? The use case is - we want to keep the segments, headers and partial segments counter continuous after restarting the stream. Is that even possible? Any points against such an approach? ...

Sections of files

I have a bunch of mp4 files sitting on disk and I'd like clients to be able to request arbitrary segments of them (e.g., starting 200 seconds in until 250s). I'm a beginner in any kind of digital video, is this even slightly viable with membrane somehow?

Pipeline Error: Pipeline Failed to Terminate within Timeout (5000ms)

This is a bit of a head scratcher for me. I'm in the process of writing a new element for my pipeline that uses the Silero VAD module for speech detection (rather than the built-in WebRTC Engine VAD extension). I've got it working, but I'm hitting a weird bug. Now when my engine terminates (peer leaves), I'm getting this error: ** (Membrane.PipelineError) Pipeline #PID<0.1499.0> hasn't terminated within given timeout (5000 ms). The only thing that's changed is my new element in the pipeline (it's set up as a Membrane.Filter). If I remove the element from the pipeline, the error goes away. ...
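One common culprit for this (a guess, since the element code isn't shown above) is a custom element that swallows end_of_stream or holds on to an external resource, so downstream elements never finish and the pipeline can't shut down. A minimal sketch of a pass-through Membrane.Filter that explicitly forwards EOS; the module name, pad setup, and the VAD call site are all illustrative:

```elixir
defmodule MyApp.SileroVADFilter do
  use Membrane.Filter

  def_input_pad :input, accepted_format: _any
  def_output_pad :output, accepted_format: _any

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    # ... run VAD on buffer.payload here (omitted) ...
    {[buffer: {:output, buffer}], state}
  end

  @impl true
  def handle_end_of_stream(:input, _ctx, state) do
    # If you override this callback, forward EOS so the
    # pipeline can drain and terminate within the timeout.
    {[end_of_stream: :output], state}
  end
end
```

If the filter spawns an OS process or a long-lived NIF resource for Silero, also make sure it gets released on teardown (e.g. in handle_terminate_request/2), otherwise the element itself may be what's blocking termination.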

burn in caption to mp4

Thanks first of all for building the whole family of products. To learn Membrane & friends, I want to build a simple Livebook-based tool that starts with a video and its WebVTTs (multilingual), generates the image sequence (with Image or Typst), then burns the image-sequence captions into the video. Is Membrane, Live Compositor, or Boombox better suited for this purpose?...

stream RTMP to VLC Network stream

Hello, currently we are using membrane_rtmp_plugin to receive an RTMP stream as a source (with the help of Membrane.RTMPServer and Membrane.RTMP.SourceBin). All is fine; we migrated successfully to version 0.26.0, which simplifies the pipeline a lot. We also did a POC of streaming RTMP to a streaming service (YouTube) and everything works as expected. I am curious, is there any way to stream RTMP to VLC Player (this is probably called the pull approach)? I mean File -> Open Network -> Specify URL (e.g. r...

Fly.io + UDP

I've got a membrane_webrtc server set up where someone can "call" an LLM and talk with it (audio only, no video). It largely works, though my users are reporting random disconnects. The console errors match the attached image. I'm a little thrown since the URL in that message specifies UDP as the transport. I deployed to fly.io and explicitly did not open up the UDP ports in my fly.toml, so I'm wondering why the app is failing with a UDP timeout. Am I incorrect in assuming that I can force all traffic over TCP by just not opening it up? Should I also figure out UDP? On UDP, I read through this: https://github.com/fishjam-dev/fishjam-docs/blob/main/docs/deploying/fly_io.md But it's Fishjam specific, and it doesn't line up neatly with my app, which is based on the old membrane video room repo (https://github.com/membraneframework-labs/membrane_videoroom). Where does fly-global-services get specified in that case? I'm not explicitly setting a TURN_LISTEN_IP. I traced through things; I think it could be here, in turn_ip (and then turn_mock_ip is my external IPv4 address)...
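For reference, if you do end up exposing UDP on Fly, a fly.toml service entry looks roughly like the sketch below (the port number is illustrative; per Fly's docs, UDP apps must also bind to the fly-global-services address inside the VM):

```toml
# fly.toml (sketch): expose a UDP port for TURN/ICE traffic
[[services]]
  protocol = "udp"
  internal_port = 50000

  [[services.ports]]
    port = 50000
```

This is a config sketch only; the TURN listen address/port have to match whatever the Membrane TURN integration is configured with.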

web rtc engine and erlang clustering / load balancing

We are currently experiencing a problem where, when we deploy our video room to production on a two-node cluster (with a fly.io load balancer in front), the call drops for some people when another one joins, basically at random. We know it's related to that because if we scale down to just one instance the calls work as expected. Any ideas what could be causing this?

WebRTC TURN TCP/TLS configuration issue

Hey! I'm loving the framework - using it in production with great success. We've got some users who (I think) are having trouble making the TURN connection to our server using UDP - typically these users are on restrictive corporate VPNs. My thought is that setting up our TCP/TLS TURN properly might help. I've set it up similar to this: https://github.com/fishjam-dev/membrane_rtc_engine/blob/eb8f97d254f5925139cdc1e76df0ecd0ac4977e9/examples/webrtc_to_hls/lib/webrtc_to_hls/stream.ex#L150...

Syncing two streams from HLS source

Hi, I have two streams (coming from an unmuxed HLS stream): a video-only stream and an audio stream. The screenshot shows the beginning of my pipeline. From the kino_membrane graphs, it looks like they are producing buffers at very different rates (screenshots are of the output pads). demuxer2 is the audio stream, which seems to produce buffers at a much lower rate. ...

Lowest latency h264 UDP video stream possible.

Hello, To give some context I am trying to replicate parts of what https://openhd.gitbook.io/open-hd is doing using elixir, nerves and membrane. ...

Using Google meet as a source

Hi, I'm new to Membrane and have a question about its capabilities. I have a Google Meet URL that I can access using a headless browser, bot, or similar method. Is it possible to use this as a data source for Membrane? Does Membrane support reading from such dynamic sources? #googlemeet...

Background loop with sound effect playback on event

Hello, I'm using Membrane for playing background music locally (portaudio), and I want to be able to play sound effects at will (as in, not tied to any timer, just events like pressing a key, for example). Would it make sense to have multiple pipelines, one for each sound effect? I think it would make more sense to have my background-loop pipeline and a separate sound-effect pipeline, which can play any of the various sound effects, but I'm not sure how that would work. Any help would be great, thanks!...
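One long-lived pipeline for the background loop plus a short-lived pipeline spawned per sound effect is a reasonable split. A rough sketch, assuming membrane_file_plugin, membrane_wav_plugin, and membrane_portaudio_plugin, with WAV effect files (module and file names are illustrative):

```elixir
defmodule SoundEffect do
  use Membrane.Pipeline

  @impl true
  def handle_init(_ctx, path) do
    # Read a WAV file from disk and play it straight to the audio device.
    spec =
      child(:src, %Membrane.File.Source{location: path})
      |> child(:parser, Membrane.WAV.Parser)
      |> child(:sink, Membrane.PortAudio.Sink)

    {[spec: spec], %{}}
  end
end

# On a key press, fire and forget a short-lived pipeline:
{:ok, _supervisor, _pipeline} = Membrane.Pipeline.start(SoundEffect, "effects/click.wav")
```

Whether two PortAudio sinks can open the device simultaneously depends on the audio backend; if they can't, the alternative is a single pipeline with an audio mixer (e.g. membrane_audio_mix_plugin) in front of one sink, linking a new input to the mixer for each effect.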

Retransmit received RTP packets in secure way

Hello! We are working on an SFU at the moment, and we want to receive RTP packets from one peer and broadcast them to multiple "listeners". We're doing the following in the code (we do not match on track_id because there is only the video track we're experimenting with): ```elixir @impl true def handle_info(...

membrane_webrtc_plugin: %Membrane.Buffer with pts: nil, dts: nil received from audio track.

Hello, and thank you for the great ecosystem of libraries! 🙏 Currently, I'm working on an SFU, which will utilize membrane_webrtc_plugin to connect the streamer and the viewers. Everything works fine with the video track. However, when I add the audio one I start to receive a bunch of errors, namely: 1) an ArgumentError from membrane_realtimer_plugin, in the handle_buffer/4 function, where it essentially tries to do subtraction from nil:...
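As a stopgap while debugging (not the proper fix, which would be restoring the timestamps wherever they get dropped upstream), a pass-through filter that substitutes the last known timestamp for a nil pts/dts would at least stop the realtimer from crashing on subtraction. Module and pad names are illustrative:

```elixir
defmodule PatchTimestamps do
  use Membrane.Filter

  def_input_pad :input, accepted_format: _any
  def_output_pad :output, accepted_format: _any

  @impl true
  def handle_init(_ctx, _opts), do: {[], %{last_pts: 0}}

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    # Fall back to the previous pts when the buffer arrives without one.
    pts = buffer.pts || state.last_pts
    buffer = %{buffer | pts: pts, dts: buffer.dts || pts}
    {[buffer: {:output, buffer}], %{state | last_pts: pts}}
  end
end
```

Note this masks the symptom only: repeated timestamps will distort the realtimer's pacing, so it's mainly useful for confirming that the nil timestamps are the sole source of the errors.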

Pipeline stuck at MP4 Demuxer

I'm trying to add fMP4 support to https://github.com/kim-company/membrane_hls_plugin. I've managed to send MP4 segments from that plugin and have this pipeline in my app: https://gist.github.com/samrat/055fcba6adf231dfa93930a1141c7d2a I can see that the mp4 segments are indeed coming through. If I write them to a file sink before demuxing, the files are written. But when connected to the ISOM demuxer, the pipeline doesn't seem to process any buffers. ...

how to create mp4 file chunks with File.Sink.Multi and ISOM

My goal is to create a file every few seconds or every few buffers. My approach was to modify File.Sink.Multi and ISOM, and I think I'm close, but I'm seeing issues in all but the first file. All the secondary files have the wrong duration and are empty for the first portion. Has someone implemented this before? Is there a plugin I can use for this? Otherwise, could someone give me some pointers on how to finish it? I believe I modified Multi correctly to handle Seek Sink events. Now with ISOM I finalize the mp4 whenever I get enough buffers, and send the actions. I believe the issue I have is around figuring out how to reset some of the state in the pad tracks so that the timing is correct without restarting the tracks completely. Could I get some pointers around this and possibly the actions to send?...