Software Mansion


How to generate SSL certs for membrane_rtc_engine DTLS?

Do you have any recommendations on how to obtain SSL certs for use in the handshake_opts of the WebRTC Endpoint?
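For reference, the Membrane videoroom demo sidesteps real SSL certificates by generating a self-signed key/cert pair with ExDTLS at startup and passing it into handshake_opts. A minimal sketch of that pattern (the ExDTLS API has shifted between versions, so treat the exact calls as an assumption):

```elixir
# Generate an in-memory self-signed certificate for the DTLS handshake.
{:ok, dtls} = ExDTLS.start_link(client_mode: false, dtls_srtp: true)
{:ok, pkey} = ExDTLS.get_pkey(dtls)
{:ok, cert} = ExDTLS.get_cert(dtls)
:ok = ExDTLS.stop(dtls)

handshake_opts = [
  client_mode: false,
  dtls_srtp: true,
  pkey: pkey,
  cert: cert
]
```

Since WebRTC verifies the DTLS certificate against the fingerprint exchanged in the SDP rather than a CA chain, a self-signed certificate is generally sufficient here.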

Pipeline with RTMP source and AAC decoder

Hi everyone 🙂 (this may be redundant with the message I posted on Slack, sorry about that.) I'm currently building a POC for live transcription using the Whisper model. I've already checked Lawik's example, which uses either the mic or a file as a source to achieve it. In my case I have an RTMP source with AAC-encoded audio in FLV format. ...
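A minimal sketch of the audio leg of such a pipeline, assuming membrane_rtmp_plugin's SourceBin fed an already-accepted TCP socket, plus the AAC parser and FDK decoder plugins; WhisperSink is a hypothetical stand-in for the transcription stage:

```elixir
defmodule TranscriptionPipeline do
  use Membrane.Pipeline

  @impl true
  def handle_init(_ctx, socket) do
    spec =
      child(:rtmp_source, %Membrane.RTMP.SourceBin{socket: socket})
      # Take only the audio leg of the FLV stream
      |> via_out(:audio)
      # Parse the AAC stream so the decoder receives a proper stream format
      |> child(:aac_parser, Membrane.AAC.Parser)
      # Decode AAC into raw PCM
      |> child(:aac_decoder, Membrane.AAC.FDK.Decoder)
      # Hypothetical sink that feeds PCM chunks to Whisper
      |> child(:whisper_sink, WhisperSink)

    {[spec: spec], %{}}
  end
end
```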

Intermittent Failures with RTMP Sink

We are running into some intermittent failures with the RTMP sink. What we notice is that sometimes a given pipeline will stream all the way through and sometimes the RTMP sink will raise when writing frames and crash the pipeline.

```
22:01:58.807 [error] <0.5365.0>/:rtmp_sink/ Error handling action {:split, {:handle_write, [[:video, %Membrane.Buffer{payload: <<0, 0, 139, 207, 65, 154, 128, 54, 188, 23, 73, 255, 152, 130, 98, 10, 43, 88, 28, 64, 176, 10, 39, 247, 233, 179, 54, 27, 17, 168, 97, 24, 82, 152, 175, 21, 138, 252, 216, 108, 205, 134, ...>>, pts: 27240000000, dts: 27240000000, metadata: %{h264: %{key_frame?: false}, mp4_payload: %{key_frame?: false}}}]]}} returned by callback Membrane.RTMP.Sink.handle_write_list
...
```

WebRTC to HLS, where does the pipeline happen?

Looking at the demo, I can't see a "normal" pipeline. I want to process video frames in a pipeline: do ML on them and so on. I want the video from the browser, and WebRTC should be a reasonable way to stream that. I want to apply a bunch of processing between the input and the final HLS output. Is there a straightforward way to do that, or is the engine mostly repackaging the video and audio streams?...
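For context on what "processing in a pipeline" can look like here: a custom filter can sit between the WebRTC input and the HLS sink bin and touch every buffer. A pass-through sketch, assuming Membrane Core v1.x callback names:

```elixir
defmodule FrameTap do
  use Membrane.Filter

  def_input_pad :input, accepted_format: _any, flow_control: :auto
  def_output_pad :output, accepted_format: _any, flow_control: :auto

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    # buffer.payload holds one unit of the (decoded) stream here;
    # run ML on it, then forward it unchanged.
    {[buffer: {:output, buffer}], state}
  end
end
```

Note the WebRTC stream arrives encoded, so a decoder has to precede a filter like this if the ML step needs raw frames.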

Membrane.Source example for RTC Engine

Hi! I'm trying to implement a solution in which I can stream audio from an API to the client via the Membrane RTC Engine. I'm basing my Membrane.Source implementation on Membrane.Hackney.Source, but now I'm a bit stuck on getting the audio sent from there to the WebRTC Endpoint. I've created an endpoint that forwards the output of my pad to Membrane.RTC.Engine.Endpoint.WebRTC.TrackSender, but I'm wondering:...
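A skeleton of what such a source can look like, assuming Membrane Core v1.x and a manual-demand output pad emitting a byte stream; fetch_chunk/1 is a hypothetical placeholder for the API call:

```elixir
defmodule ApiAudioSource do
  use Membrane.Source

  def_output_pad :output,
    accepted_format: %Membrane.RemoteStream{},
    flow_control: :manual

  @impl true
  def handle_playing(_ctx, state) do
    # Announce the stream format before any buffers are sent
    {[stream_format: {:output, %Membrane.RemoteStream{type: :bytestream}}], state}
  end

  @impl true
  def handle_demand(:output, _size, :buffers, _ctx, state) do
    # Hypothetical: pull the next chunk of encoded audio from the API
    chunk = fetch_chunk(state)
    {[buffer: {:output, %Membrane.Buffer{payload: chunk}}, redemand: :output], state}
  end

  # Placeholder; replace with the real API client call
  defp fetch_chunk(_state), do: <<0, 0, 0, 0>>
end
```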

Problems specifying toilet capacity of Realtimer

I have a pipeline that fails intermittently on startup due to a toilet overflow in a Realtimer element. The element downstream of the Realtimer is an RTMP sink. In my pipeline I have specified the toilet capacity with via_in into the Realtimer, as shown in the docs: ```...
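For comparison, the via_in form that sets the capacity on the Realtimer's input pad, embedded in a spec (500 is an arbitrary value and SomeSource a placeholder):

```elixir
@impl true
def handle_init(_ctx, opts) do
  spec =
    child(:source, %SomeSource{})
    # Cap the toilet that buffers into the Realtimer
    |> via_in(:input, toilet_capacity: 500)
    |> child(:realtimer, Membrane.Realtimer)
    |> child(:rtmp_sink, %Membrane.RTMP.Sink{rtmp_url: opts.rtmp_url})

  {[spec: spec], %{}}
end
```

If the overflow persists, it may be the sink's input toilet that overflows rather than the Realtimer's, in which case the via_in belongs on the link into the sink instead.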

Confusion on usage of MP4.Demuxer.ISOM

Hi, I'm trying to write a simple pipeline which takes in an MP4 source and streams it out via the RTMP sink. I have some general confusion about how to properly wire up the MP4.Demuxer.ISOM.
1. How do I decode/parse the output pads' stream formats downstream of the source? I haven't seen any examples or demos of transforming MP4.Payload.{AAC, AVC1}.
2. How do I properly handle dynamically attaching the output pads for each track to downstream elements? If I want to handle the :new_track message by attaching to some sink with pads in the :always availability mode (such as the RTMP sink), I can't attach that track temporarily to some grouping of elements which ends at the sink. For example, if I get the :new_track notification for an AAC track, I can't attach just the :audio pad of the RTMP sink, because when handling that callback there is no video pad to attach....
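One common way around the :always-availability constraint is to buffer the track notifications in pipeline state and link only once all expected tracks are known, so both RTMP sink pads get attached in a single spec. A rough sketch, assuming the demuxer reports tracks as {:new_tracks, [{id, format}]} (the notification shape varies between plugin versions, and the parsers needed between demuxer and sink are omitted for brevity):

```elixir
@impl true
def handle_child_notification({:new_tracks, tracks}, :demuxer, _ctx, state) do
  tracks = state.tracks ++ tracks

  if length(tracks) == 2 do
    # Both tracks known: attach the audio and video pads in one spec
    spec =
      for {id, format} <- tracks do
        pad = if match?(%Membrane.MP4.Payload.AAC{}, format), do: :audio, else: :video

        get_child(:demuxer)
        |> via_out(Membrane.Pad.ref(:output, id))
        |> via_in(pad)
        |> get_child(:rtmp_sink)
      end

    {[spec: spec], %{state | tracks: []}}
  else
    # First track: stash it and wait for the other one
    {[], %{state | tracks: tracks}}
  end
end
```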

PortAudio plugin on Apple Silicon Mac

Not sure what the issue stems from. Opening a microphone source gives me:

```
** (Membrane.CallbackError) Error returned from Membrane.PortAudio.Source.handle_prepared_to_playing: :pa_open_stream
...
```

Confusion on video compositor timestamp offset

Hi 🙂 I'm working on a POC app, evaluating Membrane for a project at work, and I have some questions about how the timestamp offset for the video compositor plugin is supposed to work. The goal of the POC app is to take an arbitrary list of videos and time offsets and dynamically stitch them together into one continuous video (and then stream the result somewhere, but I haven't gotten that far yet). Some other members of my team chatted with some of the Membrane contributors, who suggested we take a look at the video compositor plugin and provided us some skeleton code. However, I'm struggling to understand how the timestamp offset option that the compositor takes can be used to seek through the input. ...
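For reference, the offset is supplied per input pad when linking a clip into the compositor, shifting that clip's buffers along the shared output timeline rather than seeking inside the clip. A minimal sketch, assuming the input pad accepts a timestamp_offset option (option names differ between compositor versions) and eliding the decode chain:

```elixir
spec =
  for {clip_id, path, offset_sec} <- clips do
    child({:file_src, clip_id}, %Membrane.File.Source{location: path})
    # parsing/decoding elements omitted for brevity
    |> via_in(Membrane.Pad.ref(:input, clip_id),
      options: [timestamp_offset: Membrane.Time.seconds(offset_sec)]
    )
    |> get_child(:compositor)
  end
```

Under that reading, the option positions each whole input in time; trimming into the middle of a clip would need a separate cutting step upstream.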