Loop Audio File
I have a little Membrane pipeline with a video and some audio tracks. I want to add some background music to it: a short track that just loops over and over until the video is done.
I've looked at doing something like:
```elixir
child(:mp3_source_bg, %Membrane.File.Source{
  location: state.background_audio.path,...
```
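A minimal sketch of one way to loop it, assuming the background branch feeds an `:audio_mixer` child with dynamic (on_request) input pads, that the branch children are renamed to `{name, counter}` tuples, and that the MP3 is decoded with something like `Membrane.MP3.MAD.Decoder`; everything beyond `Membrane.File.Source` is an assumption, not the original pipeline:
```elixir
# Sketch only: respawn the background branch each time it reaches end of stream.
@impl true
def handle_element_end_of_stream({:mp3_decoder_bg, n}, :input, _ctx, state) do
  spec =
    child({:mp3_source_bg, n + 1}, %Membrane.File.Source{location: state.background_audio.path})
    |> child({:mp3_decoder_bg, n + 1}, Membrane.MP3.MAD.Decoder)
    |> get_child(:audio_mixer)

  # Depending on the mixer, you may also need to keep it from finishing its own
  # stream when the current iteration of the loop sends end_of_stream.
  {[remove_children: [{:mp3_source_bg, n}, {:mp3_decoder_bg, n}], spec: spec], state}
end

def handle_element_end_of_stream(_child, _pad, _ctx, state), do: {[], state}
```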
Running Docker image with Membrane RTMP Plugin
Hello, I am trying to run a Docker container with the `membrane_rtmp_plugin` library. It builds successfully, however I am getting the following error:
```
2024-04-26 16:11:04 =CRASH REPORT==== 26-Apr-2024::13:11:04.943773 ===
2024-04-26 16:11:04 crasher:
initial call: kernel:init/1...
```
ex_dtls won't compile
I'm sure this is a me issue, but I'm stumped. I've got a Membrane project that worked on a different computer; both are Macs.
Running `mix deps.compile` throws an error:
`ld: library 'ssl' not found`...
terminate part of pipeline children
hi, I have the following:
ParticipantPipeline with multiple children:
- :vr_publisher
- :vr_subscriber
- :vr_screen_subscriber...
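For removing only a subset of a pipeline's children, a hedged sketch using the `remove_children` action; the `:stop_vr` trigger message is made up:
```elixir
# Sketch: remove only the VR-related children, leaving the rest of the
# pipeline running. :stop_vr is a hypothetical message sent to the pipeline.
@impl true
def handle_info(:stop_vr, _ctx, state) do
  {[remove_children: [:vr_publisher, :vr_subscriber, :vr_screen_subscriber]], state}
end
```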
ex_dtls NIF crash when starting server
```
root@908001db526468:/app/bin# ./passion_fruit start
=ERROR REPORT==== 12-Apr-2024::09:51:36.565861 ===
Error in process <0.6069.0> with exit value:
{undef,...
```
Dynamically starting children to Demux Mp4 tracks
I want to convert this to take arbitrary user-uploaded MP4 files where the tracks can have different indexes:
```
structure = [
  child(:video_source, %Membrane.File.Source{
    location: @input_file...
```
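A hedged sketch of handling arbitrary track ids: instead of linking fixed pads up front, wait for the ISOM demuxer's `{:new_tracks, ...}` notification and link whatever it reports. It assumes the demuxer child is named `:demuxer` and that `Membrane.Pad` is aliased in the pipeline module; the per-track file sinks are placeholders.
```elixir
# Sketch: link demuxer output pads only once the demuxer reports which
# tracks the uploaded file actually contains.
@impl true
def handle_child_notification({:new_tracks, tracks}, :demuxer, _ctx, state) do
  spec =
    for {track_id, _format} <- tracks do
      get_child(:demuxer)
      |> via_out(Pad.ref(:output, track_id))
      # Placeholder sink just to show the linking; replace with real processing.
      |> child({:sink, track_id}, %Membrane.File.Sink{location: "track_#{track_id}.raw"})
    end

  {[spec: spec], state}
end
```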
H264.FFmpeg.Decoded frames to MP4.Muxer
I'm attempting to open a local mp4, demux it, and write it back to an mp4, just to get started. I want to do stuff with overlay images and add sound clips once this basic thing is working. This is my spec:
```
structure = [
  child(:video_source, %Membrane.File.Source{
    location: "example_data/example.mp4"...
```
Wiring up Javascript FE Using membrane-webrtc-js
Sorry if this is obvious. I'm looking through the example in the membrane_rtc_engine (link below). It's not obvious to me how the audio playback for remote endpoints is managed. Does membrane-webrtc-js take care of that magically? I see `addVideoElement`, but that just seems to add an HTMLVideo element and doesn't actually connect it to anything from the endpoint / tracks.
https://github.com/jellyfish-dev/membrane_rtc_engine/blob/master/examples/webrtc_videoroom/assets/src/room.ts ...
Filter with `push` flow_control
Hello, I have a filter that transcribes audio as it receives it by sending it to a transcription service (via a websocket). I also have a VAD filter (applied before the audio data arrives at the Membrane pipeline).
I'm seeing that the audio data only gets sent once the buffer is full (when there is enough voice audio).
I was trying to change `flow_control` to `:push` for the transcription filter to address this. (Is that the right solution?)...
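If `:push` is the route taken, a hedged sketch of what the pad declarations could look like; the module name and callback body are made up, and note that `:push` gives up backpressure, so the filter has to keep up with incoming audio on its own:
```elixir
defmodule MyApp.TranscriptionFilter do
  use Membrane.Filter

  # Push flow control: buffers arrive as soon as upstream produces them,
  # without this filter demanding them first.
  def_input_pad :input, accepted_format: _any, flow_control: :push
  def_output_pad :output, accepted_format: _any, flow_control: :push

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    # Hypothetical place to forward buffer.payload to the transcription websocket.
    {[buffer: {:output, buffer}], state}
  end
end
```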
LL-HLS broadcasting
Hello everyone!
I am trying to make LL-HLS broadcasting work.
I used the demo from webrtc_to_hls and set partial_segment_duration to 500 ms,...
Pipeline children started twice
Hello,
I'm seeing children in a Membrane pipeline get started twice:
I think this might be an issue with how I'm starting the pipeline (every time a websocket connection is created), but I can't figure out exactly why this is happening....
Writing a `Bin` queuing content from multiple remote files
@skillet wrote in https://discord.com/channels/464786597288738816/1007192081107791902/1224491418626560121
Hello all. New to the framework (and elixir) and still a little fuzzy on how to implement my idea. Basically I want to stitch together a bunch of wav and/or mp3 files and stream them indefinitely. Like a queue where I can keep adding files and the pipeline should grab them as needed FIFO style.
The files will be downloaded via HTTP. So what I'm currently envisioning is a `Bin` that uses a Hackney source element to grab the file and push it on down. Then, when it's done, it will get replaced with a new Hackney source pointing to the next file.
...
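A hedged sketch of that Bin idea, assuming `membrane_hackney_plugin` and `membrane_funnel_plugin`; the funnel keeps the bin's output pad stable while sources come and go. Module, option, and child names are made up, appending new URLs at runtime is left out, and you would still need decoding/resampling before mixing formats, plus care around when the funnel propagates end-of-stream downstream.
```elixir
defmodule MyApp.RemoteQueueBin do
  use Membrane.Bin

  def_output_pad :output, accepted_format: _any

  def_options urls: [spec: [String.t()], description: "Initial FIFO of file URLs"]

  @impl true
  def handle_init(_ctx, opts) do
    [first | rest] = opts.urls

    spec = [
      # The funnel stays put and owns the link to the bin's output pad.
      child(:funnel, Membrane.Funnel) |> bin_output(:output),
      child({:source, 0}, %Membrane.Hackney.Source{location: first}) |> get_child(:funnel)
    ]

    {[spec: spec], %{queue: rest, index: 0}}
  end

  # When the current download finishes, drop its source and start the next one.
  @impl true
  def handle_element_end_of_stream(:funnel, _pad, _ctx, %{queue: [next | rest], index: i} = state) do
    spec = child({:source, i + 1}, %Membrane.Hackney.Source{location: next}) |> get_child(:funnel)
    {[remove_children: [{:source, i}], spec: spec], %{state | queue: rest, index: i + 1}}
  end

  def handle_element_end_of_stream(_child, _pad, _ctx, state), do: {[], state}
end
```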
Split audio file into 20mb chunks
I'm trying to figure out how to take the file at this URL and send it to OpenAI in chunks of 20 MB: https://www.podtrac.com/pts/redirect.mp3/pdst.fm/e/chrt.fm/track/3F7F74/traffic.megaphone.fm/SCIM6504498504.mp3?updated=1710126905
Any help would be amazing!!...
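If Membrane isn't strictly required for this, here is a plain-Elixir sketch of byte-based splitting, assuming the file has already been downloaded to disk; `upload_to_openai/2` is a made-up placeholder, and cutting an MP3 at arbitrary byte offsets splits frames mid-stream, so time-based splitting (e.g. with ffmpeg) may be safer for transcription.
```elixir
# Sketch: read the downloaded episode in ~20 MB pieces and hand each piece
# to an upload function.
chunk_bytes = 20 * 1024 * 1024

"episode.mp3"
|> File.stream!([], chunk_bytes)
|> Stream.with_index()
|> Enum.each(fn {chunk, index} ->
  # upload_to_openai/2 is a hypothetical helper wrapping the actual API call.
  upload_to_openai(chunk, index)
end)
```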
bundlex nifs and libasan
Has anyone built NIFs with libasan support?
Even if I put compiler_flags `["-fno-omit-frame-pointer -fsanitize=address"]`, it doesn't detect the leaks I intentionally left in the NIF code.
I run Elixir with ERL_EXEC="cerl" and the -asan option set for the VM, and Erlang is running with the address_sanitizer flag....
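A hedged sketch of the relevant part of a bundlex.exs, on the assumption that the sanitizer flags should be separate strings and also need to reach the linker; the key and flag names should be checked against the Bundlex docs, and the native name is made up:
```elixir
defmodule MyApp.BundlexProject do
  use Bundlex.Project

  def project do
    [natives: natives()]
  end

  defp natives do
    [
      my_nif: [
        sources: ["my_nif.c"],
        interface: :nif,
        # ASAN generally has to be passed to both the compiler and the linker.
        compiler_flags: ["-fno-omit-frame-pointer", "-fsanitize=address", "-g"],
        linker_flags: ["-fsanitize=address"]
      ]
    ]
  end
end
```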
unifex seg fault on handle_destroy_state
Hi, I'm implementing a G.722.1 decoder/encoder plugin and have an issue with handle_destroy_state.
I've taken the FreeSWITCH g722_1 implementation (https://github.com/traviscross/freeswitch/tree/master/libs/libg722_1/src).
I have the following state:
```c
...
```
Developing an advanced Jellyfish use case
Hey I've been using jellyfish to develop a platform for essentially one-on-one calls between two people and it works really well.
I'd like to now bring in something more advanced. I essentially want to:
1. take the two audio streams of the two peers from Jellyfish and convert everything they're saying into text using something like Bumblebee Whisper....
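For step 1, a hedged sketch of a Bumblebee Whisper serving, following the Bumblebee docs; the model name and the `{:file, ...}` input are taken from those docs, while wiring the live Jellyfish audio into it (e.g. dumping short PCM segments to files or tensors) is the part left out here.
```elixir
# Sketch: build a speech-to-text serving once, then run audio through it.
{:ok, model_info} = Bumblebee.load_model({:hf, "openai/whisper-tiny"})
{:ok, featurizer} = Bumblebee.load_featurizer({:hf, "openai/whisper-tiny"})
{:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "openai/whisper-tiny"})
{:ok, generation_config} = Bumblebee.load_generation_config({:hf, "openai/whisper-tiny"})

serving =
  Bumblebee.Audio.speech_to_text_whisper(model_info, featurizer, tokenizer, generation_config,
    defn_options: [compiler: EXLA]
  )

Nx.Serving.run(serving, {:file, "peer_a_audio.wav"})
```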
toilet capacity of outbound_rtx_controller
Hi, I'm getting the following error on SessionBin:
```
[error] <0.1282.0>/:sip_rtp/{:outbound_rtx_controller, 1929338881} Toilet overflow.
...
```
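For context, when the overflowing link is one you build yourself, the toilet size can be raised with the `toilet_capacity` option of `via_in/3`; whether SessionBin exposes that for its internal `:outbound_rtx_controller` link is a separate question. A generic sketch with made-up child names:
```elixir
# Sketch: raising the toilet capacity on a link you control.
spec =
  get_child(:rtp_payloader)
  |> via_in(:input, toilet_capacity: 500)
  |> get_child(:udp_sink)
```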
On JF Tracks and Reconnecting (in React)
So I noticed a few things about the react-sdk and JF tracks in general. Note I have React code that works identically to the videoroom demo.
If you're connected to a Jellyfish room and then abruptly refresh the browser, a new set of media device ids is created, which causes a new set of addTrack calls. I'm not sure if I am doing something wrong or this is intended, but since new ids are created, new tracks are added to the peer on refresh, without any way to remove the old ones because the clean-up code never fires on refresh. And even when I disconnect gracefully, the removeTrack call fails as described below.
...
h264 encoder problems
Hi guys, I'm using the H264 encoder plugin for video encoding and sending it via RTP to a client.
Sometimes video playback on the client speeds up or slows down.
How can I debug such cases, and what could be the reason for such laggy video? Network issues?
The input for the encoder comes from a WebRTC source...
Pipeline for muxing 2 msr files (audio and video) into a single flv file
I have the following pipeline, which takes 2 msr files (recorded to disk using the RecordingEntrypoint from rtc_engine) and needs to create a single video+audio file from them (I'm trying FLV at the moment but am not tied to a specific format; I just want something that popular tools can read and manipulate).
My problem is that the resulting FLV file only plays audio. Here's the pipeline:
```
spec = [...
```
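A hedged sketch of the muxer end of such a pipeline, assuming the video branch ends in parsed H264 and the audio branch in AAC; a common reason for audio-only output is the video track never being linked into the muxer. Child names are made up and the FLV muxer's pad ids would need checking against the plugin docs.
```elixir
# Sketch: both tracks have to be linked into the FLV muxer explicitly.
spec = [
  get_child(:video_parser)
  |> via_in(Pad.ref(:video, 0))
  |> get_child(:flv_muxer),

  get_child(:audio_parser)
  |> via_in(Pad.ref(:audio, 0))
  |> get_child(:flv_muxer),

  child(:flv_muxer, Membrane.FLV.Muxer)
  |> child(:file_sink, %Membrane.File.Sink{location: "output.flv"})
]
```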