WebRTC Endpoint + Mixing Multiple Tracks into a single mp4
I have a working app that allows a user to "talk" to an LLM. I'm using Membrane to help coordinate the audio. For QA purposes, we record the tracks (one for each endpoint). I'm trying to set up a bin that mixes the two tracks using Membrane.LiveAudioMixer so I can have a single file. There are no errors thrown, but the resulting file is only 40 bytes, so I suspect I have something misconfigured.
Each time a pad is added, I try piping it into the LiveAudioMixer and then take that output, encode it and write it to the file.
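A minimal sketch of what I'm describing, assuming recent membrane_core spec syntax. The element choices after the mixer (the AAC encoder, MP4 muxer, and output path) are placeholders for whatever your pipeline actually uses, not the exact code:

```elixir
defmodule RecorderBin do
  use Membrane.Bin

  # One on-request input pad per recorded track.
  def_input_pad :input, accepted_format: _any, availability: :on_request

  @impl true
  def handle_init(_ctx, _opts) do
    # Static tail of the pipeline: mixer -> encode -> mux -> file.
    # Encoder/muxer children here are illustrative placeholders.
    spec =
      child(:mixer, Membrane.LiveAudioMixer)
      |> child(:encoder, Membrane.AAC.FDK.Encoder)
      |> child(:muxer, Membrane.MP4.Muxer.ISOM)
      |> child(:sink, %Membrane.File.Sink{location: "mixed.mp4"})

    {[spec: spec], %{}}
  end

  @impl true
  def handle_pad_added(Pad.ref(:input, _id) = pad, _ctx, state) do
    # Each time a pad is added, link that track into the mixer.
    spec = bin_input(pad) |> get_child(:mixer)
    {[spec: spec], state}
  end
end
```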
For what it's worth, I'm not using the latest versions of the membrane stack. I made sure to read the relevant docs for those versions, and I thought the above would work.
From mix:
Hi @TonyLikeSocks, would you mind checking on the latest versions of the plugins? My suspicion is that we're not passing the timestamps correctly, and the mixer relies on them. We've improved timestamp handling a lot recently, so it may already be fixed.
Ahh, I was afraid that'd be the path forward. I tried a few months ago and hit some snags. From memory, it had to do with how we're using `Ratio` -- I'll try again and spin up a new thread with where I get stuck.
Thanks @mat_hek

FWIW we're locked on `ratio` 3.0, however 4.0 should work without problems, so you can override. We're going to allow 4.0 starting with core 1.1
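The override would go in mix.exs; a minimal sketch, where the surrounding deps and version requirements are placeholders:

```elixir
# mix.exs -- force the transitive :ratio dependency to 4.0.
# `override: true` tells Mix to use this version even though another
# dependency (e.g. membrane_core) declares a tighter requirement.
defp deps do
  [
    {:membrane_core, "~> 1.0"},
    {:ratio, "~> 4.0", override: true}
  ]
end
```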
Looks like I've got `ratio` 2.0 in my deps. I'll have to read up on the changes.
It's a child dependency pulled in. I don't list it as an explicit dependency.