Software Mansion
• Created by Jdyn on 7/4/2024 in #membrane-help
Live video effects in fishjam
Great, I appreciate the links and examples. I think I have a decent idea of what to do after the help from both of you. Cheers
6 replies
Software Mansion
• Created by Jdyn on 3/6/2024 in #membrane-help
On JF Tracks and Reconnecting (in React)
Hey @kamilstasiak, I've been following your experimental changes to the ts-sdk and react sdk. I was wondering what your thoughts on them are so far? Are the changes working as you envisioned and solving some of the complications with connecting/reconnecting, race conditions, and device state? Do you plan on merging them soon, or is there still more to be done?
14 replies
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
So is the way you create multi-channel audio basically to specify the number of channels, and then, in the binary, place one frame of audio from each track one after another for that time frame, and repeat?
Then it's up to the receiver to basically splice each channel's "frame" back out to reconstruct the original tracks?
53 replies
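A minimal sketch of the layout described above, assuming s16le samples (2 bytes each) and equal-length mono tracks; the module and function names are illustrative, not from any Membrane plugin:

```elixir
defmodule InterleaveSketch do
  @sample_size 2  # s16le: 2 bytes per sample

  # For each time frame, take one sample from every track in order.
  def interleave(tracks) do
    tracks
    |> Enum.map(fn track -> for <<s::binary-size(@sample_size) <- track>>, do: s end)
    |> Enum.zip_with(&Enum.join/1)
    |> Enum.join()
  end

  # The receiver splices every n-th sample back out into its own channel.
  def deinterleave(binary, channels) do
    frame_size = @sample_size * channels

    for <<frame::binary-size(frame_size) <- binary>> do
      for <<s::binary-size(@sample_size) <- frame>>, do: s
    end
    |> Enum.zip_with(&Enum.join/1)
  end
end
```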
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Is it more difficult or performance-intensive to process such multi-channel audio?
53 replies
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
In my case I am able to send s16le to the service. So what I do is process the Jellyfish tracks down into s16le, then ideally send them through the interleaver; after that no further processing is needed in my case, I just send it.
What problems might occur if I did have further processing? Maybe later down the line I'd like to do something more with the audio 🤔
53 replies
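A rough sketch of that flow in Membrane spec terms, assuming the tracks are already decoded to raw s16le and that `Membrane.AudioInterleaver` from membrane_audio_mix_plugin is the interleaver in question; the `order` option, pad ids, and the sink module are assumptions to check against the plugin docs:

```elixir
# Each peer's decoded s16le track becomes one interleaver input; the
# single interleaved output goes straight out to the external service.
spec = [
  child(:interleaver, %Membrane.AudioInterleaver{
    input_stream_format: %Membrane.RawAudio{
      sample_format: :s16le,
      sample_rate: 16_000,
      channels: 1
    },
    order: [:peer_a, :peer_b]  # assumed option: channel order by pad id
  })
  |> child(:sink, MyApp.ServiceSink),  # hypothetical sink posting to the service

  # One link like this per decoded track:
  get_child({:decoder, :peer_a})
  |> via_in(Pad.ref(:input, :peer_a))
  |> get_child(:interleaver)
]
```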
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Thanks a lot for your responses.
So the high level problem is that I need to transcribe what each person is saying individually without losing who is saying what. There is a service that can transcribe each channel in the audio individually.
One thing is that I do know the "max" number of output channels that would ever be added in my case, but pads should be able to be removed and added if someone disconnects / leaves 🤔.
So I wonder if the interleaver could be modified to allow inputs to be added and removed freely, while still requiring the max output channel count to be specified? My thought is that if we know the max channels upfront, we can fill the channels that are not connected to an input with silence? Not sure
53 replies
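Filling absent channels with silence is cheap for s16le, since silence is just zeroed samples; a small pure-Elixir sketch of the idea (the function names are illustrative):

```elixir
defmodule SilenceFill do
  # One s16le sample of silence is two zero bytes, so a silent mono
  # segment of n frames is just n zeroed samples.
  def silence(frames), do: :binary.copy(<<0, 0>>, frames)

  # Given one slot per possible output channel (nil when no input is
  # connected), substitute silence so the channel count stays fixed.
  def fill_slots(slots, frames_per_buffer) do
    Enum.map(slots, fn
      nil -> silence(frames_per_buffer)
      payload -> payload
    end)
  end
end
```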
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Currently I'm using the LiveAudioMixer as you suggested and it's working well. But I would now like to instead take all of the audio tracks in the room and create a single multi-channel output, with each track on a different channel.
So if there are two peers in the room, I'd like to combine their audio into a single output with 2 channels. Is there a plugin that can do this? I am not entirely familiar with audio channels
53 replies
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
thank you
53 replies
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Damn it worked
53 replies
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Hey, I'm looking into setting up a test environment using file endpoints as sources, but the test crashes with
I am unsure what the error means, or how or why it occurs, because I am not trying to unlink anything.
I'd just like to simulate adding my endpoint, which listens for track-added events, and then adding file endpoints with audio so that the custom endpoint sees the tracks and starts processing them.
The test file is here https://github.com/Jdyn/membrane_scribe/blob/main/test/scribe_endpoint_test.exs
My thought is that the audio file finishes and then removes itself from the engine, but my endpoint doesn't know how to handle tracks being removed? I guess in general I am not sure how to gracefully handle a track being removed
53 replies
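For the removal side, the usual hook in a Membrane bin or element is the handle_pad_removed callback; a minimal sketch, assuming one group of children is spawned per track (the child name is illustrative):

```elixir
@impl true
def handle_pad_removed(Pad.ref(:input, track_id), _ctx, state) do
  # Tear down whatever was spawned for this track when the file
  # endpoint finished and the engine removed the track.
  {[remove_children: {:track_pipeline, track_id}], state}
end
```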
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Thanks for all your help. Probably going to be working on this for a while 😅
53 replies
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
On a side note, is there an easy way to create tests for the whole thing, with two peers producing an audio stream? I was looking at the engine integration test, but I'm not sure of the best way to work with it
53 replies
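One possible shape for such a test, with the caveat that the helper, fixture paths, and message formats below are assumptions rather than verified rtc_engine APIs:

```elixir
defmodule TwoPeerAudioTest do
  use ExUnit.Case
  alias Membrane.RTC.Engine

  test "two peers produce audio that reaches the endpoint" do
    {:ok, engine} = Engine.start_link([id: "test-room"], [])
    Engine.register(engine, self())

    # Endpoint under test, plus one file endpoint per simulated peer.
    # file_endpoint/2 is a hypothetical helper building the engine's
    # File endpoint struct for a fixture track.
    Engine.add_endpoint(engine, %MyApp.ScribeEndpoint{rtc_engine: engine})
    Engine.add_endpoint(engine, file_endpoint(engine, "fixtures/peer_a.ogg"))
    Engine.add_endpoint(engine, file_endpoint(engine, "fixtures/peer_b.ogg"))

    # Assumed message shape emitted by the endpoint under test.
    assert_receive {:transcription, _peer_id, _text}, 10_000
  end
end
```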
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
So if I have the two peer audio tracks from the JF room as input pads to a filter with manual flow control, whenever I "demand" more data from the pads, am I going to get data from the point that I last demanded, or am I going to "miss" some audio if enough comes in before I next "demand" it?
53 replies
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Thanks it worked 👍
53 replies
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
LiveQueue does this operation:
pts = pts + queue.offset
53 replies
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Thanks, that worked. I've gotten to the point where handle_buffer is getting called in the filter, but the pts and dts are nil, which is preventing the buffer from getting added to LiveQueue
https://github.com/membraneframework/membrane_audio_mix_plugin/blob/v0.16.0/lib/membrane_live_audio_mixer/live_queue.ex#L123
How come the pts field is nil in my case, and how do I populate it?
53 replies
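If upstream never stamps timestamps, one workaround is deriving pts from how much raw audio has passed through; a sketch assuming s16le mono at a known sample rate, with `next_pts` and `sample_rate` tracked in state (whether this is the right fix for the engine's tracks is an assumption):

```elixir
@impl true
def handle_buffer(:input, buffer, _ctx, state) do
  # s16le mono: 2 bytes per frame, so frames = bytes / 2.
  frames = div(byte_size(buffer.payload), 2)
  duration = div(Membrane.Time.seconds(frames), state.sample_rate)

  pts = buffer.pts || state.next_pts
  buffer = %{buffer | pts: pts}

  {[buffer: {:output, buffer}], %{state | next_pts: pts + duration}}
end
```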
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Here, I think there is something very wrong with the setup https://github.com/Jdyn/membrane_scribe/blob/main/lib/scribe_endpoint.ex#L61
53 replies
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
Yeah, I've now switched to keeping it as simple as possible for now, but I'm still trying to wrap my head around Membrane, it is very complex 😅
My current goal is to keep the streams separate, run each track into the Bumblebee translator, then have the response returned to the RTC endpoint bin, and then just print the translations using the debug sink.
Once I create the spec in handle_pad_added, I cannot seem to pass the tracks into a filter where I am trying to start the translation process.
I am getting an error that I can't seem to find documentation for. A timeout occurs whenever I add my custom filter to the pad_added spec.
I've uploaded my progress here but it is a nightmare https://github.com/Jdyn/membrane_scribe/tree/main/lib
53 replies
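For reference, the bare-bones shape one might expect that pad_added spec to take, with a placeholder filter module and Membrane.Debug.Sink (from membrane_core) printing whatever comes out:

```elixir
@impl true
def handle_pad_added(Pad.ref(:input, track_id) = pad, _ctx, state) do
  spec =
    bin_input(pad)
    |> child({:translator, track_id}, MyApp.TranslatorFilter)  # placeholder
    |> child({:sink, track_id}, %Membrane.Debug.Sink{
      handle_buffer: &IO.inspect(&1.payload, label: "translation")
    })

  {[spec: spec], state}
end
```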
Software Mansion
• Created by Jdyn on 3/11/2024 in #membrane-help
Developing an advanced Jellyfish use case
I just noticed this got merged yesterday, which seems like it would allow me to stream multiple inputs into Whisper at once. That is, all of the audio streams at once using a batched run 🤔
https://github.com/elixir-nx/bumblebee/issues/261
53 replies
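If that batching lands, the serving side would presumably be the standard Bumblebee + Nx.Serving setup, where concurrent batched_run calls get grouped into one model pass; a sketch (model choice, serving name, and batch size are arbitrary):

```elixir
{:ok, model_info} = Bumblebee.load_model({:hf, "openai/whisper-tiny"})
{:ok, featurizer} = Bumblebee.load_featurizer({:hf, "openai/whisper-tiny"})
{:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "openai/whisper-tiny"})
{:ok, generation_config} = Bumblebee.load_generation_config({:hf, "openai/whisper-tiny"})

serving =
  Bumblebee.Audio.speech_to_text_whisper(
    model_info, featurizer, tokenizer, generation_config,
    compile: [batch_size: 4],
    defn_options: [compiler: EXLA]
  )

# Start under a supervisor, e.g.
#   {Nx.Serving, serving: serving, name: WhisperServing, batch_timeout: 100}
# then each track's process can call batched_run independently:
Nx.Serving.batched_run(WhisperServing, audio_tensor)
```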