bundlex nifs and libasan
Has anyone managed to build NIFs with libasan support?
Even if I put `compiler_flags: ["-fno-omit-frame-pointer", "-fsanitize=address"]`, it doesn't detect the leaks I intentionally left in the NIF code.
I run Elixir with `ERL_EXEC="cerl"` and set the `-asan` option for the VM, and Erlang is running with the `address_sanitizer` flag....
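In case it helps, a minimal sketch of a `bundlex.exs` native definition with the sanitizer enabled at both compile time and link time — the `linker_flags` entry is the piece that is often missing, since the ASan runtime also has to be linked into the shared object (module and file names below are illustrative, not from the original project):

```elixir
defmodule MyApp.BundlexProject do
  use Bundlex.Project

  def project do
    [natives: natives()]
  end

  defp natives do
    [
      # "my_nif" and "my_nif.c" are placeholder names
      my_nif: [
        interface: :nif,
        sources: ["my_nif.c"],
        # compile with frame pointers kept and ASan instrumentation
        compiler_flags: ["-fno-omit-frame-pointer", "-fsanitize=address"],
        # the ASan runtime must also be present at link time
        linker_flags: ["-fsanitize=address"]
      ]
    ]
  end
end
```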
unifex seg fault on handle_destroy_state
Hi, I'm implementing a G.722.1 decoder/encoder plugin and have an issue with handle_destroy_state.
I've taken the FreeSWITCH g722_1 implementation (https://github.com/traviscross/freeswitch/tree/master/libs/libg722_1/src).
I have the following state:
```c
...
```
Developing an advanced Jellyfish use case
Hey, I've been using Jellyfish to develop a platform for essentially one-on-one calls between two people, and it works really well.
I'd like to now bring in something more advanced. I essentially want to:
1. take the two audio streams of the two peers from Jellyfish and convert everything they're saying into text using something like Bumblebee Whisper....
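For the speech-to-text step, a rough sketch of a Bumblebee Whisper serving (the model name and audio file path are placeholders; the raw audio would first have to be captured from the Jellyfish/Membrane side):

```elixir
# Load Whisper from Hugging Face (model choice is illustrative)
{:ok, model_info} = Bumblebee.load_model({:hf, "openai/whisper-tiny"})
{:ok, featurizer} = Bumblebee.load_featurizer({:hf, "openai/whisper-tiny"})
{:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "openai/whisper-tiny"})
{:ok, generation_config} = Bumblebee.load_generation_config({:hf, "openai/whisper-tiny"})

serving =
  Bumblebee.Audio.speech_to_text_whisper(
    model_info,
    featurizer,
    tokenizer,
    generation_config,
    defn_options: [compiler: EXLA]
  )

# "peer_audio.wav" is a placeholder for audio captured from one peer
Nx.Serving.run(serving, {:file, "peer_audio.wav"})
```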
toilet capacity of outbound_rtx_controller
Hi, I'm getting the following error on SessionBin:
```
[error] <0.1282.0>/:sip_rtp/{:outbound_rtx_controller, 1929338881} Toilet overflow.
...
```
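For reference, a toilet overflow means a downstream element isn't consuming buffers fast enough, and the capacity is configured per link with `via_in`. A minimal sketch, assuming you control the link in question (whether SessionBin exposes this for its internal `outbound_rtx_controller` link is a separate question; child names below are illustrative):

```elixir
spec = [
  get_child(:some_producer)
  # raise the toilet capacity on this link above the default
  |> via_in(:input, toilet_capacity: 500)
  |> get_child(:some_consumer)
]
```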
On JF Tracks and Reconnecting (in React)
So I noticed a few things about the react-sdk and JF tracks in general. Note that my React code works identically to the videoroom demo.
If you're connected to a Jellyfish room and then abruptly refresh the browser, a new set of media device ids is created, which causes a new set of addTrack calls. I'm not sure whether I'm doing something wrong or this is intended, but
since new ids are created, new tracks are added to the peer on refresh, and the old ones can't be removed because no clean-up code ever fires on refresh. Even when I disconnect gracefully, the removeTrack call fails as described below.
...
h264 encoder problems
hi guys, I'm using the h264 encoder plugin to encode video and send it to the client via RTP.
Sometimes playback on the client speeds up or slows down.
How can I debug such cases, and what could be the reason for such lagging video? Network issues?
The input to the encoder comes from a WebRTC source...
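Irregular buffer timestamps are a common cause of playback speeding up or slowing down, so it may be worth logging them before blaming the network. A hypothetical debug filter (module name is illustrative; callback names follow membrane_core 1.x) that prints the delta between consecutive `pts` values when inserted between the encoder and the RTP payloader:

```elixir
defmodule PtsLogger do
  # Pass-through filter that logs the gap between consecutive buffer
  # timestamps, so jumps or stalls in pts become visible in the logs.
  use Membrane.Filter

  def_input_pad :input, accepted_format: _any
  def_output_pad :output, accepted_format: _any

  @impl true
  def handle_init(_ctx, _opts), do: {[], %{prev_pts: nil}}

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    if state.prev_pts != nil and buffer.pts != nil do
      IO.inspect(buffer.pts - state.prev_pts, label: "pts delta")
    end

    {[buffer: {:output, buffer}], %{state | prev_pts: buffer.pts}}
  end
end
```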
Pipeline for muxing 2 msr files (audio and video) into a single flv file
I have the following pipeline, which takes 2 msr files (recorded to disk using the RecordingEntrypoint from rtc_engine) and needs to create a single video+audio file from them. I'm trying FLV at the moment, but I'm not tied to a specific container; I just want something that popular tools can read and manipulate.
My problem is that the resulting FLV file only plays audio. Here's the pipeline:
```
spec = [...
```
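Audio-only output usually means the video branch never reaches the muxer's video pad. A rough sketch of what both branches into an FLV muxer might look like — module names assume membrane_flv_plugin, membrane_stream_plugin and the H264/AAC parser plugins, and the pad names and parser options are assumptions to verify against your versions (FLV wants AVC-framed H264, hence `:avc1`):

```elixir
spec = [
  # video branch (file names are placeholders)
  child(:video_source, %Membrane.File.Source{location: "video.msr"})
  |> child(:video_deserializer, Membrane.Stream.Deserializer)
  |> child(:video_parser, %Membrane.H264.Parser{output_stream_structure: :avc1})
  |> via_in(Pad.ref(:video, 0))
  |> child(:muxer, Membrane.FLV.Muxer),

  # audio branch
  child(:audio_source, %Membrane.File.Source{location: "audio.msr"})
  |> child(:audio_deserializer, Membrane.Stream.Deserializer)
  |> child(:audio_parser, Membrane.AAC.Parser)
  |> via_in(Pad.ref(:audio, 0))
  |> get_child(:muxer),

  get_child(:muxer)
  |> child(:sink, %Membrane.File.Sink{location: "output.flv"})
]
```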
MP3 output is audible, but the test does not pass
Hi everyone. I made small changes in membrane_mp3_lame_plugin to support other input configs (the original repo only supports 44100/32/1).
(patch branch: https://github.com/yujonglee/membrane_mp3_lame_plugin/commits/patch/)
After the change, I ran the test, but it does not pass.
But when I play the generated output file, it is audible and sounds the same as ref.mp3.
...

Grab keyframe image data from h264 stream?
We are looking at some h264 coming from an RTSP stream. Membrane is doing a fine job with the HLS demo, turning it into HLS. But we want to grab images for processing and such. I didn't find anything conclusive on how to do this. I remember being able to do something with keyframes and images, but I can't find it.
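One way to approach this is to tap the parsed H264 with a custom sink and keep only the buffers the parser marked as keyframes. A hypothetical sketch — the `metadata.h264.key_frame?` shape is an assumption to check against your parser version, and to get actual images you would still decode the saved keyframes (e.g. with an H264 decoder) and encode them to JPEG/PNG:

```elixir
defmodule KeyframeDump do
  # Hypothetical sink: writes keyframe payloads to disk for later decoding.
  use Membrane.Sink

  def_input_pad :input, accepted_format: _any

  @impl true
  def handle_init(_ctx, _opts), do: {[], %{count: 0}}

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    if get_in(buffer.metadata, [:h264, :key_frame?]) do
      File.write!("keyframe_#{state.count}.h264", buffer.payload)
      {[], %{state | count: state.count + 1}}
    else
      {[], state}
    end
  end
end
```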
Unity client?
Thanks a lot to the Membrane team. The number of examples and availability of code has been incredibly helpful.
I was wondering if anyone has built a C# client. Specifically to work with Unity and WebRTC: https://docs.unity3d.com/Packages/[email protected]/manual/index.html
I'm building an audio experience for VR and the Oculus Quest 2 ... and although I know Jellyfish has an Android client (and a React Native client, which works great) and a TypeScript client as well ... I'm just wondering if anyone has been down the Unity path....
WebRTC stream not working
I've hit an issue trying to get an MPEGTS stream displaying in the browser via WebRTC.
Everything seems to work: I can add the WebRTC endpoint for the stream to the RTC Engine; the web client successfully connects, negotiates tracks, and handles the offer and SDP answer; and I can add the transceiver for the stream to the RTCPeerConnection and get the MediaStream. But when I add the MediaStream to the video element's srcObject, I get a blank screen with a spinner and see a continuous stream of
Didn't receive keyframe for variant:
messages on the server side:
```
[debug] Responding to STUN request [UDP, session q02, anonymous, client 100.120.41.48:50473]...
```

Error when compiling free4chat
This is probably pretty basic, but I'm at square one. I get an error when compiling free4chat, a user-contributed Elixir app that depends on Membrane.
> mix ecto.reset
```
...
```

Screen Share
Is there any reason why screen share doesn't work when jellyfish_videoroom is running locally?
I start a call, then connect another user in another tab and press screen share. I can see the share in my tab, but the other tab shows only the 2 videos, without the screen share...
Testing Membrane Element
I'm trying to set up a simple test of a Membrane element, but I'm a bit stumped on how to assert that the element is sending the correct output. For context, this is an element that accepts an audio stream, sends it to a speech-to-text API, and then forwards along a text buffer.
The test fails, as there's no 'buffer' in the mailbox, though I'm positive that's what my element emits when the stream is complete. I've tried longer timeouts (10s), but that doesn't alter the test.
I could use some advice....
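In case it helps, a sketch of how this is usually asserted with the Membrane.Testing helpers (module and assertion names from membrane_core 1.x; `MySpeechToText` and the source payload stand in for your element and input):

```elixir
import Membrane.ChildrenSpec
import Membrane.Testing.Assertions
alias Membrane.Testing

pipeline =
  Testing.Pipeline.start_link_supervised!(
    spec:
      child(:source, %Testing.Source{output: [<<1, 2, 3>>]})
      |> child(:element_under_test, MySpeechToText)
      |> child(:sink, Testing.Sink)
  )

# If the element only emits its text after the stream completes, wait for
# end of stream first, then give the buffer assertion a generous timeout
# to cover the external API round trip.
assert_end_of_stream(pipeline, :element_under_test)
assert_sink_buffer(pipeline, :sink, %Membrane.Buffer{payload: _text}, 15_000)
```

If no buffer ever arrives at the sink, the element may be emitting it from a callback that never fires (e.g. expecting `handle_end_of_stream` but the pipeline is shut down first), which a trace of the element's callbacks would reveal.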
Clustering and scale out behaviour
Context: I have a Membrane application running in an Elixir cluster of one. It receives an RTSP stream (UDP packets) from a client and does the whole streaming thing - awesome!
If/when the cluster expands, there will be multiple nodes receiving UDP packets (the packets are load-balanced between nodes). Does Membrane have any handling to route the packets to the correct node? 🤔...
RTSP authentication problem?
Hi,
I am playing with an RTSP camera (Tapo C210) that is on the local network.
The path to the stream is rtsp://myadmin:[email protected]:554/stream1, with the credentials obfuscated as myadmin and mypassword.
I tested the endpoint with VLC; it works and I can see the stream, so the credentials are good.
I wanted to obtain some information from the camera, so I tried to get information about the session as per the RTSP documentation:...
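For what it's worth, a minimal sketch of issuing a DESCRIBE with the membrane_rtsp client (function names from that library's public API; worth verifying against the version you use, since authentication handling has changed between releases):

```elixir
# credentials are passed in the URL, as in the original post
{:ok, session} = Membrane.RTSP.start_link("rtsp://myadmin:[email protected]:554/stream1")
{:ok, response} = Membrane.RTSP.describe(session)

# a 401 here (despite VLC working) would point at the auth scheme,
# e.g. the camera insisting on digest auth
IO.inspect(response.status, label: "DESCRIBE status")
```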
Extending the jellyfish video room demo with a queue
Hey, I am curious what a scalable way would be to add a queue in front of Jellyfish rooms. I have a setup very similar to the jellyroom demo.
My current attempt is adding additional state for the queue to the RoomService module and somehow using the max_children option of the DynamicSupervisor to add to the queue, but it's getting super convoluted to manage the state.
Would creating another GenServer like RoomQueue to manage queueing rooms be a good idea? Any ideas would be appreciated...

Debugging bundlex/unifex errors
Hello, I've been tinkering with membrane cross-compiled to Nerves (rpi4).
I had features (e.g. microphone input) working recently, but it seems they have broken with recent upgrades.
Would anyone have any tips on debugging the unifex_create/3 error below?...

RTP to HLS Disconnect and Reconnect Audio Stream
Hello everyone,
I currently have a microphone input which sends UDP to a server, into an RTP input, and finally streams over HLS. The entire flow works fine; however, I'm trying to handle the scenario where the microphone gets disconnected. When it reconnects and starts sending UDP packets to the server again, I start receiving:
```
[warning] <0.2197.0>/:rtp/{:stream_receive_bin, 1}/:packet_tracker Dropping packet 39181 with big sequence number difference (-13014)
[warning] <0.2197.0>/:rtp/{:stream_receive_bin, 1}/:packet_tracker Dropping packet 39181 with big sequence number difference (-13014)
...
```
Custom RTC Endpoint
Sorry, more random questions!
What's the best place to start when trying to get a custom endpoint to stream MPEG-TS video in real time to the browser?
I have a working pipeline for MPEG-TS in a UDP stream rendering in the browser via HLS - basically UDP -> MpegTS -> Demuxer -> H264 Parser -> HttpAdaptiveStream. My application is real-time, so the HLS lag is too much; WebRTC seems like the way to go....