MP3 output is audible, but the test doesn't pass
Hi everyone. I made small changes in `membrane_mp3_lame_plugin` to support other input configs (the original repo only supports 44100/32/1).
(patch branch: https://github.com/yujonglee/membrane_mp3_lame_plugin/commits/patch/)
After the change, I ran the tests, but they don't pass. However, when I play the generated output file, it is audible and sounds the same as ref.mp3...
Grab keyframe image data from h264 stream?
We are looking at some h264 coming from an RTSP stream. Membrane is doing a fine job turning it into HLS with the HLS demo. But we also want to grab images from the stream for processing and such. I didn't find anything conclusive on how to do this. I remember being able to do something with keyframes and images, but can't find it.
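One way that seems workable is to tee the parsed H.264 into a decoder and collect raw frames in a small custom sink; if you only want keyframe images you can sample the decoded frames. A rough sketch, assuming membrane_core 1.x syntax plus the membrane_tee_plugin and membrane_h264_ffmpeg_plugin packages, with placeholder child names (`:h264_parser`, `:hls`) and a made-up `MyApp.FrameGrabber` sink:
```elixir
defmodule MyApp.FrameGrabber do
  @moduledoc "Hypothetical sink that receives decoded raw video frames."
  use Membrane.Sink

  def_input_pad :input, accepted_format: Membrane.RawVideo, flow_control: :auto

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    # buffer.payload is one raw (I420) frame; convert/save it here, or keep only
    # the frames you care about (e.g. every Nth one).
    {[], state}
  end
end

# In the pipeline spec, split the parsed H.264 into the existing HLS branch
# and a decode-to-raw-frames branch:
import Membrane.ChildrenSpec

spec = [
  get_child(:h264_parser) |> child(:tee, Membrane.Tee.Parallel),
  get_child(:tee) |> get_child(:hls),
  get_child(:tee)
  |> child(:decoder, Membrane.H264.FFmpeg.Decoder)
  |> child(:frame_grabber, MyApp.FrameGrabber)
]
```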
Unity client?
Thanks a lot to the Membrane team. The number of examples and availability of code has been incredibly helpful.
I was wondering if anyone has built a C# client. Specifically to work with Unity and WebRTC: https://docs.unity3d.com/Packages/[email protected]/manual/index.html
I'm building an audio experience for VR and the Oculus Quest 2 ... and although I know with Jellyfish there's an Android client (and React Native client, which works great) and TypeScript client as well ... just wondering if anyone has been down the Unity path....
WebRTC stream not working
I've hit an issue trying to get an MPEGTS stream displaying in the browser via WebRTC.
All seems to work: I can add the WebRTC endpoint for the stream to the RTC Engine, the web client successfully connects, negotiates tracks, handles the offer and SDP answer etc., and I can add the transceiver for the stream to the RTCPeerConnection and get the MediaStream. But when I add the MediaStream to the video element's srcObject I get a blank screen with a spinner, and I see a continuous stream of
`Didn't receive keyframe for variant:`
messages on the server side:
```
[debug] Responding to STUN request [UDP, session q02, anonymous, client 100.120.41.48:50473]
...
```
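Not a fix, but a debugging-aid sketch: keyframe requests from the WebRTC side travel upstream as `Membrane.KeyframeRequestEvent`, so a small pass-through filter placed before the endpoint can confirm whether requests arrive and whether anything upstream reacts to them. This assumes membrane_core 1.x; the module name is made up, and with a pass-through camera stream you cannot synthesize an IDR on demand, so the source itself has to produce keyframes often enough.
```elixir
defmodule MyApp.KeyframeRequestLogger do
  @moduledoc "Hypothetical pass-through filter that logs keyframe requests travelling upstream."
  use Membrane.Filter
  require Logger

  def_input_pad :input, accepted_format: _any
  def_output_pad :output, accepted_format: _any

  @impl true
  def handle_buffer(:input, buffer, _ctx, state), do: {[buffer: {:output, buffer}], state}

  @impl true
  def handle_event(:output, %Membrane.KeyframeRequestEvent{} = event, _ctx, state) do
    Logger.warning("Downstream requested a keyframe")
    {[event: {:input, event}], state}
  end

  # Forward all other events unchanged in the direction they were travelling.
  def handle_event(:input, event, _ctx, state), do: {[event: {:output, event}], state}
  def handle_event(:output, event, _ctx, state), do: {[event: {:input, event}], state}
end
```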
Error when compiling free4chat
This is probably pretty basic, but I'm at square one. I get an error when compiling free4chat, a user-contributed Elixir app that depends on Membrane.
> mix ecto.reset
```
...
```

Screen Share
Is there any reason why screen sharing doesn't work when jellyfish_videoroom is running locally?
I start a call, then connect another user in another tab and press screen share. I can see the screen share in my own tab, but the other tab only shows 2 videos, without the screen share...
Testing Membrane Element
I'm trying to set up a simple test of a Membrane element, but I'm a bit stumped on how to assert that the element is sending the correct output. For context, this is an element that accepts an audio stream, sends it to a speech-to-text API and then forwards along a text buffer.
The test fails, as there's no 'buffer' in the mailbox, though I'm positive that's what my element emits when the stream is complete. I've tried longer timeouts (10s) but that doesn't alter the test.
I could use some advice....
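A pattern that usually works is to wrap the element in `Membrane.Testing.Pipeline` between a `Testing.Source` and a `Testing.Sink`, then use the macros from `Membrane.Testing.Assertions` to wait for the output. A sketch, assuming membrane_core ≥ 1.0 syntax; `MyApp.SpeechToText` and the payloads are placeholders:
```elixir
defmodule MyApp.SpeechToTextTest do
  use ExUnit.Case, async: true

  import Membrane.ChildrenSpec
  import Membrane.Testing.Assertions

  alias Membrane.Testing

  test "emits a text buffer once the input stream completes" do
    spec =
      child(:source, %Testing.Source{output: [<<0, 1, 2, 3>>, <<4, 5, 6, 7>>]})
      |> child(:stt, MyApp.SpeechToText)
      |> child(:sink, Testing.Sink)

    pipeline = Testing.Pipeline.start_link_supervised!(spec: spec)

    # The element only emits after the whole stream is processed, so wait for
    # end_of_stream first and give the buffer assertion a generous timeout.
    assert_end_of_stream(pipeline, :sink, :input, 15_000)
    assert_sink_buffer(pipeline, :sink, %Membrane.Buffer{payload: text}, 15_000)
    assert is_binary(text)

    Testing.Pipeline.terminate(pipeline)
  end
end
```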
Clustering and scale out behaviour
Context: I have a Membrane application running in an Elixir cluster of 1. It receives an RTSP stream (UDP packets) from a client and does the whole streaming thing - awesome!
If/when the cluster expands, there will be multiple nodes receiving UDP packets (the packets are load-balanced between nodes). Does Membrane have any handling to route the packets to the correct node? 🤔...
RTSP authentication problem?
Hi,
I am playing with an RTSP camera (Tapo C210) that is on the local network.
The path to the stream is `rtsp://myadmin:[email protected]:554/stream1`, with the credentials obfuscated as `myadmin` and `mypassword`.
I tested the endpoint with VLC and it works; I can see the stream, so the credentials are good.
I wanted to obtain some information from the camera, so I tried to get information about the session as per the RTSP documentation:...
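For reference, querying the session from Elixir with the membrane_rtsp package looks roughly like the sketch below; the function names follow its docs but may differ between versions, and the URL is a placeholder for the camera's real address.
```elixir
# Placeholder URL: substitute the camera's real IP/host.
url = "rtsp://myadmin:mypassword@CAMERA_IP:554/stream1"

{:ok, session} = Membrane.RTSP.start_link(url)
{:ok, response} = Membrane.RTSP.describe(session)
IO.inspect(response, label: "DESCRIBE response")
```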
Extending the jellyfish video room demo with a queue
Hey, I am curious what a scalable way would be to add a queue in front of Jellyfish rooms. I have a setup very similar to the jellyfish videoroom demo.
My current attempt is adding additional state for the queue to the `RoomService` module and somehow using the `max_children` option of the `DynamicSupervisor` to add to the queue, but it's getting super convoluted to manage the state.
Would creating another GenServer like `RoomQueue` to manage queueing rooms be a good idea? Any ideas would be appreciated...
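On the last question: a dedicated `RoomQueue` GenServer is a reasonable shape, and it can stay plain OTP with no Jellyfish-specific parts. A minimal sketch, where `capacity` and the `start_room/1` hook are placeholders for whatever `RoomService` / the `DynamicSupervisor` actually do:
```elixir
defmodule MyApp.RoomQueue do
  @moduledoc "Illustrative only: holds room requests until a slot frees up."
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # Client API: ask for a room; returns :started or :queued.
  def request_room(params), do: GenServer.call(__MODULE__, {:request_room, params})

  # Call this when a room shuts down so a queued request can be promoted.
  def room_closed, do: GenServer.cast(__MODULE__, :room_closed)

  @impl true
  def init(opts) do
    {:ok, %{capacity: Keyword.get(opts, :capacity, 10), active: 0, queue: :queue.new()}}
  end

  @impl true
  def handle_call({:request_room, params}, _from, %{active: a, capacity: c} = state) when a < c do
    start_room(params)
    {:reply, :started, %{state | active: a + 1}}
  end

  def handle_call({:request_room, params}, _from, state) do
    {:reply, :queued, %{state | queue: :queue.in(params, state.queue)}}
  end

  @impl true
  def handle_cast(:room_closed, %{active: a} = state) do
    case :queue.out(state.queue) do
      # A room closed and another was waiting: start it, active count stays the same.
      {{:value, params}, rest} ->
        start_room(params)
        {:noreply, %{state | queue: rest}}

      {:empty, _} ->
        {:noreply, %{state | active: max(a - 1, 0)}}
    end
  end

  # Placeholder: delegate to RoomService / the DynamicSupervisor in a real app.
  defp start_room(_params), do: :ok
end
```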
Debugging bundlex/unifex errors
Hello, I've been tinkering with Membrane cross-compiled to Nerves (rpi4).
I've had features (e.g. microphone input) working recently, but it seems to have broken with recent upgrades.
Would anyone have any tips on debugging the `unifex_create/3` error below?...

RTP to HLS Disconnect and Reconnect Audio Stream
Hello everyone,
I currently have a microphone input which is sending UDP to a server, into an RTP input, and finally streaming over HLS. The entire flow is working fine; however, I am trying to handle a scenario where the microphone gets disconnected. When it reconnects and starts sending UDP packets over to the server again, I start receiving:
```
[warning] <0.2197.0>/:rtp/{:stream_receive_bin, 1}/:packet_tracker Dropping packet 39181 with big sequence number difference (-13014)
[warning] <0.2197.0>/:rtp/{:stream_receive_bin, 1}/:packet_tracker Dropping packet 39181 with big sequence number difference (-13014)
...
```
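For what it's worth, those drops suggest the packet tracker is still holding state for the pre-disconnect stream, so the post-reconnect sequence numbers look wildly out of range to it. One hedged idea: if the sender can start with a fresh SSRC after reconnecting, `Membrane.RTP.SessionBin` should announce it as a new stream through a parent notification, and the pipeline can attach a new branch for that SSRC, roughly like this sketch:
```elixir
# Inside the pipeline module (`use Membrane.Pipeline`), with `alias Membrane.Pad`.
# The notification shape and the :output pad options differ between
# membrane_rtp_plugin versions; the depayloader and branch module are placeholders.
@impl true
def handle_child_notification({:new_rtp_stream, ssrc, _payload_type, _extensions}, :rtp, _ctx, state) do
  spec =
    get_child(:rtp)
    |> via_out(Pad.ref(:output, ssrc), options: [depayloader: Membrane.RTP.Opus.Depayloader])
    |> child({:hls_branch, ssrc}, MyApp.AudioToHLSBranch)

  {[spec: spec], state}
end
```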
Custom RTC Endpoint
Sorry, more random questions!
What's the best place to start when trying to get a custom endpoint to stream mpegts video in real-time to the browser?
I have a working pipeline for mpegts in a UDP stream rendering in the browser via HLS - basically UDP -> MpegTS -> Demuxer -> H264 Parser -> HttpAdaptiveStream. My application is real-time, so the HLS lag is too much and WebRTC seems like the way to go....
HTTPAdaptiveStream issue with hls.js
I was wondering if anyone else has experienced this issue with HLS?
I have a working pipeline generating HLS output which plays fine in Safari with its native support for HLS.
However, when trying to get playback working in Chrome using hls.js, it seems to get stuck at the buffering stage. It downloads the index.m3u8 file, which looks like this:...
Spinning up a new GenServer for each room
I have been learning from the videoroom demo and I have a few questions.
```elixir
# meeting.ex
@impl true
...
```
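For the per-room process part, the usual OTP shape is one GenServer per room under a `DynamicSupervisor`, registered in a `Registry` so it can be looked up by room id. A generic sketch with placeholder module names:
```elixir
# Add to your supervision tree:
#   {Registry, keys: :unique, name: MyApp.RoomRegistry},
#   {DynamicSupervisor, strategy: :one_for_one, name: MyApp.RoomSupervisor}
defmodule MyApp.Meeting do
  use GenServer

  def start_link(room_id) do
    GenServer.start_link(__MODULE__, room_id, name: via(room_id))
  end

  # Returns the pid for `room_id`, starting the process if it doesn't exist yet.
  def find_or_start(room_id) do
    case DynamicSupervisor.start_child(MyApp.RoomSupervisor, {__MODULE__, room_id}) do
      {:ok, pid} -> {:ok, pid}
      {:error, {:already_started, pid}} -> {:ok, pid}
      other -> other
    end
  end

  defp via(room_id), do: {:via, Registry, {MyApp.RoomRegistry, room_id}}

  @impl true
  def init(room_id) do
    # Start the per-room Membrane engine/pipeline here and keep its pid in state.
    {:ok, %{room_id: room_id, peers: %{}}}
  end
end
```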
RTP stream
I'm trying to get a simple RTP stream pipeline working but hit the following error:
```
16:22:57.782 [error] GenServer #PID<0.308.0> terminating
** (KeyError) key #Reference<0.1227858978.1981546498.158967> not found in: %{}
    :erlang.map_get(#Reference<0.1227858978.1981546498.158967>, %{})
...
```
React-Native connection?
I'm struggling to get a React Native client to connect to my Membrane server. I'm just running locally right now. I start my Membrane server with `EXTERNAL_IP={my ip} mix phx.server`, and I'm using the `@jellyfish-dev/react-native-membrane-webrtc` client in my React Native code.
Then I have the following connection code in my React Native view. I see the console log statements for init connect and attempting connect, but it never connects. I don't see a connection message in my Phoenix server, nor the successful connection message.
I tried increasing the log verbosity, but didn't get anything out of the logs from React Native. Is there something obviously wrong with my connection string? Is it expecting something different for the server URL?...

RTP demo with RawAudio
Hello friends, I'm trying to get microphone input (via `Membrane.PortAudio.Source`) packaged into an RTP stream and sent to a server, and can't quite seem to get it right.
The excerpt below is based on the demo in `membrane-demo/rtp`, but with microphone input substituted and newer syntax.
```
...
```

Unable to create new endpoints in Membrane RTC Engine 0.14.0
A change was made to the RTC Engine, implementing a `to_type_string` function for all existing endpoints. This function seems to be necessary for an endpoint to be added.
This has the side-effect of removing the ability to create new endpoints - only the predefined ones are allowed:
https://github.com/jellyfish-dev/membrane_rtc_engine/blame/master/lib/membrane_rtc_engine/endpoints/webrtc/media_event.ex#L366-L368
...
How to pass some client-side parameters to an RTMP pipeline
Hey team, I have an RTMP pipeline (which I simplified for the purpose of the question):
```
def handle_init(ctx, socket: socket) do
  Logger.info("Starting RTMP pipeline")
  ...
```
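One option that seems to fit: pass the extra client-side parameters through the pipeline's init argument and pattern-match them in `handle_init/2` next to the socket. A sketch, where the `:stream_key` field and module name are made up and a recent membrane_core with `handle_init/2` is assumed:
```elixir
defmodule MyApp.RtmpPipeline do
  use Membrane.Pipeline
  require Logger

  @impl true
  def handle_init(_ctx, %{socket: _socket, stream_key: stream_key} = opts) do
    Logger.info("Starting RTMP pipeline for #{stream_key}")
    # Build the same spec as before using opts.socket, and keep the extra
    # parameters in state for later callbacks.
    {[], %{opts: opts}}
  end
end

# Wherever the pipeline is started (e.g. the process accepting the client):
# {:ok, _supervisor, _pipeline} =
#   Membrane.Pipeline.start_link(MyApp.RtmpPipeline, %{socket: socket, stream_key: "abc123"})
```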