HTTPAdaptiveStream issue with hls.js
I was wondering if anyone else has experienced this issue with HLS?
I have a working pipeline generating HLS output which plays fine in Safari with its native support for HLS.
However, when trying to get playback working in Chrome using hls.js, it seems to get stuck at the buffering stage. It downloads the index.m3u8 file, which looks like this:...
Spinning up a new GenServer for each room
I have been learning from the videoroom demo and I have a few questions.
```elixir
# meeting.ex
@impl true...
```
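For illustration, a minimal sketch of one common OTP pattern for per-room processes (module names RoomSupervisor, Room and RoomRegistry are illustrative assumptions, not taken from the demo):

```elixir
defmodule RoomSupervisor do
  # Starts one Room process per room id, on demand.
  use DynamicSupervisor

  def start_link(arg),
    do: DynamicSupervisor.start_link(__MODULE__, arg, name: __MODULE__)

  @impl true
  def init(_arg), do: DynamicSupervisor.init(strategy: :one_for_one)

  def start_room(room_id),
    do: DynamicSupervisor.start_child(__MODULE__, {Room, room_id})
end

defmodule Room do
  # Holds the state of a single room. Assumes a Registry named RoomRegistry
  # is started elsewhere in the application's supervision tree.
  use GenServer

  def start_link(room_id) do
    GenServer.start_link(__MODULE__, room_id,
      name: {:via, Registry, {RoomRegistry, room_id}}
    )
  end

  @impl true
  def init(room_id), do: {:ok, %{id: room_id, peers: %{}}}
end
```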
RTP stream
I'm trying to get a simple RTP stream pipeline working but hit the following error:
```
16:22:57.782 [error] GenServer #PID<0.308.0> terminating
** (KeyError) key #Reference<0.1227858978.1981546498.158967> not found in: %{}
:erlang.map_get(#Reference<0.1227858978.1981546498.158967>, %{})...
```
React-Native connection?
I'm struggling to get a React Native client to connect to my Membrane server. I'm just running locally right now. I start my Membrane server with EXTERNAL_IP={my ip} mix phx.server, and I'm using the @jellyfish-dev/react-native-membrane-webrtc client in my React Native code.
Then I have the following connection code in my React Native view. I see the console log statements for init connect and attempting connect, but it never connects. I don't see a connection message in my Phoenix server, nor the successful connection message.
I tried increasing the log verbosity, but didn't get anything out of the React Native logs. Is there something obviously wrong with my connection string? Is it expecting something different for the server URL?...
RTP demo with RawAudio
Hello friends, I'm trying to get microphone input (via Membrane.PortAudio.Source) packaged into an RTP stream and sent to a server, and can't quite seem to get it right.
The excerpt below is based on the demo in membrane-demo/rtp, but with microphone input substituted and newer syntax.
```elixir
...
```
Unable to create new endpoints in Membrane RTC Engine 0.14.0
A change was made to the RTC engine, implementing a to_type_string function for all existing endpoints. This function seems to be necessary for an endpoint to be added.
This has the side-effect of removing the ability to create new endpoints - only the predefined ones are allowed:
https://github.com/jellyfish-dev/membrane_rtc_engine/blame/master/lib/membrane_rtc_engine/endpoints/webrtc/media_event.ex#L366-L368
...
How to generate SSL certs for membrane_rtc_engine dTLS?
Do you have any recommendations on how to obtain SSL certs for use in the handshake_opts of the WebRTC Endpoint?
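Not an authoritative answer, but one pattern seen in Membrane demo code is to skip provisioned certificates entirely and generate a self-signed key/cert pair at runtime with ex_dtls (DTLS-SRTP does not require a CA-signed certificate). A hedged sketch, assuming the ExDTLS calls below still exist in the version in use:

```elixir
# Sketch only: function names follow older ex_dtls releases and may have
# changed; check the ex_dtls version in mix.lock.
{:ok, dtls} = ExDTLS.start_link(client_mode: false, dtls_srtp: true)
{:ok, pkey} = ExDTLS.get_pkey(dtls)
{:ok, cert} = ExDTLS.get_cert(dtls)

handshake_opts = [
  client_mode: false,
  dtls_srtp: true,
  pkey: pkey,
  cert: cert
]
```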
Intermittent Failures with RTMP Sink
We are running into some intermittent failures with the RTMP sink. What we notice is that sometimes a given pipeline will stream all the way through and sometimes the RTMP sink will raise when writing frames and crash the pipeline.
```
22:01:58.807 [error] <0.5365.0>/:rtmp_sink/ Error handling action {:split, {:handle_write, [[:video, %Membrane.Buffer{payload: <<0, 0, 139, 207, 65, 154, 128, 54, 188, 23, 73, 255, 152, 130, 98, 10, 43, 88, 28, 64, 176, 10, 39, 247, 233, 179, 54, 27, 17, 168, 97, 24, 82, 152, 175, 21, 138, 252, 216, 108, 205, 134, ...>>, pts: 27240000000, dts: 27240000000, metadata: %{h264: %{key_frame?: false}, mp4_payload: %{key_frame?: false}}}]]}} returned by callback Membrane.RTMP.Sink.handle_write_list
...
```
WebRTC to HLS, where does the pipeline happen?
Looking at the demo, I can't see a "normal" pipeline. I want to process video frames in a pipeline - run ML on them and so on. I want the video from the browser, and WebRTC should be a reasonable way to stream that; then I want to apply a bunch of processing between the input and the final HLS output.
Is there a straightforward way to do that, or is the engine mostly just repackaging the video and audio streams?...
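Not a definitive answer, but the usual place for custom processing is a Membrane.Filter sitting between the decoder and the HLS encoder/sink. A minimal sketch, assuming a recent membrane_core (handle_buffer/4 with auto flow control) and that the stream has already been decoded to raw video upstream; the module name is made up:

```elixir
defmodule FrameProcessor do
  # Hypothetical filter: receives decoded frames, runs arbitrary processing
  # (e.g. ML inference) on each one, and passes the result downstream.
  use Membrane.Filter

  def_input_pad :input, accepted_format: %Membrane.RawVideo{}
  def_output_pad :output, accepted_format: %Membrane.RawVideo{}

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    # buffer.payload holds one raw frame; replace it with the processed frame.
    {[buffer: {:output, buffer}], state}
  end
end
```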
Membrane.Source example for RTC Engine
Hi! I'm trying to implement a solution in which I can stream audio from an API to the client via the Membrane RTC Engine.
I'm basing my Membrane.Source implementation on Membrane.Hackney.Source, but now I'm a bit stuck on getting the audio sent from that source to the WebRTC Endpoint.
I've created an endpoint that forwards the output of my pad to Membrane.RTC.Engine.Endpoint.WebRTC.TrackSender, but I'm wondering:...
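For reference, a stripped-down sketch of the kind of push-mode source this could start from (written against a recent membrane_core; the module name and the chunk-delivery message are placeholder assumptions, not real API):

```elixir
defmodule ApiAudioSource do
  # Hypothetical source: audio chunks arrive as messages to the element and
  # are pushed downstream as buffers.
  use Membrane.Source

  def_output_pad :output,
    accepted_format: %Membrane.RemoteStream{type: :bytestream},
    flow_control: :push

  @impl true
  def handle_playing(_ctx, state) do
    # Declare the stream format before any buffers are sent.
    {[stream_format: {:output, %Membrane.RemoteStream{type: :bytestream}}], state}
  end

  @impl true
  def handle_info({:audio_chunk, data}, _ctx, state) do
    {[buffer: {:output, %Membrane.Buffer{payload: data}}], state}
  end
end
```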
Problems specifying toilet capacity of Realtimer
I have a pipeline which I see fail intermittently on startup due to a toilet overflow on a Realtimer element. The downstream element of the Realtimer is an RTMP sink.
In my pipeline I have specified the toilet capacity with via_in to the Realtimer, as shown in the docs:
```elixir
...
```
Confusion on usage of MP4.Demuxer.ISOM
Hi, I'm trying to write a simple pipeline which takes in an MP4 source and streams it out via the RTMP sink. I have some general confusion about how to properly wire up the MP4.Demuxer.ISOM.
1. How do I decode/parse the output pads' stream format downstream from the source? I haven't seen any examples or demos of transforming the MP4.Payload.{AAC, AVC1} stream formats.
2. How do I properly handle dynamically attaching the output pads for each track to downstream elements? If I want to handle the :new_track message to attach to some sink with pads in the :always availability mode (such as the RTMP sink), I can't temporarily attach that track to some grouping of elements which ends at the sink. For example, if I get the :new_track notification for an AAC track, I can't attach just the :audio pad of the RTMP sink, because when handling that callback there is no video pad to attach (see the sketch after this list).
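One pattern worth trying for point 2 (a rough sketch, not verified against the current membrane_mp4_plugin or membrane_rtmp_plugin APIs: it assumes the demuxer is named :demuxer, that its notification carries the track id and stream format as described above, and that state holds a tracks map and an rtmp_url) is to buffer the track notifications and only create the RTMP sink once both tracks are known, so both of its static pads are linked in a single spec:

```elixir
@impl true
def handle_child_notification({:new_track, id, format}, :demuxer, _ctx, state) do
  state = put_in(state.tracks[track_kind(format)], id)

  case state.tracks do
    %{audio: audio_id, video: video_id} ->
      spec = [
        # Create the sink only now, linking both of its static pads at once.
        get_child(:demuxer)
        |> via_out(Membrane.Pad.ref(:output, video_id))
        |> child(:video_parser, Membrane.H264.FFmpeg.Parser)
        |> via_in(:video)
        |> child(:rtmp_sink, %Membrane.RTMP.Sink{rtmp_url: state.rtmp_url}),
        get_child(:demuxer)
        |> via_out(Membrane.Pad.ref(:output, audio_id))
        |> child(:audio_parser, Membrane.AAC.Parser)
        |> via_in(:audio)
        |> get_child(:rtmp_sink)
      ]

      {[spec: spec], state}

    _still_waiting ->
      {[], state}
  end
end

# Hypothetical helper mapping a track's stream format to :audio or :video.
defp track_kind(%Membrane.MP4.Payload.AAC{}), do: :audio
defp track_kind(%Membrane.MP4.Payload.AVC1{}), do: :video
```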
PortAudio plugin on Apple Silicon Mac
Not sure what the issue stems from. Opening a microphone source gives me:
```
** (Membrane.CallbackError) Error returned from Membrane.PortAudio.Source.handle_prepared_to_playing:
:pa_open_stream...
```
Confusion on video compositor timestamp offset
Hi 🙂 I'm working on a POC app and evaluating Membrane for a project at work, and I have some questions about how the timestamp offset for the video compositor plugin is supposed to work.
First, the goal of the POC app is to take an arbitrary list of videos and time offsets and dynamically stitch them together into one continuous video (and then stream the result somewhere, but I haven't gotten that far yet).
Some other members of my team chatted with some of the Membrane contributors, who suggested we take a look at the video compositor plugin and provided us with some skeleton code. However, I'm struggling to understand how the timestamp offset option that the compositor takes can be used to seek through the input. ...
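For comparison, one reading of that option (possibly wrong) is that it shifts where an input's frames land on the shared output timeline rather than seeking within the input. A hedged sketch of linking each clip with its own offset, assuming raw H.264 file inputs and that the offset is passed as an input-pad option named timestamp_offset (the option name and where it goes may differ in the actual plugin version):

```elixir
# videos is a list of {path, start_ms} tuples; each clip is decoded and
# linked to the compositor shifted by its desired start time.
streams =
  videos
  |> Enum.with_index()
  |> Enum.map(fn {{path, start_ms}, idx} ->
    child({:source, idx}, %Membrane.File.Source{location: path})
    |> child({:parser, idx}, Membrane.H264.FFmpeg.Parser)
    |> child({:decoder, idx}, Membrane.H264.FFmpeg.Decoder)
    |> via_in(Membrane.Pad.ref(:input, idx),
      options: [timestamp_offset: Membrane.Time.milliseconds(start_ms)]
    )
    |> get_child(:compositor)
  end)
```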