kamilstasiak
Software Mansion
Created by noozo on 6/17/2024 in #membrane-help
WebRTC no audio in incoming audio tracks
The example app that I sent you creates an audio and a video element for every stream/track, so it's impossible to override anything. After two trackReady events, I have two audio and two video elements. A MediaStream with an audio track is assigned to both an audio and a video element (even though there is no video track), and vice versa. On the other hand, if I understand your code correctly, you're creating one audio and one video element per endpoint, so if you receive two tracks from the same endpoint, either the audio overrides the video or the video overrides the audio, because you always override both of them. The second stream probably has only a video track, so it "nullifies" the audio element
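For reference, a minimal sketch of that per-track approach; onTrackReady and its signature are illustrative, not the exact SDK API:
function onTrackReady(stream: MediaStream, track: MediaStreamTrack) {
  // One element per track: an audio track gets an audio element and a
  // video track gets a video element, so nothing is ever overridden.
  const el: HTMLMediaElement =
    track.kind === "audio"
      ? document.createElement("audio")
      : document.createElement("video");
  el.autoplay = true;
  el.srcObject = stream;
  document.body.appendChild(el);
}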
8 replies
Software Mansion
Created by noozo on 6/17/2024 in #membrane-help
WebRTC no audio in incoming audio tracks
Hi, I've tested it for you in this PR: https://github.com/fishjam-dev/ts-client-sdk/pull/54 so you can compare that code to yours. If I understand your code correctly, I have an idea. You're creating a video and an audio element for every Endpoint, and then attaching the stream to those elements in the trackReady handler. trackReady is invoked for every track, and you have two of them: one with audio and one with video. The video stream contains only a video track, and the audio stream contains only an audio track. So maybe you receive the audio stream first, assign it to both elements' srcObject, and then do the same with the video. In that case, the video stream overrides the srcObject of the audio element
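If that is the case, here is a hedged sketch of a fix that keeps your per-endpoint layout; audioEl and videoEl stand for the endpoint's existing elements and are assumed names:
function onTrackReady(
  stream: MediaStream,
  audioEl: HTMLAudioElement,
  videoEl: HTMLVideoElement
) {
  // Assign the stream only to the element matching the track's kind,
  // so a later video stream never overwrites the audio element's srcObject.
  const kind = stream.getTracks()[0]?.kind;
  if (kind === "audio") audioEl.srcObject = stream;
  if (kind === "video") videoEl.srcObject = stream;
}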
8 replies
Software Mansion
Created by noozo on 6/17/2024 in #membrane-help
WebRTC no audio in incoming audio tracks
Hi, keep in mind that the video track and the audio track arrive in different streams, so you probably need to add both of them to one MediaStream in order to use a single video element. Or maybe you need to handle the onloadedmetadata event on your audio/video element:
video.srcObject = mediaStream;
video.onloadedmetadata = function (e) {
  // Start playback only once the stream's metadata has loaded.
  video.play();
};
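For the first suggestion, a minimal sketch of merging both incoming single-track streams into one MediaStream; videoElement is assumed to be your existing element:
const combined = new MediaStream();

function onTrackReady(stream: MediaStream) {
  // Each incoming stream carries a single track (audio or video);
  // collect the tracks so one element can play both.
  stream.getTracks().forEach((track) => combined.addTrack(track));
  videoElement.srcObject = combined;
}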
Browsers block autoplay when the user hasn't interacted with the page and the muted attribute is not present. https://developer.mozilla.org/en-US/docs/Web/HTML/Element/video#autoplay Could you provide a code snippet of your frontend app?
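If autoplay is the culprit, a minimal sketch of satisfying the autoplay policy; the playsInline flag is an extra assumption, commonly needed on iOS Safari:
const video = document.createElement("video");
video.autoplay = true;
video.muted = true; // muted media may autoplay without a user gesture
video.playsInline = true; // assumption: needed for inline playback on iOS Safari
video.srcObject = mediaStream;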
8 replies
Software Mansion
Created by Jdyn on 3/6/2024 in #membrane-help
On JF Tracks and Reconnecting (in React)
14 replies
Software Mansion
Created by Jdyn on 3/6/2024 in #membrane-help
On JF Tracks and Reconnecting (in React)
Hi, yes, it solves the majority of problems and simplifies the code significantly. I've removed a lot of code in jellyfish-videoroom. I believe I'll finish the release today
14 replies
Software Mansion
Created by Jdyn on 3/6/2024 in #membrane-help
On JF Tracks and Reconnecting (in React)
Right now, our device manager (for toggling the camera, microphone, and screen share) doesn’t emit any events. As a developer, you need to listen to state changes to detect important events like “camera track is available” or “camera stopped”. React's Strict Mode invokes useEffect twice, complicating matters even further. We plan to emit events on every important state change. I want to propose a fix that includes events within the following week, so if you can wait a few days, I would suggest doing so. If you can’t wait, I have a few ideas:
- You could disable useMembraneMediaStreaming and set autoStreaming: true, preview: false in useSetupMedia. It should automatically add media tracks when the user connects to Jellyfish.
- If you have problems only in development mode, you could disable Strict Mode for a while and enable it again when we release an update.
- You could use useRef to store some additional data that helps you identify whether a particular track is in the right state (a sketch of this idea follows below).
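A hedged sketch of that useRef idea, guarding against Strict Mode's double useEffect run; useAddTrackOnce and addTrack are illustrative names, not our API:
import { useEffect, useRef } from "react";

function useAddTrackOnce(
  track: MediaStreamTrack | null,
  addTrack: (t: MediaStreamTrack) => void
) {
  // Remember which track IDs were already added, so the effect's second
  // Strict Mode invocation doesn't add the same track twice.
  const added = useRef(new Set<string>());
  useEffect(() => {
    if (!track || added.current.has(track.id)) return;
    added.current.add(track.id);
    addTrack(track);
  }, [track, addTrack]);
}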
14 replies
Software Mansion
Created by Jdyn on 3/6/2024 in #membrane-help
On JF Tracks and Reconnecting (in React)
What's more, we’re currently working on the connection mechanism. We plan to fix some errors and add the ability to reconnect.
14 replies
Software Mansion
Created by cohoat_mortoai on 2/19/2024 in #membrane-help
Unity client?
Hi, as Mat said, we don’t have a C# client yet. I want to clarify some topics about our SDK, which may help you:
- To minimize the number of race conditions, we decided to handle only one renegotiation at a time; that's why a message queue is implemented there (see the sketch after this list). It could be done better, but we haven't had time to refactor it yet. We implemented it on top of our old code, so not everything is as simple as it could be. Given how JS handles multithreading, such a simple solution sufficed for us; I suspect that in C# this will need to be modeled differently.
- In the current implementation, it’s not possible to add both an audio and a video track in one renegotiation cycle. We plan to implement that in the future.
- I would advise against rewriting the code 1:1 from JS, because it needs more refactoring first, especially around private functions. We are aware that there is significant room for improvement here.
- There is a schema of RTC engine events written in Zod: https://github.com/jellyfish-dev/membrane-webrtc-js/blob/master/test/schema.ts
- This API is incomplete; we still need to expose, for example, disabledEncodings or getStats() from the underlying RTCPeerConnection.
- Lately, we implemented a metadata parser, but it's not well documented yet, so I wouldn't bother with it.
If you have any questions regarding the decisions made or the code, or if something is exceptionally unclear, please let us know
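A minimal sketch of the one-renegotiation-at-a-time idea from the first point; it mirrors the approach described above, not our exact implementation:
type Job = () => Promise<void>;

class RenegotiationQueue {
  private jobs: Job[] = [];
  private running = false;

  push(job: Job) {
    this.jobs.push(job);
    void this.run();
  }

  private async run() {
    if (this.running) return;
    this.running = true;
    // Run queued renegotiations strictly one after another; the next
    // one starts only after the previous promise settles.
    while (this.jobs.length > 0) {
      await this.jobs.shift()!();
    }
    this.running = false;
  }
}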
7 replies