WebRTC no audio in incoming audio tracks
We are implementing a video room, similar to the examples in web_rtc_engine (the main difference is that we use a LiveView to broker the connection between the client and the room GenServer), and for some reason we get no audio (at least on Chrome) from the other endpoints (local audio, when unmuted in JS, works fine).
Possibly related: the volume controls for the other endpoints are greyed out, as seen in the screenshot.
Follow-up question: is the audio HTML tag needed at all, since the video tag can also play audio, or does that affect the tracks that arrive at the RTC engine endpoints?
Thanks in advance,
Pedro
7 Replies
Hi, keep in mind that the video track and the audio track arrive in different streams. So you probably need to add both of them to one MediaStream in order to use a single video element.
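A minimal sketch of that grouping logic, assuming one stream per incoming track. Since MediaStream is a browser-only API, the example models tracks with a hypothetical TrackLike interface and an EndpointTracks helper (both names are assumptions, not part of the SDK), with the real browser calls shown in comments:

```typescript
// Each remote track arrives in its own MediaStream, so to drive a single
// <video> element we collect an endpoint's tracks and combine them.
// TrackLike is a hypothetical stand-in for MediaStreamTrack so the grouping
// logic can run outside a browser.
interface TrackLike {
  kind: "audio" | "video";
  id: string;
}

class EndpointTracks {
  private byEndpoint = new Map<string, TrackLike[]>();

  // Record an incoming track and return all tracks seen for that endpoint.
  add(endpointId: string, track: TrackLike): TrackLike[] {
    const tracks = this.byEndpoint.get(endpointId) ?? [];
    tracks.push(track);
    this.byEndpoint.set(endpointId, tracks);
    return tracks;
  }

  // True once both an audio and a video track have arrived.
  hasBothKinds(endpointId: string): boolean {
    const kinds = new Set(
      (this.byEndpoint.get(endpointId) ?? []).map((t) => t.kind)
    );
    return kinds.has("audio") && kinds.has("video");
  }
}

// In the browser, once both kinds have arrived, you would do roughly:
//   const combined = new MediaStream(tracks as MediaStreamTrack[]);
//   videoElement.srcObject = combined;
```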
Or maybe you need to handle the onloadedmetadata event on your audio/video element. Browsers block autoplay when there has been no user interaction and the muted attribute is not present:
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/video#autoplay
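A rough sketch of that autoplay rule (the real check lives in the browser; canAutoplay is a hypothetical helper modeling it), with the usual recovery pattern for a blocked play() call shown in comments:

```typescript
// Rough model of the autoplay policy: unmuted playback is only allowed
// after a user gesture, while muted playback is generally allowed.
// canAutoplay is a hypothetical helper mirroring that rule.
function canAutoplay(opts: { muted: boolean; hadUserGesture: boolean }): boolean {
  return opts.muted || opts.hadUserGesture;
}

// In the browser, play() returns a promise that rejects when autoplay is
// blocked, so the usual pattern is roughly:
//   videoElement.muted = true;        // allows autoplay to start
//   videoElement.play().catch(() => {
//     // show a "click to unmute/play" control and retry play() on click
//   });
```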
Could you provide a code snippet of your frontend app?
Here it is.
@kamilstasiak any idea? 🙂
Hi, I've tested it for you in this PR: https://github.com/fishjam-dev/ts-client-sdk/pull/54
So you can compare this code to yours.
If I understand your code correctly, I have an idea. You're creating a video and an audio element for every endpoint, and then attaching the stream to those elements in the trackReady handler. trackReady is invoked for every track, and you have two of them: one with audio and one with video. The video stream contains only a video track, and the audio stream contains only an audio track. So maybe you receive the audio stream first, assign it to the srcObject of both the audio and video elements, and then do the same with the video stream. In that case, the video stream will override srcObject in the audio element.
It's very much possible, but are we doing anything different from the example video_room then? Or does the example video_room also suffer from the same problem?
The example app that I sent you creates an audio and a video element for every stream/track, so it's impossible to override anything. After two trackReady events, I have two audio and two video elements. A MediaStream with an audio track is assigned to both an audio and a video element (even though it has no video track), and vice versa.
On the other hand, if I understand your code correctly, you're creating an audio and video element for each endpoint, so if you receive two tracks from the same endpoint, either the audio overrides the video or the video overrides the audio because you always override both of them.
Probably the second stream has only a video track, so it "nullifies" the audio element.
Ahh, that makes sense.
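A small sketch contrasting the two attachment patterns described above, assuming one trackReady call per incoming stream. SinkLike stands in for an HTMLMediaElement's srcObject slot, and both function names are hypothetical:

```typescript
// SinkLike is a hypothetical stand-in for an HTMLMediaElement; srcObject
// holds the id of whichever stream was last attached.
interface SinkLike {
  srcObject: string | null;
}

// Problematic pattern: one audio + one video element per endpoint, with
// every trackReady overwriting both elements. Whichever stream arrives
// second wins, so the first track's media is lost.
function attachPerEndpoint(
  audioEl: SinkLike,
  videoEl: SinkLike,
  streamId: string
): void {
  audioEl.srcObject = streamId;
  videoEl.srcObject = streamId;
}

// Safe pattern: create a fresh element per stream/track, keyed by stream id,
// so nothing is ever overwritten.
function attachPerTrack(
  elements: Map<string, SinkLike>,
  streamId: string
): void {
  elements.set(streamId, { srcObject: streamId });
}
```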