Need help with an audio-only conference using the Kurento Composite media element in Node.js

I am referencing the code from GitHub for an audio and video conference using the Kurento Composite media element. It works fine for audio and video streaming over WebRTC.
But I need an audio-only conference over WebRTC, so I made changes to the above GitHub code and uploaded the new code to a GitHub repository.
I added the following changes in the static/js/index.js file:
var constraints = {
    audio: true,
    video: false
};
var options = {
    localVideo: undefined,
    remoteVideo: video,
    onicecandidate: onIceCandidate,
    mediaConstraints: constraints
};
webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function(error) {
    // ... rest of the callback as in the original code (truncated here)
});
When I run this code, there are no errors from the Node server or in the Chrome console, but the audio stream never starts; it only shows a spinner for a long time. The Chrome console log is here.
As per the reply to my previous Stack Overflow question, we need to specify MediaType.AUDIO in the Java code, like below:
webrtc.connect(hubport, MediaType.AUDIO);
hubport.connect(webrtc, MediaType.AUDIO);
But I want to implement this in Node.js using kurento-client.js, and I could not find any reference for setting MediaType.AUDIO when connecting the hubPort and webRtcEndpoint in the Node.js API.
Can someone please help me make the code changes for this in Node.js, or suggest a reference, so I can implement an audio-only conference using the Composite media element and Node.js?

This should do it:
function connectOnlyAudio(source, sink, callback) {
    source.connect(sink, "AUDIO", function(error) {
        if (error) {
            return callback(error);
        }
        return callback(null);
    });
}
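To mirror the Java snippet from the question (both directions, audio only), usage could look something like the sketch below; webRtcEndpoint and hubPort are assumed to be the endpoint and hub port your existing kurento-client code already creates:
// Sketch: audio-only connections in both directions between a
// participant's WebRtcEndpoint and their HubPort on the Composite hub.
// (The endpoint names are assumptions based on the question's setup.)
connectOnlyAudio(webRtcEndpoint, hubPort, function(error) {
    if (error) return console.error(error);
    connectOnlyAudio(hubPort, webRtcEndpoint, function(error) {
        if (error) return console.error(error);
        console.log('Audio-only conference wiring complete');
    });
});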
We are in the process of improving the documentation of the project. I hope that this will all be made more clear in the new docs.
EDIT 1
It is important to make sure that you are indeed sending something, and that the connection between your client and the media server is negotiated correctly. Going through your bower.json, I've found that you are setting the adapter dependency to a wildcard version, so to speak. In the latest releases, they've done some refactoring that makes the kurento-utils-js library fail. We haven't yet adapted to the new changes, so you need to pin the adapter.js dependency like so
"adapter.js": "v0.2.9"

Related

Unable to play video or media-related files in TestCafe with JavaScript

Video-related scripts are not getting played using TestCafe.
Click on Play button - success.
Play video - it just keeps loading.
Is there any workaround to make the video play using TestCafe?
I cannot reproduce your case with this test:
fixture`Fixture`
    .page`https://www.devexpress.com/products/testcafestudio/`;

test('test', async (t) => {
    await t.switchToIframe('iframe');
    await t.click('.ytp-large-play-button.ytp-button.ytp-large-play-button-red-bg');
    await t.debug();
});
It works fine. Please share a simple sample. If you don't want to share your example here, you can do it with a template on GitHub. Try to follow these instructions when creating an example.

Capture a 1:1 video conference and broadcast it to an RTMP URL

I am currently working on a Node.js and socket app that does 1:1 video conferencing using WebRTC. The videos are two separate elements in the HTML, and I would like to merge them together so that I can broadcast to an RTMP URL for public viewing (2:many). Is this possible?
For WebRTC I followed this tutorial https://www.youtube.com/watch?v=DvlyzDZDEq4, and for broadcasting I am using ffmpeg, which currently handles one video stream.
Please confirm whether this is doable.
Update
I was able to merge the video streams using
https://www.npmjs.com/package/video-stream-merger
And now the final issue:
I am receiving merger.result, which is the merged stream, and I created a MediaRecorder object from it. The MediaRecorder ondataavailable callback is called only once, not every 250 ms, which I need in order to broadcast to YouTube. How can I do this?
var merger = new VideoStreamMerger(v_opts);
...
...
merger.start();
myMediaRecorder = new MediaRecorder(merger.result);
myMediaRecorder.start(250);
myMediaRecorder.ondataavailable = function (e) {
    console.log("DataAvailable");
    //socket.emit("binarystream", e.data);
    state = "start";
    //chunks.push(e.data);
};
So you're looking for many peers. This is possible; please see the links below for reference, and a rough sketch after them.
WebRTC: https://webrtc.github.io/samples/src/content/peerconnection/multiple/
StackOverflow reference: webRTC multi-peer connection (3 clients and above)
GitHub reference: https://github.com/Dirvann/webrtc-video-conference-simple-peer
https://deepstream.io/tutorials/webrtc/webrtc-full-mesh/
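As a sketch of the full-mesh approach those links describe (assuming a localStream already captured via getUserMedia; the signaling object and attachRemoteVideo helper are hypothetical placeholders for your own signaling code):
// One RTCPeerConnection per remote peer (full-mesh sketch).
var peers = {};

function connectToPeer(peerId) {
    var pc = new RTCPeerConnection({
        iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
    });
    // Send our local tracks to this peer.
    localStream.getTracks().forEach(function(track) {
        pc.addTrack(track, localStream);
    });
    // Relay ICE candidates through your signaling channel (hypothetical).
    pc.onicecandidate = function(e) {
        if (e.candidate) signaling.send({ to: peerId, candidate: e.candidate });
    };
    // Attach the remote stream when it arrives (hypothetical helper).
    pc.ontrack = function(e) {
        attachRemoteVideo(peerId, e.streams[0]);
    };
    peers[peerId] = pc;
    return pc;
}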

Google Nest Hub can't play HLS

My question is:
I push an HLS stream to the GNH (Google Nest Hub) in the action.devices.commands.GetCameraStream response format. The GNH does nothing but show a loading UI for a few seconds.
Is something wrong with my HLS file?
How can I get logs from the GNH to help me debug?
What I know:
I tried pushing an mp4 URL (1080p, under 60 fps) to the GNH, and that works well.
I tried converting the mp4 to HLS with several libraries, including ffmpeg and Bento4.
Here is the JSON I send to the GNH:
{
    "payload": {
        "commands": [{
            "status": "SUCCESS",
            "states": {
                "cameraStreamAccessUrl": "http:/path/of/steram.m3u8"
            },
            "ids": ["....."]
        }]
    },
    "requestId": "My_Request_Id"
}
It seems that you are missing the required property cameraStreamSupportedProtocols. Try adding the protocol and see if you are able to get the stream to work. This will load the default cast camera receiver since you are trying to play HLS content. If you are still seeing an issue with playback, it could be that your stream is malformed and needs to be revised.
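For reference, a sketch of how that attribute could appear in the device's SYNC response (values illustrative, based on the CameraStream trait):
{
    "attributes": {
        "cameraStreamSupportedProtocols": ["hls"],
        "cameraStreamNeedAuthToken": false,
        "cameraStreamNeedDrmEncryption": false
    }
}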
Playback logs will only be available to you if you create your own basic receiver app and specify it in your response using the cameraStreamReceiverAppId property. To learn more about creating a Cast receiver app, see the overview page (https://developers.google.com/cast/docs/web_receiver) and how to create a basic receiver (https://developers.google.com/cast/docs/web_receiver/basic). We also have a default camera receiver sample in our samples GitHub (https://github.com/googlecast/CastCameraReceiver). A sketch of the states with a receiver app specified follows.
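A sketch of the EXECUTE response states once a custom receiver is specified (the URL and app ID here are placeholders):
{
    "states": {
        "cameraStreamAccessUrl": "https://example.com/path/to/stream.m3u8",
        "cameraStreamReceiverAppId": "YOUR_RECEIVER_APP_ID"
    }
}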

socket.io-stream not exposing .to call

I'm building a camera streaming platform that uses navigator.getUserMedia. The clients seem to broadcast their video streams without error. The code (on clients) for doing so looks like this:
navigator.mediaDevices.getUserMedia({
    audio: false, // We don't need audio at the moment
    video: true
}).then(function(stream) {
    ss(socket).emit("BroadcastStream", stream);
}).catch(function(err) {
    // Code for handling the error
});
However my Node.JS server handling the stream (and sending it to the other client) throws this error:
TypeError: socket_stream(...).to is not a function
It seems socket.io-stream isn't exposing the .to function. I know the argument to socket_stream (a reference to the socket.io instance) is valid, and socket.io-stream's documentation seems to agree (there is no mention of .to).
How would I go about resolving this?
EDIT:
I am open to suggestions (even using a different method altogether, but leave that as a last resort).
Alright, never mind (a month later): I found A Dead Simple WebRTC Example, which showed me the basics of using WebRTC (without STUN servers, which I kind of needed for this project), and I adapted it to my specific needs. Great job by Shane Tully on that tutorial!
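That said, for anyone who does want to keep the socket.io-stream route: ss() wraps a single socket (hence no .to()), so one workaround sketch is to emit on the destination client's socket directly. This is not from the original thread, and findViewerSocketFor is a hypothetical lookup you would implement yourself:
var ss = require('socket.io-stream');

io.on('connection', function(socket) {
    ss(socket).on('BroadcastStream', function(stream, data) {
        // ss() wraps one socket at a time, so instead of ss(...).to(...),
        // emit directly on the receiving client's socket.
        var target = findViewerSocketFor(socket); // hypothetical lookup
        if (target) {
            ss(target).emit('BroadcastStream', stream, data);
        }
    });
});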

Is it possible to get the currently playing track in the 1.x Apps API?

I am trying to update my Spotify remote control app that is currently using the legacy API to use the new 1.x API. Is it possible using the 1.x API to access information about the currently playing track? models.player.track does not seem to exist anymore (though it's in the documentation).
For the curious, I am using this for my app running in Spotify Desktop which uses websockets to talk with a Python server, which then provides a web interface for phones and tablets to remotely control the instance of Spotify running on the desktop. This works great using the legacy API and I can control playback and get the now playing info from any connected remote. I assume this app is going to stop working at some point soon since Spotify says they are retiring the legacy API. (Unless my assumption that the app will stop working is wrong, then never mind).
Thanks.
It is possible to access the currently playing track by loading the track property of the Player.
You would do something like this:
require(['$api/models'], function(models) {
    function printStatus(track) {
        if (track === null) {
            console.log('No track currently playing');
        } else {
            console.log('Now playing: ' + track.name);
        }
    }

    // update on load
    models.player.load('track').done(function(p) {
        printStatus(p.track);
    });

    // update on change
    models.player.addEventListener('change', function(p) {
        printStatus(p.data.track);
    });
});
You have a working example in the Tutorial App named "Get the currently playing track".
