Simple Alexa Audioplayer Directives Using Node.js

I'm having a problem finding simple examples that handle Alexa audioplayer events using node.js. So far, I've only been able to get the play directive to work using the following:
this.response.audioPlayerPlay('REPLACE_ALL', 'URL', 0);
this.emit(':responseReady');
How do I stop the audio from playing? Any time I try to trigger the StopIntent during playback by saying "stop", it triggers the PauseIntent. I want to be able to stop playback and end the session, pause playback, and resume playback. I've looked at the examples on GitHub and I haven't found them very helpful.

this.response.audioPlayerStop(); // this will stop the audio player from playing the current stream
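To cover all three behaviors, give the built-in playback intents their own handlers. While the AudioPlayer is active, "stop" is often routed to AMAZON.PauseIntent rather than AMAZON.StopIntent, which is why you are landing in the wrong handler; handle both. Below is a minimal sketch in the alexa-sdk v1 handler style from the question (the URL, token, and attribute name are placeholders):

'AMAZON.PauseIntent': function() {
    // Remember where playback was so ResumeIntent can pick up from the same spot.
    this.attributes['offsetInMilliseconds'] =
        this.event.context.AudioPlayer.offsetInMilliseconds;
    this.response.audioPlayerStop();
    this.emit(':responseReady');
},
'AMAZON.ResumeIntent': function() {
    var offset = this.attributes['offsetInMilliseconds'] || 0;
    // audioPlayerPlay(behavior, url, token, expectedPreviousToken, offsetInMilliseconds)
    this.response.audioPlayerPlay('REPLACE_ALL', 'URL', 'token', null, offset);
    this.emit(':responseReady');
},
'AMAZON.StopIntent': function() {
    // Stop playback and end the session with a goodbye.
    this.response.speak('Goodbye.').audioPlayerStop();
    this.emit(':responseReady');
}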

Related

Autoplay media until time's up in google dialogflow

I have been working on Google Actions lately. I need to play an MP3 URL for x minutes (the duration comes from the user, e.g. "play abc for 30 minutes"). My problem is that my MP3 is only 1:30 minutes long. How can I keep playing it until the x minutes are up? My code lives in the Dialogflow index.js file. Here is what I tried:
app.intent('soundplay', (conv, {soundreq, duration}) => {
    conv.ask(new Suggestions('Exit'));
    conv.ask(new SimpleResponse({
        speech: 'xxx',
        text: 'xxx',
    }));
    conv.ask(sound);
    var d = duration.milliseconds;
    setTimeout(sound, d);
});
I tried using setTimeout, but that doesn't work either.
There are a few issues you need to deal with.
The first is that sound is pretty vague in your code sample. You're using it in setTimeout(), which suggests that it is a function, but you're also passing it to conv.ask(), which suggests it is a MediaResponse or some other object.
The second is that this code runs on your server, not on the user's device, and Actions work in a conversational back-and-forth model. So once you send something to the user, you need to wait until the user (or the user's device) sends you another message before you can reply again.
The solution is to include a MediaObject as part of the response you build. This will include the URL of the audio you want to play, along with a title and some other information.
When the audio finishes playing, your Dialogflow agent will get a message with an actions_intent_MEDIA_STATUS Event. You can create an Intent that handles this event and, in the Intent Handler for it in your webhook, check to see if the time has expired. If it has, you can prompt for what to do now or end the conversation or whatever is appropriate. If it has not expired, you can play the audio again using another MediaObject.
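A minimal sketch of that flow with the actions-on-google v2 client library (the intent names, the deadline stored in conv.data, the assumption that @sys.duration arrives in minutes, and the MP3 URL are all placeholders to adapt):

const functions = require('firebase-functions');
const { dialogflow, MediaObject, Suggestions } = require('actions-on-google');

const app = dialogflow();

// Hypothetical helper: one pass of the clip, plus the simple response and
// suggestion chip that a media response requires on screen devices.
function playClip(conv) {
    conv.ask('Here you go.');
    conv.ask(new Suggestions('Exit'));
    conv.ask(new MediaObject({
        name: 'abc',
        url: 'https://example.com/abc.mp3', // assumed MP3 URL
    }));
}

app.intent('soundplay', (conv, { soundreq, duration }) => {
    // Assumes @sys.duration in minutes, e.g. { amount: 30, unit: 'min' }.
    conv.data.stopAt = Date.now() + duration.amount * 60 * 1000;
    playClip(conv);
});

// A Dialogflow intent bound to the actions_intent_MEDIA_STATUS event.
app.intent('media status', (conv) => {
    const mediaStatus = conv.arguments.get('MEDIA_STATUS');
    if (mediaStatus && mediaStatus.status === 'FINISHED' && Date.now() < conv.data.stopAt) {
        playClip(conv); // time left: play the clip again
    } else {
        conv.close('Time is up. Goodbye!');
    }
});

exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);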

socket.io-stream not exposing .to call

I'm building a camera streaming platform that uses navigator.getUserMedia. The clients seem to broadcast their video streams without error. The code (on clients) for doing so looks like this:
navigator.mediaDevices.getUserMedia({
    audio: false, // We don't need audio at the moment
    video: true
}).then(function(stream) {
    ss(socket).emit("BroadcastStream", stream);
}).catch(function(err) {
    // Code for handling the error
});
However my Node.JS server handling the stream (and sending it to the other client) throws this error:
TypeError: socket_stream(...).to is not a function
It seems socket.io-stream isn't exposing the .to function. I know the argument to socket_stream (a reference to the socket.io instance) is valid, and socket.io-stream's documentation seems to agree: there is no mention of .to.
How would I go around resolving this?
EDIT:
I am open to suggestions (even using a different method altogether; but leave that as a last resort)
Alright, never mind. A month later, I found A Dead Simple WebRTC Example, which showed me the basics of using WebRTC (without STUN servers, which was what I needed for this project), and I adapted it to my specific needs. Great job to Shane Tully on that tutorial!
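For anyone landing here, the core of what I took from that tutorial looks roughly like this on the broadcasting client (a sketch: no STUN/TURN, so peers must be directly reachable, and the signaling event names over socket.io are my own):

// No ICE servers: direct LAN-only connectivity, as in the tutorial.
var pc = new RTCPeerConnection({ iceServers: [] });

navigator.mediaDevices.getUserMedia({ video: true, audio: false })
    .then(function(stream) {
        // Attach the camera tracks, then start the offer/answer exchange.
        stream.getTracks().forEach(function(track) { pc.addTrack(track, stream); });
        return pc.createOffer();
    })
    .then(function(offer) {
        return pc.setLocalDescription(offer);
    })
    .then(function() {
        socket.emit('webrtc-offer', pc.localDescription); // assumed event name
    });

// Relay ICE candidates through the signaling channel as they are gathered.
pc.onicecandidate = function(e) {
    if (e.candidate) socket.emit('webrtc-candidate', e.candidate); // assumed
};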

Need help for audio conference using Kurento composite media element in Nodejs

I am referring to the code from GitHub for an audio and video conference using the Kurento composite media element. It works fine for audio and video streaming over WebRTC.
But I need an audio-only conference over WebRTC, so I made changes to the GitHub code above; the new code is uploaded to a GitHub repository.
I made the changes below in the static/js/index.js file:
var constraints = {
    audio: true, video: false
};
var options = {
    localVideo: undefined,
    remoteVideo: video,
    onicecandidate: onIceCandidate,
    mediaConstraints: constraints
}
webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function(error) {
    if (error) return onError(error);
    this.generateOffer(onOffer); // callback names as in the Kurento tutorial code
});
When I run this code there are no errors from the Node server or in the Chrome console, but the audio stream never starts; it only shows a spinner for a long time. The Chrome console log is here.
As per the reply to my previous Stack Overflow question, we need to specify MediaType.AUDIO in the Java code, like below:
webrtc.connect(hubport, MediaType.AUDIO);
hubport.connect(webrtc, MediaType.AUDIO);
But I want to implement it in Node.js using kurento-client.js, and I could not find any reference for setting MediaType.AUDIO when connecting the hubPort and webRtcEndpoint in the Node.js API.
Can someone please help me make the same code changes in Node.js, or point me to a reference, so I can implement an audio-only conference using the composite media element and Node.js?
This should do it:
function connectOnlyAudio(source, sink, callback) {
    source.connect(sink, "AUDIO", function(error) {
        if (error) {
            return callback(error);
        }
        return callback(null);
    });
}
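For reference, a sketch of how you might wire a participant with it, following the kurento-client Node API (the pipeline, composite hub, and onError helper are assumed to already exist in your code):

// Assuming `pipeline` is a MediaPipeline and `composite` is the Composite hub.
composite.createHubPort(function(error, hubPort) {
    if (error) return onError(error);
    pipeline.create('WebRtcEndpoint', function(error, webRtcEndpoint) {
        if (error) return onError(error);
        // Audio only, in both directions: participant -> hub and hub -> participant.
        connectOnlyAudio(webRtcEndpoint, hubPort, function(error) {
            if (error) return onError(error);
            connectOnlyAudio(hubPort, webRtcEndpoint, function(error) {
                if (error) return onError(error);
                // ... continue with SDP offer/answer processing here
            });
        });
    });
});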
We are in the process of improving the project's documentation. I hope this will all be made clearer in the new docs.
EDIT 1
It is important to make sure that you are indeed sending something, and that the connection between your client and the media server is negotiated correctly. Going through your bower.json, I found that you are setting the adapter dependency to a wildcard, so to speak. In the latest releases, they've done some refactoring that makes the kurento-utils-js library fail. We haven't yet adapted to the new changes, so you need to pin the adapter.js dependency like so
"adapter.js": "v0.2.9"

Capturing desktop video and microphone audio from a chrome extension

I am using the navigator.webkitGetUserMedia API to capture the desktop and the microphone to capture audio. When I make the following call:
navigator.webkitGetUserMedia({
    audio: true,
    video: {
        mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: id,
            maxWidth: screen.width,
            maxHeight: screen.height
        }
    }
}, gotStream, getUserMediaError);
I am getting a screen capture error. Does this API not support the above scenario?
I am able to capture audio and desktop video individually, but not together. Also, since I am capturing the desktop and not webcam video, does that make any difference?
Chrome does not allow you to request an audio stream alongside a chromeMediaSource.
See Why Screen Sharing Fails here for more info.
You may be able to circumvent this by sending individual getUserMedia requests - one for the audio stream and one for desktop.
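Something along these lines (a sketch: merging the microphone track into the desktop stream with addTrack is one approach, and gotStream, getUserMediaError, and id come from the question's code):

// First the desktop video, then the microphone as a separate request.
navigator.webkitGetUserMedia({
    video: {
        mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: id
        }
    }
}, function(desktopStream) {
    navigator.webkitGetUserMedia({ audio: true }, function(micStream) {
        // Combine: add the mic's audio track to the desktop stream.
        desktopStream.addTrack(micStream.getAudioTracks()[0]);
        gotStream(desktopStream);
    }, getUserMediaError);
}, getUserMediaError);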

Play Audio from receiver website

I'm trying to get my receiver to play an MP3 file hosted on the server with the following function:
playSound_: function(mp3_file) {
    var snd = new Audio("audio/" + mp3_file);
    snd.play();
},
However, most of the time it doesn't play, and when it does, it's delayed. When I load the receiver in my local browser, it works fine.
What's the correct way to play audio on the receiver?
You can use either a media element (such as an <audio> tag) or the Web Audio API. The simplest is probably a media element.
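For instance, a sketch of playSound_ rewritten against an <audio> element that lives in the receiver's DOM (the element id is an assumption; the audio/ path comes from the question):

playSound_: function(mp3_file) {
    // Reuse a single <audio id="player"> element placed in the receiver page.
    var snd = document.getElementById('player');
    snd.src = 'audio/' + mp3_file;
    snd.play();
},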
