I would like to play a short sound to make the output more amusing. If I understand the documentation correctly, it should be possible with a reply in api.ai of something like this SSML:
<speak>Okay here we go: <audio src="http://example.com/boing.wav">boing</audio>. You are welcome!</speak>
Just for reference, SSML stands for Speech Synthesis Markup Language.
The web simulator doesn't play this sound; instead, all tags seem to be stripped out. Is this not supported yet, or did I do something wrong?
The src URL must also be an HTTPS URL (Google Cloud Storage can host your audio files on an HTTPS URL).
https://developers.google.com/actions/reference/ssml
Without seeing your source, there are a few possible reasons:
The audio file must be served publicly via HTTPS, not HTTP. See the description for <audio> on https://developers.google.com/actions/reference/ssml
The audio file must be in a supported format (again, see https://developers.google.com/actions/reference/ssml).
If you're returning it via the webhook response, you need to make sure the data.google.is_ssml property in the JSON is set to true. See https://developers.google.com/actions/reference/webhook-format#response
I have the following for my Node.js server, which works (well, except for the placeholder URLs):
var msg = `
  <speak>
    Tone one
    <audio src="https://example.com/wav/Dtmf-1.wav"></audio>
    Tone two
    <audio src="https://example.com/wav16/Dtmf-2.wav"></audio>
    Foghorn
    <audio src="https://example.com/mp3/foghorn.mp3"></audio>
    Done
  </speak>
`;
var reply = {
  speech: msg,
  data: {
    google: {
      "expect_user_response": true,
      "is_ssml": true
    }
  }
};
res.send(reply);
So here is what I have for the code. It is in the Text response field on my intent.
<speak> One second <break time="3s"/> OK, I have used the best quantum processing algorithms known to computer science! Your silly name is $color $number. I hope you like it. <audio src="https://www.partnersinrhyme.com/files/sounds1/WAV/sports/baseball/Ball_Hit_Cheer.wav"></audio> </speak>
It does not work in the testing area of api.ai, but it does work when I turn on the integration and try it in the Google simulator here: https://developers.google.com/actions/tools/web-simulator
I don't currently have any code; this is just a general question. I've seen multiple articles and SO questions that handle this issue, except that in all of them the byte-range header, which essentially specifies what time segment of the video is sent back to the client, is also specified by the client. I want the server to keep track of the current video position and stream the video back to the client.
The articles and SO Questions I've seen for reference:
https://blog.logrocket.com/build-video-streaming-server-node/
Streaming a video file to an html5 video player with Node.js so that the video controls continue to work?
Here's a solution that does not involve explicitly changing the header for the <video> tag src request or controlling the byte range piped from the server.
The video element has a property called currentTime (in seconds) that lets the client control its own Range header. A separate request to the server for an initial currentTime value lets the server control the start time of the video.
<video id="videoPlayer" muted>
  <source src="/srcendpoint" type="video/mp4" />
</video>
<script>
  const video = document.getElementById("videoPlayer")

  getCurrentSecAndPlay()

  async function getCurrentSecAndPlay() {
    // ask the server where playback should start, then jump there
    let response = await fetch('/currentPosition').then(response => response.json())
    video.currentTime += response.currentSec
    video.play()
  }
</script>
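For the server side of that fetch, here is a minimal sketch assuming Express; the /currentPosition and /srcendpoint routes, the port, and the video path are all illustrative, as is the idea of anchoring playback to the moment the server started:
const express = require('express');
const app = express();

const startedAt = Date.now(); // when the server-side "broadcast" began

app.get('/currentPosition', (req, res) => {
  // report the elapsed seconds since the stream started
  res.json({ currentSec: (Date.now() - startedAt) / 1000 });
});

app.get('/srcendpoint', (req, res) => {
  // serve the file normally; the browser issues its own Range requests
  // once the client sets video.currentTime
  res.sendFile('/path/to/video.mp4');
});

app.listen(3000);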
This is sort of a workaround to mimic livestream behaviour. Maybe it's good enough for you. I don't have a lot of knowledge of HLS and RTMP, but if you want to make a true livestream you should study those.
sources:
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement
https://www.w3.org/2010/05/video/mediaevents.html
I have implemented an Alexa Audio Player skill which plays the audio just fine, but when it is played on an Echo Show, the name of the song does not show on the display.
I see the Amazon documentation (https://amzn.to/2xzpH4u) refers to a Play directive that includes metadata such as the background image, but I'm not sure how to set this up in Node.js.
This is the code snippet from my Play intent handler:
if (this.event.request.type === 'IntentRequest' || this.event.request.type === 'LaunchRequest') {
  var cardTitle = streamDynamic.subtitle;
  var cardContent = streamDynamic.cardContent;
  var cardImage = streamDynamic.image;
  this.response.cardRenderer(cardTitle, cardContent, cardImage);
}
this.response.speak('Enjoy.').audioPlayerPlay('REPLACE_ALL', streamDynamic.url, streamDynamic.url, null, 0);
this.emit(':responseReady');
In your if statement (the branch where rendering is supported), you build the content of the metadata for the card that is rendered on the device.
Following the documentation, cardTitle, cardContent, and cardImage all have to be whatever you want the device to render as a card. You return it for rendering in the this.response statement once all the resources have been provided.
In the example code from Amazon https://github.com/alexa/skill-sample-nodejs-audio-player/blob/mainline/single-stream/lambda/src/audioAssets.js notice how the card assets are specified. Follow this example and look at the whole project for any other pieces you may be missing.
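On the metadata itself: the Echo Show takes the song name from the audioItem.metadata block inside the AudioPlayer.Play directive, not from the card. Here is a sketch of that directive as a plain object, with field names following the Amazon AudioPlayer documentation (reusing streamDynamic from your snippet; how you attach the directive depends on your SDK version, since the older alexa-sdk helper doesn't expose a metadata argument):
var playDirective = {
  type: 'AudioPlayer.Play',
  playBehavior: 'REPLACE_ALL',
  audioItem: {
    stream: {
      url: streamDynamic.url,
      token: streamDynamic.url,
      offsetInMilliseconds: 0
    },
    metadata: {
      title: streamDynamic.subtitle,       // shown as the song name
      subtitle: streamDynamic.cardContent,
      art: { sources: [{ url: streamDynamic.image }] }
    }
  }
};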
We've got a really annoying bug when trying to send MP3 data. We have the following setup:
Webcam producing AAC -> FFmpeg converts to ADTS -> sent to Node.js server -> FFmpeg on the server converts ADTS to MP3 -> MP3 is then streamed to the browser.
This works *perfectly* on Linux (Chrome with HTML5 and Flash, Firefox with Flash only).
However, on Windows the sound just "stalls", no matter what combination we use (browser/HTML5/Flash). If, however, we shut down the server, the sound immediately starts to play as we expect.
For some reason, on Windows machines it's as if the sound is being buffered, "waiting" for something, but we don't know what that is.
Any help would be greatly appreciated.
Relevant code in Node:
res.setHeader('Connection', 'Transfer-Encoding');
res.setHeader('Content-Type', 'audio/mpeg');
res.setHeader('Transfer-Encoding', 'chunked');
res.writeHead(206);

that.eventEmitter.on('soundData', function (data) {
  debug("Got sound data " + data.cameraId + " " + req.params.camera_id);
  if (req.params.camera_id == data.cameraId) {
    debug("Sending data direct to browser");
    res.write(data.sound);
  }
});
Code in the browser:
soundManager.setup({
  url: 'http://dashboard.agricamera.co.uk/themes/agricamv2/swf/soundmanager2.swf',
  useHTML5Audio: false,
  onready: function () {
    that.log("Sound manager is now ready");
    var mySound = soundManager.createSound({
      url: src,
      autoLoad: true,
      autoPlay: true,
      stream: true
    });
  }
});
"If however we shut down the server the sound then immediately starts to play as we expect. For some reason on Windows based machines it's as if the sound is being buffered, waiting for something, but we don't know what that is."
That's exactly what's happening.
First off, Chrome can play ADTS streams, so if possible just use that directly and preserve some audio quality by not running a second lossy codec in the chain.
Next, don't use soundManager, or at least let it use HTML5 audio. You don't need the Flash fallback these days in most cases, and Chrome is perfectly capable of playing your streams. I suspect this is where your problem lies.
Next, try disabling chunked transfer. Many clients don't like transfer encoding on streams.
Finally, I have seen cases where Chrome's built-in media handling (which I believe varies from OS to OS) cannot sync to the stream. There are a few bug tickets out there for Chromium. If your playback timer isn't incrementing, this is likely your problem and you can simply try to reload the stream programmatically to work around it.
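On that last point, here is a sketch of the reload workaround, assuming a plain HTML5 audio element instead of soundManager (the stream URL and polling interval are illustrative):
const streamUrl = '/camera/123/audio'; // assumed endpoint
const player = new Audio(streamUrl);
player.play();

let lastTime = -1;
setInterval(() => {
  // if the playback clock has stopped advancing, re-request the stream
  if (!player.paused && player.currentTime === lastTime) {
    player.src = streamUrl;
    player.play();
  }
  lastTime = player.currentTime;
}, 5000);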
I'm developing a web crawler in Node.js. I've created a unique list of the URLs found in the crawled page body, but some of them have extensions like jpg, mp3, mpeg... I want to avoid crawling those that have such extensions. Is there any simple way to do that?
Two options stick out.
1) Use path to check every URL
As stated in the comments, you can use path.extname to check for a file extension:
var path = require('path');

var test = "http://example.com/images/banner.jpg";
path.extname(test); // '.jpg'
This would work, but it feels like you'll wind up having to maintain a list of file types you either can crawl or must avoid. That's work.
Side note -- be careful using path. Typically, url is your best tool for parsing links, because path is aimed at files/directories, not URLs. On some systems (Windows), using path to manipulate a URL can result in drama because of the slashes involved. Fair warning!
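Putting the two modules together, here is a sketch that parses with url first and only uses path for the extension check (the skip list is illustrative and would need tuning):
const path = require('path');
const { URL } = require('url');

// extensions we assume the crawler should skip -- adjust to taste
const SKIP = new Set(['.jpg', '.jpeg', '.png', '.gif', '.mp3', '.mpeg', '.mp4']);

function shouldCrawl(link) {
  // parse as a URL so hosts and query strings don't confuse path.extname
  const ext = path.extname(new URL(link).pathname).toLowerCase();
  return !SKIP.has(ext);
}

shouldCrawl('http://example.com/images/banner.jpg');     // false
shouldCrawl('http://example.com/articles/content.html'); // true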
2) Get the HEAD for each link & see if content-type is set to text/html
You may have reasons to avoid making more network calls. If so, this isn't an option. But if it is OK to make additional calls, you could grab the HEAD for each link and check the MIME type stored in content-type.
Something like this:
var http = require('http');

var headersOptions = {
  method: "HEAD",
  host: "example.com", // note: host only, no protocol prefix
  path: "/articles/content.html"
};

var req = http.request(headersOptions, function (res) {
  // you will probably need to also do things like check
  // HTTP status codes so you handle 404s, 301s, and so on
  if (res.headers['content-type'].indexOf("text/html") > -1) {
    // do something like queue the link up to be crawled
    // or parse the link or put it in a database or whatever
  }
});
req.end();
One benefit is that you only grab the HEAD, so even if the file is a gigantic video or something, it won't clog things up. You get the HEAD, see the content-type is a video or whatever, then move along because you aren't interested in that type.
Second, you don't have to keep track of file names because you're using a standard MIME type to differentiate html from other data formats.
Currently my audio won't play on Safari or on mobile devices.
It works fine on a normal PC in Firefox, Chrome, and IE.
var manifest = [
  { id: "correct", src: 'assets/correct.mp3|assets/correct.ogg' },
  { id: "wrong", src: 'assets/wrong.mp3|assets/wrong.ogg' }
];
var queue = new createjs.LoadQueue();
queue.installPlugin(createjs.Sound);
queue.loadManifest(manifest, true);
And I'm calling the play function like this:
createjs.Sound.play("correct");
This function is written inside a function that's called when a user presses a div.
That code looks like it should work. Web Audio is initially muted on iOS devices, but it unmutes when play is called inside a user event.
There are a couple of possibilities (without seeing the rest of the code):
You are working on an iPad 1, which does not support Web Audio and has HTML audio disabled by default due to severe limitations.
You are not waiting for the audio to finish loading before calling play (see the sketch after this list):
queue.addEventListener("complete", loadComplete);
The audio file path is incorrect and therefore the load is failing, which you can detect by listening for an error event.
You are using a non-default encoding for the mp3 files that is not supported by Safari. Generally that would break in other browsers as well though.
Safari requires QuickTime for HTML audio to play, so that could be a problem.
Using createjs.Sound.registerPlugins, SoundJS may be set to use a plugin that is not supported on mobile, such as FlashPlugin. You can check your current plugin with:
createjs.Sound.activePlugin.toString();
You may find the Mobile Safe Tutorial useful. Hope that helps.
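For the loading point in particular, here is a small sketch (the handler wiring is illustrative; the "correct" id comes from the manifest in the question):
// surface load failures and wait for the queue to finish before playing
queue.addEventListener("error", function (event) {
  console.log("A file failed to load:", event); // e.g. a bad path
});
queue.addEventListener("complete", function () {
  // everything in the manifest has loaded; safe to play now
  createjs.Sound.play("correct");
});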
There is a way to hack around it: play an empty mp3 first, then play the audio.
The empty mp3 must be loaded in the manifest array first:
var manifest = [
  ...
  { id: "empty", src: 'assets/empty.mp3|assets/empty.ogg' }
];
...
Before playing the sound, play the empty mp3:
createjs.Sound.play("empty");
createjs.Sound.play("correct");