Capture WebSocket audio file with Chrome DevTools - Azure

I would like to know if it is possible to capture the audio that plays through the WebSocket protocol using Chrome DevTools.
Under Network, I do see a WebSocket entry after the audio begins to play.
It is listed as wss://eastus.tts.speech.microsoft.com/cognitiveservices/websocket/v1?Authorization=RandomCharacterConnectionId=IDGoesHere
I checked its Headers and Messages, but it seems there is no way to get hold of the audio file.
Any suggestions?
An example is here:
https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/
Thanks

If you want to save the resulting audio as an .mp3 file, try the code below:
<!DOCTYPE html>
<html lang="en">
<head>
    <title>Microsoft Cognitive Services Speech SDK JavaScript Quickstart</title>
    <meta charset="utf-8" />
</head>
<body>
    <button id="startSpeakTextAsyncButton" onclick="synthesizeSpeech()">speak</button>
    <script src="microsoft.cognitiveservices.speech.sdk.bundle.js"></script>
    <script>
        function synthesizeSpeech() {
            // Fill in your subscription key and region here.
            var speechConfig = SpeechSDK.SpeechConfig.fromSubscription("", "");
            speechConfig.speechSynthesisLanguage = "en-US";
            const synthesizer = new SpeechSDK.SpeechSynthesizer(speechConfig);
            synthesizer.speakTextAsync(
                "Hello, this is a demo about saving the result of the TTS API as a file",
                result => {
                    synthesizer.close();
                    // Wrap the synthesized bytes in a Blob and trigger a download.
                    const link = document.createElement('a');
                    link.style.display = 'none';
                    document.body.appendChild(link);
                    const blob = new Blob([result.audioData], { type: 'audio/mp3' });
                    link.href = URL.createObjectURL(blob);
                    link.download = 'data.mp3';
                    link.click();
                },
                error => {
                    console.log(error);
                    synthesizer.close();
                });
        }
    </script>
</body>
</html>
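One caveat: the service's default output format is not necessarily MP3, so the audio/mp3 blob type above is an assumption. If your SDK version exposes the SpeechSynthesisOutputFormat enum, you can request MP3 explicitly before creating the synthesizer (a sketch, not verified against every SDK release):

// Assumption: this enum value exists in the SDK bundle you load.
speechConfig.speechSynthesisOutputFormat =
    SpeechSDK.SpeechSynthesisOutputFormat.Audio16Khz32KBitRateMonoMp3;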

Chrome extension: DOMParser is not defined with Manifest v3

I developed an extension to scrape some content from a web page, and up to now it was working fine, but since I switched to Manifest V3 the parsing doesn't work anymore.
I use the following script to read the source code:
chrome.scripting.executeScript(
    {
        target: {tabId: tab.id, allFrames: true},
        files: ['GetSource.js'],
    }, async function(results)
    {
        // GETTING HTML
        parser = new DOMParser();
        content = parser.parseFromString(results, "text/html");
        // ... ETC ...
This code used to work fine but now I get the following message in my console:
Uncaught (in promise) ReferenceError: DOMParser is not defined
The code is part of a promise, but I don't think the promise is the problem here. I basically need to load the source code into a variable so that I can parse it afterwards.
I've checked the documentation, but I haven't found anything mentioning that DOMParser was not going to work with V3.
Any idea?
Thanks
Since service workers don't have access to DOM, it's not possible for an extension's service worker to access the DOMParser API or create an <iframe> to parse and traverse documents.
More detail
I solved the problem by using the dom-parser library. The code could look like this:
import DomParser from "dom-parser";

const parser = new DomParser();
const dom = parser.parseFromString('your html string');
From the docs:
Since service workers don't have access to DOM, it's not possible for an extension's service worker to access the DOMParser API or create an <iframe> to parse and traverse documents.
Using an external library just to do what DOMParser already does? That is too heavy.
To work around this, we can use an offscreen document. It's just an invisible webpage where you can run fetch, audio, DOMParser, ... and communicate with the background (service_worker) via chrome.runtime.
See an example below:
background.js
chrome.offscreen.createDocument({
    url: chrome.runtime.getURL('xam.html'),
    reasons: [chrome.offscreen.Reason.DOM_PARSER],
    justification: 'reason for needing the document',
});

// Just a test
setTimeout(() => {
    const onDone = (msg) => {
        console.log(msg);
        chrome.runtime.onMessage.removeListener(onDone);
    };
    chrome.runtime.onMessage.addListener(onDone);
    chrome.runtime.sendMessage('from-background-page');
}, 3000);
xam.html
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
</head>
<body>
    <script src="xam.js"></script>
</body>
</html>
xam.js
async function main() {
    const v = await fetch('https://......dev/').then((t) => t.text());
    const d = new DOMParser().parseFromString(v, 'text/html');
    const options = Array.from(d.querySelector('select').options)
        .map((v) => `${v.value}|${v.text}`)
        .join('\n');
    chrome.runtime.sendMessage(options);
}

chrome.runtime.onMessage.addListener(async (msg) => {
    console.log(msg);
    main();
});
manifest.json
"permissions": [
// ...
"offscreen"
]
https://developer.chrome.com/docs/extensions/reference/offscreen/
The extension's permissions carry over to offscreen documents, but extension API access is heavily limited. Currently, an offscreen document can only use the chrome.runtime APIs to send and receive messages; all other extension APIs are not exposed.
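One more caveat: only a single offscreen document may exist per extension, so creating a second one throws. Below is a sketch of a guard, assuming Chrome 116+ where chrome.runtime.getContexts is available; ensureOffscreenDocument is a hypothetical helper name.

async function ensureOffscreenDocument() {
    // Check whether an offscreen document already exists.
    const contexts = await chrome.runtime.getContexts({
        contextTypes: ['OFFSCREEN_DOCUMENT'],
    });
    if (contexts.length === 0) {
        await chrome.offscreen.createDocument({
            url: chrome.runtime.getURL('xam.html'),
            reasons: [chrome.offscreen.Reason.DOM_PARSER],
            justification: 'Parse HTML with DOMParser',
        });
    }
}

// When the parsing work is finished, the document can be closed explicitly:
// await chrome.offscreen.closeDocument();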
Notes:
I haven't tested how long this offscreen document stays alive.
These are just sample codes; they should work. Customize them for your own cases.

Play MP4 videos in Node WebKit

I'm using nodebob to build a node-webkit (NW.js) desktop app, but MP4 videos won't play in it. What should I do?
I tried putting ffmpegso.dll in the same folder as my release exe, but it was no use.
If I use a WebM video the following code works, but for MP4 it says file not found.
Here's the code I'm using:
Here's the code I'm using
<html>
<head>
    <meta charset='utf-8'>
    <title>Secure Video Browsing</title>
    <link rel="icon" type="image/png" href="hammer.png" />
    <link rel="stylesheet" href="js/floplayer/skin/skin.css">
    <!-- 3. flowplayer -->
    <script src="js/floplayer/flowplayer.min.js"></script>
    <script>
        var encryptor = require('file-encryptor');
        var $ = require('jquery');
        var path = require('path');
        var gui = require('nw.gui');
        var win = gui.Window.get();

        var key = 'My Super Secret Key';
        var options = {algorithm: 'aes256'};

        window.addEventListener('keydown', function (e) {
            e.preventDefault();
        });

        var execPath = path.dirname(process.execPath);

        $(document).ready(function () {
            $(".close").click(function () {
                win.close();
            });

            // Decrypt the video, then hand the decrypted file to Flowplayer.
            encryptor.decryptFile(execPath + '/wild.eng', execPath + '/wild.mp4', key, options, function (err) {
                var container = document.getElementById("player");
                flowplayer(container, {
                    clip: {
                        sources: [
                            {type: "video/mp4",
                             src: execPath + '/wild.mp4'}
                        ]
                    }
                });
            });
        });
    </script>
</head>
<body>
    <button class="close">Close</button>
    <div id="player"></div>
</body>
</html>
The easiest way to enable proprietary codecs is to download the community binaries matching the NW.js version you are using:
https://github.com/iteufel/nwjs-ffmpeg-prebuilt/releases
More information is here:
https://nwjs.readthedocs.io/en/latest/For%20Developers/Enable%20Proprietary%20Codecs/
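If you are unsure which release to grab, NW.js reports its own version numbers at runtime; here is a quick sketch, assuming a recent NW.js where process.versions carries these keys:

// Run this inside the app (or its DevTools console) to find the versions,
// then download the prebuilt ffmpeg release that matches.
console.log('NW.js version:', process.versions.nw);
console.log('Chromium version:', process.versions.chromium);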

How can I deploy the auth0 app to bluemix

I am using a sample project from auth0.com to customize the login page for my app and enable social media login. However, I encountered some problems when I tried to deploy it to Bluemix.
The video tutorial I follow is https://www.youtube.com/watch?v=sHhNoV-sS_I&t=559s
However, the sample project is a little bit different from the one in the video. It requires the command "npm serve" to run. When I push my project using cf push, it shows a "no app detected" error. How can I deploy my project to Bluemix?
The HTML and app.js code are as follows:
<!DOCTYPE html>
<html>
<head>
    <title>Auth0-VanillaJS</title>
    <meta charset="utf-8">
    <!-- Auth0 lock script -->
    <script src="//cdn.auth0.com/js/lock/10.3.0/lock.min.js"></script>
    <script src="auth0-variables.js"></script>
    <script src="app.js"></script>
    <meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
    <img alt="avatar" id="avatar" style="display:none;">
    <p>Welcome <span id="nickname"></span></p>
    <button type="submit" id="btn-login">Sign In</button>
    <button type="submit" id="btn-logout" style="display:none;">Sign Out</button>
</body>
</html>

app.js:
window.addEventListener('load', function() {
    var lock = new Auth0Lock(AUTH0_CLIENT_ID, AUTH0_DOMAIN);

    // buttons
    var btn_login = document.getElementById('btn-login');
    var btn_logout = document.getElementById('btn-logout');

    btn_login.addEventListener('click', function() {
        lock.show();
    });

    btn_logout.addEventListener('click', function() {
        logout();
    });

    lock.on("authenticated", function(authResult) {
        lock.getProfile(authResult.idToken, function(error, profile) {
            if (error) {
                // Handle error
                return;
            }
            localStorage.setItem('id_token', authResult.idToken);
            // Display user information
            show_profile_info(profile);
        });
    });

    // retrieve the profile:
    var retrieve_profile = function() {
        var id_token = localStorage.getItem('id_token');
        if (id_token) {
            lock.getProfile(id_token, function (err, profile) {
                if (err) {
                    return alert('There was an error getting the profile: ' + err.message);
                }
                // Display user information
                show_profile_info(profile);
            });
        }
    };

    var show_profile_info = function(profile) {
        var avatar = document.getElementById('avatar');
        document.getElementById('nickname').textContent = profile.nickname;
        btn_login.style.display = "none";
        avatar.src = profile.picture;
        avatar.style.display = "block";
        btn_logout.style.display = "block";
    };

    var logout = function() {
        localStorage.removeItem('id_token');
        window.location.href = "/";
    };

    retrieve_profile();
});
You would use the package.json method documented at https://console.ng.bluemix.net/docs/runtimes/nodejs/index.html#nodejs_runtime: first declare the serve package as one of your dependencies, then indicate what the scripts.start script should do (which is to run serve). You can use npm init (https://docs.npmjs.com/cli/init) to create a starting package.json file if you don't already have one.
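A minimal sketch of what that package.json might look like (the name and version numbers here are placeholders, not taken from the sample project):

{
  "name": "auth0-vanillajs",
  "version": "1.0.0",
  "dependencies": {
    "serve": "^14.2.0"
  },
  "scripts": {
    "start": "serve ."
  }
}

With this in place, cf push can detect a Node.js app, and the buildpack will run npm start, which serves the static files.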

Web Audio API: decodeAudioData doesn't decode opus in Chrome

I'm currently trying to make Opus packets work with the Web Audio API. The problem, however, is that while Opus should be natively supported by Firefox and Chrome, only Firefox can decode a stream of Opus samples using decodeAudioData from the Web Audio API. Chrome does recognize the file when I drag the opus file into the browser, and it also plays it! So I'm wondering whether I may be doing something wrong here that causes the failure in Chrome.
Then I used some sample code from http://awm.jp/~yoya/js/audio/meow.html to just load an opus file and try to decode it. Again, Firefox does, and Chrome doesn't. So I'm wondering if someone can confirm my finding or tell me what I'm doing wrong here. Below is the modified version from the earlier link. Thanks!
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-15">
    <title> decodeAudioData sample </title>
</head>
<script src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
<script type="text/javascript">
    //$(document).ready(function() {
    var catMeowingBuffer = null;
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    var context = new AudioContext();

    function onError(err) {
        console.log("unable to decode");
    }

    function loadCatSound(url) {
        var request = new XMLHttpRequest();
        request.open('GET', url, true);
        request.responseType = 'arraybuffer';
        // Decode asynchronously
        request.onload = function() {
            context.decodeAudioData(request.response, function(buffer) {
                catMeowingBuffer = buffer;
                var src = context.createBufferSource();
                src.buffer = catMeowingBuffer;
                src.connect(context.destination);
                src.start(0);
            }, onError);
        };
        request.send();
    }

    loadCatSound("opus.opus");

    function playCatSound() {
        if (catMeowingBuffer !== null) {
            var src = context.createBufferSource();
            src.buffer = catMeowingBuffer;
            src.connect(context.destination);
            src.start(0);
        }
    }
    //});
</script>
<body>
    <h1> decodeAudioData sample </h1>
    <button onclick="playCatSound();"> playCatSound </button>
    <hr>
    <address></address>
</body>
</html>
This is a bug in Chrome. See https://code.google.com/p/chromium/issues/detail?id=409402.
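Until that bug is fixed, one possible workaround (a sketch, not a fix for decodeAudioData itself; playOpusWithFallback is a hypothetical helper name) is to fall back to an audio element when decoding fails, since the question notes that Chrome can play the same file through its regular media pipeline:

var context = new (window.AudioContext || window.webkitAudioContext)();

function playOpusWithFallback(url) {
    var request = new XMLHttpRequest();
    request.open('GET', url, true);
    request.responseType = 'arraybuffer';
    request.onload = function () {
        context.decodeAudioData(request.response, function (buffer) {
            // Decoding worked (e.g. Firefox): play through Web Audio.
            var src = context.createBufferSource();
            src.buffer = buffer;
            src.connect(context.destination);
            src.start(0);
        }, function () {
            // Decoding failed (e.g. Chrome, per the bug above): play via a
            // media element, which uses the browser's regular playback path.
            var audio = new Audio(url);
            audio.play();
        });
    };
    request.send();
}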

Consuming a Stream created using Node.JS

I have an application which streams an MP3 using Node.JS. Currently this is done through the following POST route...
app.post('/item/listen', routes.streamFile)
...
exports.streamFile = function(req, res){
    console.log("The name is " + req.param('name'))
    playlistProvider.streamFile(res, req.param('name'))
}
...
PlaylistProvider.prototype.streamFile = function(res, filename){
    res.contentType("audio/mpeg3");
    var readstream = gfs.createReadStream(filename, {
        "content_type": "audio/mpeg3",
        "metadata": {
            "author": "Jackie"
        },
        "chunk_size": 1024 * 4 });
    console.log("!")
    readstream.pipe(res);
}
Is there anyone who can help me read this on the client side? I would like to use either jPlayer or HTML5, but am open to other options.
So the real problem here was that we are "requesting a file", so this would be better as a GET request. To accomplish this, I used the Express "RESTful" syntax '/item/listen/:name'. This then allows you to use jPlayer the way specified in the links provided by the previous poster.
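For reference, a minimal sketch of that GET route, assuming the same playlistProvider as in the question (Express exposes the ':name' segment as req.params.name):

app.get('/item/listen/:name', function (req, res) {
    // Stream the requested file back to the client.
    playlistProvider.streamFile(res, req.params.name);
});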
I'm assuming you didn't bother visiting their site, because had you done so, you would have seen several examples of how to achieve this using HTML5/jPlayer. The following is a bare-bones example provided by their online developer's documentation:
<html>
<head>
    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.6/jquery.min.js"></script>
    <script type="text/javascript" src="/js/jquery.jplayer.min.js"></script>
    <script type="text/javascript">
        $(document).ready(function(){
            $("#jquery_jplayer_1").jPlayer({
                ready: function() {
                    $(this).jPlayer("setMedia", {
                        mp3: "http://www.jplayer.org/audio/mp3/Miaow-snip-Stirring-of-a-fool.mp3"
                    }).jPlayer("play");
                    var click = document.ontouchstart === undefined ? 'click' : 'touchstart';
                    var kickoff = function () {
                        $("#jquery_jplayer_1").jPlayer("play");
                        document.documentElement.removeEventListener(click, kickoff, true);
                    };
                    document.documentElement.addEventListener(click, kickoff, true);
                },
                loop: true,
                swfPath: "/js"
            });
        });
    </script>
</head>
<body>
    <div id="jquery_jplayer_1"></div>
</body>
</html>
