Google TTS in Crossrider - google-chrome-extension

I developed a Google Chrome extension that uses Google TTS. I rewrote it with Crossrider to make it work on different platforms (it works great until it comes to the TTS part).
Here is the code:
function PlayGoogleTTS(EngWord){
  var voices = speechSynthesis.getVoices();
  var msg = new SpeechSynthesisUtterance();
  msg.volume = 1; // 0 to 1
  msg.rate = 10;  // 0.1 to 10
  msg.pitch = 2;  // 0 to 2
  msg.text = EngWord;
  msg.lang = 'en-US';
  msg.voice = voices[1]; // Note: some voices don't support altering params
  speechSynthesis.speak(msg);
}
// Fetch the list of voices and populate the voice options.
function loadVoices() {
// Fetch the available voices.
var voices = speechSynthesis.getVoices();
}
// Chrome loads voices asynchronously.
window.speechSynthesis.onvoiceschanged = function(e) {
loadVoices();
};
So how can I convert this to make it work in Crossrider?

It's not clear from your question which speech synthesis library/API you are using. However, assuming it is based on Chrome's TTS API, the required "tts" permission is not available in Crossrider.
[Disclosure: I am a Crossrider employee]

This is more of a workaround than an answer:
I just used another TTS engine that is able to generate Ogg audio files, which works in Firefox.
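A minimal sketch of that kind of workaround, playing a pre-generated Ogg file through an Audio element (the endpoint URL is a hypothetical placeholder):

function playRemoteTTS(engWord) {
  // Hypothetical TTS endpoint that returns an Ogg audio file for the text.
  var url = 'https://example.com/tts?format=ogg&lang=en-US&q=' +
    encodeURIComponent(engWord);
  // Firefox plays Ogg natively, so no Web Speech API support is needed.
  new Audio(url).play();
}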

Related

Filtering and mixing WebRTC sound in NodeJs

A few weeks ago I wrote a WebRTC browser client using the mediasoup library. Now I am in the middle of rewriting it as a Node.js client.
I am stuck on one thing. I want to receive multiple WebRTC audio sources, mix them into a single track, then apply some filters (e.g. a biquad filter) and then resend this track via WebRTC.
In the browser I could achieve this using the Web Audio API; this is the code I used:
this.audioContext = new AudioContext();
this.outgoingStream = this.audioContext.createMediaStreamDestination();
this.addSoundFilter();
this.mixedTrack = this.outgoingStream.stream.getAudioTracks()[0];
this.handleIncomingSound();
addSoundFilter() {
  this.filter = this.audioContext.createBiquadFilter();
  this.filter.type = "lowpass";
  this.filter.frequency.value = this.mapFrequencyValue();
  this.gainer = this.audioContext.createGain();
  this.gainer.gain.value = this.mapGainValue();
}
handleIncomingSound() {
  this.audios.forEach((audio, peerId) => {
    this.filterAudio(peerId); // filterAudio expects the consumer id, not the audio object
  });
}
filterAudio(audioConsumerId) {
  const audio = this.audios.get(audioConsumerId);
  const audioElement = document.getElementById(audio.id);
  const incomingSource = this.audioContext.createMediaStreamSource(
    audioElement.srcObject
  );
  incomingSource.connect(this.filter).connect(this.gainer).connect(this.outgoingStream);
}
With this code I could then send this.mixedTrack via WebRTC.
However, in Node.js there is no Web Audio API.
So how can this be achieved, or is it even possible?
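One way to approach it without the Web Audio API is to work on raw PCM samples directly: decode each incoming track to 16-bit PCM, sum the streams with clipping, and run a standard biquad low-pass over the mix. A minimal sketch of the mixing and filtering math (the mediasoup/RTP decoding and buffer plumbing are left out; all names here are assumptions):

// Mix several Int16 PCM buffers of equal length into one, with hard clipping.
function mixPcm16(buffers) {
  const out = new Int16Array(buffers[0].length);
  for (let i = 0; i < out.length; i++) {
    let sum = 0;
    for (const buf of buffers) sum += buf[i];
    out[i] = Math.max(-32768, Math.min(32767, sum)); // clip to 16-bit range
  }
  return out;
}

// Direct Form I biquad low-pass (coefficients from the Audio EQ Cookbook).
function makeLowpass(sampleRate, cutoffHz, q = Math.SQRT1_2) {
  const w0 = 2 * Math.PI * cutoffHz / sampleRate;
  const alpha = Math.sin(w0) / (2 * q);
  const cosw0 = Math.cos(w0);
  const b0 = (1 - cosw0) / 2, b1 = 1 - cosw0, b2 = (1 - cosw0) / 2;
  const a0 = 1 + alpha, a1 = -2 * cosw0, a2 = 1 - alpha;
  let x1 = 0, x2 = 0, y1 = 0, y2 = 0; // filter state carried across buffers
  return function filter(samples) {
    const out = new Int16Array(samples.length);
    for (let i = 0; i < samples.length; i++) {
      const x0 = samples[i];
      const y0 = (b0 * x0 + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0;
      x2 = x1; x1 = x0; y2 = y1; y1 = y0;
      out[i] = Math.max(-32768, Math.min(32767, Math.round(y0)));
    }
    return out;
  };
}

// Usage: const lowpass = makeLowpass(48000, 1000);
// const mixed = lowpass(mixPcm16([trackA, trackB]));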

How do I get Instagram posts from several accounts?

I would like to show the latest Instagram posts from three different Instagram users in one app. I control the Instagram accounts, so it wouldn't be a problem to use APIs that require the user to accept access.
One method would be to add ?__a=1 at the end of their profile URL to get a JSON that contains this information, show the title as text in my app, and load the picture from Instagram's CDN.
From what I can see, this isn't allowed by Instagram's terms, so I could easily see them banning the whole thing after some time.
Using Instagram's APIs (either the Basic Display API or the Graph API) looks doubtful in an app, since they are based on tokens that should be kept server-side.
Potentially I could set up a backend that does nothing but fetch the content and store it, with the single purpose of pushing it forward. I would think even this is against Instagram's terms, and it sounds like overkill.
Are there any methods I've missed?
(The bot asked for some code; here's the JS I cannot use.)
function viewInsta(input_url) {
  var url = input_url;
  const p = url.split("/");
  var t = '';
  for (let i = 0; i < p.length; i++) {
    if (i == 2) {
      // Rewrite the host through the translate proxy;
      // atob('LnRyYW5zbGF0ZS5nb29n') decodes to ".translate.goog".
      t += p[i].replaceAll('-', '--').replaceAll('.', '-') + atob('LnRyYW5zbGF0ZS5nb29n') + '/';
    } else if (i != p.length - 1) {
      t += p[i] + '/';
    } else {
      t += p[i];
    }
  }
  // document.getElementById(this.id).src = encodeURI(t);
  return '<img src="' + encodeURI(t) + '">';
}
var request = new XMLHttpRequest();
request.open("GET", "instagram.json", false); // synchronous request, for demo only
request.send(null);
var my_JSON_object = JSON.parse(request.responseText);
var node_objects = my_JSON_object.graphql.user.edge_owner_to_timeline_media.edges;
node_objects.forEach(alert_function);
function alert_function(value) {
  var url_array = value.node.thumbnail_src.split('?');
  var url = url_array[0]; // base URL without query params (currently unused)
  document.getElementById("div1").innerHTML += value.node.thumbnail_src + viewInsta(value.node.thumbnail_src) + '<hr>';
  console.log(value.node.thumbnail_src);
}
You'll need to use the Facebook Graph API, specifically the Instagram endpoints, and make the calls from your backend.
https://developers.facebook.com/docs/instagram-api/reference/ig-user/media#get-media
This means you'll need to do OAuth and store the access tokens on your backend.
You should be able to get the posts with GET /{ig-user-id}/media.
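For illustration, a minimal server-side sketch of that call (the user id, token, and API version are placeholders; the fields listed are documented fields of the media endpoint):

// Runs on your backend, never in the app itself, so the token stays secret.
async function getLatestPosts(igUserId, accessToken) {
  const fields = 'id,caption,media_type,media_url,permalink,timestamp';
  const url = `https://graph.facebook.com/v19.0/${igUserId}/media` +
    `?fields=${fields}&access_token=${accessToken}`;
  const response = await fetch(url);
  if (!response.ok) throw new Error(`Graph API error: ${response.status}`);
  const json = await response.json();
  return json.data; // array of media objects, newest first
}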

Azure TTS neural voice audio file is created abnormally in 1 byte size

Azure TTS standard voice audio files are generated normally. However, for a neural voice, the audio file is generated abnormally with a size of 1 byte. The code is below.
C# code
public static async Task SynthesizeAudioAsync()
{
    var config = SpeechConfig.FromSubscription("xxxxxxxxxKey", "xxxxxxxRegion");
    using var synthesizer = new SpeechSynthesizer(config, null);
    var ssml = File.ReadAllText("C:/ssml.xml");
    var result = await synthesizer.SpeakSsmlAsync(ssml);
    using var stream = AudioDataStream.FromResult(result);
    await stream.SaveToWaveFileAsync("C:/file.wav");
}
ssml.xml - The file below, set to a standard voice, works fine.
<speak version="1.0" xmlns="https://www.w3.org/2001/10/synthesis" xml:lang="en-US">
<voice name="en-GB-George-Apollo">
When you're on the motorway, it's a good idea to use a sat-nav.
</voice>
</speak>
ssml.xml - However, the following file, set to a neural voice, does not work, and an empty audio file is created.
<speak version="1.0" xmlns="https://www.w3.org/2001/10/synthesis" xml:lang="en-US">
<voice name="en-US-AriaNeural">
When you're on the motorway, it's a good idea to use a sat-nav.
</voice>
</speak>
Looking at the behavior you have described, the Speech service returned no audio bytes due to some issue.
I have checked the SSML file at my end and it works completely fine, i.e. there is no issue with the SSML itself.
As a next step, I would recommend adding error-handling code to get a better picture of the error and take action accordingly:
var config = SpeechConfig.FromSubscription("xxxxxxxxxKey", "xxxxxxxRegion");
using var synthesizer = new SpeechSynthesizer(config, null);
var ssml = File.ReadAllText("C:/ssml.xml");
var result = await synthesizer.SpeakSsmlAsync(ssml);
if (result.Reason == ResultReason.SynthesizingAudioCompleted)
{
    Console.WriteLine("No error");
    using var stream = AudioDataStream.FromResult(result);
    await stream.SaveToWaveFileAsync("C:/file.wav");
}
else if (result.Reason == ResultReason.Canceled)
{
    var cancellation = SpeechSynthesisCancellationDetails.FromResult(result);
    Console.WriteLine($"CANCELED: Reason={cancellation.Reason}");
    if (cancellation.Reason == CancellationReason.Error)
    {
        Console.WriteLine($"CANCELED: ErrorCode={cancellation.ErrorCode}");
        Console.WriteLine($"CANCELED: ErrorDetails=[{cancellation.ErrorDetails}]");
    }
}
The above modification will print a friendly error message on the console app.
Note: if you are not using a console app, you will have to modify the code.
The exact error you see may differ.

Unable to stream microphone audio to Google Speech to Text with NodeJS

I am developing a simple web-based Speech-to-Text project with Node.js, ws (WebSocket), and Google's Speech-to-Text API.
However, I have had no luck getting a transcript back from Google's Speech-to-Text API.
Below is my server-side code (server.js):
ws.on('message', function (message) {
  if (typeof message === 'string') {
    if (message == "connected") {
      console.log(`Web browser connected postback.`);
    }
  } else {
    // Binary frame: raw 16-bit PCM samples recorded in the browser.
    if (recognizeStream !== null) {
      const buffer = new Int16Array(message, 0, Math.floor(message.byteLength / 2));
      recognizeStream.write(buffer);
    }
  }
});
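For reference, recognizeStream is created elsewhere in server.js. A minimal sketch of a typical setup with the @google-cloud/speech client, with an error handler attached so failures are not silent (the config values are assumptions and must match the PCM the browser sends):

const speech = require('@google-cloud/speech');
const client = new speech.SpeechClient();

const recognizeStream = client.streamingRecognize({
  config: {
    encoding: 'LINEAR16',   // 16-bit signed PCM, as sent by the client
    sampleRateHertz: 48000, // assumption: must match the AudioContext rate
    languageCode: 'en-US',
  },
  interimResults: true,
})
  .on('error', (err) => console.error('Speech API error:', err))
  .on('data', (data) => {
    const result = data.results[0];
    if (result && result.alternatives[0]) {
      console.log(`Transcript: ${result.alternatives[0].transcript}`);
    }
  });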
Below is my client-side code (ws.js):
function recorderProcess(e) {
  var floatSamples = e.inputBuffer.getChannelData(0);
  // Scale the [-1, 1] float samples up to the signed 16-bit integer range.
  const ConversionFactor = 2 ** (16 - 1) - 1;
  var floatSamples16 = Int16Array.from(floatSamples.map(n => n * ConversionFactor));
  ws.send(floatSamples16);
}
function successCallback(stream) {
  window.stream = stream;
  var audioContext = window.AudioContext;
  var context = new audioContext();
  var audioInput = context.createMediaStreamSource(stream);
  // Pull microphone audio through a script processor so raw samples can be read.
  var recorder = context.createScriptProcessor(2048, 1, 1);
  recorder.onaudioprocess = recorderProcess;
  audioInput.connect(recorder);
  recorder.connect(context.destination);
}
When I run the project and open http://localhost/ in my browser, I try speaking some sentences into the microphone. Unfortunately, no transcription is returned, and no error messages appear in the Node.js console.
When I check the status in the Google Cloud Console, it only displays a 499 code in the dashboard.
Many thanks for helping!
I think the issue could be related to the stream processing. Maybe some streaming process is stopped before the end of an operation. My suggestion is to review the callbacks in the JavaScript code in order to find some "broken" promises.
Also, maybe it's obvious, but there is a different doc for audio longer than a minute:
https://cloud.google.com/speech-to-text/docs/async-recognize
CANCELLED - The operation was cancelled, typically by the caller.
HTTP Mapping: 499 Client Closed Request
Given the error message, this could also be related to the asynchronous and multithreaded nature of Node.js.
Hope this works!

How can I tell if a web client is blocking advertisements?

What is the best way to record statistics on the number of visitors to my site who have set their browser to block ads?
Since programs like AdBlock never actually request the advert, you would have to look at the server logs to see whether the same user accessed a web page but didn't access an advert. This assumes the advert is on the same server.
If your adverts are on a separate server, then I would suggest it's impossible to do so.
The best way to stop users from blocking adverts is to use inline text adverts that are generated by the server and served inside your HTML.
Add the user ID to the request for the ad:
<img src="./ads/viagra.jpg?{user.id}"/>
That way you can check which ads are seen by which users.
You need to think about the different ways that ads are blocked. The first thing to look at is whether the visitor is running NoScript, so you could add a script that checks for that.
The next thing is to see if they are blocking Flash; a small movie should do that.
If you look at the adblock site, there is some indication of how it does blocking:
How does element hiding work?
If you look further down that page, you will see that conventional chrome probing will not work, so you need to try to parse the altered DOM.
The AdBlock forum says the following is used to detect AdBlock. After some tweaking you could use it to gather some statistics.
setTimeout("detect_abp()", 10000);
var isFF = (navigator.userAgent.indexOf("Firefox") > -1) ? true : false,
hasABP = false;
function detect_abp() {
if(isFF) {
if(Components.interfaces.nsIAdblockPlus != undefined) {
hasABP = true;
} else {
var AbpImage = document.createElement("img");
AbpImage.id = "abp_detector";
AbpImage.src = "/textlink-ads.jpg";
AbpImage.style.width = "0";
AbpImage.style.height = "0";
AbpImage.style.top = "-1000px";
AbpImage.style.left = "-1000px";
document.body.appendChild(AbpImage);
hasABP = (document.getElementById("abp_detector").style.display == "none");
var e = document.getElementsByTagName("iframe");
for (var i = 0; i < e.length; i++) {
if(e[i].clientHeight == 0) {
hasABP = true;
}
}
if(hasABP == true) {
history.go(1);
location = "http://www.tweaktown.com/supportus.html";
window.location(location);
}
}
}
}
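For comparison, a simpler bait-element variant of the same idea that does not depend on Firefox internals (the class names and the reporting endpoint are assumptions; most filter lists hide elements with ad-like class names):

function detectAdBlock(callback) {
  var bait = document.createElement("div");
  bait.className = "adsbox ad-banner"; // class names commonly targeted by filter lists
  bait.style.height = "10px";
  document.body.appendChild(bait);
  setTimeout(function () {
    // Blockers typically collapse or hide the bait element.
    var blocked = bait.offsetHeight === 0;
    document.body.removeChild(bait);
    callback(blocked);
  }, 100);
}

// Usage: report the result to your own stats endpoint (hypothetical URL).
detectAdBlock(function (blocked) {
  new Image().src = "/stats/adblock?blocked=" + (blocked ? 1 : 0);
});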
I suppose you could compare the ad impressions with the page views on your website (which you can get from your analytics software).
