Connect Browser Microphone stream to Google Speech API via node - node.js

I just started with node.js and am trying to connect the microphone stream generated in the browser with the Google Speech API running on my node server, using the microphone-stream package.
I successfully packed the necessary modules with browserify, but now don't know how to proceed. I got the microphone stream to work on the node server as well (as explained here: Streaming Speech Recognition on an Audio Stream).
How can I transmit the audio stream? I read about using websockets in one issue, but didn't really understand whether that's the right approach in my case. Or RPC?
For now I'm using these packages on the server:
const express = require('express');
const path = require('path');
const bodyParser = require('body-parser');
const fs = require('fs');
const record = require('node-record-lpcm16');
const google = require('googleapis');
const getUserMedia = require('get-user-media-promise');
const MicrophoneStream = require('microphone-stream');
This is my first time using node / a server, so hopefully this question isn't too naive.
Thanks! :)

I built a playground to tackle this task. It doesn't use any of the previously mentioned plugins (node-record-lpcm16 / microphone-stream / ...) but sends a 16-bit audio stream to the node server via socket.io.
https://github.com/vin-ni/Google-Cloud-Speech-Node-Socket-Playground
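In case it helps, here is a minimal sketch of that server-side wiring, assuming the @google-cloud/speech client; the socket.io event names ('startStream', 'audioData', 'endStream') and the 16 kHz sample rate are illustrative assumptions, not taken verbatim from the repo:

const http = require('http');
const speech = require('@google-cloud/speech');

const client = new speech.SpeechClient();
const server = http.createServer();
const io = require('socket.io')(server);

io.on('connection', (socket) => {
  let recognizeStream = null;

  socket.on('startStream', () => {
    // Open a streaming recognize request; the browser sends LINEAR16 PCM.
    recognizeStream = client
      .streamingRecognize({
        config: {
          encoding: 'LINEAR16',
          sampleRateHertz: 16000, // must match the browser's output (assumption)
          languageCode: 'en-US',
        },
        interimResults: true,
      })
      .on('error', console.error)
      .on('data', (data) => {
        // Forward transcripts back to the browser.
        const result = data.results[0];
        if (result && result.alternatives[0]) {
          socket.emit('transcript', result.alternatives[0].transcript);
        }
      });
  });

  socket.on('audioData', (chunk) => {
    // Each chunk is a buffer of 16-bit PCM samples from the browser.
    if (recognizeStream) recognizeStream.write(chunk);
  });

  socket.on('endStream', () => {
    if (recognizeStream) recognizeStream.end();
  });
});

server.listen(3000);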

Related

Diffie Hellman algorithm on browser

Node.js has a crypto module where DiffieHellman is a class, so I can use it to generate keys and compute the shared key.
But the client also needs to create another instance of the DiffieHellman class. How do I do that? Can I use the crypto module on the client side? If yes, how? Any solution? Here is my server-side code so far...
const crypto = require('crypto');
const express = require('express');
const app = express();
// Generate server's keys...
const server = crypto.createDiffieHellman(139);
const serverKey = server.generateKeys();
//send p=prime and g=generator to the client
Node.js has its own "crypto" module implementing the Diffie-Hellman algorithm, so you can study its source and reimplement it in the browser on your own.
The second way is to take a ready-to-use library (from GitHub or elsewhere), e.g. this one.
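As a sketch of the browser side, assuming you bundle with browserify, whose crypto shim (crypto-browserify) implements createDiffieHellman; verify that for your setup:

// Browser-side code, bundled with browserify so require('crypto') resolves
// to crypto-browserify (an assumption to verify for your bundler).
const crypto = require('crypto');

// prime, generator and the server's public key arrive from the server,
// e.g. hex-encoded over HTTP or a socket
function computeSharedSecret(primeHex, generatorHex, serverPublicKeyHex) {
  const client = crypto.createDiffieHellman(primeHex, 'hex', generatorHex, 'hex');

  // send this public key back to the server
  const clientPublicKey = client.generateKeys('hex');

  // both sides can now derive the same secret independently
  const sharedSecret = client.computeSecret(serverPublicKeyHex, 'hex', 'hex');
  return { clientPublicKey, sharedSecret };
}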

Best Practice to create REST APIs using node.js

I am from a .NET and C# background and I am new to Node.js. I am working on a project that mixes MongoDB and Node.js.
In MongoDB, data from various tools is stored in different collections. I have to create multiple REST APIs using Node.js for CRUD operations on that data; these APIs will be called from a React.js application.
I want to keep the APIs for each tool in a separate file and then include all of those files in app.js.
Please help me with the best approach.
For POC purposes, I created a Node.js application where I created an app.js file and wrote all my code for the GET/POST/DELETE APIs. This is working fine.
var _expressPackage = require("express");
var _bodyParserPackage = require("body-parser");
var _sqlPackage = require("mssql");
var app = _expressPackage();
var cors = require("cors");
var auth = require('basic-auth');
var fs = require('fs');
const nodeMailer = require('nodemailer');

// Let's set up our local server now.
var server = app.listen(process.env.PORT || 4000, function () {
  var port = server.address().port;
  console.log("App now running on port", port);
});

app.get("/StudentList", function (_req, _res) {
  console.log("Inside StudentList");
  var Sqlquery = "select * from tbl_Host where HostId='1'";
  GetQueryToExecuteInDatabase(_req, _res, Sqlquery, function (err, data) {
    console.log(data);
  });
});
I don't know exactly what your app intends to do, but usually, if you are not serving web pages and your API is not too complex, there is no need for Express; you can build a simple server with Node's built-in http module to serve data.
Additionally, if your app has many routes (or is likely to in the future), it is a good idea to put helper functions like GetQueryToExecuteInDatabase() in a separate file outside of app.js, such as utils.js.
Based on what I understand of what you want to do, your file structure should look something like this:
data (db related files)
services (contains one file per api service)
app.js
utils.js
Hope this helps.
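For the "separate file per tool" part, a minimal sketch using express.Router; the "students" tool, file names and paths here are hypothetical:

// file: services/students.js -- one router per tool
const express = require('express');
const router = express.Router();

router.get('/', function (req, res) {
  // fetch all records from this tool's collection
  res.json([]);
});

router.post('/', function (req, res) {
  // create a record in this tool's collection
  res.status(201).end();
});

module.exports = router;

// file: app.js -- mount each tool's router under its own path
// const studentsApi = require('./services/students');
// app.use('/api/students', studentsApi);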

Using websocket-stream to upload file to Cloud Storage with Firebase Node.js?

I am struggling to find a good way to upload bigger files from the web browser to Google Cloud Storage in a Firebase project. I now want to try the websocket-stream package, but I do not understand how to set it up with Firebase. Is it possible?
From the documentation you should do this in Node.js:
var websocket = require('websocket-stream')
var wss = websocket.createServer({server: someHTTPServer}, handle)
Where this is assumed:
var someHTTPServer = http.createServer();
How do I get someHTTPServer when I use Firebase?
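For reference, outside Firebase the documented setup runs on a plain Node HTTP server; here is a minimal sketch of it (the upload path and the handler body are assumptions, not part of the websocket-stream docs):

var http = require('http');
var fs = require('fs');
var websocket = require('websocket-stream');

var someHTTPServer = http.createServer();

// handle receives one binary duplex stream per websocket connection
function handle(stream) {
  // e.g. pipe the incoming file bytes to disk (destination is an assumption)
  stream.pipe(fs.createWriteStream('/tmp/upload.bin'));
}

var wss = websocket.createServer({ server: someHTTPServer }, handle);
someHTTPServer.listen(8080);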

Networking websockets on node.js servers

So I've been coding in web design for two weeks now and I've developed the core of my .io game on node.js just by using localhost:3000. Now I'm trying to move what I have so far onto an actual web server. It's one heck of a learning curve. Say I set up a virtual machine on Google Cloud Platform running node.js and socket.io: what do I even set my ports to?
This is my Code currently server side:
var express = require('express'); //adds express library
var app = express();
var server = app.listen(3000); //listens on port 3000
app.use(express.static('public')); //sends the public(client data)
console.log("Server Has Started");
var socket = require('socket.io'); //starts socket
var io = socket(server);
This is my Code currently client side:
var socket = io.connect("http://localhost:3000")
My website is gowar.io and it currently resides as a static file in Google's Cloud Storage "bucket". How do I hook my websockets up to something like a virtual machine?
Typically, cloud ecosystems will give you an endpoint for your storage or allow you to configure one.
Skim through Google's docs about WebSockets to learn more about their recommended implementation.
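As a sketch of the usual pattern on a VM: listen on a port from the environment (falling back to your dev port) and let the client connect back to the host that served the page. You also need to allow that port in the VM's firewall rules. The port numbers here are illustrative:

// server side
var express = require('express');
var app = express();
// use the port the environment provides, or 3000 when developing locally
var server = app.listen(process.env.PORT || 3000);
app.use(express.static('public'));
var io = require('socket.io')(server);

// client side: omit the URL so socket.io connects back to the host that
// served the page (works once the page is served from the VM itself)
var socket = io();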

node (socket) live audio stream / broadcast

Please, is there any easy way to stream (broadcast) a media file (ogg, mp3, spx...) from the server to the client (browser) via Node.js and possibly Socket.IO?
I have to record audio input on the server side and then be able to play it in real time for many clients.
I've been messing with binary.js and socket.io streams but wasn't able to get it right.
I've tried encoding the audio input with Speex, Vorbis or LAME and then loading it via fs to the client, but I haven't been successful. Or do I have to capture PCM and then decode it in the browser?
Any suggestion on this is welcome; nothing I've found has ever helped me.
Many thanks for any tips, links and ideas.
You'll want to look for packages that work with Streams; from there it's just a matter of piping your streams to the output as necessary. Using Express, or just the built-in HTTP module, you can accomplish this quite easily. Here's an example built around osx-audio, which provides a PCM stream, lame, which can encode a stream to mp3, and Express:
var Webcast = function (options) {
  var lame = require('lame');
  var audio = require('osx-audio');
  var fs = require('fs');

  // create the Encoder instance
  var encoder = new lame.Encoder({
    // input
    channels: 2,       // 2 channels (left and right)
    bitDepth: 16,      // 16-bit samples
    sampleRate: 44100, // 44,100 Hz sample rate

    // output
    bitRate: options.bitrate,
    outSampleRate: options.samplerate,
    mode: (options.mono ? lame.MONO : lame.STEREO) // STEREO (default), JOINTSTEREO, DUALCHANNEL or MONO
  });

  var input = new audio.Input();
  input.pipe(encoder);

  // set up an express app
  var express = require('express');
  var app = express();

  app.get('/stream.mp3', function (req, res) {
    res.set({
      'Content-Type': 'audio/mpeg', // note: 'audio/mpeg3' is not a registered MIME type
      'Transfer-Encoding': 'chunked'
    });
    encoder.pipe(res);
  });

  var server = app.listen(options.port);
};

module.exports = Webcast;
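For completeness, a usage sketch; the option names mirror the constructor above, but the values and the require path are illustrative assumptions:

var Webcast = require('./webcast'); // path is an assumption

new Webcast({
  port: 8080,        // HTTP port serving /stream.mp3
  bitrate: 128,      // output mp3 bitrate
  samplerate: 44100, // output sample rate
  mono: false        // keep stereo output
});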
How you get your input stream might be the most interesting part, but that will depend on your implementation. The popular request package is built around Streams as well, though, so it might just be an HTTP request away!
On the web browser you have the HTML5 video element and the audio element. Both of them have sources. Each web browser supports different codecs natively. So you'll want to watch out for that if you're trying to stream mp3.
You don't need socket.io; you only need HTTP. Your app reads a file, music.ogg, and for each chunk it reads, it sends it through the HTTP server. It will be one single HTTP request that's kept open until the file is transferred.
Here's how your html will look:
<audio src="http://example.com/music.ogg"></audio>
And your nodejs code will be something like this (haven't tested this):
var http = require('http');
var fs = require('fs');
http.createServer(function (request, response) {
  var inputStream = fs.createReadStream('/path/to/music_file.ogg');
  inputStream.pipe(response);
}).listen(8080); // port is arbitrary
I'm only using the ReadableStream pipe method on inputStream, plus the http and fs modules, in the code above. If you want to transcode the audio file (for example, from mp3 to ogg), you'll want to find a module that does that and pipe the data from the file into the transcoder, then into the response:
// using some magical transcoder
inputStream.pipe(transcoder).pipe(response);
pipe will call end on the destination stream whenever the source is finished, so the HTTP response will be finished as soon as the file is done being read (and transcoded).
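As a concrete stand-in for the "magical transcoder", here is a sketch that re-encodes an mp3 at a lower bitrate using the lame package from the answer above (an actual mp3-to-ogg conversion would need a Vorbis encoder instead; the encoder settings are assumptions and must match the decoded PCM):

var http = require('http');
var fs = require('fs');
var lame = require('lame');

http.createServer(function (request, response) {
  response.setHeader('Content-Type', 'audio/mpeg');
  fs.createReadStream('/path/to/music_file.mp3')
    .pipe(new lame.Decoder())  // mp3 -> raw PCM
    .pipe(new lame.Encoder({   // PCM -> 64 kbps mp3
      channels: 2, bitDepth: 16, sampleRate: 44100, // must match the decoded PCM (assumption)
      bitRate: 64
    }))
    .pipe(response);
}).listen(8080);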
You can do this with node and WebRTC. There are some ready-to-use tools like SimpleWebRTC or EasyRTC. From what I've tested so far, video is still trouble, but audio works great.
