Node JS radio (audio stream)

Good evening! I'm trying to build my own internet radio using Node.js. I've seen a lot of questions and tutorials about it, but all of them are old and hard to understand.
const fs = require('fs'),
      files = fs.readdirSync('./music'), // I have some mp3 in music folder
      clips = [],
      dhh = fs.createWriteStream('./tracks.mp3'); // create output stream
let stream, currentfile;

files.forEach(file => {
  clips.push(file);
});

// recursive function
const main = () => {
  if (!clips.length) {
    dhh.end('Done');
    return;
  }
  currentfile = './music/' + clips.shift();
  stream = fs.createReadStream(currentfile);
  stream.pipe(dhh, { end: false });
  stream.on('end', function() {
    console.log(currentfile + ' appended');
    main();
  });
};

main();
So now I have all my mp3s in one single file. What can I do to stream this file to many different users as they connect? A lot of answers recommend BinaryJS, but that advice is about five years old.
I just don't know how to start, so I need your help. Thank you!
I tried something like this
const http = require('http'),
      fs = require('fs'),
      filePath = './music/my.mp3',
      stat = fs.statSync(filePath);

http.createServer(function(request, response) {
  // send the headers before piping the file
  response.writeHead(200, {
    'Content-Type': 'audio/mpeg',
    'Content-Length': stat.size,
  });
  fs.createReadStream(filePath).pipe(response);
}).listen(4000);
When a user connects to port 4000, the music starts playing, but it isn't a live stream: when another user connects, the music also starts from the beginning. I want it to behave like an online radio station, but it doesn't work :(

I'm also trying to do the same thing you're doing, and found a very informative blog article here. There's just one thing missing from it: it doesn't explain how to add writables/consumers to the server, so it is incomplete. But you can build 90% of your radio with the rest. I also used a different approach, so I'd like to share it here to help another soul ('writable1' is not defined, so feel free to correct the code).
const express = require('express'),
      router = express.Router(),
      fs = require('fs'),
      Throttle = require('throttle'),
      ffprobe = require('ffprobe'),
      ffprobeStatic = require('ffprobe-static');

// Play all songs from db
router.get('/', function(req, res) {
  // the bitrate is only available inside the ffprobe callback
  ffprobe('myAudio.mp3', { path: ffprobeStatic.path }, function(err, info) {
    const bitRate = info.streams[0].bit_rate;
    const readable = fs.createReadStream('myAudio.mp3');
    // throttle the stream to real-time playback speed (bytes per second)
    const throttle = new Throttle(bitRate / 8);
    // 'writable1' etc. are placeholders for the connected clients
    const writables = [writable1, writable2, writable3];
    readable.pipe(throttle).on('data', (chunk) => {
      for (const writable of writables) {
        writable.write(chunk);
      }
    });
  });
});

module.exports = router;
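Since the missing piece is how consumers get registered, here is a minimal sketch of my own (not from the blog article; the /listen route name is illustrative) of how connected listeners could become the writables: each client response is added to a shared set when it connects and removed when it disconnects, and the broadcast loop writes every throttled chunk to that set.

const listeners = new Set();

// every client that requests /listen becomes a consumer of the shared stream
router.get('/listen', (req, res) => {
  res.set({ 'Content-Type': 'audio/mpeg' });
  listeners.add(res);
  req.on('close', () => listeners.delete(res)); // drop disconnected clients
});

// in the broadcast loop above, replace the hard-coded writables with the set:
// readable.pipe(throttle).on('data', (chunk) => {
//   for (const res of listeners) res.write(chunk);
// });

Because every listener is fed from the same throttled read, anyone who connects hears the stream from its current position, which is the "online radio" behaviour the question asks for.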

Related

generate excel via nodeJS in Azure Functions

I have a Node.js app with Express and the excel4node library, which runs on my local machine.
I send REST requests to this server and it returns an Excel file as binary.
I want to move it to Azure Functions, but I'm facing an issue: even a simple app (taken from an example) doesn't run there. Does anyone have suggestions on how to solve this?
const createHandler = require('azure-function-express').createHandler;
const express = require('express');
const xl = require('excel4node');

// Create express app as usual
const app = express();

app.post('/api/hello-world', (req, res) => {
  var wb = new xl.Workbook();
  var ws = wb.addWorksheet('S');
  ws.cell(1, 1).string('A');
  wb.write(`FileName.xlsx`, res);
});

// Binds the express app to an Azure Function handler
module.exports = createHandler(app);
and this is the error I'm seeing:
Microsoft.AspNetCore.Server.Kestrel.Core: Response Content-Length mismatch: too many bytes written (3790 of 3569).
Does anyone know how to solve this, or have an example of generating an Excel file in Azure Functions with Node.js?
Just in case anyone else stumbles upon this looking for the answer (like I did), this works for me:
var xl = require('excel4node');

const tryCreate = async (obj) => {
  let wb = new xl.Workbook();
  const buffer = await wb.writeToBuffer();
  return {
    setEncoding: 'binary',
    // status: 200, /* Defaults to 200 */
    body: buffer
  };
};

module.exports = async function (context, req) {
  try {
    context.res = await tryCreate(req.body);
  } catch (error) {
    context.log.error(error, new Date().toISOString());
  }
};
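If the caller needs the browser to treat the response as a spreadsheet download, a hedged variation (my own assumption, based on the classic context.res shape with a headers field; the file name is illustrative) would be to return explicit headers along with the buffer:

const tryCreate = async (obj) => {
  let wb = new xl.Workbook();
  const buffer = await wb.writeToBuffer();
  return {
    headers: {
      'Content-Type': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
      'Content-Disposition': 'attachment; filename=FileName.xlsx'
    },
    body: buffer // the xlsx file as a Buffer
  };
};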

No audio in above 360p video downloaded from ytdl-core express?

I have made an API which downloads videos from a YouTube link, but I'm not able to download video together with its audio above the 360p format: it downloads only the video and there is no audio.
Is there any solution to this?
Typically 1080p or better video does not have audio encoded with it. The audio must be downloaded separately and merged via an appropriate encoding library. ffmpeg is the most widely used tool, with many Node.js modules available. Use the format objects returned from ytdl.getInfo to download specific streams to combine to fit your needs. Look at https://github.com/fent/node-ytdl-core/blob/master/example/ffmpeg.js for an example on doing this.
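As a rough sketch of that approach (loosely following the linked node-ytdl-core example, not a verbatim copy; it assumes ffmpeg is installed and on the PATH, and the URL and quality options are illustrative), the separate audio and video streams can be handed to ffmpeg over extra pipes and muxed into one file:

const ytdl = require('ytdl-core');
const { spawn } = require('child_process');

const url = 'http://www.youtube.com/watch?v=e_RsG3HPpA0';
const audio = ytdl(url, { quality: 'highestaudio' });
const video = ytdl(url, { quality: 'highestvideo' });

// feed both streams to ffmpeg on file descriptors 3 and 4 and copy them into one mp4
const ffmpeg = spawn('ffmpeg', [
  '-i', 'pipe:3', '-i', 'pipe:4',
  '-map', '0:a', '-map', '1:v',
  '-c', 'copy', 'out.mp4'
], { stdio: ['inherit', 'inherit', 'inherit', 'pipe', 'pipe'] });

audio.pipe(ffmpeg.stdio[3]);
video.pipe(ffmpeg.stdio[4]);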
You can specify the video quality:
const fs = require('fs');
const ytdl = require('ytdl-core');

const video = ytdl('http://www.youtube.com/watch?v=e_RsG3HPpA0', { quality: 18 });
video.on('progress', function(info) {
  console.log('Download progress');
});
video.on('end', function(info) {
  console.log('Download finish');
});
video.pipe(fs.createWriteStream('video.mp4'));
Please check the possible quality values at this link.
If you are using ytdl-core then the problem is with the itags and the availability of the itag you need. Only 3 itags support both video and audio; the rest support either audio only or video only. For video plus audio with ytdl-core you need to specifically check whether the URL supports 720p or 1080p. I created two functions that can help you a lot. What you can do is simply send an XMLHttpRequest from index.html and wait for the link in the response, so that you or your user can download from that link. "fup90" means a false URL was provided and "inc90" denotes an incorrect URL, so that you can handle the error if the URL is not a YouTube URL. The code is shown below; note that you send the XMLHttpRequest using the POST method with the data as a JSON string in this format: var data = { downloadType: "audio"/"video", quality: "required itag", url: "youtube video url" }.
const ytdl = require('ytdl-core');
const express = require('express');
const parser = require('body-parser');
const app = express();

app.use(parser.text());
app.use(express.static(__dirname + "\\static"));

// video formats 18 - 360p and 22 - 720p

// audio
async function getAudioData(videoURL) {
  let videoid = await ytdl.getURLVideoID(videoURL);
  let info = await ytdl.getInfo(videoid);
  // let format = ytdl.chooseFormat(info.formats, { quality: '134' }); for video
  // var format = ytdl.filterFormats(info.formats, 'videoandaudio');
  let format = ytdl.filterFormats(info.formats, 'audioonly');
  let required_url = 0;
  format.forEach(element => {
    if (element.mimeType == `audio/mp4; codecs="mp4a.40.2"`) {
      required_url = element.url;
    }
  });
  return required_url;
}

async function getVideoData(videoURL, qualityCode) {
  try {
    let videoid = ytdl.getURLVideoID(videoURL);
    let info = await ytdl.getInfo(videoid);
    var ifExists = true;
    if (qualityCode == 22) {
      // fall back to 360p (itag 18) only when no 720p (itag 22) format exists
      if (!info.formats.some(element => element.itag == 22)) {
        qualityCode = 18;
        ifExists = false;
      }
    }
    let format = ytdl.chooseFormat(info.formats, { quality: qualityCode });
    var answers = {
      url: format.url,
      exists: ifExists
    };
  } catch (e) {
    var answers = {
      url: "fup90",
      exists: false
    };
  }
  return answers;
}

app.get("/", (req, res) => {
  res.sendFile(__dirname + "\\index.html");
});

app.post("/getdownload", async (req, res) => {
  let data = JSON.parse(req.body);
  if (data.downloadType === "video") {
    var answer = await getVideoData(data.url, data.quality);
    if (answer.url === "fup90") {
      res.send("inc90");
    } else {
      res.send(answer.exists ? answer.url : "xformat");
    }
  } else if (data.downloadType === "audio") {
    var audioLink = await getAudioData(data.url);
    res.send(audioLink);
  } else {
    res.send("error from server");
  }
});

app.listen(8000, () => {
  console.log("server started at http://localhost:8000");
});
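For completeness, a hedged client-side sketch of the request shape described above (using fetch rather than XMLHttpRequest; the route, field values, and error strings follow the server code, everything else is illustrative):

const data = {
  downloadType: "video",
  quality: 22,
  url: "https://www.youtube.com/watch?v=e_RsG3HPpA0"
};

fetch("/getdownload", {
  method: "POST",
  headers: { "Content-Type": "text/plain" }, // the server parses req.body with JSON.parse
  body: JSON.stringify(data)
})
  .then(res => res.text())
  .then(link => {
    if (link === "inc90") console.log("not a valid YouTube URL");
    else if (link === "xformat") console.log("requested quality not available");
    else window.location.href = link; // start the download from the returned URL
  });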

How can I create a single read stream from two files in Node?

I'm pretty new to NodeJS streams and fileStreams. I'm trying to parse two XML files using SAX. I've succeeded in getting it to work for a single file:
const fs = require('fs');
const sax = require("sax");
const saxStream = sax.createStream(IS_STRICT, OPTIONS);
saxStream.on("error", function (e) { ... });
...
const out = fs.createReadStream(INFILE).pipe(saxStream);
How can I pipe two files into SAX?
Update
I'm trying to put the output of SAX into a single file. Here's the SAX I'm using, which is an XML parser that uses streams:
https://www.npmjs.com/package/sax
Well, I tried coding a way to do this, but the SAX parser only accepts one root node in the XML input, so once it's done with the first file it ignores all the XML in a second file you feed it, because that XML is outside the root node.
So, if you want to parse the second file, it looks like you need to create a second sax.createStream() and feed it the second file. As always, we could offer a more complete suggestion if you showed us what you're actually trying to do with the parsed XML input.
FYI, here's what I tried:
const fs = require('fs');
const sax = require("sax");

const saxStream = sax.createStream(false, { trim: true, normalize: true });
saxStream.on("error", e => {
  console.log("saxStream error", e);
});
saxStream.on("opentag", node => {
  console.log(node);
});
saxStream.on("end", () => {
  console.log("done with saxStream");
});

let stream1 = fs.createReadStream("./sample1.xml");
stream1.pipe(saxStream, { end: false });
stream1.on("end", () => {
  console.log("starting stream2");
  fs.createReadStream("./sample2.xml").pipe(saxStream, { end: true });
});
I stepped through the parser in the debugger: the second file's input is successfully fed into the SAX parser, but it is ignored because it's outside the root node. There are several places in the parser source where it checks parser.closedRoot and, if set, skips the content.
Actually, I did get it to work by adding a fake root tag that encloses both sets of XML. I have no idea if this is what you want, but you can examine this for educational purposes:
const fs = require('fs');
const sax = require("sax");

const saxStream = sax.createStream(false, { trim: true, normalize: true });
saxStream.on("error", e => {
  console.log("saxStream error", e);
});
saxStream.on("opentag", node => {
  console.log(node);
});
saxStream.on("end", () => {
  console.log("done with saxStream");
});

let stream1 = fs.createReadStream("./sample1.xml");
let stream2 = fs.createReadStream("./sample2.xml");

// wrap both documents in a fake root element so the parser sees a single root node
saxStream.write("<fakeTop>");
stream1.pipe(saxStream, { end: false });
stream1.on("end", () => {
  console.log("starting stream2");
  stream2.pipe(saxStream, { end: false });
  stream2.on("end", () => {
    saxStream.write("</fakeTop>");
    saxStream.end();
  });
});
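If all you need is the literal "single read stream from two files" (independent of SAX's single-root requirement), here is a general-purpose sketch of my own, with illustrative file names, that pipes the files sequentially into a PassThrough:

const fs = require('fs');
const { PassThrough } = require('stream');

function concatStreams(paths) {
  const combined = new PassThrough();
  const next = (i) => {
    if (i >= paths.length) return combined.end();
    const src = fs.createReadStream(paths[i]);
    src.pipe(combined, { end: false }); // keep the combined stream open between files
    src.on('end', () => next(i + 1));
  };
  next(0);
  return combined;
}

// usage: anywhere a single readable is expected
concatStreams(['./sample1.xml', './sample2.xml']).pipe(process.stdout);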

Google speech recognition api is too slow

I am streaming audio from my web page to my Node.js server using socket.io and storing it there; after storing it, I perform speech recognition on the saved file. The following code runs correctly, but it is too slow. I have all environment variables and configuration in place. After collecting statistics over many requests, the response time varies between 7 and 18 seconds.
var http = require('http');
var wav = require('wav');
var app = http.createServer(function ejecute(request, response) {});
var io = require('socket.io').listen(app);
var fs = require('fs');
var Speech = require('@google-cloud/speech');

io.on('connection', function(socket) {
  var fileWriter = null;
  socket.on('stream', function(data) {
    if (!fileWriter) {
      fileWriter = new wav.FileWriter('demo.wav', {
        channels: 1,
        sampleRate: 16000,
        bitDepth: 16
      });
    }
    if (!fileWriter._writableState.ended)
      fileWriter.write(data);
  });
  socket.on('end', function(data) {
    fileWriter.end();
    streamingRecognize('demo.wav');
  });
});

function streamingRecognize(filename) {
  const speech = Speech();
  const request = {
    encoding: 'LINEAR16',
    languageCode: 'en-US',
    sampleRateHertz: 16000
  };
  speech.recognize(filename, request)
    .then((results) => {
      const transcription = results[0];
      console.log(`Transcription: ${transcription}`);
    })
    .catch((err) => {
      console.error('ERROR:', err);
    });
}

app.listen(3000);
Can anyone help me out here? What am I doing wrong?
Here is the reference I am using:
https://cloud.google.com/speech/docs/how-to
I could use the Web Speech recognizer too, but I need to provide cross-browser support.

Get PDFKit as base64 string

I'm looking for a way to get the base64 string representation of a PDFKit document. I can't find the right way to do it...
Something like this would be extremely convenient.
var doc = new PDFDocument();
doc.addPage();
doc.outputBase64(function (err, pdfAsText) {
  console.log('Base64 PDF representation', pdfAsText);
});
I already tried the blob-stream lib, but it doesn't work on a Node server (it says that Blob doesn't exist).
Thanks for your help!
I was in a similar predicament, wanting to generate PDF on the fly without having temporary files lying around. My context is a NodeJS API layer (using Express) which is interacted with via a React frontend.
Ironically, a similar discussion for Meteor helped me get to where I needed to be. Based on that, my solution resembles:
const PDFDocument = require('pdfkit');
const { Base64Encode } = require('base64-stream');
// ...

var doc = new PDFDocument();
// write to PDF

var finalString = ''; // contains the base64 string
var stream = doc.pipe(new Base64Encode());

doc.end(); // will trigger the stream to end

stream.on('data', function(chunk) {
  finalString += chunk;
});

stream.on('end', function() {
  // the stream is at its end, so push the resulting base64 string to the response
  res.json(finalString);
});
A synchronous option, not (yet) present in the documentation:
const doc = new PDFDocument();
doc.text("Sample text", 100, 100);
doc.end();
const data = doc.read();
console.log(data.toString("base64"));
I just made a module for this that you could probably use: js-base64-file
const Base64File = require('js-base64-file');
const b64PDF = new Base64File;

const file = 'yourPDF.pdf';
const path = `${__dirname}/path/to/pdf/`;

const doc = new PDFDocument();
doc.addPage();
// save your PDF to `${path}${file}` first, using the filename and path above

// this will load and convert
const data = b64PDF.loadSync(path, file);
console.log('Base64 PDF representation', data);

// you could also save a copy as base64 if you wanted, like so:
b64PDF.save(data, path, `copy-b64-${file}`);
It's a new module so my documentation isn't complete yet, but there is also an async method.
// this will load and convert if needed, asynchronously
b64PDF.load(
  path,
  file,
  function(err, base64) {
    if (err) {
      // handle error here
      process.exit(1);
    }
    console.log('ASYNC: you could send this PDF via ws or http to the browser now\n');
    // or, as above, save it here
    b64PDF.save(base64, path, `copy-async-${file}`);
  }
);
I suppose I could add an in-memory conversion method too. If this doesn't suit your needs, you could submit a request on the js-base64-file repo.
Following Grant's answer, here is an alternative that returns a promise instead of writing to the Node response (to ease calling it outside of a router):
const PDFDocument = require('pdfkit');
const { Base64Encode } = require('base64-stream');

const toBase64 = doc => {
  return new Promise((resolve, reject) => {
    try {
      const stream = doc.pipe(new Base64Encode());
      let base64Value = '';
      stream.on('data', chunk => {
        base64Value += chunk;
      });
      stream.on('end', () => {
        resolve(base64Value);
      });
    } catch (e) {
      reject(e);
    }
  });
};
The caller should call doc.end() before or after invoking this async method.
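A brief usage sketch of that helper (my own example; the document content is illustrative):

const doc = new PDFDocument();
doc.text('Hello base64');

const pending = toBase64(doc); // attach the Base64Encode pipe
doc.end();                     // then end the document to flush the stream

pending.then(base64 => console.log('Base64 PDF representation:', base64.slice(0, 40) + '...'));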
