Node.js/Azure Functions: passing video information back from an API call

My API call 1) takes in video data using parse-multipart, 2) converts that data to a real mp4 file using ffmpeg, and 3) is supposed to send the video data back to the client in the response body.
Steps 1 and 2 work perfectly; it's the third step that I am stuck on.
The API call creates the Out.mp4 file, but when I try to read it with createReadStream, the chunks array doesn't populate and a null context.res body is returned.
Please let me know what I am doing wrong and how I can pass the video data back properly, so that it can be turned back into a playable mp4 file on the client's side.
Also, let me know if you have any questions or anything I can clarify.
Here is the API call's index.js file:
const fs = require("fs");
const multipart = require("parse-multipart");

module.exports = async function (context, req) {
    try {
        // Get the input file set up
        context.log("Javascript HTTP trigger function processed a request.");
        var bodyBuffer = Buffer.from(req.body);
        var boundary = multipart.getBoundary(req.headers['content-type']);
        var parts = multipart.Parse(bodyBuffer, boundary);
        var temp = "C:/home/site/wwwroot/In.mp4";
        fs.writeFileSync(temp, Buffer.from(parts[0].data));

        // Actually execute the ffmpeg script
        var execLineBuilder = "C:/home/site/wwwroot/ffmpeg-5.1.2-essentials_build/bin/ffmpeg.exe -i C:/home/site/wwwroot/In.mp4 C:/home/site/wwwroot/Out.mp4";
        var execSync = require('child_process').execSync;

        // Executing the script
        execSync(execLineBuilder);

        // EVERYTHING WORKS UP UNTIL HERE (chunks array seems to be empty, even though
        // outputting chunk to a file populates that file with data)
        // Storing the chunks of the output mp4 into the chunks array
        execSync.on('exit', () => {
            chunks = [];
            const myPromise = new Promise((resolve, reject) => {
                var readStream = fs.createReadStream("C:/home/site/wwwroot/Out.mp4");
                readStream.on('data', (chunk) => {
                    chunks.push(chunk);
                    resolve("foo");
                });
            });
        });
        myPromise.then(() => {
            context.res = {
                status: 200,
                body: chunks
            };
        });
    } catch (e) {
        context.res = {
            status: 500,
            body: e
        };
    }
};

You can use an npm package called azure-function-express. This package basically wraps your Azure Function in an Express app, so you can read the mp4 file you saved and send it back directly.
const createHandler = require("azure-function-express").createHandler;
const express = require("express");
const fs = require("fs");

const app = express();

app.get("/api/HttpTrigger1", (req, res) => {
    res.writeHead(200, { "Content-Type": "video/mp4" });
    // Pipe the file stream into the response; res.send() does not handle stream bodies.
    fs.createReadStream("./test.mp4").pipe(res);
});

// Export the wrapped Express app as the Azure Function handler
module.exports = createHandler(app);
This way you will be able to send the video back, and running ffmpeg from the same handler should be straightforward as well.
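For example, here is a minimal sketch of that idea (not from the original answer; the route name and paths are assumptions taken from the question, and the multipart upload handling is omitted): run ffmpeg synchronously, then stream the converted file into the response.
const createHandler = require("azure-function-express").createHandler;
const express = require("express");
const fs = require("fs");
const { execSync } = require("child_process");

const app = express();

app.get("/api/HttpTrigger1", (req, res) => {
    // Paths are assumptions based on the question; adjust for your deployment.
    const ffmpeg = "C:/home/site/wwwroot/ffmpeg-5.1.2-essentials_build/bin/ffmpeg.exe";
    const input = "C:/home/site/wwwroot/In.mp4";
    const output = "C:/home/site/wwwroot/Out.mp4";

    // execSync blocks until ffmpeg has finished writing Out.mp4,
    // so the file is complete before we start streaming it back.
    execSync(`${ffmpeg} -i ${input} ${output}`);

    res.writeHead(200, { "Content-Type": "video/mp4" });
    fs.createReadStream(output).pipe(res);
});

module.exports = createHandler(app);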

Related

How to send a file in response from Express

I am currently using pdf-merge to make a duplicate of a file and then send it via the Express response:
PDFMerge(['file/appl.pdf'], 'pdfappl/policy.pdf').then((mergedPdf) => {
    res.setHeader('correlationid', correlationid);
    res.contentType("application/pdf");
    return res.send(mergedPdf);
})
How can I do it using Node's fs module? I managed to copy the file using a read stream and a write stream, but it is not being passed into the response.
const fs = require('fs');

fs.createReadStream('file/appl.pdf').pipe(fs.createWriteStream('pdfappl/policy.pdf')); // copy works
res.setHeader('correlationid', correlationid);
res.contentType("application/pdf");
return res.send(???); // how to send the copied file here
You can stream the file data to the connection with pipe as it's being read:
res.contentType("application/pdf");
var stream = fs.createReadStream(file_path);
stream.pipe(res);
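To tie this back to the copy-then-send question above, here is a rough sketch (the paths, res, and correlationid come from the question; it assumes this runs inside the same Express handler): wait for the copy to finish, then pipe the copied file into the response.
const fs = require('fs');

// Copy the file, then send the copy once the write stream has finished.
const copyStream = fs.createReadStream('file/appl.pdf').pipe(fs.createWriteStream('pdfappl/policy.pdf'));

copyStream.on('finish', () => {
    res.setHeader('correlationid', correlationid);
    res.contentType('application/pdf');
    // pipe streams the copied file and ends the response when done.
    fs.createReadStream('pdfappl/policy.pdf').pipe(res);
});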

Pipe data chunks as a response to a client's terminal

My question is about Node.js piping. My backend looks like this: there is a simple route, the route calls a function and passes it the path of an executable file. That file is then run with childProcess.spawn, and there is data output that I can console.log:
const express = require("express");
const app = express();
etc...

const runExecutable = (executableFile) => {
    const runFile = childProcess.spawn(executableFile);
    runFile.stdout.on('data', function(data){
        console.log("DATA", data);
    })
    runFile.on('exit', function(code, signal){
        [some code here]
    })
}

app.get('/example', (req, res) => {
    var file = "./testFile.exe";
    runExecutable(file);
})
The question I have is: how can I pipe this output (the data chunks) to the client in real time? It's important for them to get the data as it comes out, rather than me writing it to a file and sending them the whole thing. One more thing to note: the client is accessing my route with curl (curl 123.45.678.901/example) in their terminal, and I want to pipe the data to their terminal.
From reading around, I know that, for example, the request module does request.get(url).pipe(res) (with an Express res), so I'm wondering if this is similar to what I need to be doing.
Thanks all!
Found the answer: any readable stream can be piped with readable.pipe(destination[, options]). childProcess.spawn(executableFile) itself is not a stream, but the spawned process's stdout is a readable stream that emits "data" events as the file runs. So if you are looking at these chunks of "data" coming out, like I am, like this:
runFile.stdout.on('data', function(data){
    console.log("DATA", data);
})
then that (runFile.stdout) is the stream you use and where you attach the pipe.
The Node documentation basically says: attach .pipe to the stream and send it to its destination. Since I wanted to send these chunks of data to my client, I also had to pass res around, so my code now looks like this:
const express = require("express");
const app = express();
etc...

const runExecutable = (executableFile, res) => {
    const runFile = childProcess.spawn(executableFile);
    // .on() returns the stdout stream itself, so .pipe(res) streams the same
    // chunks that are being logged straight to the client as they arrive.
    runFile.stdout.on('data', function(data){
        console.log("DATA", data);
    }).pipe(res)
    runFile.on('exit', function(code, signal){
        [some code here]
    })
}

app.get('/example', (req, res) => {
    var file = "./testFile.exe";
    runExecutable(file, res);
})
and it works! I hope this is helpful to others. Thanks for the help, Lee!
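As an aside (this is a sketch, not part of the original answer), a slightly more defensive variant separates the logging from the piping and reports the exit code back to the curl client once the process finishes:
const express = require("express");
const childProcess = require("child_process");
const app = express();

app.get("/example", (req, res) => {
    const runFile = childProcess.spawn("./testFile.exe");

    // Stream stdout to the client as it is produced, but keep the
    // response open so the exit status can be appended afterwards.
    runFile.stdout.pipe(res, { end: false });
    runFile.stdout.on("data", (data) => console.log("DATA", data));

    runFile.on("exit", (code) => {
        res.end(`\nprocess exited with code ${code}\n`);
    });
});

app.listen(3000); // port is an arbitrary choice for this sketch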

How to save my cam stream on my server in real time with Node.js?

How can I save my stream chunks, which are converted into blobs, on my Node.js server in real time?
client.js | I am sending my cam stream as binary to my Node.js server:
handleBlobs = async (blob) => {
    let arrayBuffer = await new Response(blob).arrayBuffer()
    let binary = new Uint8Array(arrayBuffer)
    this.postBlob(binary)
};

postBlob = blob => {
    axios.post('/api', { blob })
        .then(res => {
            console.log(res)
        })
};
server.js
app.post('/api', (req, res) => {
    console.log(req.body)
});
How can I store the incoming blobs or binary data in a single video file once the recording is complete?
This appears to be a duplicate of How to concat chunks of incoming binary into video (webm) file node js?, but it doesn't currently have an accepted answer. I'm copying my answer from that post into this one as well:
I was able to get this working by converting to base64 encoding on the front end with the FileReader API. On the backend, create a Buffer from each data chunk sent and write it to a file stream. Some key things about my code sample:
I'm using fetch because I didn't want to pull in axios.
When using fetch, you have to make sure you use bodyParser on the backend.
I'm not sure how much data you're collecting in your chunks (i.e. the duration value passed to the start method on the MediaRecorder object), but you'll want to make sure your backend can handle the size of the incoming data chunks. I set mine really high, to 50MB, but this may not be necessary.
I never close the write stream explicitly; you could potentially do this in your /final route. Otherwise, createWriteStream defaults to autoClose, so the Node process will close it automatically.
Full working example below:
Front End:
const mediaSource = new MediaSource();
mediaSource.addEventListener('sourceopen', handleSourceOpen, false);
let mediaRecorder;
let sourceBuffer;

function customRecordStream(stream) {
    // should actually check to see if the given mimeType is supported on the browser here.
    let options = { mimeType: 'video/webm;codecs=vp9' };
    mediaRecorder = new MediaRecorder(stream, options);
    mediaRecorder.ondataavailable = postBlob;
    mediaRecorder.start(INT_REC); // INT_REC = timeslice (ms) between dataavailable events
};

function postBlob(event) {
    if (event.data && event.data.size > 0) {
        sendBlobAsBase64(event.data);
    }
}

function handleSourceOpen(event) {
    sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8"');
}

function sendBlobAsBase64(blob) {
    const reader = new FileReader();
    reader.addEventListener('load', () => {
        const dataUrl = reader.result;
        const base64EncodedData = dataUrl.split(',')[1];
        console.log(base64EncodedData);
        sendDataToBackend(base64EncodedData);
    });
    reader.readAsDataURL(blob);
};

function sendDataToBackend(base64EncodedData) {
    const body = JSON.stringify({
        data: base64EncodedData
    });
    fetch('/api', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
        },
        body
    }).then(res => {
        return res.json();
    }).then(json => console.log(json));
};
Back End:
const fs = require('fs');
const path = require('path');
const express = require('express');
const bodyParser = require('body-parser');

const app = express();
const server = require('http').createServer(app);

app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json({ limit: "50MB", type: 'application/json' }));

app.post('/api', (req, res) => {
    try {
        const { data } = req.body;
        // Decode the base64 payload and append it to the output file
        const dataBuffer = Buffer.from(data, 'base64');
        const fileStream = fs.createWriteStream('finalvideo.webm', { flags: 'a' });
        fileStream.write(dataBuffer);
        console.log(dataBuffer);
        return res.json({ gotit: true });
    } catch (error) {
        console.log(error);
        return res.json({ gotit: false });
    }
});
Without attempting to implement this (sorry, no time right now), I would suggest the following; a rough sketch follows the list:
Read up on Node's Stream API: the Express request object is an http.IncomingMessage, which is a readable stream and can be piped into another stream-based API. https://nodejs.org/api/stream.html#stream_api_for_stream_consumers
Read up on Node's filesystem API: it contains functions such as fs.createWriteStream that can handle the stream of chunks and append to a file at a path of your choice. https://nodejs.org/api/fs.html#fs_class_fs_writestream
After the stream has been written to a file, as long as the filename has the correct extension, the file should be playable, because the Buffer sent from the browser is just binary data. Further reading on Node's Buffer API is also worth your time: https://nodejs.org/api/buffer.html#buffer_buffer
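A minimal sketch of that suggestion (untested; the route and output file name are assumptions, and it assumes the client POSTs the raw binary chunk rather than a JSON body):
const fs = require('fs');
const express = require('express');

const app = express();

// Note: no bodyParser here; the raw request stream is consumed directly.
app.post('/api', (req, res) => {
    // Append each incoming chunk of binary data to the output file.
    const fileStream = fs.createWriteStream('finalvideo.webm', { flags: 'a' });
    req.pipe(fileStream);
    fileStream.on('finish', () => res.json({ gotit: true }));
    fileStream.on('error', () => res.status(500).json({ gotit: false }));
});

app.listen(3000);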

Upload a file in Google Cloud Storage like a local file using Axios and form-data

I'm using the axios and form-data npm packages to upload a local file.
Here's the basic code structure:
const axios = require('axios');
const FormData = require('form-data');
const fs = require('fs');
async function upload() {
    var path_str = '/path/to/file.pdf';
    var form_obj = new FormData();
    form_obj.append('my_file', fs.createReadStream(path_str));

    var req_obj = {};
    req_obj['url'] = 'https://post-url';
    req_obj['method'] = 'post';
    req_obj['data'] = form_obj;

    return await axios(req_obj);
}
I would like to do the same thing for a file in Google Cloud Storage. In other words, instead of downloading the file from cloud storage to a local destination, and then using fs.createReadStream to access it, I'd prefer to do the equivalent of fs.createReadStream on the file while it is in cloud storage.
In the Google Cloud Storage npm package, File has a method called createReadStream, but that did not work when I plugged it into my code.
Is there a way to achieve this?
From the method createReadStream that you pointed out:
Create a readable stream to read the contents of the remote file. It can be piped to a writable stream or listened to for 'data' events to read a file's contents.
I understand that you are interested in the text marked in bold. Here is an example of how to do it. This code reads the file content without downloading it to your local system. The data object is an instance of Buffer, and how you handle it is up to you.
const { Storage } = require('@google-cloud/storage');

const storage = new Storage();
const bucket = storage.bucket('yourBucketName');
const remoteFile = bucket.file('yourFileName');

remoteFile.createReadStream()
    .on('error', function (err) {
        console.error(err);
    })
    .on('response', function (response) {
        console.log('Server connected and responded with the specified status and headers.');
    })
    .on('data', function (data) {
        // Each chunk is a Buffer
        console.log(data);
        console.log(data.toString());
    })
    .on('end', function () {
        console.log('End');
    });
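To connect this back to the upload code in the question, one possible approach (an untested sketch; the bucket, file, and URL names are placeholders, and whether it works may depend on the receiving server accepting a streamed upload of unknown length) is to append the remote read stream to the form instead of a local one, supplying a filename explicitly since the GCS stream has no local path:
const { Storage } = require('@google-cloud/storage');
const FormData = require('form-data');
const axios = require('axios');

async function uploadFromGcs() {
    const storage = new Storage();
    const remoteFile = storage.bucket('yourBucketName').file('yourFileName');

    const form_obj = new FormData();
    // filename/contentType are needed because a GCS read stream has no local path.
    form_obj.append('my_file', remoteFile.createReadStream(), {
        filename: 'file.pdf',
        contentType: 'application/pdf',
    });

    // getHeaders() supplies the multipart boundary for the request.
    return await axios({
        url: 'https://post-url',
        method: 'post',
        data: form_obj,
        headers: form_obj.getHeaders(),
    });
}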

Parse a remote CSV file using Node.js / Papa Parse?

I am currently working on parsing a remote csv product feed from a Node app and would like to use Papa Parse to do that (as I have had success with it in the browser in the past).
Papa Parse Github: https://github.com/mholt/PapaParse
My initial attempts and web searching haven't turned up exactly how this would be done. The Papa readme says that Papa Parse is now compatible with Node, and as such Baby Parse (which used to provide some of the Node parsing functionality) has been deprecated.
Here's a link to the Node section of the docs for anyone stumbling on this issue in the future: https://github.com/mholt/PapaParse#papa-parse-for-node
From that doc paragraph it looks like Papa Parse in Node can parse a readable stream instead of a File. My question is:
Is there any way to utilize Readable Streams functionality to use Papa to download / parse a remote CSV in Node some what similar to how Papa in the browser uses XMLHttpRequest to accomplish that same goal?
For Future Visibility
For those searching on the topic (and to avoid repeating a similar question): attempting to use the remote-file parsing functionality described here (http://papaparse.com/docs#remote-files) will result in the following error in your console:
"Unhandled rejection ReferenceError: XMLHttpRequest is not defined"
I have opened an issue on the official repository and will update this Question as I learn more about the problems that need to be solved.
After lots of tinkering I finally got a working example of this using asynchronous streams and with no additional libraries (except fs/request). It works for remote and local files.
I needed to create a data stream, as well as a PapaParse stream (using papa.NODE_STREAM_INPUT as the first argument to papa.parse()), then pipe the data into the PapaParse stream. Event listeners need to be implemented for the data and finish events on the PapaParse stream. You can then use the parsed data inside your handler for the finish event.
See the example below:
const papa = require("papaparse");
const request = require("request");
const options = {/* options */};
const dataStream = request.get("https://example.com/myfile.csv");
const parseStream = papa.parse(papa.NODE_STREAM_INPUT, options);
dataStream.pipe(parseStream);
let data = [];
parseStream.on("data", chunk => {
data.push(chunk);
});
parseStream.on("finish", () => {
console.log(data);
console.log(data.length);
});
The data event for the parseStream happens to run once for each row in the CSV (though I'm not sure this behaviour is guaranteed). Hope this helps someone!
To use a local file instead of a remote file, you can do the same thing except the dataStream would be created using fs:
const dataStream = fs.createReadStream("./myfile.csv");
(You may want to use path.join and __dirname to specify a path relative to where the file is located rather than relative to where it was run)
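For instance, a small sketch of that suggestion:
const path = require("path");
const fs = require("fs");

// Resolve the CSV relative to this script's directory rather than the working directory.
const dataStream = fs.createReadStream(path.join(__dirname, "myfile.csv"));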
OK, so I think I have an answer to this, but I guess only time will tell. Note that my file is a .txt with tab delimiters.
var fs = require('fs');
var Papa = require('papaparse');

var file = './rawData/myfile.txt';

// When the file is a local file, we need to read it in as a string first.
// This step may not be necessary when uploading via the UI.
var content = fs.readFileSync(file, "utf8");
var rows;

Papa.parse(content, {
    header: false,
    delimiter: "\t",
    complete: function(results) {
        //console.log("Finished:", results.data);
        rows = results.data;
    }
});
Actually, you could use a lightweight stream-transformation library called scramjet; parsing CSV straight from an http stream is one of my main examples. It also uses PapaParse to parse CSVs.
Everything you wrote above, with any transforms in between, can be done in just a couple of lines:
const { StringStream } = require("scramjet");
const request = require("request");

request.get("https://srv.example.com/main.csv")       // fetch csv
    .pipe(new StringStream())                          // pass to stream
    .CSVParse()                                        // parse into objects
    .consume(object => console.log("Row:", object))    // do whatever you like with the objects
    .then(() => console.log("all done"));
In your own example you're saving the file to disk, which is not necessary even with PapaParse.
I am adding this answer (and will update it as I progress) in case anyone else is still looking into this.
It seems like previous users have ended up downloading the file first and then processing it. This SHOULD NOT be necessary, since Papa Parse should be able to process a read stream and it should be possible to pipe an 'http' GET response into that stream (a sketch of this is included below, before the workaround).
Here is one instance of someone discussing what I am trying to do and falling back to downloading the file and then parsing it: https://forums.meteor.com/t/processing-large-csvs-in-meteor-js-with-papaparse/32705/4
Note: Baby Parse is discussed in the thread above; now that Papa Parse works with Node, Baby Parse has been deprecated.
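Here is a sketch of that idea (untested; it uses only Node's built-in https module, a placeholder URL, and Papa's NODE_STREAM_INPUT duplex stream):
const https = require("https");
const Papa = require("papaparse");

const parseStream = Papa.parse(Papa.NODE_STREAM_INPUT, { header: true });

parseStream.on("data", row => console.log("Row:", row));
parseStream.on("finish", () => console.log("All done!"));

// The response object passed to the callback is a readable stream,
// so it can be piped straight into the Papa Parse stream.
https.get("https://example.com/feed.csv", response => {
    response.pipe(parseStream);
});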
Download File Workaround
While downloading and then parsing with Papa Parse is not an answer to my question, it is the only workaround I have as of now, and someone else may want to use this approach.
My code to download and then parse currently looks something like this:
// Papa Parse for parsing CSV files
var Papa = require('papaparse');
// HTTP and FS to let Papa Parse consume remote CSVs via Node streams.
var http = require('http');
var fs = require('fs');

var destinationFile = "yourdestination.csv";

var download = function(url, dest, cb) {
    var file = fs.createWriteStream(dest);
    var request = http.get(url, function(response) {
        response.pipe(file);
        file.on('finish', function() {
            file.close(cb); // close() is async; call cb after close completes.
        });
    }).on('error', function(err) { // Handle errors
        fs.unlink(dest, function() {}); // Delete the file async. (But we don't check the result.)
        if (cb) cb(err.message);
    });
};

// Parse the downloaded file once the download has finished.
var parseMe = function() {
    var content = fs.readFileSync(destinationFile, "utf8");
    Papa.parse(content, {
        header: true,
        dynamicTyping: true,
        step: function(row) {
            console.log("Row:", row.data);
        },
        complete: function() {
            console.log("All done!");
        }
    });
};

download(feedURL, destinationFile, parseMe);
http(s).get actually passes a readable stream (the response) to its callback, so here is a simple solution:
const https = require("https");
const Papa = require("papaparse");

try {
    var streamHttp = await new Promise((resolve, reject) =>
        https.get("https://example.com/yourcsv.csv", (res) => {
            resolve(res);
        })
    );
} catch (e) {
    console.log(e);
}

Papa.parse(streamHttp, config);
const Papa = require("papaparse");
const { StringStream } = require("scramjet");
const request = require("request");
const req = request
.get("https://example.com/yourcsv.csv")
.pipe(new StringStream());
Papa.parse(req, {
header: true,
complete: (result) => {
console.log(result);
},
});
David Liao's solution worked for me; I tweaked it a little since I am using a local file. He did not include an example of how to resolve the file path in Node if you get an Error: ENOENT: no such file or directory message in your console.
To check your actual working directory and understand where your path must point, log the following; it gave me a better understanding of the file location: console.log(process.cwd()).
const fs = require('fs');
const papa = require('papaparse');
const request = require('request');
const path = require('path');
const options = {
/* options */
};
const fileName = path.resolve(__dirname, 'ADD YOUR ABSOLUTE FILE LOCATION HERE');
const dataStream = fs.createReadStream(fileName);
const parseStream = papa.parse(papa.NODE_STREAM_INPUT, options);
dataStream.pipe(parseStream);
let data = [];
parseStream.on('data', chunk => {
data.push(chunk);
});
parseStream.on('finish', () => {
console.log(data);
console.log(data.length);
});
