I'm trying to analyze a file I'll be uploading from React; I need to know if it can be uploaded based on several factors.
I found https://github.com/TooTallNate/node-wav
It works great on Node.js, and I'm trying to use it in React. The sample creates a readable stream and pipes it to the wav reader:
var fs = require('fs');
var wav = require('wav');

var file = fs.createReadStream('track01.wav');
var reader = new wav.Reader();

// the "format" event gets emitted at the end of the WAVE header
reader.on('format', function (format) {
  // format of the file (channels, sampleRate, bitDepth, ...)
  console.log(format);
});

file.pipe(reader);
Using the FilePond controller I'm able to get a base64 string of the file, but I can't figure out how to pass it to the reader.
This is what I have so far in React:
const { Readable } = require('stream');

var reader = new wav.Reader();
reader.on('format', function (format) {
  // format of the file
  console.log('format', format);
});

// new Buffer() is deprecated; Buffer.from() decodes the base64 string
const buffer = Buffer.from(base64String, 'base64');
const readable = new Readable();
readable._read = () => {}; // no-op; data is pushed manually below
readable.push(buffer);
readable.push(null); // signal end-of-stream
readable.pipe(reader);
But I get: Error: bad "chunk id": expected "RIFF" or "RIFX", got "u+Zj"
Since this file works on Node.js with the same lib, it's obvious I'm doing something wrong.
EDIT:
This was a problem with my base64 string; this method works if anyone needs to analyze a WAV on the frontend.
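One common cause of a mangled base64 string, offered as a sketch (the dataUrl variable is hypothetical): if the string comes from a data URL, e.g. FileReader.readAsDataURL, the "data:...;base64," prefix must be stripped before decoding, or the decoded bytes won't start with the RIFF header:
// Hypothetical: dataUrl holds something like "data:audio/wav;base64,UklGR..."
const base64String = dataUrl.split(',')[1]; // drop the "data:...;base64," prefix
const buffer = Buffer.from(base64String, 'base64');
// buffer.slice(0, 4).toString('ascii') should now be "RIFF"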
At my endpoint in my Node.js server, after retrieving an audio file stored as a Buffer in MongoDB, I want to represent it with a URL (much like you do with URL.createObjectURL(blob) on the frontend in the browser). I then plan to res.render() the URL in HTML through Handlebars on the client, so that the user can click on it to download it:
<a href={{url}}>Click me to download the file!</a>
In the Node.js server, I have converted the MongoDB Buffer into a JavaScript ArrayBuffer through:
var buffer = Buffer.from(recordingFiles[0].blobFile);
var arrayBuffer = Uint8Array.from(buffer).buffer;
I am unsure where to proceed from here. I've seen solutions using fs or res.download(), but they don't seem applicable to my situation. Thanks in advance for any help!
Hopefully this can help. Note that the Blob constructor takes an array of parts:
var blob = new Blob([BUFFER], {type: "audio mime type"}); // e.g. "audio/wav"
var link = document.createElement('a');
link.href = window.URL.createObjectURL(blob);
var fileName = reportName;
link.download = fileName;
link.click();
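A small follow-up worth noting: object URLs keep the blob in memory until they are revoked, so it's good hygiene to release the URL once the download has been triggered:
window.URL.revokeObjectURL(link.href); // free the blob after link.click()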
Do you always need to preload the audio file onto the page?
If not, then I would advise you to add a separate endpoint to download the file on demand. The frontend link can send a GET request to the endpoint and download the file only if the user clicks it.
Otherwise you'd always be downloading the buffer behind the scenes, even if the user didn't intend to download it. This is especially problematic on slow connections.
Frontend:
<a href="{{baseUrl}}/download/{{audioId}}">Click me to download the file!</a>
Backend:
const stream = require('stream');

app.get('/download/:audioId', function (request, response) {
  // Retrieve the id from our URL path
  const audioId = request.params.audioId;
  let fileData; // TODO: Get file buffer from mongo.
  const fileName = audioId + '.wav'; // derive a filename to offer the client (adjust to your data)

  const fileContents = Buffer.from(fileData, 'base64');
  const readStream = new stream.PassThrough();
  readStream.end(fileContents);

  response.set('Content-disposition', 'attachment; filename=' + fileName);
  response.set('Content-Type', '<your MIME type here>');

  readStream.pipe(response);
});
A list of relevant MIME types can be found in the MDN documentation.
I've set up a small Node.js backend app, built with Express and the fast-csv module on top of it. The desired outcome is to be able to download a CSV file to the client side, without storing it anywhere on the server, since the data is generated depending on user criteria.
So far I've been able to get somewhere with it. I'm using streams, since the CSV file could be pretty large depending on the user selection. I'm pretty sure something is missing in the code below:
const fs = require('fs');
const fastCsv = require('fast-csv');
.....
(inside api request)
.....
router.get('/', async (req, res) => {
  const gatheredData = ... // built from the user's criteria
  const filename = 'sometest.csv';

  res.writeHead(200, {
    'Content-Type': 'text/csv',
    'Content-Disposition': 'attachment; filename=' + filename
  });

  fastCsv.write(gatheredData, { headers: true }).pipe(res);
});
The above code 'works' in some way, in that it does deliver back a response, but not an actual file: I just get the contents of the CSV, which I can view in the preview tab of the response. To sum up, I'm trying to stream the data into a CSV and push it to the client as a file download, without storing it on the server. Any tips or pointers are very much appreciated.
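For what it's worth, a Content-Disposition: attachment response only triggers a save dialog when the browser itself navigates to the URL; when the endpoint is called via XHR/fetch, the body just shows up in the devtools preview tab. A minimal frontend trigger, assuming the route above is mounted at /api/csv (a hypothetical path):
<!-- Navigating (rather than fetching) lets the browser honor the Content-Disposition header and save the streamed CSV as a file. -->
<a href="/api/csv" download>Download CSV</a>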
Here's what worked for me after creating a CSV file on the server using the fast-csv package. You need to specify the full, absolute path of the directory where the output CSV file was created:
const csv = require("fast-csv");
const csvDir = "abs/path/to/csv/dir";
const filename = "my-data.csv";
const csvOutput = `${csvDir}/${filename}`;
console.log(`csvOutput: ${csvOutput}`); // full path
/*
CREATE YOUR csvOutput FILE USING 'fast-csv' HERE
*/
res.type("text/csv");
res.header("Content-Disposition", `attachment; filename="${filename}"`);
res.header("Content-Type", "text/csv");
res.sendFile(filename, { root: csvDir });
You need to make sure the response Content-Type is set to "text/csv", and try enclosing the filename=... part in double quotes, as in the example above.
I want to read multiple zip files and display them in AngularJS, convert those files to base64 in Node.js, and push them into GitLab. Please suggest whether this is possible in Node.js, and whether there is any blog available for reference.
Use the fs module of Node.js to read the files from a directory:
const testFolder = './tests/';
const fs = require('fs');

fs.readdirSync(testFolder).forEach(file => {
  console.log(file);
});
Once you get the files, you can convert them to base64:
function base64_encode(file) {
  // read binary data (fs.readFileSync returns a Buffer)
  var bitmap = fs.readFileSync(file);
  // convert the binary data to a base64 encoded string
  // (new Buffer() is deprecated; a Buffer can be encoded directly)
  return bitmap.toString('base64');
}
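For the GitLab part, here is a minimal sketch using GitLab's repository files API (POST /api/v4/projects/:id/repository/files/:file_path with encoding: 'base64'); the project ID, branch, and token below are placeholders to replace with your own values:
const https = require('https');

// Commit a base64-encoded file to a GitLab repository.
function pushToGitlab(fileName, base64Content) {
  const projectId = 123; // hypothetical project id
  const body = JSON.stringify({
    branch: 'main',
    content: base64Content,
    encoding: 'base64',
    commit_message: 'Add ' + fileName
  });
  const req = https.request({
    hostname: 'gitlab.com',
    path: '/api/v4/projects/' + projectId + '/repository/files/' + encodeURIComponent(fileName),
    method: 'POST',
    headers: {
      'PRIVATE-TOKEN': process.env.GITLAB_TOKEN, // personal access token
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(body)
    }
  }, function (res) {
    console.log('GitLab responded with', res.statusCode);
  });
  req.end(body);
}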
My app needs to create a PDF file and then upload it to another server. The upload happens down the line via the post method from the request NPM package. Everything works fine if I pass in an fs.createReadStream:
const fs = require('fs');
const params = {file: fs.createReadStream('test.pdf')};
api.uploadFile(params);
Since PDFKit instantiates a read stream as well, I'm trying to pass that directly into the post params like this:
const PDFDocument = require('pdfkit');

const doc = new PDFDocument();
doc.text('stream test');
doc.end(); // finalize the document so the stream can end

const params = {file: doc};
api.uploadFile(params);
However, this produces an error:
TypeError: Path must be a string. Received [Function]
If I look at the PDFKit source code, I see (in CoffeeScript):
class PDFDocument extends stream.Readable
I'm new to streams, and it's clear I'm not understanding the difference here. To me, if they are both readable streams, they should both be able to be passed in the same way.
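One likely culprit, offered as a sketch rather than a confirmed fix: form-data (which request uses under the hood) tries to derive a filename from the stream's path property. fs.createReadStream exposes path as a string, while PDFDocument has a path() method (for vector graphics), which would explain "Received [Function]". Buffering the document and passing the filename explicitly sidesteps the issue; this assumes api.uploadFile forwards params to request's formData option:
const PDFDocument = require('pdfkit');

const doc = new PDFDocument();
doc.text('stream test');
doc.end();

// Collect the finished PDF into a Buffer, then hand it to the upload
// with explicit filename/contentType so form-data never inspects .path.
const chunks = [];
doc.on('data', (chunk) => chunks.push(chunk));
doc.on('end', () => {
  const params = {
    file: {
      value: Buffer.concat(chunks),
      options: { filename: 'test.pdf', contentType: 'application/pdf' }
    }
  };
  api.uploadFile(params);
});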
I want to have persistent memory (store the user's progress) in a .json file in %AppData%. I tried doing this according to this post, but it doesn't work. For testing purposes I'm only working with storing one object.
The code below doesn't work at all. If I use fs.open(filePath, "w", function (err, data) { ... instead of readFile(..., it does create a JSON file in %AppData%, but then it doesn't write anything to it; it's always 0 bytes.
var nw = require('nw.gui');
var fs = require('fs');
var path = require('path');
var file = "userdata.json";
var filePath = path.join(nw.App.dataPath, file);
console.log(filePath); // <- This shows correct path in Application Data.
fs.readFile(filePath, function (err, data) {
  var idVar = "1";
  var json = JSON.parse(data); // <- fails here when the file is empty
  json.push("id :" + idVar);
  fs.writeFile(filePath, JSON.stringify(json));
});
If anyone has any idea where I'm messing this up, I'd be grateful.
EDIT:
Solved, thanks to kailniris.
I was simply trying to parse an empty file
There is no JSON in the file you're trying to read. Before parsing the data, check whether the file is empty: if it is, create an empty JSON array, push the new data into it, and write it to the file; otherwise, parse the JSON already in the file.
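A minimal sketch of that check, staying close to the question's code (treating the stored progress as a JSON array is an assumption):
fs.readFile(filePath, "utf8", function (err, data) {
  var idVar = "1";
  // if the file is missing or empty, start from an empty array
  var json = (!err && data && data.trim().length > 0) ? JSON.parse(data) : [];
  json.push("id :" + idVar);
  fs.writeFile(filePath, JSON.stringify(json), function (writeErr) {
    if (writeErr) console.error(writeErr);
  });
});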