node.js - use archiver where output is buffer

I want to zip a few readable streams into a writable stream.
The purpose is to do it all in memory and not create an actual zip file on disk.
For that I'm using archiver:
let bufferOutput = Buffer.alloc(5000);
let archive = archiver('zip', {
    zlib: { level: 9 } // Sets the compression level.
});

archive.pipe(bufferOutput);
archive.append(someReadableStream, { name: 'test.txt' });
archive.finalize();
I get an error on the line archive.pipe(bufferOutput);.
This is the error: "dest.on is not a function"
What am I doing wrong?
Thx
UPDATE:
I'm running the following code for testing and the ZIP file is not created properly. What am I missing?
const fs = require('fs'),
      archiver = require('archiver'),
      streamBuffers = require('stream-buffers');

let outputStreamBuffer = new streamBuffers.WritableStreamBuffer({
    initialSize: (1000 * 1024),    // start at 1000 kilobytes.
    incrementAmount: (1000 * 1024) // grow by 1000 kilobytes each time buffer overflows.
});

let archive = archiver('zip', {
    zlib: { level: 9 } // Sets the compression level.
});

archive.pipe(outputStreamBuffer);
archive.append("this is a test", { name: "test.txt" });
archive.finalize();

outputStreamBuffer.end();

fs.writeFile('output.zip', outputStreamBuffer.getContents(), function() { console.log('done!'); });

In your updated example, I think you are trying to get the contents before it has been written.
Hook into the finish event and get the contents then.
outputStreamBuffer.on('finish', () => {
    // Do something with the contents here
    outputStreamBuffer.getContents();
});

A Buffer is not a stream; you need something like https://www.npmjs.com/package/stream-buffers

As for why you are seeing garbage: what you are looking at is the compressed zip data, which is binary and will look like garbage when printed as text.
To verify that the zipping worked, unzip the result again and check that the output matches the input.

Adding an event listener on the archiver works for me:
archive.on('finish', function() {
    outputStreamBuffer.end();
    // write your file
});
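Putting the pieces together, a minimal sketch of the corrected flow might look like this (assuming the stream-buffers package from the update above; the output path is just an example):
const fs = require('fs');
const archiver = require('archiver');
const streamBuffers = require('stream-buffers');

const outputStreamBuffer = new streamBuffers.WritableStreamBuffer({
    initialSize: (1000 * 1024),
    incrementAmount: (1000 * 1024)
});

const archive = archiver('zip', { zlib: { level: 9 } });

// once the buffer stream has been fully written, its contents are safe to read
outputStreamBuffer.on('finish', () => {
    fs.writeFile('output.zip', outputStreamBuffer.getContents(), () => console.log('done!'));
});

archive.pipe(outputStreamBuffer);
archive.append('this is a test', { name: 'test.txt' });
archive.finalize();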

Related

Write multiple files to http response with streams in nodejs

I have an array of files that I have to pack into a gzip archive and send through the http response on the fly. That means I can't hold the whole archive in memory, yet I have to pipe the files into pack.entry one after another or everything is going to break.
const tar = require('tar-stream');      // lib for tar stream
const { createGzip } = require('zlib'); // lib for gzip stream

// large list of huge files.
const files = [ 'file1', 'file2', 'file3', ..., 'file99999' ];
...
// http request handler:
const pack = tar.pack();         // tar stream, creates .tar
const gzipStream = createGzip(); // gzip stream so we could reduce the size

// pipe archive data through gzip stream
// and send it to the client on the fly
pack.pipe(gzipStream).pipe(response);

// The issue comes here, when I need to pass multiple files to pack.entry
files.forEach(name => {
    const src = fs.createReadStream(name);     // create stream from file
    const size = fs.statSync(name).size;       // determine its size
    const entry = pack.entry({ name, size });  // create tar entry

    // and this ruins everything, because if two different streams
    // write something into entry, it'll fail and throw an error
    src.pipe(entry);
});
Basically I need the pipe to complete before moving on to the next entry (something like await src.pipe(entry);), but pipes in Node.js don't work that way. Is there any way around this?
Never mind, just don't use forEach in this case; iterate over the files sequentially instead.
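For illustration, a sequential version might look like this. This is a sketch that assumes the same pack, files and response objects from the question; packFiles is a hypothetical helper name. tar-stream's pack.entry accepts a callback that fires once the entry has been fully written, which is what the loop waits on:
const fs = require('fs');

// write the entries one at a time: wait for each file to finish
// streaming into its tar entry before starting the next one
async function packFiles(pack, files) {
    for (const name of files) {
        const size = fs.statSync(name).size;
        await new Promise((resolve, reject) => {
            const entry = pack.entry({ name, size }, err => (err ? reject(err) : resolve()));
            fs.createReadStream(name).pipe(entry);
        });
    }
    pack.finalize();
}

// usage inside the request handler:
// pack.pipe(createGzip()).pipe(response);
// packFiles(pack, files).catch(err => response.destroy(err));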

Reading file using Node.js "Invalid Encoding" Error

I am creating an application with Node.js and I am trying to read a file called "datalog.txt." I use the "append" function to write to the file:
// Appends buffer data to a given file
function append(filename, buffer) {
    let fd = fs.openSync(filename, 'a+');
    fs.writeSync(fd, str2ab(buffer));
    fs.closeSync(fd);
}

// Converts string to buffer
function str2ab(str) {
    var buf = new ArrayBuffer(str.length * 2); // 2 bytes for each char
    var bufView = new Uint16Array(buf);
    for (var i = 0, strLen = str.length; i < strLen; i++) {
        bufView[i] = str.charCodeAt(i);
    }
    return buf;
}

append("datalog.txt", "12345");
This seems to work great. However, now I want to use fs.readFileSync to read from the file. I tried using this:
const data = fs.readFileSync('datalog.txt', 'utf16le');
I changed the encoding parameter to all of the encoding types listed in the Node documentation, but all of them resulted in this error:
TypeError: Argument at index 2 is invalid: Invalid encoding
All I want to be able to do is be able to read the data from "datalog.txt." Any help would be greatly appreciated!
NOTE: Once I can read the data of the file, I want to be able to get a list of all the lines of the file.
Pass the encoding inside an options object:
const data = fs.readFileSync('datalog.txt', {encoding:'utf16le'});
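If the file really was written as UTF-16 as in the append code above, the result can then also be split into lines; a minimal sketch:
const fs = require('fs');

// read the whole file back as a UTF-16LE string
const data = fs.readFileSync('datalog.txt', { encoding: 'utf16le' });

// split the contents into individual lines
const lines = data.split(/\r?\n/);
lines.forEach(line => console.log(line));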
Okay, after a few hours of troubleshooting and looking at the docs I figured out a way to do this.
try {
    // get metadata on the file (we need the file size)
    let fileData = fs.statSync("datalog.txt");
    // create an ArrayBuffer to hold the file contents
    let dataBuffer = new ArrayBuffer(fileData.size);
    // read the contents of the file into the ArrayBuffer
    // (fs.readSync needs a Buffer/TypedArray, so read through a view that shares the same memory)
    fs.readSync(fs.openSync("datalog.txt", 'r'), Buffer.from(dataBuffer), 0, fileData.size, 0);
    // convert the ArrayBuffer into a string
    let data = String.fromCharCode.apply(null, new Uint16Array(dataBuffer));
    // split the contents into lines
    let dataLines = data.split(/\r?\n/);
    // print out each line
    dataLines.forEach((line) => {
        console.log(line);
    });
} catch (err) {
    console.error(err);
}
Hope it helps someone else with the same problem!
This works for me:
index.js
const fs = require('fs');
// Write
fs.writeFileSync('./customfile.txt', 'Content_For_Writing');
// Read
const file_content = fs.readFileSync('./customfile.txt', {encoding:'utf8'}).toString();
console.log(file_content);
node index.js
Output:
Content_For_Writing
Process finished with exit code 0

how to create a zip file in node given multiple downloadable links

I have a node application that contains several downloadable links (when you click on the link a pdf file is downloaded), and these links are dynamically created/populated. I want to implement a feature where we can somehow download all files from these links in one go. I presume for this I will somehow need to create a zip file from all these links - would anyone know how to go about this?
You could use the fs and archiver modules:
var fs = require('fs');
var archiver = require('archiver');

var output = fs.createWriteStream('./example.zip');
var archive = archiver('zip', {
    gzip: true,
    zlib: { level: 9 } // Sets the compression level.
});

archive.on('error', function(err) {
    throw err;
});

// pipe archive data to the output file
archive.pipe(output);

// append files
archive.file('/path/to/file0.txt', { name: 'file0-or-change-this-whatever.txt' });
archive.file('/path/to/README.md', { name: 'foobar.md' });

archive.finalize();
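If the files only live behind URLs rather than on disk, a similar sketch can append the HTTP response streams in place of the archive.file calls above. The links array and entry names here are placeholders, and Node's built-in https module stands in for whatever download mechanism you use:
const https = require('https');

// placeholder list of downloadable links
const links = [
    'https://example.com/files/a.pdf',
    'https://example.com/files/b.pdf'
];

// fetch each link, append its response stream to the archive,
// and finalize only after every entry has been queued
Promise.all(links.map((url, i) => new Promise((resolve, reject) => {
    https.get(url, res => {
        archive.append(res, { name: 'file' + i + '.pdf' });
        resolve();
    }).on('error', reject);
}))).then(() => archive.finalize());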

nodejs/fs: writing a tar to memory buffer

I need to be able to tar a directory, and send this to a remote endpoint via HTTP PUT.
I could of course create the tar, save it to disk, then read it again and send it.
But I'd rather like to create the tar, then pipe it to some buffer and send it immediately. I haven't been able to achieve this.
Code so far:
var tar = require('tar');
var fs = require("fs");

var path = "/home/me/uploaddir";

function getTar(path, cb) {
    var buf = new Buffer('');
    var wbuf = fs.createWriteStream(buf);
    wbuf.on("finish", function() {
        cb(buf);
    });
    tar.c({ file: "" }, [path]).pipe(wbuf);
}

getTar(path, function(tar) {
    // send the tar over http
});
This code results in:
fs.js:575
binding.open(pathModule._makeLong(path),
^
TypeError: path must be a string
at TypeError (native)
at Object.fs.open (fs.js:575:11)
I've also tried using an array as buffer, no joy.
The following solution creates the tar, then pipes it and sends it immediately, and does so with great speed thanks to the tar-fs library:
First install the libraries request for simplified requests and tar-fs, which provides filesystem bindings for tar-stream: npm i -S tar-fs request
var tar = require('tar-fs');
var request = require('request');
var fs = require('fs');

// pack specific files in the directory
function packTar(folderName, pathsArr) {
    return tar.pack(folderName, {
        entries: pathsArr
    });
}

// return put stream
function makePutReq(url) {
    return request.put(url);
}

packTar('./testFolder', ['test.txt', 'test1.txt'])
    .pipe(makePutReq('https://www.example.com/put'));
I have given the functions deliberately verbose names.
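If you do need the whole tar as a single in-memory Buffer before sending it, a sketch along the same lines is possible; getTarBuffer is a hypothetical helper built on the tar-fs stream above:
// collect the tar stream into a single Buffer in memory
function getTarBuffer(folderName, cb) {
    const chunks = [];
    tar.pack(folderName)
        .on('data', chunk => chunks.push(chunk))
        .on('error', err => cb(err))
        .on('end', () => cb(null, Buffer.concat(chunks)));
}

getTarBuffer('./testFolder', (err, buf) => {
    if (err) throw err;
    // buf now holds the complete tar archive and can be sent in a PUT body
});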

How to deal with missing file when adding N number of files to NodeJs Archiver

I'm adding a series of files to a zip using Archiver, but if a file is missing, say it was deleted or moved, that obviously causes a problem, and I haven't been able to get around it.
My current code is like so:
for (var i = 0; i < receivedIds.length; i++) {
    var filePath = './public/pdf/letter-pdfs/' + receivedIds[i] + '.pdf';
    console.log(filePath);
    try {
        pdfStream = fs.createReadStream(filePath);
        archive.append(pdfStream, { name: receivedIds[i] + '.pdf' });
    } catch (e) {
        console.error(e);
    }
}
Every time I try to wrap the append in a stream event like so:
pdfStream.on('readable', function() {
    archive.append(pdfStream, { name: receivedIds[i] + '.pdf' });
});
The error is caught but the Archiver just outputs some empty file, even though some of the files do exist. How can I update this to append the files that do exist, and simply ignore the ones that don't?
Simply make sure that the file exists before reading it into the archive:
for (var i = 0; i < receivedIds.length; i++) {
    var filePath = './public/pdf/letter-pdfs/' + receivedIds[i] + '.pdf';
    console.log(filePath);
    try {
        // will throw if it does not exist
        // alternatively, use fs.statSync or an async version of the two
        fs.accessSync(filePath);
        pdfStream = fs.createReadStream(filePath);
        archive.append(pdfStream, { name: receivedIds[i] + '.pdf' });
    } catch (e) {
        console.error(e);
    }
}
This way the lack of a file is detected before the archiver attempts to read from an invalid source.
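The same idea also works asynchronously with fs.promises; a sketch that keeps the question's archive and receivedIds names (the helper name appendExistingPdfs is made up):
async function appendExistingPdfs() {
    for (const id of receivedIds) {
        const filePath = './public/pdf/letter-pdfs/' + id + '.pdf';
        try {
            // rejects if the file does not exist or is not readable
            await fs.promises.access(filePath);
            archive.append(fs.createReadStream(filePath), { name: id + '.pdf' });
        } catch (e) {
            console.error('skipping missing file:', filePath);
        }
    }
    // archive.finalize() would follow here, as in the surrounding code
}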
