adm-zip not adding all files - node.js

I'm noticing a strange behavior while using this library. I'm trying to compress multiple EML files; to do so, I first convert them to buffers and add them to the adm-zip instance using the addFile() method. Here's my code:
const zip = new AdmZip();

assetBodies.forEach((body) => {
  // emlData to buffer
  let emlBuffer = Buffer.from(body);
  zip.addFile(`${new Date().getTime()}.eml`, emlBuffer);
});

zip.getEntries().forEach((entry) => {
  console.log("entry name", entry.entryName);
});

const willSendthis = zip.toBuffer();
The problem is that sometimes it compresses all the files and sometimes it doesn't.
For example, I received 5 items in the assetBodies array, but when I log the entries of the zip file I only see 1 or 2, sometimes 5.
Am I missing something, or is there an issue with the library?
EDIT:
It's worth mentioning that some of the files are quite large in terms of text, so I wonder if that could be the issue.
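One thing worth checking (an assumption on my part; nothing above confirms it): new Date().getTime() only has millisecond resolution, so several buffers added inside the same forEach pass can receive identical entry names, and entries that share a name may end up as a single entry in the archive. A minimal sketch that makes each name unique by mixing in the array index:
const zip = new AdmZip();

assetBodies.forEach((body, index) => {
  const emlBuffer = Buffer.from(body);
  // include the index so two buffers added in the same millisecond
  // cannot receive the same entry name
  zip.addFile(`${new Date().getTime()}-${index}.eml`, emlBuffer);
});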

Related

Node.js fs.writeFile() not creating new files?

I need to create many .json files for the system I am trying to develop. To do this, I ran a for loop over the file names I needed, then used fs.writeFileSync('filename.json', [data]).
However, when trying to open these later, and when I try to find them in the project folder, I cannot find them anywhere.
I have tried writing to a less complex file name that should have appeared in the same directory as my script, but that was fruitless as well. To my understanding, even if my file name wasn't what I expected it to be, I should get at least something, somewhere; however, I end up with nothing changed.
My current code looks like this:
function addEmptyDeparture(date) {
  fs.readFileSync(
    __dirname + '/reservations/templates/wkend_dep_template.json',
    (err, data) => {
      if (err) throw err
      fs.writeFileSync(
        getDepartureFileName(date),
        data
      )
    }
  )
}

function getDepartureFileName(date) {
  return __dirname + '/reservations/' +
    date.getFullYear() +
    '/departing/' +
    (date.getMonth() + 1).toString().padStart(2, "0") +
    date.getDate().toString().padStart(2, "0") +
    '.json'
}
Where data is the JSON object returned from fs.readFileSync() and is immediately written into fs.writeFileSync(). I don't think I need to stringify this, since it's already a JSON object, but I may be wrong.
The only reason I think it's not working at all (as opposed to simply not showing up in my project) is that, in a later part of the code, we have this:
fs.readFileSync(
  getDepartureFileName(date)
)
  .toString()
which is where I get an error for not having a file by that name.
It is also worth noting that date is a valid date object, as I was able to test that part in a fiddle.
Is there something I'm misunderstanding in the effects of fs.writeFile(), or is this not the best way to write .json files for use on a server?
You probably are forgetting to stringify the data:
fs.writeFileSync('x.json', JSON.stringify({id: 1}))
I have tried to create a similar case using a demo with writeFileSync(), creating different files and adding JSON data to them using a for loop. In my case it works; each time a new file is created. Here is my GitHub for the same:
var fs = require('fs');

// Stringify a small object and write it to a new
// .json file on each iteration of the loop
for (let i = 0; i < 4; i++) {
  var readMe = JSON.stringify({ "data": i });
  fs.writeFileSync('writeMe' + i + '.json', readMe, "utf8");
}
Let me know if this is what you have been trying at your end.

Jest - loading text files for string assertions

I am working on a text generator and I would like to compare the generated string with text stored in sample files. The files have indentation on some lines, and it is very cumbersome to construct these strings in TS/JS.
Is there a simple way to load text from a folder relative to the current test, or even the project root, in Jest?
Try this to read your txt file in the Jest test, then compare with it:
const fs = require("fs");
const path = require("path");

// readFileSync is synchronous and takes no callback;
// it simply returns the file contents as a string
const file = path.join(__dirname, "bla.txt");
const fdr = fs.readFileSync(file, "utf8");

expect(string).toBe(fdr);
Besides simply loading the text from a file, as #avshalom showed, in Jest you can also use snapshots to compare your generator output with files.
It's as simple as
it('renders correctly', () => {
  const text = myGenerator.generate({...});
  expect(text).toMatchSnapshot();
});
On the first run the snapshot files will be written by Jest. (You then usually check in those snapshot files.) As far as I know you won't have much control over the location of the snapshot files or how to structure multiple files (other than splitting your tests across multiple test files).
If you want more control over how the files are stored and split, check out jest-file-snapshot.
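For reference, here is a minimal sketch of how jest-file-snapshot is typically wired up (the generator call and the snapshot path are hypothetical; check the package's README for the exact API):
const path = require('path');
const { toMatchFile } = require('jest-file-snapshot');

// register the custom matcher once, e.g. in a setup file
expect.extend({ toMatchFile });

it('matches the reference file', () => {
  const text = myGenerator.generate({ /* ... */ });
  // the snapshot location is a choice of yours; this path is hypothetical
  expect(text).toMatchFile(path.join(__dirname, '__file_snapshots__', 'generated.txt'));
});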

gunzip partials read from read-stream

I use Node.JS to fetch files from my S3 bucket.
The files over there are gzipped (gz).
I know that the contents of each file is composed by lines, where each line is a JSON of some record that failed to be put on Kinesis.
Each file consists of ~12K such records, and I would like to be able to process the records while the file is being downloaded.
If the file were not gzipped, that could easily be done using streams and the readline module.
So, the only thing stopping me from doing this is the gunzip step, which, to my knowledge, needs to be executed on the whole file.
Is there any way of gunzipping part of a file?
Thanks.
EDIT 1: (bad example)
Trying what #Mark Adler suggested:
const fileStream = s3.getObject(params).createReadStream();
const lineReader = readline.createInterface({input: fileStream});

lineReader.on('line', line => {
  const gunzipped = zlib.gunzipSync(line);
  console.log(gunzipped);
});
I get the following error:
Error: incorrect header check
at Zlib._handle.onerror (zlib.js:363:17)
Yes. node.js has a complete interface to zlib, which allows you to decompress as much of a gzip file at a time as you like.
A working example that solves the above problem
The following solves the problem in the above code:
const fileStream = s3.getObject(params).createReadStream().pipe(zlib.createGunzip());
const lineReader = readline.createInterface({input: fileStream});

lineReader.on('line', gunzippedLine => {
  console.log(gunzippedLine);
});
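Since the question notes that every line is a JSON record, a natural follow-up (a sketch that assumes each decompressed line really is valid JSON, and where processRecord is a hypothetical handler) is to parse the records as they arrive:
lineReader.on('line', gunzippedLine => {
  // each line is expected to be one JSON record, per the question;
  // JSON.parse will throw if a line is not valid JSON
  const record = JSON.parse(gunzippedLine);
  processRecord(record); // hypothetical handler for one record
});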

Get md5 checksums of entries in zip using adm-zip

I am trying to get MD5 checksums for all files in a ZIP file. I am currently using adm-zip for this because I read that it can read zip contents into memory without having to extract files to disk. But I am failing to read the data of entries in a ZIP file. My code goes as follows:
var zip = new AdmZip(path);

zip.getEntries()
  .map(entry => { console.log(entry.entryName, entry.data); });
The entryName can be read, so opening and reading the zip works. But data is always undefined. I read that data is not really the method to read the data of an entry, but I am not sure how to actually read it.
To read the data of an entry, you must call the getData() method of the entry object, which returns a Buffer. Here is the updated code snippet, which works on my end:
var zip = new AdmZip(path);

zip.getEntries().map(entry => {
  const md5Hash = crypto.createHash('md5').update(entry.getData()).digest('hex');
  console.log(md5Hash);
});
I used the basic crypto module to produce the md5 hash (in hex format). Don't forget to add it to the list of your requires at the top of your file: const crypto = require('crypto');
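If you need the checksums keyed by file name rather than just logged, a small variation on the snippet above (a sketch built on the same adm-zip and crypto calls; the isDirectory check is there to skip folder entries) collects them into an object:
const crypto = require('crypto');
const AdmZip = require('adm-zip');

const zip = new AdmZip(path);
const checksums = {};

zip.getEntries().forEach(entry => {
  // directory entries have no file data to hash
  if (entry.isDirectory) return;
  checksums[entry.entryName] = crypto
    .createHash('md5')
    .update(entry.getData())
    .digest('hex');
});

console.log(checksums);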

Creating multiple files from Vinyl stream with Through2

I've been trying to figure this out by myself, but have had no success yet. I don't even know how to start researching this (though I've tried some Google searches already, to no avail), so I decided to ask this question here.
Is it possible to return multiple Vinyl files from a Through2 Object Stream?
My use case is this: I receive an HTML file via stream. I want to isolate two different sections of the file (using jQuery) and return them in two separate HTML files. I can do it with a single section (and a single resulting HTML file), but I have absolutely no idea how I would generate two different files.
Can anyone give me a hand here?
Thanks in advance.
The basic approach is something like this:
1. Create as many output files from your input file as you need using the clone() function.
2. Modify the .contents property of each file depending on what you want to do. Don't forget that this is a Buffer, not a String.
3. Modify the .path property of each file so your files don't overwrite each other. This is an absolute path, so use something like path.parse() and path.join() to make things easier.
4. Call this.push() from within the through2 transform function for every file you have created.
Here's a quick example that splits a file test.txt into two equally large files test1.txt and test2.txt:
var gulp = require('gulp');
var through = require('through2').obj;
var path = require('path');

gulp.task('default', function () {
  return gulp.src('test.txt')
    .pipe(through(function (file, enc, cb) {
      var c = file.contents.toString();
      var f = path.parse(file.path);

      var file1 = file.clone();
      var file2 = file.clone();

      file1.contents = Buffer.from(c.substring(0, c.length / 2));
      file2.contents = Buffer.from(c.substring(c.length / 2));

      file1.path = path.join(f.dir, f.name + '1' + f.ext);
      file2.path = path.join(f.dir, f.name + '2' + f.ext);

      this.push(file1);
      this.push(file2);
      cb();
    }))
    .pipe(gulp.dest('out'));
});
