express-fileupload remove BOM from csv - node.js

I am trying to remove the BOM from a CSV as express uploads it. To test this I upload a CSV that I know has a BOM, uploading it normally with express-fileupload.
import fileUpload from 'express-fileupload'
import { resolve } from 'path'
import { writeFile, readFile } from 'fs/promises'
import stripBom from 'strip-bom'
const file = req.files.myfile
file.name = `keywords.csv`
const file_location = resolve('uploads', file.name)
await file.mv(file_location)
When moved to the uploads folder it keeps the BOM. To remove it, I read the file back after it has been moved, run it through stripBom, and write it out again. This works, but it feels really convoluted.
...
await file.mv(file_location)
const csv = await readFile(file_location, 'utf8')
await writeFile(file_location, stripBom(csv))
Is there a way to remove the BOM as the file is moved?
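One thing that might simplify this (a sketch, not tested against your setup): unless the useTempFiles option is enabled, express-fileupload keeps the upload in memory as a Buffer on file.data, so the BOM can be stripped before anything touches disk, replacing mv + read + rewrite with a single write:

import { resolve } from 'path'
import { writeFile } from 'fs/promises'
import stripBom from 'strip-bom'

const file = req.files.myfile

// Assumes the default config (no useTempFiles), so file.data is the
// uploaded content as a Buffer; strip the BOM and write once.
const file_location = resolve('uploads', 'keywords.csv')
await writeFile(file_location, stripBom(file.data.toString('utf8')))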

Related

extract 7zip files in Nodejs

I am trying to extract .7z files which are password protected.
A particular folder contains some .7z files. First I have to extract them all into the same directory, then I have to do some other work with the extracted files.
const path = require('path')
const fs = require('fs')
import { extractFull } from 'node-7z-forall';

const dirpath = path.join('C:/MyFolder/DATA')

fs.readdir(dirpath, function(err, files) {
  const txtFiles = files.filter(el => path.extname(el) === '.7z')
  console.log(txtFiles);
  extractFull(txtFiles, 'C:/MyFolder/DATA', { p: 'admin123' } /* 7z options/switches */)
    .progress(function (files) {
      console.log('Some files are extracted: %s', files);
    });
})
I am using the node-7z-forall module, but it only works when I change the file extension from .js to .mjs. With the .mjs extension the files extract smoothly, but with .js it does not work.
error:
import { extractFull } from 'node-7z-forall';
^^^^^^
SyntaxError: Cannot use import statement outside a module
How do I handle this error? Is it possible to make this work in a .js file instead of a .mjs file?
I am new to Node.js. Please help!
The reason it errors is that a ".js" extension indicates a CommonJS file, which uses require(), while a ".mjs" extension indicates an ES module, which uses the import syntax.
That is where the error comes from: you are trying to use import in a non-module.
You can avoid the error by simply importing the package using require():
const { extractFull } = require('node-7z-forall');
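For completeness, here is a sketch of the whole snippet written as plain CommonJS so it can stay a .js file. It assumes node-7z-forall's extractFull takes a single archive path, a destination, and the switches object, so each archive is extracted in its own call:

const path = require('path');
const fs = require('fs');
const { extractFull } = require('node-7z-forall');

const dirpath = path.join('C:/MyFolder/DATA');

fs.readdir(dirpath, function (err, files) {
  if (err) throw err;
  const archives = files.filter(el => path.extname(el) === '.7z');
  console.log(archives);
  // Extract each password-protected archive into the same directory.
  for (const archive of archives) {
    extractFull(path.join(dirpath, archive), dirpath, { p: 'admin123' })
      .progress(function (extracted) {
        console.log('Some files are extracted: %s', extracted);
      });
  }
});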

File download freezes and does not finish on linux

I'm using "Node.js", "express" and "SheetJS" so that an endpoint that saves the data (from an array of objects) in an XLSX file and returns the url of the file to be downloaded by another endpoint as a static file.
import crypto from 'crypto';
import * as XLSX from 'xlsx';
import path from 'path';
import * as fs from 'fs';
...
const exportToExcelFile = async (data) => {
  ...
  const worksheet = XLSX.utils.json_to_sheet(data);
  const workbook = XLSX.utils.book_new();
  XLSX.utils.book_append_sheet(workbook, worksheet, 'Data');
  const buf = XLSX.write(workbook, { bookType: 'xlsx', type: 'buffer' });
  fs.writeFileSync(resolvedFilename, buf);
  return `${process.env.APP_URL}/public/downloads/${date}/${filename}`;
}
On Windows, the file generation and download work perfectly. When the application runs on the Linux server, however, the file is generated but the download freezes and never finishes.
If I change the 'buffer' type to 'binary', the download works on Windows and Linux, but in both cases Excel reports a corrupted file when trying to open it.
const buf = XLSX.write(workbook, { bookType: 'xlsx', type: 'binary' });
Any ideas or suggestions of what it could be?
Does it help if you close the file after writing?
const fs = require("fs/promises");
(async function() {
var file = await fs.open(resolvedFilename, "w");
await file.write(buf);
await file.close();
})();
It works just fine, you can check your code live here: https://glitch.com/edit/#!/pentagonal-sepia-nutmeg
All I did was copy/paste your code into Glitch to see if it works, and it does.
So you should check your browser's network tab and see if it reports any errors. Also, take advantage of tools such as curl with the -v option to download the file; it will print detailed information about the download request you make.
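For example (hypothetical date and file name, with $APP_URL standing in for the base URL the code builds the link from):

curl -v "$APP_URL/public/downloads/2023-01-01/report.xlsx" -o test.xlsx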

How to generate zip and put files into zip using stream in Node js

Currently, I am trying to make a zip file (or any other compressed format) containing a few files that I want to put into it.
I thought it would work with the adm-zip module,
but I found out that adm-zip puts files into the zip by buffering them in memory.
This takes a lot of memory when the files are very large,
and as a result my server stopped working.
Below is what I've done.
var zip = new AdmZip();
zip.addLocalFile('../largeFile', 'dir1'); //put largeFile into /dir1 of zip
zip.addLocalFile('../largeFile2', 'dir1');
zip.addLocalFile('../largeFile3', 'dir1/dir2');
zip.writeZip(/*target file name*/ `./${threadId}.zip`);
Is there any solution to solve this situation?
To solve the memory issue, the best practice is to use streams rather than loading whole files into memory. For example:
import {
  createReadStream,
  createWriteStream
} from 'fs'
import { createGzip } from 'zlib'

const [, , src, dest] = process.argv

const srcStream = createReadStream(src)
const gzipStream = createGzip()
const destStream = createWriteStream(dest)

srcStream
  .pipe(gzipStream)
  .pipe(destStream)
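Note that gzip on its own only compresses a single stream into a .gz file, so the above produces one compressed file rather than an archive with dir1/dir2 entries. If you need a real multi-file .zip built from streams, one common option is the archiver package (a sketch, assuming archiver is installed):

import { createWriteStream } from 'fs'
import archiver from 'archiver'

const output = createWriteStream(`./${threadId}.zip`)
const archive = archiver('zip', { zlib: { level: 9 } })

archive.on('error', err => { throw err })
archive.pipe(output)

// Files are streamed from disk into the archive, not buffered whole.
archive.file('../largeFile', { name: 'dir1/largeFile' })
archive.file('../largeFile2', { name: 'dir1/largeFile2' })
archive.file('../largeFile3', { name: 'dir1/dir2/largeFile3' })

await archive.finalize()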

How to write tar stream entry with unknown size?

Here's the gist of what I'm trying to do:
import * as Path from 'path'
import {exportTableDataToFile} from '../struct'
import * as Tar from 'tar-stream'
import * as Zlib from 'zlib'
import * as FileSys from 'fs'

async function execute(opts, args) {
  const pack = Tar.pack()
  pack.pipe(Zlib.createGzip({level: Zlib.constants.Z_BEST_COMPRESSION})).pipe(FileSys.createWriteStream(opts.file))

  const tblDataFile = Path.join(db.name, `${tblName}.csv`)
  const dataStream = pack.entry({name: tblDataFile}, err => {
    if (err) throw err;
  })

  await exportTableDataToFile(conn, db.name, tblName, dataStream)
}
Where exportTableDataToFile is writing a CSV into dataStream line-by-line.
Since I'm generating that CSV on the fly from some database records, I don't know how big it's going to be.
I also don't really want buffer the entire CSV into memory if I can help it.
The above is throwing "size mismatch" because I didn't specify the size in pack.entry(...)
Is there any way I can stream to a .tar.gz in Node.js without knowing the size?
Instead of using a module, if you just want to create a CSV from a DB result of unknown size, you can do something like the following:
const fs = require("fs");
const csvFile = fs.createWriteStream("db.csv");
//column headers of your csv, remove if not needed
csvFile.write("column1, column2, column3, column4");
while(true){
const result=db.find(table);//db call -> replace it with your db fetch call
//Here I am expecting column1Value ... to be the field in my DB
for(const elem of result){
csvFile.write(`${elem.column1Value}, ${elem.column2Value}, ${elem.column3Value}, ${elem.column4Value}`);
}
if(!result.length){
break
}
//Need to handle pagination
}
You can replace the DB call as per your own syntax.
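If you do want to keep tar-stream, one workaround for the size-mismatch error (a sketch, assuming exportTableDataToFile finishes the stream it is given before resolving) is to write the CSV to a temporary file first, stat it to learn the size, and only then create the tar entry:

import {stat, unlink} from 'fs/promises'

// Hypothetical temp path next to the final entry name.
const tmpFile = `${tblName}.csv.tmp`
await exportTableDataToFile(conn, db.name, tblName, FileSys.createWriteStream(tmpFile))

// Now the size is known, so the entry header can be filled in.
const {size} = await stat(tmpFile)
const entry = pack.entry({name: tblDataFile, size}, err => {
  if (err) throw err;
})

// Stream the temp file into the entry, then clean it up.
FileSys.createReadStream(tmpFile).pipe(entry)
entry.on('finish', () => unlink(tmpFile))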

Compressing text file containing new line characters in Node.js results in corrupt gzip file

The trivial code example below, copied directly from https://nodejs.org/api/zlib.html#zlib_zlib, results in a corrupt gzip file if the input text file contains newline characters!
When unzipping the resulting file from the terminal using unzip input.txt.gz I get the following error (unzipping by double-clicking the file in Finder yields a similar error):
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
What am I missing? Surely you must be able to compress text files that contain newline characters?!
I am using macOS 10.15.3 with Node 12.14.1.
input.txt (I tried adding a trailing newline character, but it does not make a difference):
hello
world
Node.js code:
const { createGzip } = require('zlib');
const { pipeline } = require('stream');
const {
  createReadStream,
  createWriteStream
} = require('fs');

const gzip = createGzip();
const source = createReadStream('input.txt');
const destination = createWriteStream('input.txt.gz');

pipeline(source, gzip, destination, (err) => {
  if (err) {
    console.error('An error occurred:', err);
    process.exitCode = 1;
  }
});
Gzip is not ZIP. Gzip only compresses a single stream; ZIP is an archive format that packs multiple files into one archive, and each file may be compressed with a different method or not at all, too.
To decompress something you've compressed with Gzip, use the gunzip tool.
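A quick sanity check in Node itself (a sketch mirroring the compression code above): pipe the .gz back through createGunzip. If this round-trips without an error, the archive is fine and the earlier message was just unzip complaining about a format it does not handle:

const { createGunzip } = require('zlib');
const { pipeline } = require('stream');
const { createReadStream, createWriteStream } = require('fs');

pipeline(
  createReadStream('input.txt.gz'),
  createGunzip(),
  createWriteStream('input.decompressed.txt'),
  (err) => {
    if (err) {
      console.error('An error occurred:', err);
      process.exitCode = 1;
    }
  }
);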
