I want to use the following library to compress images
https://github.com/imagemin/imagemin
The problem is that when the user uploads a file via a form, how do I plug the file details from the form into the imagemin plugin? For example, if the file form field is called example-image, how do I feed that form field to imagemin so that it can compress the image?
I tried:
req is the Express/Node.js request object:
var Busboy = require('busboy');
var Imagemin = require('imagemin');
var imageminMozjpeg = require('imagemin-mozjpeg');
var imageminPngquant = require('imagemin-pngquant');

var filebus = new Busboy({ headers: req.headers }),
    promises = [];

filebus.on('file', function (fieldname, file, filename, encoding, contentType) {
  var fileBuffer = Buffer.alloc(0), // Buffer.alloc replaces the deprecated new Buffer(0)
      s3ImgPath;

  if (file) {
    file.on('data', function (d) {
      fileBuffer = Buffer.concat([fileBuffer, d]);
    }).on('end', function () {
      Imagemin.buffer(fileBuffer, {
        plugins: [
          imageminMozjpeg(),
          imageminPngquant({ quality: '80' })
        ]
      }).then(function (data) {
        console.log(data[0]);
        if (s3ImgPath) {
          promises.push(this.pushImageToS3(s3ImgPath, data[0].data, contentType));
        }
      }.bind(this)); // bind so this.pushImageToS3 is still reachable inside the promise
    }.bind(this));
  }
});
But the problem is that I'd rather have a buffer of the file that I can upload to S3. I don't want to write the files out to a build/images folder. I want to get a buffer for the file, compress it, and upload that buffer to S3. How can I use imagemin to get a buffer of the file uploaded via the HTML form and upload that to S3?
The documentation for the output parameter shows that it is optional (though admittedly the function declaration did not mark it as such, which might be confusing).
output
Type: string
Set the destination folder to where your files will be written. If no
destination is specified no files will be written.
Therefore, you can opt out of writing the files to storage and just use the output in memory:
imagemin([file.path], {
  plugins: [
    imageminMozjpeg(),
    imageminPngquant({ quality: '65-80' })
  ]
}).then(files => {
  // each entry in files has a data property holding the compressed buffer,
  // which you can upload to S3
});
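Since you already accumulate the upload into fileBuffer in your Busboy handler, you can also skip file paths entirely and use imagemin.buffer, which resolves with a single compressed Buffer. A minimal sketch, assuming the AWS SDK v2 S3 client, that fileBuffer, s3ImgPath and contentType come from your 'file' handler above, and a hypothetical bucket name:
const imagemin = require('imagemin');
const imageminMozjpeg = require('imagemin-mozjpeg');
const imageminPngquant = require('imagemin-pngquant');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

imagemin.buffer(fileBuffer, {
  plugins: [
    imageminMozjpeg(),
    imageminPngquant({ quality: '65-80' })
  ]
}).then(compressedBuffer => {
  // compressedBuffer is a Buffer, so it can go straight into the S3 upload body
  return s3.upload({
    Bucket: 'my-bucket',        // hypothetical bucket name
    Key: s3ImgPath,
    Body: compressedBuffer,
    ContentType: contentType
  }).promise();
});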
Related
I call an API which returns the buffer data of a .zip file. I want to read the data of the files that reside inside the .zip using its buffer data, without saving the .zip file to disk. Is it possible?
Try the zlib library (it's a core Node.js library; docs: https://nodejs.org/api/zlib.html#zlib). Here is an example taken from the documentation:
const { unzip } = require('node:zlib');

const buffer = Buffer.from('eJzT0yMAAGTvBe8=', 'base64');

unzip(buffer, (err, buffer) => {
  if (err) {
    console.error('An error occurred:', err);
    process.exitCode = 1;
  }
  console.log(buffer.toString());
});
I have some xlsx files stored in an s3 bucket that I am trying to update. To do this, I am creating an array of rows from a series of json files (one at a time). I get the rows from each json file as follows.
let worksheetRows = [];
for (let i = 0; i < json.length; i++) {
  let data = json;
  worksheetRows.push({ id: data[i]['id'], url: data[i].url.toString(), name: data[i].name });
}
I then download the existing xlsx from s3 and pass the stream (existingFileStream) to the function below.
async function loadWorkbookAndUpdate(existingFileStream, stream, bucket, worksheetRows, s3ExcelKey) {
  const workbook = new ExcelJS.Workbook();
  await workbook.xlsx.read(existingFileStream)
    .then(async () => {
      let worksheetUpdated = workbook.getWorksheet('My Sheet');
      for (const worksheetRow of worksheetRows) {
        // Add rows under the headers
        worksheetUpdated.addRow([
          worksheetRow.id,
          worksheetRow.url,
          worksheetRow.name,
        ]);
      }
    });
  await workbook.xlsx.write(stream)
    .then(async () => {
      await s3.upload({
        Key: s3ExcelKey,
        Bucket: bucket,
        Body: stream,
        ServerSideEncryption: "AES256",
        ContentType: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
      }).promise();
    })
    .catch(function (e) {
      console.log(e.message, 'Sorry. The updated master list file (xlsx) for the default images could not be uploaded to s3.');
    });
}
I then try to add the worksheetRows to the existing workbook and upload the updated stream to s3 to be saved as an updated xlsx file. Ninety-five percent of the time it works fine, but for a few of the json files I get this error. I verified that the json in the offending files is valid, and the xlsx that existingFileStream is generated from does not appear to be corrupt.
I am not using jszip, unless ExcelJS has some dependencies on it that I am not aware of...
Invoke Error {"errorType":"Error","errorMessage":"Corrupted zip or bug: unexpected signature (\x9B\x4C\xF3\x06, expected \x50\x4B\x03\x04)","stack":["Error: Corrupted zip or bug: unexpected signature (\x9B\x4C\xF3\x06, expected \x50\x4B\x03\x04)"," at ZipEntries.checkSignature (/var/task/node_modules/jszip/lib/zipEntries.js:28:19)"," at ZipEntries.readLocalFiles (/var/task/node_modules/jszip/lib/zipEntries.js:121:18)"," at ZipEntries.load (/var/task/node_modules/jszip/lib/zipEntries.js:258:14)"," at /var/task/node_modules/jszip/lib/load.js:48:24"," at processTicksAndRejections (internal/process/task_queues.js:97:5)"," at async XLSX.load (/var/task/node_modules/exceljs/lib/xlsx/xlsx.js:279:17)"," at async loadWorkbookAndUpdate (/var/task/app.js:176:2)","
Has anyone else run into this issue or have any ideas as to how I can debug this and figure out what's going on?
This turned out to be some random discrepancies in property names in a few of my json files, which caused my code to add an empty array of rows to my worksheet. The error was a little confusing, because I didn't initially realize that ExcelJS has a dependency on jszip, and the file I was populating the rows from was valid json. The code above did work once I fixed the property names.
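If anyone else hits the same thing, a cheap way to catch it earlier is to validate the rows before touching the workbook. A minimal sketch, assuming each json entry is expected to carry id, url and name properties (the property names here just mirror the question and may differ in your files):
let worksheetRows = [];
for (let i = 0; i < json.length; i++) {
  const entry = json[i];
  // skip (or log) entries whose property names don't match what the sheet expects
  if (!entry || entry.id === undefined || entry.url === undefined || entry.name === undefined) {
    console.warn('Skipping malformed json entry at index', i);
    continue;
  }
  worksheetRows.push({ id: entry.id, url: entry.url.toString(), name: entry.name });
}

if (worksheetRows.length === 0) {
  throw new Error('No valid rows found in json file; refusing to rewrite the workbook.');
}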
I have a Node application that contains several downloadable links (when you click on a link, a pdf file is downloaded), and these links are dynamically created/populated. I want to implement a feature where all the files behind these links can be downloaded in one go. I presume for this I will need to create a zip file from all these links - would anyone know how to go about this?
You could use the fs and archiver modules:
var fs = require('fs');
var archiver = require('archiver');

var output = fs.createWriteStream('./example.zip');
var archive = archiver('zip', {
  zlib: { level: 9 } // sets the compression level
});

archive.on('error', function (err) {
  throw err;
});

// pipe archive data to the output file
archive.pipe(output);

// append files
archive.file('/path/to/file0.txt', { name: 'file0-or-change-this-whatever.txt' });
archive.file('/path/to/README.md', { name: 'foobar.md' });

// finalize the archive (no more entries can be appended after this)
archive.finalize();
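Since your links are dynamically populated, you may not have the PDFs on disk at all. archiver can also append readable streams, so one option is to fetch each link and append the response stream directly. A rough sketch, assuming the files are served over HTTPS and that links is a hypothetical array of { url, name } objects you already build when rendering the page:
var https = require('https');

// hypothetical list of the dynamically created links
var links = [
  { url: 'https://example.com/report-1.pdf', name: 'report-1.pdf' },
  { url: 'https://example.com/report-2.pdf', name: 'report-2.pdf' }
];

var pending = links.length;
links.forEach(function (link) {
  https.get(link.url, function (response) {
    // append the response stream under the name it should have inside the zip
    archive.append(response, { name: link.name });
    if (--pending === 0) {
      archive.finalize(); // only finalize once every stream has been appended
    }
  });
});
In that case, drop the archive.file(...) and archive.finalize() calls from the snippet above, since finalize has to wait until every response has been appended.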
I want to be able to extract JPEGs from a Uint8Array containing the data for an mpeg or avi video.
The ffmpeg module has the function fnExtractFrameToJPG, but it only accepts a filename pointing to the video file. I want to be able to extract the frames from the Uint8Array directly.
One way to do it is to write the Uint8Array to a tmp file and then point ffmpeg at the tmp file to extract the frames:
const fs = require("fs");
const tmp = require("tmp");
const ffmpeg = require("ffmpeg");

function convert_images(video_bytes_array) {
  // write the Uint8Array to a temporary .avi file
  var tmpobj = tmp.fileSync({ postfix: '.avi' });
  fs.writeFileSync(tmpobj.name, video_bytes_array);
  try {
    var process = new ffmpeg(tmpobj.name);
    console.log(tmpobj.name);
    process.then(function (video) {
      // Callback mode
      video.fnExtractFrameToJPG('./', { // make sure you define the directory where you want to save the images
        frame_rate: 1,
        number: 10,
        file_name: 'my_frame_%t_%s'
      }, function (error, files) {
        if (!error)
          tmpobj.removeCallback(); // remove the tmp file once the frames are extracted
      });
    });
  } catch (e) {
    console.log(e.code);
    console.log(e.msg);
  }
}
Another possibility is to use OpenCV after you save the Uint8Array to a tmp file. Yet another option is to use streams with fluent-ffmpeg, which would not require tmp files at all.
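For completeness, a rough sketch of the stream-based route with fluent-ffmpeg (this assumes ffmpeg is installed on the machine and that the container can be demuxed from a pipe, which works for mpeg but often not for avi, whose index sits at the end of the file):
const ffmpeg = require('fluent-ffmpeg');
const { Readable } = require('stream');

function convert_images_from_buffer(video_bytes_array) {
  // wrap the Uint8Array in a readable stream so ffmpeg can read it from stdin
  const videoStream = Readable.from(Buffer.from(video_bytes_array));

  ffmpeg(videoStream)
    .outputOptions(['-vf', 'fps=1'])  // one frame per second
    .output('./my_frame_%03d.jpg')    // ffmpeg image2 sequence pattern
    .on('error', err => console.error(err.message))
    .on('end', () => console.log('frames extracted'))
    .run();
}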
How can we check the uploaded file's extension in Sails.js?
I tried skipper and multer but had no result.
Any suggestions?
You should use the saveAs option for each file before saving.
var md5 = require('md5');

module.exports = {
  testUpload: function (req, res) {
    // setting allowed file types
    var allowedTypes = ['image/jpeg', 'image/png'];
    // skipper's default upload directory is .tmp/uploads/
    var allowedDir = "../../assets/images";

    // do not define dirname; use the default path
    req.file("uploadFiles").upload({
      saveAs: function (file, cb) {
        var d = new Date();
        var extension = file.filename.split('.').pop();
        // generating a unique filename with the extension
        var uuid = md5(d.getMilliseconds()) + "." + extension;
        // separate allowed and disallowed file types
        if (allowedTypes.indexOf(file.headers['content-type']) === -1) {
          // disallowed files go to the default upload path
          cb(null, uuid);
        } else {
          // allowed files go to the allowed directory
          cb(null, allowedDir + "/" + uuid);
        }
      }
    }, function whenDone(err, files) {
      return res.json({
        files: files,
        err: err
      });
    });
  }
};
Just get the uploaded files array and check the last chunk of the string after the dot.
req.file('file').upload({
  maxBytes: 2000000,
  dirname: 'uploadFolder'
}, function (error, files) {
  if (error) return sails.log.error(error);

  // You have the files array, so you can do this
  files[0].fd.split('.').pop(); // You get the extension
});
What is going on here? When the upload is finished you get an array of files with their filenames. You can read the fd from that array to see where each file is located (its full path).
The last step is splitting that string by dots and taking the last item with the pop() method.
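If you then want to reject files by extension (which is what the question is ultimately after), you can compare that last chunk against a whitelist. A small sketch, with the allowed extensions chosen here purely as an example:
var allowedExtensions = ['jpg', 'jpeg', 'png', 'pdf'];
var extension = files[0].fd.split('.').pop().toLowerCase();

if (allowedExtensions.indexOf(extension) === -1) {
  return res.badRequest('File type "' + extension + '" is not allowed.');
}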