Cropping larger images to multiple sizes in node.js

I am creating an image uploading system (file sizes are usually >20 MB and <50 MB), and I want to crop the images to various sizes (for viewing in mobile, web, and desktop applications). All images are stored in AWS S3.
Here is a snapshot of the crop sizes:
[{
    width: 200,
    height: 200,
    type: "small",
    platform: "web"
},
{
    width: 300,
    height: 400,
    type: "small",
    platform: "mobile-android"
}
....
....
]
Here is what I plan to do:
1. First upload the image to S3.
2. Run all the crop operations as an async task.
upload: function (req, res) {
    // various crop sizes
    var cropSizes = [];

    // upload image to S3
    uploadImageToS3(req.file, function (err, result) {
        if (!err) {
            // create crops
            cropImage({
                'cropsizes': cropSizes,
                'file': req.file
            }, function (err, result) {
                console.log('all crops completed', result);
            });
            res.send('running crops in background');
        }
    });
}
But is this the correct approach? Does anyone have a better alternative?

Since you are already using S3, I would recommend trying AWS Lambda to resize your images and add them back to the S3 bucket at the new sizes.
There is a detailed explanation at this link: https://aws.amazon.com/blogs/compute/resize-images-on-the-fly-with-amazon-s3-aws-lambda-and-amazon-api-gateway/
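If you go the Lambda route, a minimal sketch of such a function could look like the following. This is an illustration rather than the code from the linked post: the resized/ key prefix, the hard-coded crop list, and the use of sharp are all assumptions, and sharp would need to be packaged with the function.

    // Hypothetical Lambda handler: triggered by an S3 "ObjectCreated" event,
    // it downloads the original, produces one resized copy per crop size,
    // and writes each copy back under a "resized/" prefix.
    const AWS = require('aws-sdk');   // bundled with the Node.js Lambda runtime
    const sharp = require('sharp');   // must be packaged with the function

    const s3 = new AWS.S3();

    // Example crop list; in practice this could come from the cropSizes array above
    const cropSizes = [
        { width: 200, height: 200, type: 'small', platform: 'web' },
        { width: 300, height: 400, type: 'small', platform: 'mobile-android' }
    ];

    exports.handler = async (event) => {
        const record = event.Records[0].s3;
        const bucket = record.bucket.name;
        const key = decodeURIComponent(record.object.key.replace(/\+/g, ' '));

        // Download the original image once
        const original = await s3.getObject({ Bucket: bucket, Key: key }).promise();

        // Resize and upload all variants in parallel
        await Promise.all(cropSizes.map(async (size) => {
            const resized = await sharp(original.Body)
                .resize(size.width, size.height, { fit: 'cover' })
                .toBuffer();

            await s3.putObject({
                Bucket: bucket,
                Key: `resized/${size.platform}/${size.type}/${key}`,
                Body: resized,
                ContentType: original.ContentType
            }).promise();
        }));

        return { resized: cropSizes.length };
    };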

Related

Cloudinary upload pdf max width and height

Using Cloudinary, I would like to limit the width and height of uploaded PDFs, just as I do for images.
This is how I upload the file:
const res = await new Promise((resolve, reject) => {
    let cld_upload_stream = cloud.upload_stream(
        {
            folder: process.env.CLOUD_FOLDER,
        },
        function (err, res) {
            if (res) {
                resolve(res);
            } else {
                reject(err);
            }
        }
    );
    streamifier.createReadStream(file.data).pipe(cld_upload_stream);
});
return {
    url: res.url,
    location: res.public_id
}
Are there any options to limit the width and height, that can work on pdf files?
I tried:
{
    responsive_breakpoints: {
        create_derived: true,
        bytes_step: 20000,
        min_width: 200,
        max_width: 1000
    }
}
but it does not seem to work.
The responsive breakpoints feature you mentioned analyses an image and decides which sizes it should be resized to for a responsive design. It balances the problems you can run into when choosing sizes manually: you may create 'too many' images with very similar sizes, or leave large gaps between the byte sizes of the different versions, so more bandwidth than necessary is used for frequently requested files.
There's a web interface that uses that feature and provides examples of what it does: https://www.responsivebreakpoints.com/
This is not related to validating uploaded files or editing the original assets that you upload to Cloudinary. There's no server-side validation available related to the dimensions of an uploaded file, but you could either:
Use an Incoming Transformation to resize the asset before it's saved into your account (see the sketch below): https://cloudinary.com/documentation/transformations_on_upload#incoming_transformations
Use the upload API response after the file is uploaded and validated, and if it's "too big", show an error to your user and delete the file again.
You could also use a webhooks notification to receive the uploaded file metadata: https://cloudinary.com/documentation/notifications
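For the incoming-transformation option, a hedged sketch built on the upload code from the question (the transformation values are illustrative; crop: 'limit' keeps the aspect ratio and only shrinks assets that exceed the bounds):

    // Sketch: apply an incoming transformation so the stored asset never
    // exceeds 1000x1000. Assumes the same `cloud`/`streamifier` setup as above.
    const cld_upload_stream = cloud.upload_stream(
        {
            folder: process.env.CLOUD_FOLDER,
            // Incoming transformation: resize before the asset is stored
            transformation: [{ width: 1000, height: 1000, crop: 'limit' }]
        },
        function (err, res) {
            if (res) {
                console.log('stored at', res.public_id, res.width, res.height);
            } else {
                console.error(err);
            }
        }
    );
    streamifier.createReadStream(file.data).pipe(cld_upload_stream);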

Nuxt convert image to webp before sending to AWS S3

How can I convert images to WEBP format before sending them to AWS S3 in my Nuxt app?
I have a photo upload on my website, and I would like to convert the images from the file input to WEBP format before uploading them to AWS. Unlike in Node.js, where I can import sharp and use it to convert the images to WEBP, that does not work here; I get an error like the one below:
Failed to compile with 4 errors friendly-errors 01:16:19
These dependencies were not found: friendly-errors 01:16:19
friendly-errors 01:16:19
* child_process in ./node_modules/detect-libc/lib/detect-libc.js, ./node_modules/sharp/lib/libvips.js friendly-errors 01:16:19
* fs in ./node_modules/detect-libc/lib/detect-libc.js, ./node_modules/sharp/lib/libvips.js friendly-errors 01:16:19
friendly-errors 01:16:19
To install them, you can run: npm install --save child_process fs
Here is the code where I would like to convert the images:
drop(e) {
    e.preventDefault();
    e.stopPropagation();
    e.target.classList.remove('solid');
    const files = e.dataTransfer.files;
    this.handleFiles(files);
},
onFilePicked(e) {
    e.preventDefault();
    e.stopPropagation();
    const files = e.target.files;
    this.handleFiles(files);
},
saveToBackend(file, result) {
    // compress image
    // save to aws
},
readFiles(file) {
    const reader = new FileReader();
    reader.readAsDataURL(file);
    reader.onload = () => {
        const uploading = this.uploading;
        const result = reader.result;
        uploading.push(result);
        this.uploading = uploading;
        // upload to aws
        this.saveToBackend(file, result);
    };
},
handleFiles(files) {
    const Files = Array.from(files);
    Files.forEach(file => {
        // check if the file is a valid image
        if (file.type.includes('image')) {
            // display the image
            return this.readFiles(file);
        }
        return;
    });
    console.log(Files, "loaded files");
},
and for the sharp plugin
import vue from "vue"
import sharp from "sharp"
vue.use(sharp)
Please, how can I compress the images?
You could use the packages imagemin and imagemin-webp, as answered here: Convert Images to webp with Node
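For reference, a minimal server-side sketch with those two packages (the input glob and output directory are placeholders; this has to run in Node.js, not in the browser bundle):

    // Sketch: convert all JPG/PNG files in ./uploads to WEBP in ./uploads/webp.
    // Must run server-side (Node.js), e.g. in an API route or a separate service.
    const imagemin = require('imagemin');
    const imageminWebp = require('imagemin-webp');

    (async () => {
        const files = await imagemin(['uploads/*.{jpg,png}'], {
            destination: 'uploads/webp',
            plugins: [imageminWebp({ quality: 75 })]
        });
        console.log('converted:', files.map(f => f.destinationPath));
    })();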
As I explained in your previous question, you cannot use a Node.js plugin in a client-side app, especially when that app is already running, and especially if you're hosting it as target: static on Vercel or a similar platform.
On top of this, image processing is quite heavy in terms of required processing power. So having an external server do this as a middleware is the best idea. You'll be able to use a load balancer, allocate auto-scaling, prevent a client-side timeout, and debug things more simply (probably with more benefits besides).
You could maybe even do it in a serverless function, if you won't be bothered too much by slower cold starts.
TL;DR:
- Simple and efficient: put a Node.js server between your Nuxt app and your S3 bucket.
- More affordable but more complex: call a serverless function for this (not even sure it will be performant).
- Wait for Nuxt 3 with Nitro and do some shenanigans with a local service worker and Cloudflare Workers in edge rendering (not even sure this is the best way to handle your issue either).
- Maybe look for a reasonably priced online service to handle the middleware for you.
In the end, images and video are heavy and expensive to process, and doing these things well requires quite some knowledge too!
Eventually, I was able to solve my problem without using any package: I simply drew the image onto a canvas and then converted the canvas to WEBP format. Below is my solution.
convertImage(file) {
    return new Promise((resolve) => {
        // convert image
        let src = URL.createObjectURL(file);
        let canvas = document.createElement('canvas');
        let ctx = canvas.getContext('2d');
        let userImage = new Image();
        userImage.src = src;
        userImage.onload = function () {
            canvas.width = userImage.width;
            canvas.height = userImage.height;
            ctx.drawImage(userImage, 0, 0);
            let webpImage = canvas.toDataURL("image/webp");
            return resolve(webpImage);
        };
    });
},
So the function above first receives a file (the image you want to convert, from the file input), draws it onto a canvas, and then converts the canvas back into an image, this time in the format you specify.
Since in my case I wanted a WEBP image, I used canvas.toDataURL("image/webp"); by default the quality of the WEBP image matches the quality of the input image. If you want to reduce the quality, canvas.toDataURL("image/webp", 1) takes a second argument, a number between 0 and 1: 1 for the highest quality and 0 for the lowest. You could set 0.5 for medium quality, or whatever you want. You can also request other formats through the first argument, such as canvas.toDataURL('image/jpeg', 1.0) for JPEG or canvas.toDataURL('image/png', 1.0) for PNG.
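As a usage note, the data URL returned by convertImage can be turned into a File before the actual upload. A small browser-side sketch, assuming convertImage is available as above; uploadToS3 stands in for whatever upload call the app already uses:

    // Sketch (browser-side): convert a picked file to WEBP, turn the returned
    // data URL into a File, and hand it to the upload step.
    async function saveConvertedImage(file) {
        const dataUrl = await convertImage(file);            // data:image/webp;base64,...
        const blob = await (await fetch(dataUrl)).blob();    // decode the data URL
        const webpFile = new File([blob], file.name.replace(/\.\w+$/, '.webp'), {
            type: 'image/webp'
        });
        await uploadToS3(webpFile);                          // hypothetical upload helper
    }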
Sources:
the small channel where I found my solution - Where I found my solution
developer.mozilla.org explanation - more on HTMLCanvasElement.toDataURL()

Getting EXIF data from images using ImageMagick inside AWS Lambda

I'm trying to extract EXIF data from images using ImageMagick inside AWS Lambda, but I can't find a way to do it.
I have a piece of code to resize the image; it's working fine, but I want to add the part that extracts the EXIF data.
Here is what I have right now to resize images:
var im = require("gm").subClass({ imageMagick: true });
var operation = im(image.buffer).autoOrient().resize(width, height, '^');
operation.toBuffer(image.imageType, function (err, buffer) {
    if (err) {
        // do something with the error
    } else {
        // do something with the image
    }
});
Any idea how to extract the metadata from the image?
Thanks.
C.C.
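For reference, one possible approach with the same gm wrapper is its identify() call, which parses `identify -verbose` output; on images that carry EXIF data the tags usually appear in the parsed result (often under a Properties object with exif:-prefixed keys, though the exact shape depends on the gm and ImageMagick versions). A hedged sketch:

    // Sketch: reuse the same gm subclass to read metadata from the buffer.
    // The exact location of the EXIF fields in `data` depends on the
    // ImageMagick version; log the whole object first to confirm.
    var im = require("gm").subClass({ imageMagick: true });

    im(image.buffer).identify(function (err, data) {
        if (err) {
            // handle the error
            return console.error(err);
        }
        console.log(JSON.stringify(data, null, 2));   // full parsed metadata
        var props = data.Properties || {};            // EXIF tags are usually here
        console.log(props['exif:Orientation'], props['exif:DateTimeOriginal']);
    });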

How to get the thumbnail of base64 encoded video file in Nodejs?

I am developing a web application using Node.js and an Amazon S3 bucket to store files. What I am doing now is that when I upload a video file (mp4) to the S3 bucket, I get the thumbnail photo of the video file from a Lambda function. To fetch the thumbnail photo of the video file, I am using this package - https://www.npmjs.com/package/ffmpeg. I tested the package locally on my laptop and it is working.
Here is my code, tested on my laptop:
var ffmpeg = require('ffmpeg');

module.exports.createVideoThumbnail = function (req, res) {
    try {
        var process = new ffmpeg('public/lalaland.mp4');
        process.then(function (video) {
            video.fnExtractFrameToJPG('public', {
                frame_rate: 1,
                number: 5,
                file_name: 'my_frame_%t_%s'
            }, function (error, files) {
                if (!error)
                    console.log('Frames: ' + files);
                else
                    console.log(error);
            });
        }, function (err) {
            console.log('Error: ' + err);
        });
    } catch (e) {
        console.log(e.code);
        console.log(e.msg);
    }
    res.json({ status: true, message: "Video thumbnail created." });
};
The above code works well; it gives me the thumbnail photos of the video file (mp4). Now I am trying to use that code in the AWS Lambda function. The issue is that the above code uses a video file path as the parameter to fetch the thumbnails. In the Lambda function, I can only fetch the base64-encoded format of the file. I can get the id (S3 path) of the file, but I cannot use it as the parameter (file path) to fetch the thumbnails, as my S3 bucket does not allow public access.
So what I tried to do was save the base64-encoded video file locally inside the Lambda function project itself and then pass the file path as the parameter for fetching the thumbnails. But the issue was that the AWS Lambda function file system is read-only, so I cannot write any file to it. So what I am trying to do right now is to retrieve the thumbnails directly from the base64-encoded video file. How can I do it?
Looks like you are using the wrong file location;
/tmp/* is your writable location for temporary files and is limited to 512 MB.
Check out this tutorial, which does the same thing you are trying to do:
https://concrete5.co.jp/blog/creating-video-thumbnails-aws-lambda-your-s3-bucket
Lambda Docs:
https://docs.aws.amazon.com/lambda/latest/dg/limits.html
Ephemeral disk capacity ("/tmp" space) 512 MB
Hope it helps.
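To illustrate the /tmp suggestion, a minimal sketch that decodes the base64 payload to /tmp and then reuses the question's ffmpeg code (the event field, variable names, and output prefix are placeholders):

    // Sketch: inside the Lambda handler, write the base64 video to /tmp
    // (the only writable path), then point the ffmpeg package at that path.
    const fs = require('fs');
    const ffmpeg = require('ffmpeg');

    exports.handler = async (event) => {
        const videoPath = '/tmp/input.mp4';
        // `event.videoBase64` is a placeholder for however the base64 string arrives
        fs.writeFileSync(videoPath, Buffer.from(event.videoBase64, 'base64'));

        const video = await new ffmpeg(videoPath);
        const files = await new Promise((resolve, reject) => {
            video.fnExtractFrameToJPG('/tmp', {
                frame_rate: 1,
                number: 1,
                file_name: 'thumbnail_%t_%s'
            }, (err, files) => (err ? reject(err) : resolve(files)));
        });

        // `files` holds the paths of the generated JPGs under /tmp;
        // from here they could be uploaded to S3.
        return { thumbnails: files };
    };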

Resize Image From S3 with Javascript

I am using the knox package to connect to my S3 account and pull an image, like this:
var picturestring;
knoxclient.get(key).on('response', function (res) {
    console.log(res.statusCode);
    console.log(res.headers);
    res.setEncoding('base64');
    res.on('data', function (chunk) {
        picturestring += chunk;
    });
    res.on('end', function () {
        console.log(picturestring);
        resizeimage(picturestring, done); // use the resize library in this function
    });
}).end();
After that, I want to use a library that can take that string (picturestring), resize the image, and return a new base64 string that represents the resized image. At that point, I plan on uploading the resized image to S3.
I wrote a similar script in Go that let me resize images like this, but every JS resizing library I've reviewed only gives examples of resizing images from the local file system.
Is there any way I can avoid reading the image from S3 into the file system and deal with the returned string exclusively?
***************UPDATE***********************
function pullFromS3(key, done) {
    console.log("This is the key being pulled from Amazon: ", key);
    var originalstream = new MemoryStream(null, { readable: false });
    var picturefile;
    client.get(key).on('response', function (res) {
        console.log("This is the res status code: ", res.statusCode);
        res.setEncoding('base64');
        res.pipe(originalstream);
        res.on('end', function () {
            resizeImage(originalstream, key, done);
        });
    }).end();
}

function resizeImage(originalstream, key, done) {
    console.log("This is the original stream: ", originalstream.toString());
    var resizedstream = new MemoryStream(null, { readable: false });
    var resize = im().resize('160x160').quality(90);
    // getting stuck here ******
    originalstream.pipe(resize).pipe(resizedstream);
    done();
}
I can't seem to get a grip on how the piping from originalstream, to the resize ImageMagick function, to resizedstream works. Ideally, resizedstream should hold the base64 string for the resized image, which I can then upload to S3.
1) How do I wait for the piping to finish, and THEN use the data in resizedstream?
2) Am I doing the piping correctly? I can't debug it because I am unsure how to wait for the piping to finish!
I'm not using S3 but a local cloud provider in China to store images and their thumbnails. In my case I was using the imagemagick library with the imagemagick-stream and memorystream modules.
imagemagick-stream provides a way to process an image with ImageMagick through a Stream, so I don't need to save the image to local disk.
memorystream provides a way to store the source image and thumbnail binaries in memory, with the ability to read from and write to a Stream.
So the logic I have is:
1. Retrieve the image binary from the client POST request.
2. Save the image into memory using memorystream.
3. Upload it to, in your case, S3.
4. Define the image processing action in imagemagick-stream, for example resize to 180x180.
5. Create a read stream from the original image binary from step 1 using memorystream, pipe it into the imagemagick-stream from step 4, and then pipe that into a new memory writable (created by memorystream) that stores the thumbnail (see the sketch after this answer).
6. Upload the thumbnail from step 5 to S3.
The only problem with my solution is that your virtual machine might run out of memory if many huge images come in at once. I know that should not happen in my case, so that's OK, but you'd better evaluate it for yourself.
Hope this helps a bit.
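A hedged sketch of steps 4-5, close to the resizeImage function in the update above. It assumes originalstream carries the raw image bytes (so no setEncoding('base64') when it was filled), and on the output side it collects the chunks emitted by the imagemagick-stream directly instead of piping into a second memory stream, since waiting for the 'end' event is a simple way to know the piping has finished, which is what question 1 asks about:

    var im = require('imagemagick-stream');

    function resizeImage(originalstream, key, done) {
        var resize = im().resize('180x180').quality(90);
        var chunks = [];

        // Pipe the original image bytes through ImageMagick
        originalstream.pipe(resize);

        // Collect the resized output as it streams out
        resize.on('data', function (chunk) {
            chunks.push(chunk);
        });

        // 'end' fires once ImageMagick has emitted everything
        resize.on('end', function () {
            var thumbnail = Buffer.concat(chunks);   // raw resized image bytes
            // e.g. upload `thumbnail` to S3 here, then report back as base64
            done(null, thumbnail.toString('base64'));
        });

        resize.on('error', done);
    }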
