When using the sharp image resize library (https://github.com/lovell/sharp) for Node.js, the image is being rotated.
I have no code that says .rotate(), so why is it being rotated, and how can I stop it from rotating?
I'm using the serverless-image-resizing example provided by AWS (https://github.com/awslabs/serverless-image-resizing), which uses Lambda to resize images on the fly if the thumbnail does not exist:
S3.getObject({Bucket: BUCKET, Key: originalKey}).promise()
  .then(data => Sharp(data.Body)
    .resize(width, height)
    .toFormat('png')
    .toBuffer()
  )
  .then(buffer => S3.putObject({
    Body: buffer,
    Bucket: BUCKET,
    ContentType: 'image/png',
    Key: key,
  }).promise())
  .then(() => callback(null, {
    statusCode: '301',
    headers: {'location': `${URL}/${key}`},
    body: '',
  }))
  .catch(err => callback(err))
Original large image:
Resized image (note it has been rotated as well):
The problem actually turned out to be this: when you resize an image, the EXIF data is lost. The EXIF data includes the correct orientation of the image, i.e. which way is up.
Fortunately sharp does have a feature that retains the EXIF data: .withMetadata(). So the code above needs to be changed to read:
S3.getObject({Bucket: BUCKET, Key: originalKey}).promise()
  .then(data => Sharp(data.Body)
    .resize(width, height)
    .withMetadata() // add this line here
    .toBuffer()
  )
(Note that you also need to remove the .toFormat('png') call, because PNG does not have the same EXIF support that JPEG does.)
And now it works properly, and the resized image is the correct way up.
The alternative solution is to call .rotate() before .resize(). This will auto-orient the image based on its EXIF data.
.then(data => Sharp(data.Body)
  .rotate()
  .resize(width, height)
  .toBuffer()
)
More details in the docs.
This way you don't need to retain the original metadata, keeping the overall image size smaller.
const data = await sharp(file_local)
  .resize({
    width: px,
  })
  .jpeg({
    quality: quality,
    progressive: true,
    chromaSubsampling: "4:4:4",
  })
  .withMetadata()
  .toFile(file_local_thumb);
Use .withMetadata() to prevent the image from being rotated.
Additionally, you can pass only the width parameter; you don't need the height, since sharp preserves the aspect ratio.
I fixed this in a related way, specific to the AWS Serverless Image Handler, without changing any code: I pass "rotate":null in the list of edits.
Reading the latest (5.2.0) code, it looks like they tried to fix this, but it still wasn't working for me until I added "rotate":null.
Here is a related issue on Github: https://github.com/awslabs/serverless-image-handler/issues/236
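For reference, here is a minimal sketch of such a request, assuming the handler's default base64-encoded JSON request format; the bucket, key, and CloudFront domain below are placeholders:

// Build a Serverless Image Handler URL that passes "rotate": null in the edits.
const request = {
  bucket: 'my-image-bucket',      // placeholder bucket
  key: 'photos/original.jpg',     // placeholder key
  edits: {
    rotate: null,                 // tell the handler to skip its rotate step
    resize: { width: 300, height: 300, fit: 'cover' }
  }
};
const encoded = Buffer.from(JSON.stringify(request)).toString('base64');
const url = `https://dxxxxxxxxxxxx.cloudfront.net/${encoded}`;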
Updated answer for Serverless Image Handler 5.0, deploying with the CloudFormation Stack template as of 10/2020:
I appended .rotate() to line 50 of image-handler.js and it worked like a charm:
const image = sharp(originalImage, { failOnError: false }).rotate();
In case you landed here using the Nuxt Image component with IPX provider, this is how I solved it: in the nuxt.config.js file, add this:
buildModules: [
  [
    '@nuxt/image',
    {
      sharp: {
        withMetadata: true,
      },
    },
  ],
],
Note there are more options in the module than the ones that are documented: https://github.com/nuxt/image/blob/61bcb90f0403df804506ccbecebfe13605ae56b4/src/module.ts#L20
I am using CDK to upload an image file from a form-data multi-value request to S3. There are now no errors in the console, but what is saved to S3 is a black background with a white square, which I'm sure is down to a corrupt file or something.
Any thoughts as to what I'm doing wrong?
I'm using aws-lambda-multipart-parser to parse the form data.
In my console, the actual image from the form is getting logged like this.
My upload file function looks like this:
const uploadFile = async (image: any) => {
  const params = {
    Bucket: BUCKET_NAME,
    Key: image.filename,
    Body: image.content,
    ContentType: image.contentType,
  }
  return await S3.putObject(params).promise()
}
When I log image.content I get the buffer, which seems to be the format I should be uploading the image in.
My CDK stack initialises the S3 construct like so:
const bucket = new s3.Bucket(this, "WidgetStore");
bucket.grantWrite(handler);
bucket.grantPublicAccess();
table.grantStreamRead(handler);
handler.addToRolePolicy(lambdaPolicy);

const api = new apigateway.RestApi(this, "widgets-api", {
  restApiName: "Widget Service",
  description: "This service serves widgets.",
  binaryMediaTypes: ['image/png', 'image/jpeg'],
});
Any ideas what I could be missing?
Thanks in advance
Using Cloudinary, I would like to limit the width and height of uploaded PDFs, just as I do with images.
This is how I upload the file:
const res = await new Promise((resolve, reject) => {
  let cld_upload_stream = cloud.upload_stream(
    {
      folder: process.env.CLOUD_FOLDER,
    },
    function (err, res) {
      if (res) {
        resolve(res);
      } else {
        reject(err);
      }
    }
  );
  streamifier.createReadStream(file.data).pipe(cld_upload_stream);
});

return {
  url: res.url,
  location: res.public_id,
};
Are there any options to limit the width and height that can work on PDF files?
I tried:
{
  responsive_breakpoints: {
    create_derived: true,
    bytes_step: 20000,
    min_width: 200,
    max_width: 1000,
  },
}
but it does not seem to work.
The responsive_breakpoints feature you mentioned is for analysing an image and deciding which sizes to create for a responsive design. It balances the problems you can hit when choosing the sizes manually: you may create 'too many' images with very similar sizes, or leave large gaps between the byte sizes of the variants, so more bandwidth is used than necessary for often-requested files.
There's a web interface that uses that feature and provides examples of what it does: https://www.responsivebreakpoints.com/
This is not related to validating uploaded files or editing the original assets you upload to Cloudinary. There's no server-side validation available for the dimensions of an uploaded file, but you could either:
Use an incoming transformation to resize the asset before it's saved into your account (see the sketch after this list): https://cloudinary.com/documentation/transformations_on_upload#incoming_transformations
Use the upload API response after the file is uploaded and validated, and if it's "too big", show an error to your user and delete the file again.
You could also use a webhook notification to receive the uploaded file's metadata: https://cloudinary.com/documentation/notifications
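For the incoming-transformation option, here is a minimal sketch based on the upload code from the question; the width limit and crop mode are illustrative assumptions:

// Ask Cloudinary to resize the incoming asset to at most 1000px wide
// before it is stored ('cloud' and 'streamifier' as in the question).
let cld_upload_stream = cloud.upload_stream(
  {
    folder: process.env.CLOUD_FOLDER,
    // incoming transformation: applied once, to the stored original
    transformation: [{ width: 1000, crop: 'limit' }],
  },
  function (err, res) {
    if (res) {
      resolve(res);
    } else {
      reject(err);
    }
  }
);
streamifier.createReadStream(file.data).pipe(cld_upload_stream);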
I am using the Jimp library (https://www.npmjs.com/package/jimp) to crop images.
Cropping works fine, but I have an issue with image orientation.
Sometimes users upload rotated images, and the result is a rotated cropped image.
I went through the documentation at https://www.npmjs.com/package/jimp but couldn't find anything related to this.
Here are a couple of links I went through that didn't help:
https://justmarkup.com/articles/2019-10-21-image-orientation/
Accessing JPEG EXIF rotation data in JavaScript on the client side
Please help.
So, long story short: jimp correctly reads images rotated via the EXIF orientation property and rearranges the pixels as if the orientation property didn't exist, but it then also stores the old EXIF property instead of resetting it to 1, as it should for the image to be displayed properly on every device.
The simplest solution I was able to implement was using exif-auto-rotate to rotate the image pixels and reset the EXIF property on the frontend, before uploading the (base64-encoded) image to the backend:
import Rotator from 'exif-auto-rotate';
// ...
const [file] = e.target.files;
const image = await Rotator.createRotatedImageAsync(file, "base64")
  .catch(err => {
    if (err === "Image is NOT have a exif code" || err === "Image is NOT JPEG") {
      // just return the base64-encoded image if it is not a JPEG or has no
      // EXIF orientation property (toBase64 is a helper, e.g. a FileReader
      // wrapper, not shown here)
      return toBase64(file)
    }
    // reject on any other error
    return Promise.reject(err)
  });
If you need to do this on the backend, you are probably better off using jpeg-autorotate with buffers, as suggested here:
const fs = require('fs')
const jo = require('jpeg-autorotate')
const Jimp = require('jimp')

const fileIn = fs.readFileSync('input.jpg')
const {buffer} = await jo.rotate(fileIn, {quality: 30})
const image = await Jimp.read(buffer)
More info on browser-based exif orientation issues:
EXIF Orientation Handling Is a Ghetto
Just change the jimp version in your package.json to:
"jimp": "0.8.5",
What we do is take a request for an image like "media/catalog/product/3/0/30123/768x/lorem.jpg", then we use the original image located at "media/catalog/product/3/0/30123.jpg", resize it to 768px, convert it to WebP if the browser supports that, and return the new image (if not already cached).
If you request wysiwyg/lorem.jpg, it will try to create a WebP of at most 1920 pixels (no enlargement).
This seems to work perfectly fine for images up to 1420 pixels wide. Above that, however, we only get HTTP 502: The Lambda function returned invalid json: The json output is not parsable.
There is a similar issue on SO that relates to GZIP; however, as I understand it, you shouldn't really GZIP images: https://webmasters.stackexchange.com/questions/8382/gzipped-images-is-it-worth/57590#57590
It's possible that the original image was uploaded to S3 already GZIPped, but GZIP might be misleading here, because why would it then work for smaller images? We have GZIP disabled in CloudFront.
I have given the Lambda@Edge resize function the maximum resources: 3 GB of memory and a timeout of 30 seconds. Is this not sufficient for larger images?
I have deleted the already-generated images and invalidated CloudFront, but it still behaves the same.
UPDATE:
I simply tried a different image and then it works fine. I have no idea why, or how I should fix the broken image... I guess CloudFront has cached the 502 now; I have invalidated using just "*", but that didn't help. Both original files are JPGs.
The original source image for the working one is 6.1 MB and the non-working one is 6.7 MB, if that matters.
They have these limits:
https://docs.aws.amazon.com/lambda/latest/dg/limits.html
The response.body is about 512 MB when it stops working.
There are some low limits in Lambda, especially in Lambda@Edge, on the response size. The limit is 1 MB for the entire response, headers and body included. If the Lambda function returns a bigger response, it will be truncated, which can cause HTTP 500 statuses. See the documentation.
You can overcome that by saving the result image on S3 (or maybe checking first whether it's already there), and then, instead of returning the image, making a 301 redirect to a CloudFront distribution integrated with that bucket, so the image request is redirected to the result image.
For example, in Node.js with an Origin-Response trigger:
'use strict';

exports.handler = (event, context, callback) => {
  // get response
  const response = event.Records[0].cf.response;
  const headers = response.headers;

  // create image and save on S3, generate target_url
  // ...

  // modify response and headers (CloudFront expects lowercase header keys)
  response.status = 301;
  response.statusDescription = 'Moved Permanently';
  headers['location'] = [{key: 'Location', value: target_url}];
  headers['x-reason'] = [{key: 'X-Reason', value: 'Generated.'}];

  // return modified response
  callback(null, response);
};
Version for a simple Lambda Gateway (without Origin-Response; replaces headers):
exports.handler = (event, context, callback) => {
  // create image and save on S3, generate target_url
  // ...
  var response = {
    status: 301,
    headers: {
      location: [{
        key: 'Location',
        value: target_url,
      }],
      'x-reason': [{
        key: 'X-Reason',
        value: 'Generated.',
      }],
    },
  };
  callback(null, response);
}
Additional notes to @Zbyszek's answer: you can roughly estimate whether the response is bigger than 1 MB like this:
const isRequestBiggerThan1MB = (body, responseWithoutBody) => {
  const responseSizeWithoutBody = JSON.stringify(responseWithoutBody).length;
  return body.length + responseSizeWithoutBody >= 1000 * 1000;
};
The responseWithoutBody can't be too large or contain circular references (which JSON.stringify can't handle), but in this case I can't imagine you would have that. If it contains circular references, you can simply remove them. If the responseWithoutBody is too large, you need to remove those values and measure them separately, for example as I am doing with the response.body.
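For example, a hypothetical usage inside an Origin-Response handler, falling back to the redirect approach when the inlined body would exceed the limit (imageBuffer and response are assumed to come from the resize step and the event, respectively):

const body = imageBuffer.toString('base64');
if (isRequestBiggerThan1MB(body, response)) {
  // too big to inline: save to S3 and return a 301 redirect instead
} else {
  response.body = body;
  response.bodyEncoding = 'base64';
}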
I have a Node script that attempts to do some image manipulation and then save the result to S3. The script seems to work, but when I run it, the resulting image is just a blank file in S3. I've tried using the result image and the source image, just to see if it's the image itself... I tried base64 encoding and just passing the image file. Not really sure what the issue is.
var base_image_url = '/tmp/inputFile.jpg';
var change_image_url = './images/frame.png';
var output_file = '/tmp/outputFile.jpg';

var params = {
  Bucket: 'imagemagicimages',
  Key: 'image_' + num + '.jpg',
  ACL: "public-read",
  ContentType: 'image/jpeg',
  Body: change_image_url
}

s3.putObject(params, function (err, data) {
  if (err) {
    console.log(err, err.stack); // an error occurred
  } else {
    callback("it");
    console.log(data);
  }
});
It looks like this line…
Body: change_image_url
…is saving the string './images/frame.png' to a file. You need to send image data to S3, not a string. You say you are doing image manipulation, but there's no code for that. If you are manipulating an image, then you must have the image data in a buffer somewhere; that is what you should be sending to S3.
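A minimal sketch, assuming the manipulated image has been written to output_file ('/tmp/outputFile.jpg' in the question):

var fs = require('fs');

var params = {
  Bucket: 'imagemagicimages',
  Key: 'image_' + num + '.jpg',
  ACL: 'public-read',
  ContentType: 'image/jpeg',
  Body: fs.readFileSync(output_file) // a Buffer of actual image bytes, not a path string
}

s3.putObject(params, function (err, data) {
  if (err) {
    console.log(err, err.stack);
  } else {
    callback("it");
  }
});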