What we do is take a request for an image like "media/catalog/product/3/0/30123/768x/lorem.jpg", use the original image located at "media/catalog/product/3/0/30123.jpg", resize it to 768px wide, convert it to WebP if the browser supports that, and then return the new image (if it is not already cached).
If you request wysiwyg/lorem.jpg, it will try to create a WebP of at most 1920 pixels (no enlargement).
This seems to work perfectly fine for images up to 1420 pixels wide. Above that, however, we only get HTTP 502: The Lambda function returned invalid json: The json output is not parsable.
There is a similar issue on SO that relates to GZIP; however, as I understand it, you shouldn't really GZIP images: https://webmasters.stackexchange.com/questions/8382/gzipped-images-is-it-worth/57590#57590
It's possible that the original image was uploaded to S3 already GZIPped, but the GZIP lead might be misleading, because why would it work for smaller images then? We have GZIP disabled in CloudFront.
I have given the Lambda@Edge resize function the maximum resources: 3 GB of memory and a timeout of 30 seconds. Is this not sufficient for larger images?
I have deleted the already generated images and invalidated CloudFront, but it still behaves the same.
UPDATE:
I simply tried a different image and that one works fine. I have no idea why, or how I should fix the broken image. I guess CloudFront has cached the 502 now; I have invalidated using just "*" but it didn't help. Both original files are JPGs.
The original source image for the working one is 6.1 MB and for the non-working one 6.7 MB, if that matters.
AWS Lambda has these limits:
https://docs.aws.amazon.com/lambda/latest/dg/limits.html
The response.body is about 512 KB when it stops working.
There are some low limits in Lambda, especially in Lambda@Edge, on the response size. The limit is 1 MB for the entire response, headers and body included. If the Lambda function returns a bigger response, it will be truncated, which can cause HTTP 500 statuses. See the documentation.
You can overcome that by saving the resulting image to S3 (or maybe checking first whether it's already there), and then, instead of returning it, issuing a 301 redirect to a CloudFront distribution integrated with that bucket, so the image request is redirected to the resulting image.
For example, in Node.js with an origin-response trigger:
'use strict';

exports.handler = (event, context, callback) => {
    // get response
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    // create image and save on S3, generate target_url
    // ...

    // modify response and headers
    response.status = 301;
    response.statusDescription = 'Moved Permanently';
    headers['Location'] = [{key: 'Location', value: target_url}];
    headers['x-reason'] = [{key: 'X-Reason', value: 'Generated.'}];

    // return modified response
    callback(null, response);
};
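The "create image and save on S3" placeholder above is where the heavy lifting happens. A minimal sketch of that step, assuming the AWS SDK v2 and sharp are bundled with the function (the bucket name, key layout and CloudFront domain below are placeholders, not values from the question):

const AWS = require('aws-sdk');
const sharp = require('sharp');

const s3 = new AWS.S3();

// Resize the original image, store the result on S3, and return the URL
// that the 301 Location header should point at.
async function resizeAndStore(originalBuffer, width, key) {
    const resized = await sharp(originalBuffer)
        .resize({ width, withoutEnlargement: true })
        .webp()
        .toBuffer();

    await s3.putObject({
        Bucket: 'my-resized-images',   // placeholder bucket name
        Key: key,                      // e.g. the originally requested path
        Body: resized,
        ContentType: 'image/webp',
    }).promise();

    // CloudFront distribution sitting in front of that bucket (placeholder domain)
    return `https://dxxxxxxxxxxxx.cloudfront.net/${key}`;
}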
A version for a simple Lambda gateway setup (without an origin-response trigger; it builds the headers from scratch):
exports.handler = (event, context, callback) => {
    // create image and save on S3, generate target_url
    // ...

    var response = {
        status: 301,
        statusDescription: 'Moved Permanently',
        headers: {
            Location: [{
                key: 'Location',
                value: target_url,
            }],
            'X-Reason': [{
                key: 'X-Reason',
                value: 'Generated.',
            }],
        },
    };
    callback(null, response);
};
As an additional note to @Zbyszek's answer, you can roughly estimate whether the response is bigger than 1 MB like this:
const isRequestBiggerThan1MB = (body, responseWithoutBody) => {
    const responseSizeWithoutBody = JSON.stringify(responseWithoutBody).length;
    return body.length + responseSizeWithoutBody >= 1000 * 1000;
};
The responseWithoutBody can't be too large or contain circular references (JSON.stringify would throw), but in this case I can't imagine that you would have either. If it contains circular references, you can simply remove them. If the responseWithoutBody is too large, you need to remove those values and measure them separately, for example the way I am doing with the response.body.
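A hypothetical way to combine this check with the redirect approach above, inside an origin-response handler (resizedImageBase64 and target_url are assumed to already exist; this is a sketch, not code from the question):

if (isRequestBiggerThan1MB(resizedImageBase64, response)) {
    // Too big to return inline from Lambda@Edge: redirect to the copy on S3.
    response.status = 301;
    response.statusDescription = 'Moved Permanently';
    response.headers['location'] = [{ key: 'Location', value: target_url }];
} else {
    // Small enough: return the generated image directly.
    response.status = 200;
    response.body = resizedImageBase64;
    response.bodyEncoding = 'base64';
}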
Related
I have a ReactJS frontend with a file upload component, served with NGINX, that sends file data to my API and gets a response back. When I select a file larger than 1 MB, it never uploads the full data, only 1 MB worth.
Example:
I select a file of 1.6MB in size.
my file data state:
const [fileData, setFileData] = useState("");
This state returns a length of 1600202
I append the data to formData:
const formData = new FormData();
formData.append("file", fileData);
formData.append("name", fileName);
The size of the formData is 1600210
I send the data to the API:
await axios({
    url: `${baseUrl}/test`,
    method: "POST",
    headers: {
        //"Content-Type": "application/json",
        Authorization: globalState.sessionToken,
    },
    responseType: "arraybuffer",
    data: formData,
})
The Network tab shows 1 MB sent, with the response size being 1048612 and the ArrayBuffer size being 1048612.
I know I have increased the upload limit on the API side to 5 MB, but clearly something on the frontend is cutting the data off at 1 MB before it even reaches the server.
Is there a configuration I should be setting to allow larger file sizes?
Could you provide some more info: does the backend throw any errors, and do you use some kind of library for parsing files (multer, formidable, ...)? Frontend size limits only exist if you manually check the file size or if you set axios's maximum body size. This is more of a comment, but since I can't comment yet because of reputation, I am posting it as an answer, sorry.
Using Cloudinary, I would like to limit the width and height of uploaded PDFs, just as I do for images.
This is how I upload the file:
const res = await new Promise((resolve, reject) => {
    let cld_upload_stream = cloud.upload_stream(
        {
            folder: process.env.CLOUD_FOLDER,
        },
        function (err, res) {
            if (res) {
                resolve(res);
            } else {
                reject(err);
            }
        }
    );
    streamifier.createReadStream(file.data).pipe(cld_upload_stream);
});
return {
url: res.url,
location: res.public_id
}
Are there any options to limit the width and height, that can work on pdf files?
I tried:
{ responsive_breakpoints:
{ create_derived: true,
bytes_step: 20000,
min_width: 200,
max_width: 1000 }}
but it does not seem to work.
The responsive breakpoints feature you mentioned analyses an image and decides which sizes to resize it to for a responsive design. It balances the problems you can run into when choosing the sizes manually: you may create 'too many' images with very similar sizes, or leave large gaps between the byte sizes of the different versions, so more bandwidth is used than necessary for often-requested files.
There's a web interface here that uses that feature and provides examples of what it does: https://www.responsivebreakpoints.com/
This is not related to validating uploaded files or editing the original assets that you upload to Cloudinary. There's no server-side validation available related to the dimensions of an uploaded file, but you could either:
Use an Incoming Transformation to resize the asset before it's saved into your account (see the sketch after these options): https://cloudinary.com/documentation/transformations_on_upload#incoming_transformations
Use the upload API response after the file is uploaded and validated, and if it's "too big", show an error to your user and delete the file again.
You could also use a webhooks notification to receive the uploaded file metadata: https://cloudinary.com/documentation/notifications
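For the incoming-transformation option, a minimal sketch based on the upload code in the question might look like this (the 1000x1000 limit is an assumed value; crop: 'limit' scales down without enlarging or cropping):

let cld_upload_stream = cloud.upload_stream(
    {
        folder: process.env.CLOUD_FOLDER,
        // incoming transformation: applied before the asset is stored
        transformation: [{ width: 1000, height: 1000, crop: 'limit' }],
    },
    function (err, res) {
        if (res) {
            resolve(res);
        } else {
            reject(err);
        }
    }
);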
I want to read a file that is in a remote location, say https://abc/image.jpeg or https://abc/image.png, and send it back as the response from a Lambda function. One solution in Node.js/Express is to use res.sendFile, but I am not sure whether I can use it in a Lambda, or how to do that.
Another alternative is to first copy the image to an S3 bucket and then send it back. Any suggestions that are better than the S3 copy option?
You can leverage axios and API Gateway's isBase64Encoded option.
First, request the image and convert it to base64, using Buffer:
const imageBase64 = await axios.get(url, {responseType: 'arraybuffer'})
.then(response => Buffer.from(response.data, 'binary').toString('base64'));
Next, return it from your lambda through API Gateway:
return {
    statusCode: 200,
    body: imageBase64,
    isBase64Encoded: true, // the most important part
}
However, keep in mind that API Gateway allows up to 10 megabytes of payload size. You'll get an error if your images are bigger.
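Putting the two snippets together, a minimal handler might look like the sketch below (the URL comes from the question; depending on the API Gateway type you may also need to configure binary media types for the base64 body to be decoded):

const axios = require('axios');

exports.handler = async () => {
    const url = 'https://abc/image.jpeg'; // remote image from the question
    const response = await axios.get(url, { responseType: 'arraybuffer' });
    const imageBase64 = Buffer.from(response.data, 'binary').toString('base64');

    return {
        statusCode: 200,
        headers: { 'Content-Type': response.headers['content-type'] },
        body: imageBase64,
        isBase64Encoded: true,
    };
};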
With request and Express:
var request = require("request");
request.get('https://www.example.com/static/img/logo-light.png').pipe(res);
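That snippet assumes it runs inside an Express route handler where res is in scope. A minimal sketch around it (the route path is a placeholder; note that the request package is deprecated these days, and this streaming style suits a long-running Express server rather than a Lambda):

const express = require('express');
const request = require('request');

const app = express();

// Stream the remote image straight through to the client.
app.get('/logo', (req, res) => {
    request.get('https://www.example.com/static/img/logo-light.png').pipe(res);
});

app.listen(3000);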
I'm trying to send a multipart/form-data image to another server using unirest. Their .attach() works with an fs.createReadStream(); however, I have not been able to convert the buffer to an image. The logical step seemed to be to convert the buffer to a Uint8Array first and then create a read stream from it. However, this throws an error message saying that the array must not contain null values, and removing the 0 entries from the array would almost certainly break the image.
The image is not null, it has all the bytes, and even sending the image data as a giant string works.
Here's what I tried:
imageBytes = new Uint8Array(image.buffer)
unirest
.post(someURL)
.headers(headers)
.attach("image", fs.createReadStream(imageBytes))
.end(response => {
console.log(response.body);
});
The alternatives are:
1. Attaching the buffer directly, which sends the raw data as a form field. Not ideal, and might run into image size restrictions.
2. Writing the file to storage instead of keeping it in memory. This would be handling some sensitive information, so it would require auto-deletion after a certain amount of time, leading to more work.
EDIT: I ended up switching to request, as that allowed inline 'files' from buffers. The code to do so is below:
request({
    uri: someURL,
    method: "POST",
    formData: {
        "image": {
            value: image.buffer,
            options: {
                filename: image.originalname,
                contentType: image.mimetype
            }
        }
    }
}, (err, resp, body) => {
    if (err) {
        console.log("ERROR -> " + err)
    }
    if (body) {
        console.log(body)
    }
})
EDIT 2: Please also add encoding: null to the request options if you follow this. Don't be like me and spend a day tracking down why your returned data is in an alien format. :)
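To show where that option goes, here is the same call with encoding: null added (a sketch of the change only; with it set, the body passed to the callback is a raw Buffer instead of a UTF-8 decoded string):

request({
    uri: someURL,
    method: "POST",
    encoding: null, // keep the response body as a raw Buffer
    formData: {
        "image": {
            value: image.buffer,
            options: {
                filename: image.originalname,
                contentType: image.mimetype
            }
        }
    }
}, (err, resp, body) => {
    // body is now a Buffer rather than a decoded string
})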
I have a small web app built with Node.js and Express (among other things) that has a route to resize images on the fly (using sharp). The route looks like this:
router.get('/image/:options/:basedir/:dir/:img', utilitiesController.getOptimizedImage);
In the utilities controller, I have the getOptimizedImage function checking for the existing image, returning the existing image content if it exists, or if it doesn't, performing some image processing tasks, then returning the resulting image.
exports.getOptimizedImage = async (req, res) => {
    // Parse options from request...

    // First, check to see if a resized version exists
    fs.readFile(processedImgPath, function (err, processedImg) {
        if (err) {
            //console.log('File does not yet exist.');
            // If the resized version doesn't exist, check for the original
            fs.readFile(originImgPath, function (err, originImg) {
                if (err) {
                    // If the origin image doesn't exist, return 400.
                } else if (w || h) {
                    // If the origin image does exist, process it...
                    // Once it's processed, return the processed image
                    res.end(newImg);
                    return;
                }
            });
        } else {
            res.end(processedImg);
            //res.redirect(existingFileUrl);
            return;
        }
    });
};
This code works. I can request something like so:
<img src="http://example.com/image/w800/foo/bar/imagename.jpg">
...and it returns the resized image as expected. The issue seems to be that because of the way the image is returned using res.end(), the browser cache (testing in Chrome) doesn't ever store the image, so reloading the page downloads the image fresh instead of loading it from memory or disk.
I can alternatively use res.redirect() to send back the url of the existing processed image file, which will be cached on refresh, but that feels like the wrong way to do this, since it ultimately doubles all the image requests using the image processing path.
I don't want to process the images prior to request; I'm specifically looking to only process the images the first time they're requested, then store a processed version to reference each consecutive time. I'm open to alternatives within these constraints, but hoping someone can explain how I might leverage browser caching with my current structure?
You should add HTTP caching headers before any res.end call; the example below sets the expiry to one day (86400000 ms).
res.set({
    "Cache-Control": "public, max-age=86400",
    "Expires": new Date(Date.now() + 86400000).toUTCString()
})
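In the controller above, that means setting the headers just before each res.end so that both the freshly processed image and the already-cached file get the same treatment; a small sketch (the Content-Type value is an assumption and would ideally be derived from the requested file):

res.set({
    "Content-Type": "image/jpeg", // assumed; derive from the requested extension
    "Cache-Control": "public, max-age=86400",
    "Expires": new Date(Date.now() + 86400000).toUTCString()
});
res.end(processedImg);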