Lambda to S3 image upload shows a black background with white square - node.js

I am using CDK to upload an image file from a form-data multivalue request to S3. There are now no errors in the console, but what is saved to S3 is a black background with a white square, which I'm sure is down to a corrupt file or something.
Any thoughts as to what I'm doing wrong?
I'm using aws-lambda-multipart-parser to parse the form data.
In my console, the actual image from the form is getting logged like this.
My upload file function looks like this
const uploadFile = async (image: any) => {
  const params = {
    Bucket: BUCKET_NAME,
    Key: image.filename,
    Body: image.content,
    ContentType: image.contentType,
  }
  return await S3.putObject(params).promise()
}
When I log image.content I get a Buffer, which seems to be the format I should be uploading the image in.
My CDK stack initialises the S3 construct like so:
const bucket = new s3.Bucket(this, "WidgetStore");
bucket.grantWrite(handler);
bucket.grantPublicAccess();
table.grantStreamRead(handler);
handler.addToRolePolicy(lambdaPolicy);
const api = new apigateway.RestApi(this, "widgets-api", {
  restApiName: "Widget Service",
  description: "This service serves widgets.",
  binaryMediaTypes: ['image/png', 'image/jpeg'],
});
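For completeness, the handler that parses the request and calls uploadFile looks roughly like this (a sketch based on the library's README; the form field name 'file' is an assumption):

import * as multipart from 'aws-lambda-multipart-parser';

export const handler = async (event: any) => {
  // With binaryMediaTypes set on the RestApi, API Gateway delivers the multipart body
  // base64-encoded and sets event.isBase64Encoded = true; the parser reads event.body.
  const formData = multipart.parse(event);
  const image = formData.file; // { filename, contentType, content } -- 'file' is the assumed field name
  await uploadFile(image);
  return { statusCode: 200, body: JSON.stringify({ key: image.filename }) };
};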
Any ideas what I could be missing?
Thanks in advance

Related

How to upload modified PDF file to AWS s3 from AWS Lambda

I have a requirement to:
Download a PDF file from AWS S3 storage. (Key1)
Do some modifications.
Upload the modified PDF file back to S3 storage. (Key2)
The uploaded file is a new file (Key2), not overwriting the existing file (Key1).
Library used for modifying PDFs: pdf-lib
All the operations (downloading, modifying, and uploading the PDF) are done in AWS Lambda. The runtime is Node.js 14.x.
The objects in the S3 bucket are accessed through a CDN, as public access to the bucket is blocked.
I'm able to download the file, do the modifications, and upload it to S3. But when I open the file using the CDN URL for the object, it shows encoded text (garbage), not a PDF preview.
Downloading the PDF file from S3:

const params = {
  Bucket: bucket_name,
  Key: key
};

// GET FILE AND RETURN PROMISE.
return new Promise((resolve, reject) => {
  s3.getObject(params, (err, data) => {
    if (err) {
      return reject(err);
    }
    try {
      const obj = data.Body; // <<-- getting Uint8Array
      resolve(obj);
    } catch (e) {
      reject(e);
    }
  });
});
Doing modifications on the PDF file:

const modificationFunction = async (opts) => {
  const { fileData } = opts; // <<---- Uint8Array data from the snippet above.
  const pdfDoc = await PDFDocument.load(fileData);
  // Do some modifications, like drawing lines.
  const modifiedPDFData = await pdfDoc.saveAsBase64({ dataUri: true });
  return modifiedPDFData; // <<--- Base64 data URI of the modified PDF.
};
Uploading the PDF file:

const params = {
  Bucket: bucket_name,
  Key: key,
  Body: data, // <<--- Base64 data URI of the modified PDF from the snippet above
};

try {
  await s3.upload(params).promise();
  console.log('File uploaded:', `s3://${bucket_name}/${key}`);
} catch (err) {
  console.error('Upload failed:', err);
}
The content of the PDF when viewed using the CDN URL is attached; it is encoded/garbage content.
The same PDF, when downloaded to my laptop manually from the S3 bucket, shows the contents properly like a normal PDF file.
Referenced many online resources/stackoverflow threads:
link1
link2 Using the AWS SDK in javascript.
Tried both the save() and saveAsBase64() methods of the pdf-lib Node.js library.
Tried saving the modified file locally, uploading it manually to AWS S3, and accessing it through the CDN; the PDF was viewable properly this way, so there is some issue with how the file is uploaded to S3 programmatically.
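For reference, the save() variant would look roughly like this (a sketch with an assumed ContentType; not the code that was actually deployed):

// Upload the raw bytes instead of a base64 data URI.
const pdfBytes = await pdfDoc.save(); // Uint8Array
await s3.upload({
  Bucket: bucket_name,
  Key: key,
  Body: Buffer.from(pdfBytes),
  ContentType: 'application/pdf', // assumed; not set in the original upload
}).promise();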
The issue was not with the PDF download, modification, or upload operations. The CDN had a caching policy, due to which the initially generated garbage files kept being served on subsequent requests. After clearing the cache and trying again, the files were properly viewable via the CDN URL.
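If the CDN is CloudFront (an assumption; the post does not name it), the cache can also be cleared programmatically with an invalidation, for example:

// Invalidate the cached object so the CDN re-fetches it from S3.
// The distribution ID is a placeholder.
const cloudfront = new AWS.CloudFront();
await cloudfront.createInvalidation({
  DistributionId: 'YOUR_DISTRIBUTION_ID',
  InvalidationBatch: {
    CallerReference: Date.now().toString(), // must be unique per invalidation request
    Paths: { Quantity: 1, Items: [`/${key}`] },
  },
}).promise();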

How to display images of products stored on aws s3 bucket

I was practicing with this tutorial:
https://www.youtube.com/watch?v=NZElg91l_ms&t=1234s
It is working absolutely like a charm for me, but the thing is I am storing product images in the bucket, and let's say I upload 4 images, they all get uploaded.
But when I am displaying them I get an access denied error, as I am displaying the whole list and the repeated requests are maybe being detected as spam.
This is how I am trying to fetch them in my React app:
// rest of the data is from a MySQL database (product name, price)
// 100+ products
{ products.map((row) => (
  <React.Fragment key={row.imgurl}>
    <div className="product-hero"><img src={`http://localhost:3909/images/${row.imgurl}`} alt={row.productName} /></div>
    <div className="text-center">{row.productName}</div>
  </React.Fragment>
)) }
As it fetches 100+ products from the DB and 100 images from AWS, it fails.
Sorry for such a detailed question, but in short: how can I fetch all the product images from my bucket?
Note: I am aware that I can get only one image per call, so how can I get all the images one by one in my scenario?
// download code in my app.js
const express = require('express')
const { uploadFile, getFileStream } = require('./s3')

const app = express()

app.get('/images/:key', (req, res) => {
  console.log(req.params)
  const key = req.params.key
  const readStream = getFileStream(key)
  readStream.pipe(res)
})
// s3.js
const fs = require('fs')
const AWS = require('aws-sdk')

// bucketName and credentials are assumed to be configured elsewhere (e.g. from environment variables)
const s3 = new AWS.S3()

// uploads a file to S3
function uploadFile(file) {
  const fileStream = fs.createReadStream(file.path)
  const uploadParams = {
    Bucket: bucketName,
    Body: fileStream,
    Key: file.filename
  }
  return s3.upload(uploadParams).promise()
}
exports.uploadFile = uploadFile

// downloads a file from S3 as a stream
function getFileStream(fileKey) {
  const downloadParams = {
    Key: fileKey,
    Bucket: bucketName
  }
  return s3.getObject(downloadParams).createReadStream()
}
exports.getFileStream = getFileStream
It appears that your code is sending image requests to your back-end, which retrieves the objects from Amazon S3 and then serves the images in response to the request.
A much better method would be to have the URLs in the HTML page point directly to the images stored in Amazon S3. This would be highly scalable and will reduce the load on your web server.
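For example, the img tags could reference the S3 objects directly (the bucket URL below is a placeholder format, not the poster's actual bucket):

// Sketch: point the image straight at S3 instead of the Express route
<img src={`https://my-bucket.s3.amazonaws.com/${row.imgurl}`} alt={row.productName} />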
This would require the images to be public so that the user's web browser can retrieve the images. The easiest way to do this would be to add a Bucket Policy that grants GetObject access to all users.
Alternatively, if you do not wish to make the bucket public, you can instead generate Amazon S3 pre-signed URLs, which are time-limited URLs that provide temporary access to a private object. Your back-end can calculate the pre-signed URL with a couple of lines of code, and the user's web browser will then be able to retrieve private objects from S3 for display on the page.
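A minimal sketch of generating such a URL with the AWS SDK for JavaScript v2 (the one-hour expiry is an arbitrary example):

// Returns a time-limited URL the browser can use to fetch a private object.
function getPresignedUrl(fileKey) {
  return s3.getSignedUrl('getObject', {
    Bucket: bucketName,
    Key: fileKey,
    Expires: 3600 // seconds
  })
}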
I did similar S3 image handling while building my blog's image upload functionality, but I did not use getFileStream() to upload my image.
Because nothing should be done until the image file is fully processed, I used fs.readFile(path, callback) instead to read the data.
My way generates Buffer data, but AWS S3 is smart enough to interpret this as an image. (I have only added a suffix to my filename; I don't know how to apply image headers...)
This is the relevant part of my code for reference:
fs.readFile(imgPath, (err, data) => {
  if (err) { throw err }
  // Once the file is read, upload it to AWS S3
  const objectParams = {
    Bucket: 'yuyuichiu-personal',
    Key: req.file.filename,
    Body: data
  }
  S3.putObject(objectParams, (err, data) => {
    // store the image link and read the image with that link
  })
})
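If the browser should render the object inline rather than treat it as generic binary, a ContentType can be added to the params. Assuming the upload middleware exposes the MIME type (e.g. multer's req.file.mimetype), a sketch would be:

const objectParams = {
  Bucket: 'yuyuichiu-personal',
  Key: req.file.filename,
  Body: data,
  ContentType: req.file.mimetype // assumed to be provided by the upload middleware
}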

AWS lambda function issue with FormData file upload

I have Node.js code which uploads files to an S3 bucket.
I have used the koa web framework, and the following are the dependencies:
"#types/koa": "^2.0.48",
"#types/koa-router": "^7.0.40",
"koa": "^2.7.0",
"koa-body": "^4.1.0",
"koa-router": "^7.4.0",
Following is my sample router code:

import Router from "koa-router";
import path from "path";

const router = new Router({ prefix: '/' })
router.post('file/upload', upload)

async function upload(ctx: any, next: any) {
  const files = ctx.request.files
  if (files && files.file) {
    const extension = path.extname(files.file.name)
    const type = files.file.type
    const size = files.file.size
    console.log("file Size--------->:: " + size);
    sendToS3();
  }
}
function sendToS3() {
  const params = {
    Bucket: bName,
    Key: kName,
    Body: imageBody,
    ACL: 'public-read',
    ContentType: fileType
  };

  s3.upload(params, function (error: any, data: any) {
    if (error) {
      console.log("error", error);
      return;
    }
    console.log('s3Response', data);
    return;
  });
}
The request body is sent as FormData.
Now when I run this code locally and hit the request, the file gets uploaded to my S3 bucket and can be viewed.
In the console the file size is displayed as follows:
which is the correct actual size of the file.
But when I deploy this code as a Lambda function and hit the request, I see that the file size has suddenly increased (CloudWatch log screenshot below).
The file still gets uploaded to S3, but the issue is that when I open it, it shows the following error.
I further tried to find out whether this behaviour persisted on a standalone instance on AWS, but it did not. So the problem occurs only when the code is deployed as a serverless Lambda function.
I tried with Postman as well as my own front-end app, but the issue remains.
I don't know whether I have overlooked any configuration when setting up the Lambda function that handles such scenarios.
This is an issue I have not encountered before, and I would really like to know if anyone else has run into it. I am also not able to debug and find out why the file size is increasing; I can only assume that when the file reaches the service, some kind of encoding/padding is being applied to it.
Finally I was able to fix this issue. I had to add a "Binary Media Type" in AWS API Gateway.
The following steps helped:
Go to the AWS API Gateway console -> "API" -> "Settings" -> "Binary Media Types" section.
Add the following media type:
multipart/form-data
Save the changes.
Deploy the API.
More info: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-configure-with-console.html

Upload multipart/form-data to S3 from lambda (Nodejs)

I want to upload a multipart/form-data file to S3 using Node.js.
I have tried various approaches but none of them is working. I was able to write content to S3 from Lambda, but when the file was downloaded from S3 it was corrupted.
Can someone provide me a working example or steps that could help me?
Thanking you in anticipation.
Please suggest an alternative if you think there is a better approach.
Following is my lambda code:
// the S3 client is assumed to be initialised like this (not shown in the post):
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

export const uploadFile = async event => {
  const parser = require("lambda-multipart-parser");
  const result = await parser.parse(event);
  const { content, filename, contentType } = result.files[0];

  const params = {
    Bucket: "name-of-the-bucket",
    Key: filename,
    Body: content,
    ContentDisposition: `attachment; filename="${filename}";`,
    ContentType: contentType,
    ACL: "public-read"
  };

  const res = await s3.upload(params).promise();

  return {
    statusCode: 200,
    body: JSON.stringify({
      docUrl: res.Location
    })
  };
};
If you want to upload a file through Lambda, one way is to open your AWS API Gateway console.
Go to
"API" -> {YourAPI} -> "Settings"
There you will find the "Binary Media Types" section.
Add the following media type:
multipart/form-data
Save your changes.
Then Go to "Resources" -> "proxy method"(eg. "ANY") -> "Method Request" -> "HTTP Request Headers" and add following headers "Content-Type", "Accept".
Finally deploy your api.
For more info visit: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-configure-with-console.html
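(If the API is defined with the CDK, as in the first question above, the equivalent is the binaryMediaTypes property on the RestApi; a sketch:)

const api = new apigateway.RestApi(this, "upload-api", {
  binaryMediaTypes: ['multipart/form-data'],
});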
There are two possible points of failure: Lambda receives corrupted data, or you corrupt the data while sending it to S3.
Sending multipart/form-data content to Lambda is not straightforward. You can see how to do that here.
After you have done this and you're sure your data is correct in Lambda, check whether you send it to S3 correctly (see the S3 docs and examples for that).
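A quick way to narrow down which side is at fault is to compare sizes at each hop; a sketch (assuming lambda-multipart-parser as in the question):

// Inside the Lambda handler, before uploading:
console.log("isBase64Encoded:", event.isBase64Encoded); // should be true once API Gateway is configured for binary payloads
const { content, filename } = (await parser.parse(event)).files[0];
console.log("parsed bytes:", content.length, "for", filename); // compare with the original file size on disk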

How do I save a local image to S3 from my Lambda function in Node

I have a Node script that is attempting to do some image manipulation and then save the results to S3. The script seems to work, but when I run it the resulting image is just a blank file in S3. I've tried using the result image, the source image, etc., just to see if maybe it's the image ... I tried Base64 encoding and just passing the image file. Not really sure what the issue is.
var base_image_url = '/tmp/inputFile.jpg';
var change_image_url = './images/frame.png';
var output_file = '/tmp/outputFile.jpg';

var params = {
  Bucket: 'imagemagicimages',
  Key: 'image_' + num + '.jpg',
  ACL: "public-read",
  ContentType: 'image/jpeg',
  Body: change_image_url
}

s3.putObject(params, function (err, data) {
  if (err) {
    console.log(err, err.stack); // an error occurred
  } else {
    callback("it");
    console.log(data);
  }
});
It looks like this line…
Body: change_image_url
…is saving the string './images/frame.png' to a file. You need to send image data to S3, not a string. You say you are doing image manipulation, but there's no code for that. If you are manipulating an image, then you must have the image data in a buffer somewhere. That is what you should be sending to S3.
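A minimal sketch of what that could look like, assuming the manipulated image has been written to output_file ('/tmp/outputFile.jpg'):

var fs = require('fs');

// Read the actual image bytes produced by the manipulation step.
var imageBuffer = fs.readFileSync(output_file);

var params = {
  Bucket: 'imagemagicimages',
  Key: 'image_' + num + '.jpg',
  ACL: 'public-read',
  ContentType: 'image/jpeg',
  Body: imageBuffer // image data, not a path string
};

s3.putObject(params, function (err, data) {
  if (err) console.log(err, err.stack);
  else console.log(data);
});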
