AWS S3: image is broken after uploading - Node.js

Re-question
Environment: Swift, Node.js, S3, Lambda, aws-serverless-express module
Problem:
After uploading in multipart format with Alamofire (multipart/form-data) from Swift, the image is broken on S3 in AWS.
Code:
let photoKey = value.originalname + insertedReviewId + `_${i}.jpeg`;
let photoParam = {
  Bucket: bucket,
  Key: photoKey,
  Body: value.buffer,
  ACL: "public-read-write",
  ContentType: value.mimetype, /* mimetype: image/jpeg */
};
//image upload
let resultUploadS3 = await s3.upload(photoParam).promise();
Thanks for reading.

Self-answer
I use aws-serverless-express and, for middleware, aws-serverless-express/middleware.
I don't know what the problem is, but when I removed the aws-serverless-express/middleware module it worked: every image uploads perfectly, with no broken files.
If you use aws-serverless-express/middleware, body-parser, and multer in Node.js, try removing aws-serverless-express/middleware.
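For anyone hitting the same thing, a rough sketch of that setup without the middleware (the route, bucket, and field names here are placeholders, not the original code):
// Sketch: aws-serverless-express without aws-serverless-express/middleware,
// with multer keeping the uploaded files in memory as Buffers.
const express = require('express');
const multer = require('multer');
const awsServerlessExpress = require('aws-serverless-express');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const upload = multer({ storage: multer.memoryStorage() });
const app = express();

app.post('/reviews/:id/photos', upload.array('photos'), async (req, res) => {
  const results = await Promise.all(req.files.map((file, i) =>
    s3.upload({
      Bucket: process.env.BUCKET, // placeholder bucket name
      Key: `${file.originalname}${req.params.id}_${i}.jpeg`,
      Body: file.buffer, // multer memory storage provides a Buffer
      ContentType: file.mimetype,
    }).promise()
  ));
  res.json(results.map(r => r.Location));
});

// Declaring the binary MIME types keeps the Lambda proxy from mangling request bodies.
const server = awsServerlessExpress.createServer(app, null, ['multipart/form-data', 'image/jpeg']);
exports.handler = (event, context) => awsServerlessExpress.proxy(server, event, context);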

Related

Lambda to S3 image upload shows a black background with white square

I am using CDK to upload an image file from a form-data multi-value request to S3. There are now no errors in the console, but what is saved to S3 is a black background with a white square, which I'm sure is down to a corrupt file or something.
Any thoughts as to what I'm doing wrong?
I'm using aws-lambda-multipart-parser to parse the form data.
In my console the actual image from the form is getting logged.
My upload file function looks like this
const uploadFile = async (image: any) => {
  const params = {
    Bucket: BUCKET_NAME,
    Key: image.filename,
    Body: image.content,
    ContentType: image.contentType,
  }
  return await S3.putObject(params).promise()
}
When I log the image.content I get a log of the buffer, which seems to be the format I should be uploading the image in.
My CDK stack initialises the S3 construct like so.
const bucket = new s3.Bucket(this, "WidgetStore");
bucket.grantWrite(handler);
bucket.grantPublicAccess();
table.grantStreamRead(handler);
handler.addToRolePolicy(lambdaPolicy);
const api = new apigateway.RestApi(this, "widgets-api", {
  restApiName: "Widget Service",
  description: "This service serves widgets.",
  binaryMediaTypes: ['image/png', 'image/jpeg'],
});
Any ideas what I could be missing?
Thanks in advance

How to display images of products stored on aws s3 bucket

I was practicing with this tutorial:
https://www.youtube.com/watch?v=NZElg91l_ms&t=1234s
It is working absolutely like a charm for me, but the thing is that I am storing images of products in the bucket, and let's say I upload 4 images, they are all uploaded.
But when I am displaying them I get an access denied error, as I am displaying the list and the repeated requests are maybe being detected as spam.
This is how I am trying to fetch them in my React app:
//rest of data is from mysql datbase (product name,price)
//100+ products
{ products.map((row, i) => (
  <div key={i}>
    <div className="product-hero"><img src={`http://localhost:3909/images/${row.imgurl}`} /></div>
    <div className="text-center">{row.productName}</div>
  </div>
))}
As it fetches 100+ products from the db and 100 images from AWS, it fails.
Sorry for such a detailed question, but in short: how can I fetch all product images from my bucket?
Note: I am aware that I can get only one image per call, so how can I get all images one by one in my scenario?
//download code in my app.js
const { uploadFile, getFileStream } = require('./s3')
const express = require('express')
const app = express()
app.get('/images/:key', (req, res) => {
  console.log(req.params)
  const key = req.params.key
  const readStream = getFileStream(key)
  readStream.pipe(res)
})
//s3 file
const fs = require('fs')
const AWS = require('aws-sdk')
const s3 = new AWS.S3() // credentials/region come from the environment
const bucketName = process.env.AWS_BUCKET_NAME // or however you configure the bucket name

// uploads a file to s3
function uploadFile(file) {
  const fileStream = fs.createReadStream(file.path)
  const uploadParams = {
    Bucket: bucketName,
    Body: fileStream,
    Key: file.filename
  }
  return s3.upload(uploadParams).promise()
}
exports.uploadFile = uploadFile

// downloads a file from s3
function getFileStream(fileKey) {
  const downloadParams = {
    Key: fileKey,
    Bucket: bucketName
  }
  return s3.getObject(downloadParams).createReadStream()
}
exports.getFileStream = getFileStream
It appears that your code is sending image requests to your back-end, which retrieves the objects from Amazon S3 and then serves the images in response to the request.
A much better method would be to have the URLs in the HTML page point directly to the images stored in Amazon S3. This would be highly scalable and will reduce the load on your web server.
This would require the images to be public so that the user's web browser can retrieve the images. The easiest way to do this would be to add a Bucket Policy that grants GetObject access to all users.
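A sketch of what such a policy could look like, applied from Node.js with putBucketPolicy (the bucket name is a placeholder, and the bucket's Block Public Access settings must allow public policies):
// Sketch: grant s3:GetObject on all objects in the bucket to everyone.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const publicReadPolicy = {
  Version: '2012-10-17',
  Statement: [{
    Sid: 'PublicReadGetObject',
    Effect: 'Allow',
    Principal: '*',
    Action: 's3:GetObject',
    Resource: 'arn:aws:s3:::my-product-images/*', // placeholder bucket name
  }],
};

s3.putBucketPolicy({
  Bucket: 'my-product-images',
  Policy: JSON.stringify(publicReadPolicy),
}).promise().then(() => console.log('bucket policy applied'));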
Alternatively, if you do not wish to make the bucket public, you can instead generate Amazon S3 pre-signed URLs, which are time-limited URLs that provide temporary access to a private object. Your back-end can calculate the pre-signed URL with a couple of lines of code, and the user's web browser will then be able to retrieve private objects from S3 for display on the page.
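For example, a sketch of those couple of lines, adapted to the /images/:key route from the question (the expiry value is arbitrary):
// Sketch: return a time-limited pre-signed URL instead of streaming the object through Express.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

app.get('/images/:key', (req, res) => {
  const url = s3.getSignedUrl('getObject', {
    Bucket: bucketName, // same bucket the upload code writes to
    Key: req.params.key,
    Expires: 60 * 5, // URL valid for 5 minutes
  });
  res.json({ url }); // the React app can put this straight into an <img src>
});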
I did similar S3 image handling while building my blog's image upload functionality, but I did not use getFileStream() to upload my image.
Because nothing should be done until the image file is fully processed, I used fs.readFile(path, callback) instead to read the data.
My way generates Buffer data, but AWS S3 is smart enough to interpret this as an image. (I have only added a suffix to my filename; I don't know how to apply image headers...)
This is the relevant part of my code, for reference:
fs.readFile(imgPath, (err, data) => {
  if (err) { throw err }
  // Once the file is read, upload to AWS S3
  const objectParams = {
    Bucket: 'yuyuichiu-personal',
    Key: req.file.filename,
    Body: data
  }
  S3.putObject(objectParams, (err, data) => {
    // store image link and read image with link
  })
})
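For what it's worth, the "image headers" mentioned above usually come down to the ContentType parameter on putObject; a sketch of the same objectParams with it added (the multer mimetype field and the jpeg fallback are assumptions):
// Sketch: set ContentType so browsers treat the object as an image rather than a download.
const objectParams = {
  Bucket: 'yuyuichiu-personal',
  Key: req.file.filename,
  Body: data,
  ContentType: req.file.mimetype || 'image/jpeg' // mimetype comes from multer (assumption)
}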

Upload multipart/form-data to S3 from lambda (Nodejs)

I want to upload a multipart/form-data file to S3 using Node.js.
I have tried various approaches, but none of them are working. I was able to write content to S3 from Lambda, but when the file is downloaded from S3, it is corrupted.
Can someone provide me a working example or steps that could help me?
Thanking you in anticipation.
Please suggest an alternative if you think there is a better approach.
Following is my lambda code:
export const uploadFile = async event => {
  const parser = require("lambda-multipart-parser");
  const result = await parser.parse(event);
  const { content, filename, contentType } = result.files[0];
  const params = {
    Bucket: "name-of-the-bucket",
    Key: filename,
    Body: content,
    ContentDisposition: `attachment; filename="${filename}";`,
    ContentType: contentType,
    ACL: "public-read"
  };
  const res = await s3.upload(params).promise();
  return {
    statusCode: 200,
    body: JSON.stringify({
      docUrl: res.Location
    })
  };
}
If you want to upload a file through Lambda, one way is to open your AWS API Gateway console.
Go to
"API" -> {YourAPI} -> "Settings"
There you will find the "Binary Media Types" section.
Add the following media type:
multipart/form-data
Save your changes.
Then go to "Resources" -> "proxy method" (e.g. "ANY") -> "Method Request" -> "HTTP Request Headers" and add the following headers: "Content-Type", "Accept".
Finally, deploy your API.
For more info visit: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-configure-with-console.html
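If your API is defined in code rather than clicked together in the console, the same setting can be expressed in CDK; a rough sketch (the construct name is a placeholder, and apigateway refers to the CDK API Gateway module imported in your stack):
// Sketch: register multipart/form-data as a binary media type on the REST API.
const api = new apigateway.RestApi(this, 'upload-api', {
  restApiName: 'Upload Service',
  binaryMediaTypes: ['multipart/form-data'],
});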
There are 2 possible points of failure: Lambda receives corrupted data, or you corrupt the data while sending it to S3.
Sending multipart/form-data content to Lambda is not straightforward. You can see how to do that here.
After you have done this and you're sure your data is correct in Lambda, check whether you send it to S3 correctly (see the S3 docs and examples for that).
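One quick sanity check for the first point is to inspect the event body inside the handler before parsing; a sketch (assuming the Lambda proxy integration):
// Sketch: with API Gateway proxy integration, binary bodies arrive base64-encoded
// once the media type is registered as binary.
const rawBody = event.isBase64Encoded
  ? Buffer.from(event.body, 'base64')
  : Buffer.from(event.body || '', 'latin1');
console.log('first bytes:', rawBody.slice(0, 4).toString('hex')); // a JPEG starts with ffd8ff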

AWS S3 signed URLs with aws-sdk fails with "AuthorizationQueryParametersError"

I am trying to create a pre-signed URL for a private file test.png on S3.
My code:
var AWS = require('aws-sdk');
AWS.config.region = 'eu-central-1';
const s3 = new AWS.S3();
const key = 'folder/test.png';
const bucket = 'mybucket';
const expiresIn = 2000;
const params = {
  Bucket: bucket,
  Key: key,
  Expires: expiresIn,
};
console.log('params: ', params);
console.log('region: ', AWS.config.region);
var url = s3.getSignedUrl('getObject', params);
console.log('url sync: ', url);
s3.getSignedUrl('getObject', params, function (err, urlX) {
  console.log("url async: ", urlX);
});
which returns a URL in the console.
When I try to access it, it shows
<Error>
<Code>AuthorizationQueryParametersError</Code>
<Message>
Query-string authentication version 4 requires the X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, X-Amz-Date, X-Amz-SignedHeaders, and X-Amz-Expires parameters.
</Message>
<RequestId>97377E063D0B1D09</RequestId>
<HostId>
6GE7EdqUvCEJis+fPoWR0Ffp2kN9Mlql4gs+qB4uY3hA4qR2wYrImkZfv05xy4XVjsZnRDVN63s=
</HostId>
</Error>
I am totally stuck and would really appreciate some idea on how to solve it.
I tested your code. I only made modifications to the key and bucket, and it works. May I know the aws-sdk version and the Node.js version you are using? My test was executed on Node.js 8.1.2 and aws-sdk@2.77.0.
I was able to reproduce your error when I executed curl.
curl url (wrong) ->
<Error><Code>AuthorizationQueryParametersError</Code><Message>Query-string authentication version 4 requires the X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, X-Amz-Date, X-Amz-SignedHeaders, and X-Amz-Expires parameters.</Message>
curl "url" (worked)
If you curl without the double quotes, the ampersands in the URL are interpreted by the shell as background-process operators, so everything after the first & is dropped from the query string and the request is missing the required parameters.
Alternatively, you could try pasting the generated link in a browser.
Hope this helps.

S3 file upload stream using node js

I am trying to find a solution to stream files to Amazon S3 using a Node.js server, with the following requirements:
Don't store a temp file on the server or in memory. However, buffering up to some limit (not the complete file) can be used for uploading.
No restriction on the uploaded file size.
Don't freeze the server until the complete file is uploaded, because with a heavy file upload the waiting time of other requests would unexpectedly increase.
I don't want to use direct file upload from the browser, because the S3 credentials would need to be shared in that case. Another reason to upload the file from the Node.js server is that some authentication may also need to be applied before uploading the file.
I tried to achieve this using node-multiparty, but it was not working as expected. You can see my solution and issue at https://github.com/andrewrk/node-multiparty/issues/49. It works fine for small files but fails for a file of size 15MB.
Any solution or alternative?
You can now use streaming with the official Amazon SDK for Node.js; see the section "Uploading a File to an Amazon S3 Bucket" or their example on GitHub.
What's even more awesome, you finally can do so without knowing the file size in advance. Simply pass the stream as the Body:
var AWS = require('aws-sdk');
var fs = require('fs');
var zlib = require('zlib');

var body = fs.createReadStream('bigfile').pipe(zlib.createGzip());
var s3obj = new AWS.S3({params: {Bucket: 'myBucket', Key: 'myKey'}});
s3obj.upload({Body: body})
  .on('httpUploadProgress', function(evt) { console.log(evt); })
  .send(function(err, data) { console.log(err, data); });
For your information, the v3 SDK was published with a dedicated module to handle that use case: https://www.npmjs.com/package/@aws-sdk/lib-storage
Took me a while to find it.
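A rough sketch of how that module is used (bucket, key, region, and file name are placeholders):
// Sketch: streaming upload with the v3 SDK's @aws-sdk/lib-storage module.
const { S3Client } = require('@aws-sdk/client-s3');
const { Upload } = require('@aws-sdk/lib-storage');
const fs = require('fs');

const upload = new Upload({
  client: new S3Client({ region: 'eu-central-1' }), // placeholder region
  params: {
    Bucket: 'myBucket',
    Key: 'myKey',
    Body: fs.createReadStream('bigfile'), // any readable stream works
  },
});

upload.on('httpUploadProgress', (progress) => console.log(progress));
upload.done().then(() => console.log('upload complete'));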
Give https://www.npmjs.org/package/streaming-s3 a try.
I used it for uploading several big files in parallel (>500MB), and it worked very well.
It is very configurable and also allows you to track upload statistics.
You don't need to know the total size of the object, and nothing is written to disk.
If it helps anyone I was able to stream from the client to s3 successfully (without memory or disk storage):
https://gist.github.com/mattlockyer/532291b6194f6d9ca40cb82564db9d2a
The server endpoint assumes req is a stream object; I sent a File object from the client, which modern browsers can send as binary data, with the file info set in the headers.
const fileUploadStream = (req, res) => {
  // get "body" args from header
  const { id, fn } = JSON.parse(req.get('body'));
  const Key = id + '/' + fn; // upload to s3 folder "id" with filename === fn
  const params = {
    Key,
    Bucket: bucketName, // set somewhere
    Body: req, // req is a stream
  };
  s3.upload(params, (err, data) => {
    if (err) {
      res.send('Error Uploading Data: ' + JSON.stringify(err) + '\n' + JSON.stringify(err.stack));
    } else {
      res.send(Key);
    }
  });
};
Yes, putting the file info in the headers breaks convention, but if you look at the gist it's much cleaner than anything else I found using streaming libraries, multer, busboy, etc.
+1 for pragmatism, and thanks to @SalehenRahman for his help.
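For reference, a sketch of what the matching client-side call could look like, following the gist's "body" header convention (the endpoint is a placeholder):
// Sketch: send the File object itself as the request body and the metadata in a header,
// matching the server's req.get('body') / Body: req pattern above.
async function uploadToServer(file, id) {
  const res = await fetch('/upload', { // placeholder endpoint
    method: 'POST',
    headers: { body: JSON.stringify({ id, fn: file.name }) },
    body: file, // the browser streams the File as binary data
  });
  return res.text(); // the server responds with the S3 Key
}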
I'm using the s3-upload-stream module in a working project here.
There are also some good examples from @raynos in his http-framework repository.
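A minimal sketch of s3-upload-stream usage, in case it saves someone a lookup (bucket, key, and file name are placeholders):
// Sketch: pipe any readable stream into s3-upload-stream.
const AWS = require('aws-sdk');
const fs = require('fs');
const s3Stream = require('s3-upload-stream')(new AWS.S3());

const upload = s3Stream.upload({ Bucket: 'myBucket', Key: 'myKey' });
upload.on('error', (err) => console.error(err));
upload.on('uploaded', (details) => console.log(details));

fs.createReadStream('bigfile').pipe(upload);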
Alternatively, you can look at https://github.com/minio/minio-js. It has a minimal set of abstracted APIs implementing the most commonly used S3 calls.
Here is an example of streaming upload.
$ npm install minio
$ cat >> put-object.js << EOF
var Minio = require('minio')
var fs = require('fs')
// find out your s3 end point here:
// http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
var s3Client = new Minio({
  url: 'https://<your-s3-endpoint>',
  accessKey: 'YOUR-ACCESSKEYID',
  secretKey: 'YOUR-SECRETACCESSKEY'
})
var file = 'your_localfile.zip'
var fileStream = fs.createReadStream(file)
fs.stat(file, function(e, stat) {
  if (e) {
    return console.log(e)
  }
  s3Client.putObject('mybucket', 'hello/remote_file.zip', 'application/octet-stream', stat.size, fileStream, function(e) {
    return console.log(e) // should be null
  })
})
EOF
putObject() here is a fully managed single function call; for file sizes over 5MB it automatically does multipart internally. You can also resume a failed upload, and it will start from where it left off by verifying previously uploaded parts.
Additionally, this library is isomorphic and can be used in browsers as well.
