AWS S3 Uploaded Image is Partially Loaded - node.js

I am trying to upload a locally stored image from my Node.js project's file structure to my AWS S3 bucket using the aws-sdk package. The upload succeeds, but the uploaded image is only partially rendered: when I view the URL AWS creates for the image, only the top 1% (12 KB) of it is visible. I've logged the file to the console and confirmed it is what I expected. But for some reason, when I upload it to S3, it's a truncated / cut-off version of the image.
All of the tutorials seem pretty straightforward, but nobody seems to mention this problem. I've been grappling with it for hours, and nothing seems to work. I've tried everything I can find online, like:
- Using fs.createReadStream(fileName) instead of just the file buffer, which didn't work (from "Image file cut off when uploading to AWS S3 bucket via Django and Boto3")
- Converting the buffer to a base64 string and sending it that way
- Adding the ContentLength param
- Setting the ContentType to the exact type of the image
Here's the relevant code:
const aws = require("aws-sdk")
const fs = require("fs") // fs was missing from the original snippet
const { infoLogger } = require("./logger")

async function uploadCoverImage() {
  try {
    aws.config.update({
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      region: "us-east-2",
    })
    const s3 = new aws.S3()
    fs.readFile("cover.jpg", (error, image) => {
      if (error) throw error
      const params = {
        Bucket: process.env.BUCKET_NAME,
        Key: "cover.jpg",
        Body: image,
        ACL: "public-read",
        ContentType: "image/jpeg", // "image/jpeg" is the registered MIME type, not "image/jpg"
      }
      s3.upload(params, (error, res) => {
        if (error) throw error
        console.log(`${JSON.stringify(res)}`)
      })
    })
  } catch (error) {
    // note: errors thrown inside the callbacks above are not caught by this try/catch
    infoLogger.error(`Error reading cover file: ${JSON.stringify(error)}`)
  }
}

module.exports = uploadCoverImage

I found out that the upload was running before the image had finished downloading (it was being streamed to disk in a different part of my codebase), which is why it was only partially loaded in S3. I never noticed because by the time I looked at my local file system, the image was always fully written.
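A minimal sketch of that fix, assuming the cover image is fetched elsewhere with Node's https module and piped to disk (the coverUrl name is illustrative): call uploadCoverImage() only in the write stream's "finish" handler, so the file is fully flushed before the upload reads it.

const fs = require("fs")
const https = require("https")

https.get(coverUrl, (response) => {
  const writeStream = fs.createWriteStream("cover.jpg")
  response.pipe(writeStream)
  writeStream.on("finish", () => {
    // only now is cover.jpg fully on disk; uploading any earlier sends a truncated body
    uploadCoverImage()
  })
})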

Related

Uploading image to Amazon S3 bucket

I am uploading a single image with the Koa framework for Node.js (upload.single('file') is Multer middleware). Everything works on localhost but not in the prod environment. Here is the path for each environment:
Prod env: Client -> API Gateway -> NLB -> service -> S3 bucket.
Localhost env: Client -> service -> S3 bucket.
The problem is that the image's size increases by the time it reaches the service, before it is uploaded to the S3 bucket, and the resulting image is distorted (237 KB increases to 418 KB). On localhost everything works fine. I used Postman as the client to upload the image.
async create(ctx: Context, values: any): Promise<any> {
  // The image size has already increased by this point, before sending to the S3 bucket
  console.log('REQUEST----', values.file)
  const params = {
    Bucket: env.Bucket,
    Key: `images/${values.file.originalname}`,
    Body: values.file.buffer,
    ACL: 'public-read',
    ContentType: values.file.mimetype,
    ContentLength: values.file.size,
  }
  try {
    const data = await s3.putObject(params).promise();
    console.log('Successfully uploaded file.', data);
    return true;
  } catch (e) {
    ctx.throw(HttpStatusCodes.InternalServerError, {
      message: `Something went wrong when updating advertisement` + e,
    });
  }
}
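A hedged debugging sketch (not from the original post): before building params, compare what Multer reports against the actual bytes received, to pin down where the body changes between the client and the service. values.file is the Multer file object from upload.single('file'):

// If the buffer is larger than the client-side file, the body was re-encoded in transit.
console.log('size reported by multer:', values.file.size)
console.log('actual buffer length  :', values.file.buffer.length)
// JPEG files start with the bytes FF D8; false here means the bytes were transformed
// (for example base64-encoded) somewhere along the prod path.
console.log('looks like a JPEG:', values.file.buffer[0] === 0xff && values.file.buffer[1] === 0xd8)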

node js get image url

I want to:
1. choose an image from my filesystem and upload it to the server/local storage
2. get its URL back using a Node.js service
I managed to do step 1, and now I want to get the image URL back instead of the success message in res.end.
Here is my code:
app.post("/api/Upload", function(req, res) {
upload(req, res, function(err) {
if (err) {
return res.end("Something went wrong!");
}
return res.end("File uploaded sucessfully!.");
});
});
I'm using Multer to upload the image.
You can do something like this using AWS S3; it returns the URL of the uploaded image:
const AWS = require('aws-sdk')

AWS.config.update({
  accessKeyId: <AWS_ACCESS_KEY>,
  secretAccessKey: <AWS_SECRET>
})

// Resolves to the public URL of the uploaded image
const uploadImage = file => {
  const base64Data = file.data_uri.replace(/^data:image\/\w+;base64,/, '')
  const buf = Buffer.from(base64Data, 'base64') // Buffer.from() replaces the deprecated new Buffer()
  const s3 = new AWS.S3()
  return s3.upload({
    Bucket: <YOUR_BUCKET>,
    Key: <NAME_TO_SAVE>,
    Body: buf,
    ACL: 'public-read'
  }).promise()
    .then(data => data.Location) // data.Location is the URL
}
You can also check this Express generator, which has a route for uploading images to AWS S3: https://www.npmjs.com/package/speedbe
I am assuming that you are saving the image on the server file system and not in a storage solution like AWS S3 or Google Cloud Storage, where you get the URL back after upload.
Since you are storing it on the filesystem, you can rename the file with a unique identifier, like a UUID.
Then you can add a GET route that takes that ID as a query or path parameter, reads the file with that ID as its name, and sends it back.
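A minimal sketch of that GET route, assuming Express and uploads saved under an uploads/ directory with the UUID as the file name (the route path and directory are illustrative):

const path = require('path')

app.get('/api/images/:id', function(req, res) {
  // :id is the UUID the file was renamed to at upload time
  res.sendFile(path.join(__dirname, 'uploads', req.params.id));
});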

Storing images from node to aws s3 to be deployed in heroku

Hi, I would like to store images in Amazon S3. I am making a React application with Node.js and Express at the back end. I have code which saves the images locally, in an images folder, as desired. I am using the jimp library to convert the images to black and white. What I want is to store these black-and-white images directly in AWS S3 instead of saving them to the local HDD. I need to do this because in the end the app has to be deployed to Heroku, and Heroku cannot read images from the local HDD.
Here is the code with which I was able to store images in a particular directory as required:
const input = req.body.input;
google.list({
  keyword: input,
  num: 15,
  detail: true,
})
  .then(function (res) {
    res.map((data, index) => {
      const url = data.url;
      const extension = url.split('.')[url.split('.').length - 1]
      const foldername = input
      Jimp.read(url, function (err, image) {
        image.resize(250, 250)
          .greyscale()
          .write(path.join(__dirname, "../../public/images/" + foldername + "/" + foldername + index + "." + extension));
      });
    });
  })
  .catch(function (err) {
    res.send('There was some error')
  })
I need to store the images at the same path, i.e., awsbucketname/foldername/foldername.jpg. I tried converting the image to a buffer, but I still don't understand how to proceed with it. Someone please help me :(
(Disclaimer: I have no practical experience with Jimp!)
It seems like you are on the right track with writing the image to a buffer instead of a local file. Once you have initialized the AWS SDK and instantiated the S3 interface, it should be easy to pass the buffer to the upload function. Something along the lines of:
const s3 = new AWS.S3({ params: { Bucket: 'yourBucketName' } });
// ...
Jimp.read(url, (err, image) => {
  if (err) throw err;
  const bucketPath = `${foldername}/${index}.${extension}`; // S3 keys should not start with "/"
  image.resize(250, 250)
    .greyscale()
    .getBufferAsync(Jimp.AUTO) // getBufferAsync() returns a promise; Jimp.AUTO keeps the original MIME type
    .then(buffer =>
      s3.upload({ Key: bucketPath, Body: buffer }).promise() // .promise() turns the AWS.Request into a promise
    )
    .then(() => console.log('yay!'));
});
This is just a sketch of course, missing error handling etc.

Resizing the image in S3 bucket from lambda trigger using nodejs

I'm new to Node.js and AWS. Can anyone point out what's wrong with the following code to resize the images in an S3 bucket?
The program is as follows:
'use strict';
const AWS = require('aws-sdk');
const S3 = new AWS.S3({
  accessKeyId: "xxxxxxxxxxxx",
  secretAccessKey: "yyyyyyyyyyy",
  region: "us-east-1",
  signatureVersion: 'v4',
});
const Sharp = require('sharp');
const BUCKET = "patientimg";
const URL = "https://s3.ap-south-1.amazonaws.com";

exports.handler = function(event, context, callback) {
  const key = event.queryStringParameters.key;
  const match = key.match(/(\d+)x(\d+)\/(.*)/);
  const width = parseInt(match[1], 10);
  const height = parseInt(match[2], 10);
  const originalKey = match[3];
  S3.getObject({Bucket: BUCKET, Key: originalKey}).promise()
    .then(data => Sharp(data.Body)
      .resize(width, height)
      .toFormat('png')
      .toBuffer()
    )
    .then(buffer => S3.putObject({
      Body: buffer,
      Bucket: BUCKET,
      ContentType: "image/png",
      Key: key,
    }).promise()
    )
    .then(() => callback(null, {
      statusCode: '301',
      headers: {'location': `${URL}/${key}`}, // template literal needs backticks, not double quotes
      body: "",
    })
    )
    .catch(err => callback(err))
}
This is my exact code. The output from the Lambda when testing with an "S3 put" request is:
{
  "errorMessage": "RequestId: edaddaf7-4c5e-11e7-bed8-13f72aaa5d38 Process exited before completing request"
}
Thanks in advance
Resizing images with a Lambda is a classic example that the AWS team has explained well; follow their instructions rather than something else:
https://aws.amazon.com/blogs/compute/resize-images-on-the-fly-with-amazon-s3-aws-lambda-and-amazon-api-gateway/
The correct resizing code is at http://github.com/awslabs/serverless-image-resizing; whatever you found is probably wrong.
Basically it works like this:
1. Upload this code as your Lambda.
2. Go to the Triggers tab of your Lambda and copy the URL.
3. Go to your S3 bucket and set up a redirection rule: on 404, redirect to the Lambda URL. The image will be resized automatically when it is requested (see the sketch after this list).
All of these steps are documented in detail in the AWS blog post above. The benefit of this approach is that the resized image is not created until it is actually needed, which saves resources.
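As a hedged sketch of step 3, the redirection rule can also be set programmatically with the aws-sdk; the bucket name and API Gateway host below are placeholders, and the bucket must have static website hosting enabled:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.putBucketWebsite({
  Bucket: 'your-bucket', // placeholder
  WebsiteConfiguration: {
    IndexDocument: { Suffix: 'index.html' },
    RoutingRules: [{
      // On a 404 (object not yet resized), redirect the request to the resize Lambda's URL
      Condition: { HttpErrorCodeReturnedEquals: '404' },
      Redirect: {
        Protocol: 'https',
        HostName: 'your-api-id.execute-api.us-east-1.amazonaws.com', // placeholder
        ReplaceKeyPrefixWith: 'prod/resize?key=',
        HttpRedirectCode: '307'
      }
    }]
  }
}, (err, data) => {
  if (err) throw err;
  console.log('Redirection rule set');
});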
You can use this AWS Lambda image resizer.
It's built with Node.js and has options to configure your own settings. You just need to follow the steps here.

File uploading in Amazon S3 with Node.js

I am using aws-sdk to upload files to Amazon S3. It works fine and uploads files, but my problem is that the file name changes after the upload. For example, if I upload sample.jpg, it is renamed to something like b4c743c8a2332525.jpg. Here is my code:
AWS.config.update({
  accessKeyId: key,
  secretAccessKey: secret
});

var fileStream = fs.createReadStream(path);
fileStream.on('error', function (err) {
  if (err) { throw err; }
});
fileStream.on('open', function () {
  var s3 = new AWS.S3();
  s3.putObject({
    Bucket: bucket,
    Key: directory + file,
    Body: fileStream
  }, function (err) {
    if (err) { return res.send(err); }
    fs.unlinkSync(path); // delete the local copy only after a successful upload
  });
});
Is it normal for the file name to change after uploading to S3, or is there an option to keep the original file name? Thank you.
Neither S3 nor the AWS SDK picks arbitrary file names for the things you upload; the names are set by your own code.
Check the value of directory + file when you set it as the S3 object key. You may be uploading sample.jpg from your browser (so the file is called sample.jpg locally on your disk), but the temporary file name that Node.js uses to identify the file on its disk may be a hash like b4c743c8a2332525.jpg.
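A hedged sketch of the fix, assuming Multer handles the upload (as in the earlier questions): the file object keeps the client-side name in originalname, while path points at the hashed temporary file, so use the former as the key:

var s3 = new AWS.S3();
// req.file.path is the temp file on disk (hashed name, e.g. b4c743c8a2332525),
// req.file.originalname is the name the client sent (e.g. sample.jpg)
s3.putObject({
  Bucket: bucket,
  Key: directory + req.file.originalname, // keep the original file name
  Body: fs.createReadStream(req.file.path)
}, function (err) {
  if (err) { return res.send(err); }
  fs.unlinkSync(req.file.path); // delete the temp copy after a successful upload
  res.send('Uploaded as ' + req.file.originalname);
});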
