I'm new to Node.js and AWS. Can anyone point out what's wrong with the following code to resize images in an S3 bucket?
The program is as follows:
'use strict';
const AWS = require('aws-sdk');
const S3 = new AWS.S3({
accessKeyId: "xxxxxxxxxxxx",
secretAccessKey: "yyyyyyyyyyy",
region: "us-east-1",
signatureVersion: 'v4',
});
const Sharp = require('sharp');
const BUCKET = "patientimg";
const URL = "https://s3.ap-south-1.amazonaws.com";
exports.handler = function(event, context, callback) {
const key = event.queryStringParameters.key;
const match = key.match(/(\d+)x(\d+)\/(.*)/);
const width = parseInt(match[1], 10);
const height = parseInt(match[2], 10);
const originalKey = match[3];
S3.getObject({Bucket: BUCKET, Key: originalKey}).promise()
.then(data => Sharp(data.Body)
.resize(width, height)
.toFormat('png')
.toBuffer()
)
.then(buffer => S3.putObject({
Body: buffer,
Bucket: BUCKET,
ContentType: "image/png",
Key: key,
}).promise()
)
.then(() => callback(null, {
statusCode: '301',
headers: {'location': "${URL}/${key}"},
body: "",
})
)
.catch(err => callback(err))
}
This is the exact code I'm using. The output from Lambda when testing with an "S3 put" request is:
{
"errorMessage": "RequestId: edaddaf7-4c5e-11e7-bed8-13f72aaa5d38 Process exited before completing request"
}
Thanks in advance
Resizing images with a Lambda function is a classic example that the AWS team has explained well. Follow their instructions rather than something else:
https://aws.amazon.com/blogs/compute/resize-images-on-the-fly-with-amazon-s3-aws-lambda-and-amazon-api-gateway/
The correct resizing code is at http://github.com/awslabs/serverless-image-resizing; whatever you found elsewhere is probably wrong.
Basically it works like this:
Upload this code as your lambda.
Go to the triggers tab of your lambda and copy the URL
Go to your s3 bucket and set up a redirection rule: on 404, redirect to the lambda URL. The image will be automatically resized when requested.
All of these steps are well documented in detail at the AWS blog above. The benefit of their approach is that the resized image is not created until it is actually needed, which saves on resources.
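For reference, the redirection rule from step 3 can also be set with the SDK instead of the console; here is a minimal sketch (aws-sdk v2, with the API Gateway hostname and key prefix as placeholders, not values from the blog post):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// On a 404 (object not yet resized), redirect the request to the resize endpoint.
s3.putBucketWebsite({
  Bucket: 'patientimg', // the bucket from the question
  WebsiteConfiguration: {
    IndexDocument: { Suffix: 'index.html' },
    RoutingRules: [{
      Condition: { HttpErrorCodeReturnedEquals: '404' },
      Redirect: {
        Protocol: 'https',
        HostName: 'example.execute-api.us-east-1.amazonaws.com', // placeholder API Gateway host
        ReplaceKeyPrefixWith: 'prod/resize?key=',                 // placeholder stage/path
        HttpRedirectCode: '307',
      },
    }],
  },
}).promise()
  .then(() => console.log('Routing rule set'))
  .catch(console.error);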
You can use this AWS Lambda image resizer.
It's built with Node.js and has options for configuring your own settings; you just need to follow the steps described there.
Related
I'm trying to upload a file to S3, but the file is large and we need to upload very frequently. So I was looking for an option to upload a file to S3 using Node.js without reading the whole content of the file.
The code below works fine, but it reads the whole file each time I want to upload:
const fs = require("fs");
const aws = require("aws-sdk");
aws.config.update({
secretAccessKey: process.env.ACCESS_SECRET,
accessKeyId: process.env.ACCESS_KEY,
region: process.env.REGION,
});
const BUCKET = process.env.BUCKET;
const s3 = new aws.S3();
const fileName = "logs.txt";
const uploadFile = () => {
fs.readFile(fileName, (err, data) => {
if (err) throw err;
const params = {
Bucket: BUCKET, // pass your bucket name
Key: fileName, // file will be saved as testBucket/contacts.csv
Body: JSON.stringify(data, null, 2),
};
s3.upload(params, function (s3Err, data) {
if (s3Err) throw s3Err;
console.log(`File uploaded successfully at ${data.Location}`);
});
});
};
uploadFile();
You can make use of streams.
First create a read stream for the file you want to upload; you can then pass it to S3 as the Body.
import { createReadStream } from 'fs';
const inputStream = createReadStream('sample.txt');
s3
.upload({ Key: fileName, Body: inputStream, Bucket: BUCKET })
.promise()
.then(console.log, console.error)
You can use multipart upload:
AWS article:
https://aws.amazon.com/blogs/aws/amazon-s3-multipart-upload/
A Stack Overflow question about the same for Python: Can I stream a file upload to S3 without a content-length header?
JS API reference manual: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3/ManagedUpload.html
The basic example is:
var upload = new AWS.S3.ManagedUpload({
params: {Bucket: 'bucket', Key: 'key', Body: stream}
});
So you have to provide a stream as the input:
const readableStream = fs.createReadStream(filePath);
The Node.js fs API is documented here: https://nodejs.org/api/fs.html#fscreatereadstreampath-options
Of course, you can process the data while reading it and then pass it on to the S3 API; you just have to implement the Stream API, as in the sketch below.
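Here is a minimal sketch of that idea (aws-sdk v2; the file name, bucket variable, and the upper-casing step are placeholders): the data flows through a Transform stream, and the resulting stream is handed to ManagedUpload as the Body.
const fs = require('fs');
const { Transform } = require('stream');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

// Example processing step: upper-case each chunk as it streams through.
const upperCase = new Transform({
  transform(chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase());
  },
});

const body = fs.createReadStream('logs.txt').pipe(upperCase);

const upload = new AWS.S3.ManagedUpload({
  service: s3,
  params: { Bucket: process.env.BUCKET, Key: 'processed/logs.txt', Body: body },
});

upload.promise()
  .then(data => console.log(`File uploaded successfully at ${data.Location}`))
  .catch(console.error);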
I am trying to upload a locally stored image from my Node.js project's file structure to my AWS S3 bucket using the aws-sdk package. The upload succeeds, however the uploaded image is only a partially rendered version of the image: only the top 1% (12 KB) of it is visible when I view the URL created by AWS for the image. I've logged the file to the console and made sure it was what I thought it was, and it is. But for some reason when I upload it to S3, it's a truncated / cut-off version of the image.
All of the tutorials seem pretty straightforward, but nobody seems to mention this problem. I've been grappling with it for hours and nothing seems to work. I've tried everything I can find online, such as:
Using fs.createReadStream(fileName) instead of just the file buffer but that didn't work (from Image file cut off when uploading to AWS S3 bucket via Django and Boto3)
Converting the buffer to base64 string and sending it that way
Adding the ContentLength param
Adding the ContentType to be the exact type of the image
Here's the relevant code:
const aws = require("aws-sdk")
const fs = require("fs")
const { infoLogger } = require("./logger")
async function uploadCoverImage() {
try {
aws.config.update({
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
region: "us-east-2",
})
const s3 = new aws.S3()
fs.readFile("cover.jpg", (error, image) => {
if (error) throw error
const params = {
Bucket: process.env.BUCKET_NAME,
Key: "cover.jpg",
Body: image,
ACL: "public-read",
ContentType: "image/jpg",
}
s3.upload(params, (error, res) => {
if (error) throw error
console.log(`${JSON.stringify(res)}`)
})
})
} catch (error) {
infoLogger.error(`Error reading cover file: ${JSON.stringify(error)}`)
}
}
module.exports = uploadCoverImage
I found out that it was uploading before the image had finished downloading via fs.createReadStream() in a different part of my codebase, which is why it was only partially loaded in S3. I never noticed because I only ever saw the fully loaded image in my local file system.
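For anyone hitting the same thing, a minimal sketch of the ordering fix (the URL, file path, and bucket/key are placeholders): wait for the download's write stream to emit 'finish' before reading the file and uploading it.
const fs = require('fs');
const https = require('https');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

function downloadThenUpload(url, localPath, bucket, key) {
  return new Promise((resolve, reject) => {
    const file = fs.createWriteStream(localPath);
    https.get(url, res => res.pipe(file));
    file.on('error', reject);
    file.on('finish', () => {
      // Only now is the file complete on disk, so it is safe to upload.
      s3.upload({ Bucket: bucket, Key: key, Body: fs.createReadStream(localPath) })
        .promise()
        .then(resolve, reject);
    });
  });
}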
I am trying to upload an image through a React app to an S3 bucket, then receive back the URL and show the image on the screen.
I am able to upload the image (sort of) and get the URL back from the S3 server, but when I download it I am unable to open it: the format is unsupported, and I can't use the img tag to show it on the webpage. I guess it has something to do with the conversion to base64, but I can't figure out why it is not working.
The frontend (React) is:
const uploadImageToBucket = async (image) => {
console.log("fff",image)
let image_location
try {
const response = axios.post("http://localhost:5000/user/blogmanage/uploadimage",image)
image_location = response.then((response)=>response.data.body);
console.log("img loc", image_location)
return image_location;
} catch (error) {
}
}
The backend (Node.js) is:
router.post("/blogmanage/uploadimage", async (req,res)=>{
const s3 = new AWS.S3({
accessKeyId: process.env["AWS_ACCESS_KEY_ID"],
secretAccessKey: process.env["AWS_SECRET_KEY"],
region: process.env['AWS_REGION']
});
const BUCKET_NAME = "mrandmrseatmedia";
var base64data = new Buffer.from( 'binary',req.body);
const params = {
Bucket: BUCKET_NAME,
Key: "test/test2.jpg",
Body: base64data
}
s3.upload(params, function (err,data){
if (err){
console.log(err)
res.status(404).json({msg:err});
}
else{
const image_location = `${data.Location}`;
console.log(`File uploaded successfully. ${data.Location}`);
res.status(200).json({body:image_location});
}
})
});
Thanks!
After a lot of testing, retesting, and rewriting, using this repo as an example:
https://github.com/Jerga99/bwm-ng/blob/master/server/services/image-upload.js
it works.
The use of base64 is wrong in this case; it corrupts the file in some way. The multer library fixes it.
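For reference, a minimal sketch of that multer/multer-s3 setup (patterned on the linked repo; the field name 'image' and the key pattern are placeholders):
const express = require('express');
const aws = require('aws-sdk');
const multer = require('multer');
const multerS3 = require('multer-s3');

const router = express.Router();
const s3 = new aws.S3();

const upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'mrandmrseatmedia',
    acl: 'public-read',
    contentType: multerS3.AUTO_CONTENT_TYPE, // sets the correct Content-Type for the image
    key: (req, file, cb) => cb(null, `test/${Date.now()}-${file.originalname}`),
  }),
});

// multer-s3 streams the multipart file to S3 before the handler runs; the URL is on req.file.
router.post('/blogmanage/uploadimage', upload.single('image'), (req, res) => {
  res.status(200).json({ body: req.file.location });
});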
Good evening.
I have this task: I have to upload an image to the S3 bucket using Node.js and generate a thumbnail on the fly, not by using a Lambda trigger. Everything should be done from my local machine's terminal or a local server (tested with Postman). I tried this code:
const AWS = require('aws-sdk');
const fs = require('fs');
const ACESS_ID = 'A**********KV';
const SECRET_ID = 'G***********0';
const BUCKET_NAME = 'node-image-bucket';
// Initializing s3 interface
const s3 = new AWS.S3({
accessKeyId: ACESS_ID,
secretAccessKey: SECRET_ID,
});
// File reading function to S3
const uploadFile = (fileName) => {
// Read content from the file
const fileContent = fs.readFileSync(fileName);
// Setting up S3 upload parameters
const params = {
Bucket: BUCKET_NAME,
Key: 'scene2.jpg',
Body: fileContent
};
// Uploading files to the bucket
s3.upload(params, function(err, data){
if(err){
throw err;
}
console.log(data);
console.log(`File uploaded Successfully. ${data.Location}`);
});
};
uploadFile('./images/bg-hd.jpg');
The above code works fine with a single image, but the problem is that every time I upload a file to the S3 bucket I need to change the Key string in the S3 params.
I want to upload multiple images at once, use a buffer for performance, and have thumbnails created automatically in a different folder of the same bucket.
Could anyone help me? Any help is appreciated!
You cannot upload multiple files with one S3 operation, but you can use the sharp module (https://www.npmjs.com/package/sharp) to resize your image before calling the S3 API.
import * as sharp from 'sharp';
async function resize(buffer , width, height) {
return sharp(buffer).resize(width, height).toBuffer();
}
const thumbnailWidthSize = 200;
const thumbnailWidthHeight = 200;
const thumbnailImage = await resize(fileContent, thumbnailWidthSize, thumbnailWidthHeight)
You can then reuse your current upload function, run it once per resized image with a different key, and wrap those calls in Promise.all so the whole operation fails if any upload fails.
await Promise.all([
s3upload(image, imageKey),
s3upload(thumbnailImage, thumbnailImageKey)
])
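Here s3upload is assumed to be a promisified version of your existing upload, along these lines (a sketch reusing the s3 client and BUCKET_NAME from the question):
function s3upload(body, key) {
  // Resolves with the S3 response (including Location), rejects on failure.
  return s3.upload({
    Bucket: BUCKET_NAME,
    Key: key,
    Body: body,
  }).promise();
}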
So, there are two parts to your question -
Converting the image to a thumbnail on the fly while uploading to the S3 bucket -
You can use the thumbd npm module and create a thumbd server.
Thumbd is an image thumbnailing server built on top of Node.js, SQS, S3, and ImageMagick.
Prerequisites for the thumbd server -
Thumbd requires the following environment variables to be set:
AWS_KEY the key for your AWS account (the IAM user must have access to the appropriate SQS and S3 resources).
AWS_SECRET the AWS secret key.
BUCKET the bucket to download the original images from. The thumbnails will also be placed in this bucket
AWS_REGION the AWS Region of the bucket. Defaults to: us-east-1.
CONVERT_COMMAND the ImageMagick convert command. Defaults to convert.
REQUEST_TIMEOUT how long to wait in milliseconds before aborting a remote request. Defaults to 15000.
S3_ACL the acl to set on the uploaded images. Must be one of private, or public-read. Defaults to private.
S3_STORAGE_CLASS the storage class for the uploaded images. Must be either STANDARD or REDUCED_REDUNDANCY. Defaults to STANDARD.
SQS_QUEUE the queue name to listen for image thumbnailing.
When running locally, I set these environment variables in a .env file and execute thumbd using pm2/forever/foreman.
Setup -
apt-get install imagemagick
npm install thumbd -g
thumbd install
thumbd start // Run thumbd as a server
After the thumbd server is up and running, refer to the code below to convert an image to a thumbnail while uploading it to the S3 bucket.
var aws = require('aws-sdk');
var url = require("url");
var awsS3Config = {
accessKeyId: ACESS_ID,
secretAccessKey: SECRET_ID,
region: 'us-west-2'
}
var BUCKET_NAME = 'node-image-bucket';
var sourceImageDirectory = "/tmp/"
var imageUploadDir = "/thumbnails/"
var imageName = 'image.jpg'
var uploadImageName = 'image.jpg'
aws.config.update(awsS3Config);
var s3 = new aws.S3();
var Client = require('thumbd').Client,
client = new Client({
awsKey: awsS3Config.accessKeyId,
awsSecret: awsS3Config.secretAccessKey,
s3Bucket: BUCKET_NAME,
sqsQueue: 'ThumbnailCreator',
awsRegion: awsS3Config.region,
s3Acl: 'public-read'
});
export function uploadAndResize(sourceImageDirectory, imageName, imageUploadDir, uploadImageName) {
return new Promise((resolve, reject)=>{
client.upload(sourceImageDirectory + imageName, imageUploadDir + uploadImageName, function(err) {
if (err) {
reject(err);
} else {
client.thumbnail(imageUploadDir + uploadImageName, [{
"suffix": "medium",
"width": 360,
"height": 360,
"background": "white",
"strategy": "%(command)s %(localPaths[0])s -resize %(width)sX%(height)s^ -gravity north -extent %(width)sX%(height)s %(convertedPath)s"
}, {
"suffix": "thumb",
"width": 100,
"height": 100,
"background": "white",
"strategy": "%(command)s %(localPaths[0])s -resize %(width)sX%(height)s^ -gravity north -extent %(width)sX%(height)s %(convertedPath)s"
}], {
//notify: 'https://callback.example.com'
});
var response = {};
//https://s3-ap-us-west-2.amazonaws.com/node-image-bucket/1/5825c7d0-127f-4dac-b802-ca24efba2bcd-original.jpeg
response.url = 'https://s3-' + awsS3Config.region + '.amazonaws.com/' + BUCKET_NAME + '/' + imageUploadDir;
response.uploadImageName = uploadImageName;
response.sourceImageName = imageName;
resolve(response);
}
})
})
}
Second, you wanted to upload multiple images without changing the key string -
Loop over the method below for all the files in a local path and you are good to go (see the sketch after the snippet).
export function uploadFiles(localPath, localFileName, fileUploadDir, uploadFileName) {
return new Promise((resolve, reject) => {
fs.readFile(localPath + localFileName, function (err, file) {
if (err) {
reject(err);
}
var params = {
ACL: 'public-read',
Bucket: BUCKET_NAME,
Key: uploadFileName,
Body: file
};
s3.upload(params, function (err, data) {
fs.unlink(localPath + localFileName, function (err) {
if (err) {
reject(err);
} else {
resolve(data)
}
});
});
});
})
}
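A minimal sketch of that loop (assuming the uploadFiles function above; localPath is expected to end with a trailing slash, and the key prefix is a placeholder):
const fs = require('fs');

async function uploadDirectory(localPath, fileUploadDir) {
  const fileNames = fs.readdirSync(localPath);
  // Upload every file in the directory, reusing its own name in the S3 key.
  return Promise.all(
    fileNames.map(name =>
      uploadFiles(localPath, name, fileUploadDir, fileUploadDir + name)
    )
  );
}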
I'm trying to upload files from a MERN application I'm working on; I'm almost done with the Node.js back-end part.
The application will allow users to upload images (jpg, jpeg, png, gif, etc.) to an Amazon AWS S3 bucket that I created.
Well, let's put it this way: I created a helper:
const aws = require('aws-sdk');
const fs = require('fs');
// Enter copied or downloaded access ID and secret key here
const ID = process.env.AWS_ACCESS_KEY_ID;
const SECRET = process.env.AWS_SECRET_ACCESS_KEY;
// The name of the bucket that you have created
const BUCKET_NAME = process.env.AWS_BUCKET_NAME;
const s3 = new aws.S3({
accessKeyId: ID,
secretAccessKey: SECRET
});
const uploadFile = async images => {
// Read content from the file
const fileContent = fs.readFileSync(images);
// Setting up S3 upload parameters
const params = {
Bucket: BUCKET_NAME,
// Key: 'cat.jpg', // File name you want to save as in S3
Body: fileContent
};
// Uploading files to the bucket
s3.upload(params, function(err, data) {
if (err) {
throw err;
}
console.log(`File uploaded successfully. ${data.Location}`);
});
};
module.exports = uploadFile;
That helper takes three of my environment variables: the name of the bucket, the key ID, and the secret key.
When adding files from the form (which will eventually be added in the front end), the user will be able to send more than one file.
Right now my current post route looks exactly like this:
req.body.user = req.user.id;
req.body.images = req.body.images.split(',').map(image => image.trim());
const post = await Post.create(req.body);
res.status(201).json({ success: true, data: post });
That right there works great, but it takes req.body.images as a string with each image separated by a comma. What would the right approach be to upload (to AWS S3) the many files selected from the Windows directory pop-up? I tried the following, but it did not work :/
// Add user to req,body
req.body.user = req.user.id;
uploadFile(req.body.images);
const post = await Post.create(req.body);
res.status(201).json({ success: true, data: post });
Thanks, and hopefully you guys can help me out with this one. Right now I'm testing it with Postman, but later on the files will be sent via a form.
Well, you could just call uploadFile once for each file:
try{
const promises= []
for(const img of images) {
promises.push(uploadFile(img))
}
await Promise.all(promises)
//rest of logic
}catch(err){ /* handle err */ }
On a side note, you should wrap s3.upload in a promise:
const AWS = require('aws-sdk')
const s3 = new AWS.S3({
accessKeyId: process.env.AWS_ACCESS_KEY,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
})
module.exports = ({ params }) => {
return new Promise((resolve, reject) => {
s3.upload(params, function (s3Err, data) {
if (s3Err) return reject(s3Err)
console.log(`File uploaded successfully at ${data.Location}`)
return resolve(data)
})
})
}
Bonus: if you wish to avoid having your backend handle uploads, you can use S3 pre-signed URLs and let the client browser upload directly, saving your server resources.
One more thing: your Post object should only contain the URLs of the media, not the media itself.
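If you go the pre-signed URL route, a minimal sketch looks like this (aws-sdk v2; the bucket env var and expiry are placeholders):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Backend: hand the browser a short-lived URL it can PUT the file to directly.
function getUploadUrl(key, contentType) {
  return s3.getSignedUrlPromise('putObject', {
    Bucket: process.env.AWS_BUCKET_NAME,
    Key: key,
    ContentType: contentType,
    Expires: 60, // seconds the URL stays valid
  });
}

// The Post record then only needs to store the resulting object URL, not the media itself.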
// Setting up S3 upload parameters
const params = {
Bucket: bucket, // bucket name
Key: fileName, // File name you want to save as in S3
Body: Buffer.from(imageStr, 'binary'), //image must be in buffer
ACL: 'public-read', // allow file to be read by anyone
ContentType: 'image/png', // image header for browser to be able to render image
CacheControl: 'max-age=31536000, public' // caching header for browser
};
// Uploading files to the bucket
try {
const result = await s3.upload(params).promise();
return result.Location;
} catch (err) {
console.log('upload error', err);
throw err;
}