I'm having issues getting the full image back from Amazon S3 after sending a base64 string (about 2.43 MB when converted to an image).
If I compress this image via https://compressnow.com/ and then upload it, this works fine and I get the full image.
Is it possible for me to compress the base64 string before sending it to Amazon S3?
Here is the logic to upload to Amazon S3:
await bucket
  .upload({
    Bucket: "test",
    Key: "test",
    Body: "test",
    ContentEncoding: 'base64',
    Metadata: { MimeType: "png" },
  })
A similar issue is described here: Node base64 upload to AWS S3 bucket makes image broken
The ContentEncoding parameter specifies the header that S3 should send along with the HTTP response; it does not change how the AWS SDK interprets the object you pass in. According to the documentation, the Body parameter is simply the "Object data". In other words, you should probably just drop the ContentEncoding parameter unless you have a specific need for it, and pass along raw bytes:
const fs = require('fs');
var AWS = require('aws-sdk');
var s3 = new AWS.S3({apiVersion: '2006-03-01'});

// Read the contents of a local file
const buf = fs.readFileSync('source_image.jpg')

// Or, if the contents are base64 encoded, then decode them into a buffer of raw data:
// const buf = Buffer.from(fs.readFileSync('source_image.b64', 'utf-8'), 'base64')

var params = {
  Bucket: '-example-bucket-',
  Key: "path/to/example.jpg",
  ContentType: 'image/jpeg',
  ACL: 'public-read',
  Body: buf,
  ContentLength: buf.length,
};

s3.putObject(params, function(err, data){
  if (err) {
    console.log(err);
    console.log('Error uploading data: ', data);
  } else {
    console.log('Successfully uploaded the image!');
  }
});
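If, as in the question, you start from a base64 string received from a client rather than a local file, the same idea applies: decode it into a Buffer first and pass the raw bytes as Body. A minimal sketch, assuming a hypothetical base64String variable that may carry a data URI prefix:
// Strip an optional "data:image/png;base64," prefix, then decode to raw bytes
const base64Data = base64String.replace(/^data:image\/\w+;base64,/, '');
const buf = Buffer.from(base64Data, 'base64'); // pass this as Body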
Related
When using createWriteStream, the image uploads to the bucket without any error, but it is empty (size 0 B).
const uploadImage = async (filePath, fileId) => {
  const fileStream = fs.createWriteStream(filePath);
  const uploadParams = {
    Bucket: bucket,
    ACL: "public-read",
    Body: fileStream,
    Key: filePath,
    ContentType: "image/png",
  };
  console.log(filePath);
  const data = await s3.upload(uploadParams).promise();
  console.log(data);
  return;
};
But when using readFileSync, it uploads the image correctly.
const uploadImage = async (filePath, fileId) => {
  const fileStream = fs.readFileSync(filePath);
  const uploadParams = {
    Bucket: bucket,
    ACL: "public-read",
    Body: fileStream,
    Key: filePath,
    ContentType: "image/png",
  };
  console.log(filePath);
  const data = await s3.upload(uploadParams).promise();
  console.log(data);
  return;
};
Why?
The problem you have is a logical one.
When you use createWriteStream, you are creating a new, empty file on your file system. So when you upload that empty file to S3, the object will be empty.
On the other hand, when you use readFileSync, you are reading the file from your file system (in your case a picture) and sending its bytes to S3. That byte array is not empty, because it was read from the file system.
The first solution must use a ReadStream to read the file data from the path: use fs.createReadStream(filePath).
Flow: read file from path -> write to S3. A corrected sketch follows.
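A minimal sketch of the corrected upload, assuming the same bucket and s3 variables as in the question:
const fs = require("fs");

const uploadImage = async (filePath, fileId) => {
  // Read from the existing file instead of creating an empty one
  const fileStream = fs.createReadStream(filePath);
  const uploadParams = {
    Bucket: bucket,
    ACL: "public-read",
    Body: fileStream,
    Key: filePath,
    ContentType: "image/png",
  };
  const data = await s3.upload(uploadParams).promise();
  return data;
};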
I'm having issues uploading a file from Postman to AWS Lambda + S3. If I understand correctly, the image has to be a base64 string sent via JSON to work with Lambda and API Gateway, so I converted an image to base64 and I'm using that string in Postman.
The file uploads to S3, but when I download the S3 object and open it, the image is broken.
So I don't think I'm uploading it correctly. I've used a base64-to-image converter and the image appears, so the base64 string is correct before I send it via Postman; something in my setup is off. What am I doing wrong? I appreciate the help!
upload.js
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event, context, callback) => {
  let data = JSON.parse(event.body);
  let file = data.base64String;

  const s3Bucket = "upload-test3000";
  const objectName = "helloworld.jpg";
  const objectData = data.base64String;
  const objectType = "image/jpg";

  try {
    const params = {
      Bucket: s3Bucket,
      Key: objectName,
      Body: objectData,
      ContentType: objectType
    };
    const result = await s3.putObject(params).promise();
    return sendRes(200, `File uploaded successfully at https://` + s3Bucket + `.s3.amazonaws.com/` + objectName);
  } catch (error) {
    return sendRes(404, error);
  }
};
const sendRes = (status, body) => {
  var response = {
    statusCode: status,
    headers: {
      "Content-Type": "application/json",
      "Access-Control-Allow-Headers": "Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token",
      "Access-Control-Allow-Methods": "OPTIONS,POST,PUT",
      "Access-Control-Allow-Credentials": true,
      "Access-Control-Allow-Origin": "*",
      "X-Requested-With": "*"
    },
    body: body
  };
  return response;
};
When building the params, you should add the content encoding; otherwise you're just uploading the text data:
const params = {
  Bucket: s3Bucket,
  Key: objectName,
  Body: objectData,
  ContentType: objectType,
  ContentEncoding: 'base64'
};
Edit:
Okay, I have checked the file; I think you might be misunderstanding what happens when you store the image as base64.
Windows, or a browser for that matter, can't read a jpg file stored as base64 (as far as I know); it must be converted first. When a browser displays an image with a base64 source, it handles this conversion on the fly, but the base64 data inside the "helloworld.jpg" container is useless on Windows without converting it.
There are two options: either convert the image once it reaches your server and upload the decoded bytes directly, or have a layer in between that converts the image as it's requested (a sketch of the first option follows).
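A minimal sketch of the first option, assuming the handler above; the only change is decoding the incoming base64 string into a Buffer before the upload:
let data = JSON.parse(event.body);
// Decode the base64 payload into raw bytes before handing it to S3
const objectData = Buffer.from(data.base64String, 'base64');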
The problem might be the format in which the image is passed as Body: it is not what the s3.upload parameters expect (the Body key must be a Buffer).
So the simple solution is to pass the Body as a Buffer. If your file is present somewhere on disk, don't pass it like this:
// Wrong way
const params = {
  Bucket: 'Your-Bucket-Name',
  Key: 'abc.png', // destFileName, i.e. the name of the file to be saved in the S3 bucket
  Body: 'Path-To-File'
}
Here the file gets uploaded as raw text, which is a corrupted format and will not be readable by the OS after downloading.
So, to get it working, pass it like this:
// Correct way according to the aws-sdk library
const fs = require('fs');
const imageData = fs.readFileSync('Path-To-File'); // returns a Buffer

const params = {
  Bucket: 'Your-Bucket-Name',
  Key: 'abc.png', // destFileName, i.e. the name of the file to be saved in the S3 bucket
  Body: imageData // image buffer
}

const uploadedFile = await s3.upload(params).promise();
Note: at the time of writing I was using "aws-sdk": "^2.1025.0".
Hope this helps you or somebody else. Thanks!
I got it working by adding the base64 string to the JSON request body like so
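(a sketch of the payload; the key name is taken from the updated handler below, the value is a placeholder):
{
  "base64Data": "<base64 string here>"
}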
and then sending
let decodedImage = Buffer.from(encodedImage, 'base64');
as the Body param.
updated upload.js
const AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = async (event) => {
  let encodedImage = JSON.parse(event.body).base64Data;
  let decodedImage = Buffer.from(encodedImage, 'base64');
  var filePath = "user-data/" + event.queryStringParameters.username + ".jpg"
  var params = {
    "Body": decodedImage,
    "Bucket": process.env.UploadBucket,
    "Key": filePath
  };
  try {
    let uploadOutput = await s3.upload(params).promise();
    let response = {
      "statusCode": 200,
      "body": JSON.stringify(uploadOutput),
      "isBase64Encoded": false
    };
    return response;
  }
  catch (err) {
    let response = {
      "statusCode": 500,
      "body": JSON.stringify(err),
      "isBase64Encoded": false
    };
    return response;
  }
};
I found this article to be super helpful.
I'm attempting to upload a base64-encoded PDF to S3 with the following code, without having to write the file to the filesystem.
const AWS = require('aws-sdk');
const S3 = new AWS.S3();

exports.putBase64 = async (object_name, buffer, bucket) => {
  const params = {
    Key: object_name,
    Body: buffer,
    Bucket: bucket,
    ContentEncoding: 'base64',
    ContentType: 'application/pdf'
  };
  const response = await S3.upload(params).promise();
  return response;
};
Here buffer is a blank PDF encoded to base64. When attempting to open the file from S3, I get "We can't open this file. Something went wrong."
However, if I write the base64 encoding into a file and THEN upload it, it works.
fs.writeFileSync(`./somepdf.pdf`, base_64, 'base64');

exports.put = async (object_name, file_location, bucket, content_type) => {
  const file_content = fs.readFileSync(file_location);
  const params = {
    Key: object_name,
    Body: file_content,
    Bucket: bucket,
    ContentType: 'application/pdf'
  };
  const response = await S3.upload(params).promise();
  return response;
};
I notice that when I view the file written to disk through a text editor, its contents aren't base64 encoded, but the file uploaded directly with ContentEncoding: 'base64' still shows the base64 text. I attempted to convert the base64 to a blob using atob, but that yielded the same results, so I assume there's a parameter or header I may be missing.
I had the same issue and managed to solve it by making this change:
const AWS = require('aws-sdk');
const S3 = new AWS.S3();

exports.putBase64 = async (object_name, buffer, bucket) => {
  const params = {
    Key: object_name,
    Body: Buffer.from(buffer, 'base64'), // <---------
    Bucket: bucket,
    ContentType: 'application/pdf'
  };
  return await S3.upload(params).promise();
};
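For example (hypothetical values; base64Pdf holds the base64-encoded PDF string):
// inside an async function:
const { putBase64 } = require('./upload'); // hypothetical module path
const result = await putBase64('docs/blank.pdf', base64Pdf, 'my-example-bucket');
console.log(result.Location); // URL of the uploaded object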
Create a new buffer by stripping any data URI prefix from the incoming string:
const newBuffer = buffer.replace(/^data:.+;base64,/, "")
Now use this new buffer in the params. This should work!
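Combined with the Buffer.from fix above, the full decode would look something like this (base64Input is a hypothetical input string):
// Strip an optional data URI prefix, then decode the base64 into raw bytes
const base64Body = base64Input.replace(/^data:.+;base64,/, '');
const bodyBuffer = Buffer.from(base64Body, 'base64'); // pass this as Body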
I upload an image file using the following format:
var body = fs.createReadStream(tempPath).pipe(zlib.createGzip());
var s3obj = new AWS.S3({params: {Bucket: myBucket, Key: myKey}});
var params = {
  Body: body,
  ACL: 'public-read',
  ContentType: 'image/png'
};
s3obj.upload(params, function(err, data) {
  if (err) console.log("An error occurred with S3 fig upload: ", err);
  console.log("Uploaded the image file at: ", data.Location);
});
The image successfully uploads to my S3 bucket (there are no error messages and I see it in the S3 console), but when I try to display it on my website, it shows a broken img icon. When I download the image using the S3 console file downloader, I am unable to open it; the error says the file is "damaged or corrupted".
If I upload a file manually using the S3 console, I can correctly display it on my website, so I'm pretty sure there's something wrong with how I'm uploading it.
What is going wrong?
I eventually found the answer to my question: I needed to pass one more parameter, because the file is gzipped (from using var body = ...zlib.createGzip()). This fixed my problem:
var params = {
  Body: body,
  ACL: 'public-read',
  ContentType: 'image/png',
  ContentEncoding: 'gzip'
};
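For completeness, the full upload from the question with this fix applied would look something like the following (same variable names as in the question):
var body = fs.createReadStream(tempPath).pipe(zlib.createGzip());
var s3obj = new AWS.S3({params: {Bucket: myBucket, Key: myKey}});
s3obj.upload({
  Body: body,
  ACL: 'public-read',
  ContentType: 'image/png',
  ContentEncoding: 'gzip' // tells clients the body is gzip-compressed
}, function(err, data) {
  if (err) console.log("An error occurred with the S3 upload: ", err);
  else console.log("Uploaded the image file at: ", data.Location);
});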
There's a very nice Node module, s3-upload-stream, for uploading (and first compressing) images to S3. Here's their example code, which is very well documented:
var AWS = require('aws-sdk'),
    zlib = require('zlib'),
    fs = require('fs');

// Set the client to be used for the upload.
AWS.config.loadFromPath('./config.json');
// or do AWS.config.update({accessKeyId: 'akid', secretAccessKey: 'secret'});

var s3Stream = require('s3-upload-stream')(new AWS.S3());
// Create the streams
var read = fs.createReadStream('/path/to/a/file');
var compress = zlib.createGzip();
var upload = s3Stream.upload({
  "Bucket": "bucket-name",
  "Key": "key-name"
});

// Optional configuration
upload.maxPartSize(20971520); // 20 MB
upload.concurrentParts(5);

// Handle errors.
upload.on('error', function (error) {
  console.log(error);
});

/* Handle progress. Example details object:
   { ETag: '"f9ef956c83756a80ad62f54ae5e7d34b"',
     PartNumber: 5,
     receivedSize: 29671068,
     uploadedSize: 29671068 }
*/
upload.on('part', function (details) {
  console.log(details);
});

/* Handle upload completion. Example details object:
   { Location: 'https://bucketName.s3.amazonaws.com/filename.ext',
     Bucket: 'bucketName',
     Key: 'filename.ext',
     ETag: '"bf2acbedf84207d696c8da7dbb205b9f-5"' }
*/
upload.on('uploaded', function (details) {
  console.log(details);
});

// Pipe the incoming filestream through compression, and up to S3.
read.pipe(compress).pipe(upload);
I'm trying to save a PDF file into S3 with the AWS SDK.
I'm getting the PDF through the body of a POST request (Application/PDF).
When saving the file to the local HD with fs.writeFile, the file looks OK. But when uploading it to S3, the file is corrupted (it's just a single-page PDF).
Any help or hint would be greatly appreciated!
var data = body; // body from a POST request.
var fileName = "test.pdf";

fs.writeFile(fileName, data, {encoding: "binary"}, function(err, data) {
  console.log('saved'); // File is OK!
});

s3.putObject({ Bucket: "bucketName", Key: fileName, Body: data }, function(err, data) {
  console.log('uploaded'); // File uploads incorrectly.
});
EDIT:
It works if I write and then read the file and then upload it.
fs.writeFile(fileName, data, {encoding: "binary"}, function(err, data) {
  fs.readFile(fileName, function(err, fileData) {
    s3.putObject({ Bucket: "bucketName", Key: fileName, Body: fileData }, function(err, data) {
      console.log('uploaded'); // File uploads correctly.
    });
  });
});
Try setting the ContentType and/or ContentEncoding on your put to S3:
ContentType: 'binary', ContentEncoding: 'utf8'
See the code sample here for a working example: putObject makes object larger on server in Nodejs
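Applied to the putObject call from the question, that suggestion would look something like this (a sketch using the question's variables):
s3.putObject({
  Bucket: "bucketName",
  Key: fileName,
  Body: data,
  ContentType: 'binary',   // as suggested above
  ContentEncoding: 'utf8'
}, function(err, data) {
  console.log('uploaded');
});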
I think it is because the data is consumed (i.e. it is a stream).
That would explain why, after writing the data, you send nothing to S3, while reading the data again lets you send a valid PDF.
Try and see if it works by just sending the data directly to S3 without writing it to disk.
Yes, you forgot about the callback of the writeFile function, so when you started uploading to Amazon S3 your file wasn't saved completely yet. Don't forget that Node.js is asynchronous: the app won't wait for fs.writeFile to finish its work; it simply runs s3.putObject at the same time.
/**
 * JS library: Promise.promisify from bluebirdjs
 **/
My code is as below:
global.Promise = require('bluebird');

const aws = require('aws-sdk');
const aswAccessKey = {
  accessKeyId: 'your-accesskey-id',
  secretAccessKey: 'your-secret-access-key'
};
const fs = require('fs');
const path = require('path');
const uuidV4 = require('uuid/v4');

// Create S3 service object
// available apiVersion: '2006-03-01', '2013-04-01',
const s3 = new aws.S3(Object.assign(aswAccessKey, {
  apiVersion: '2013-04-01'
}));

function putObject(bucketName, file) {
  console.log('putObject into ', bucketName);

  /**
   * If we don't use a versioned bucket, we must not pass VersionId
   */
  const params = {
    Bucket: bucketName,
    Key: '',
    Body: 'Plain text',
    ACL: 'public-read',
    ContentType: 'binary',
    CacheControl: 'max-age=172800'
  };

  return Promise
    .promisify(fs.readFile, {
      context: fs
    })(file)
    .then((fileData) => {
      console.log(fileData);
      params.Body = fileData;
      params.Key = 'g01/' + uuidV4() + '-' + path.basename(file);
      return Promise
        .promisify(s3.putObject, {
          context: s3
        })(params)
        .then((data) => {
          console.log('successful');
          console.log(data);
        })
        .catch((err) => {
          console.log('Error', err);
        });
    })
    .catch(() => {
    });
}
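Hypothetical usage (bucket name and file path are placeholders):
putObject('my-example-bucket', './picture.png');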