I am trying to upload an image, for which I have a URL, into S3.
I want to do this without downloading the image to local storage.
filePath = imageURL;
let params = {
    Bucket: 'bucketname',
    Body: fs.createReadStream(filePath),
    Key: "folder/" + id + "originalImage"
};
s3.upload(params, function (err, data) {
    if (err) console.log(err);
    if (data) console.log("original image success");
});
I am expecting success but am getting the following error (myURL is an HTTPS, publicly accessible URL):
{ [Error: ENOENT: no such file or directory, open '<myURL HERE>']
  errno: -2,
  code: 'ENOENT',
  syscall: 'open',
  path: '<myURL HERE>' }
There are two ways to place files into an Amazon S3 bucket:
Upload the contents via PutObject(), or
Copy an object that is already in Amazon S3
So, in the case of your "web accessible Cloudinary image", it will need to be retrieved and then uploaded. You could either fully download the image and then upload it, or you could do some fancy stuff with streaming bodies, where the source image is read into a buffer and then written to S3. Either way, the file needs to be "read" from Cloudinary and then "written" to S3.
As for the "web accessible Amazon S3 link", you could use the CopyObject() command to instruct S3 to directly copy the object from the source bucket to the destination bucket. (It also works within a bucket.)
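For illustration, here is a minimal sketch of such a server-side copy with the AWS SDK for JavaScript v2; the bucket names and keys are placeholders:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.copyObject({
    // "sourceBucket/sourceKey" of the object that already exists in S3
    CopySource: 'source-bucket/path/to/source.jpg',
    Bucket: 'destination-bucket', // can be the same bucket
    Key: 'path/to/copy.jpg'
}, function (err, data) {
    if (err) console.log(err);
    else console.log('copied', data.CopyObjectResult);
});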
The following makes a GET request to the image URL and pipes the response straight into the S3 upload as its Body, without ever saving the image to the local file system.
const AWS = require('aws-sdk');
const request = require('request');
const { PassThrough } = require('stream');

const imageURL = 'https://example.com/image.jpg';
const s3 = new AWS.S3();

// Pipe the HTTP response into a PassThrough stream and hand that stream
// to s3.upload() as the Body; upload() accepts a readable stream.
const readStream = request(imageURL);
const passThrough = new PassThrough();
readStream.pipe(passThrough);

const params = {
    Bucket: 'bucketname',
    Key: "folder/" + id + "originalImage", // id as in the question's code
    Body: passThrough
};
s3.upload(params, function (err, data) {
    if (err) console.log(err);
    if (data) console.log("original image success");
});
You can use the following snippet to upload a file to S3 using its URL.
import axios from 'axios';
import awsSDK from 'aws-sdk';
const uploadImageToS3 = () => {
  const url = 'https://www.abc.com/img.jpg'; // axios needs an absolute URL including the protocol
  axios({
    method: 'get',
    url: url,
    responseType: 'arraybuffer',
  })
    .then(function (response) {
      console.log('res', response.data);
      const arrayBuffer = response.data;
      if (arrayBuffer) {
        awsSDK.config.update({
          accessKeyId: 'aws_access_key_id',
          secretAccessKey: 'aws_secret_access_key',
          region: 'aws_region',
        });
        const s3Bucket = new awsSDK.S3({
          apiVersion: '2006-03-01',
          params: {
            Bucket: 'aws_bucket_name'
          }
        });
        const data = {
          Body: arrayBuffer,
          Key: 'unique_key.fileExtension',
          ACL: 'public-read'
        };
        s3Bucket.upload(data, (err, res) => {
          if (err) {
            console.log('s3 err:', err);
          }
          if (res) {
            console.log('s3 res:', res);
          }
        });
      }
    });
};

uploadImageToS3();
When using createWriteStream, the image uploads to the bucket without any error, but it is empty (size 0 B).
const uploadImage = async (filePath, fileId) => {
  const fileStream = fs.createWriteStream(filePath);
  const uploadParams = {
    Bucket: bucket,
    ACL: "public-read",
    Body: fileStream,
    Key: filePath,
    ContentType: "image/png",
  };
  console.log(filePath);
  const data = await s3.upload(uploadParams).promise();
  console.log(data);
  return;
};
But when using readFileSync, it uploads the image correctly.
const uploadImage = async (filePath, fileId) => {
  const fileStream = fs.readFileSync(filePath);
  const uploadParams = {
    Bucket: bucket,
    ACL: "public-read",
    Body: fileStream,
    Key: filePath,
    ContentType: "image/png",
  };
  console.log(filePath);
  const data = await s3.upload(uploadParams).promise();
  console.log(data);
  return;
};
Why?
The problem you have is a logical one.
When you use createWriteStream, you are creating a new file on your file system, and that file is empty. So when you upload that empty file to S3, it will be empty.
On the other hand, when you use readFileSync, you are reading the file (in your case, a picture) from your file system and sending its bytes to S3. Those bytes are not empty; they were read from the file system.
The first snippet needs a read stream to read the file data from the path: use fs.createReadStream(filePath).
Flow: read the file from the path -> write to S3.
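For reference, here is a minimal sketch of the first snippet with that one change applied (bucket and s3 are assumed to already exist, as in the question):

const fs = require("fs");

const uploadImage = async (filePath, fileId) => {
  const fileStream = fs.createReadStream(filePath); // read the existing file, don't create a new one
  const uploadParams = {
    Bucket: bucket,
    ACL: "public-read",
    Body: fileStream,
    Key: filePath,
    ContentType: "image/png",
  };
  const data = await s3.upload(uploadParams).promise();
  console.log(data);
  return data;
};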
I am sending image data from my React Native application to my Node.js backend, and I want to upload it to S3. I want to know exactly which format I must convert the data to in order to upload it to S3. Below is the form data I am currently logging in my backend.
[
  'file',
  {
    uri: 'file:///var/mobile/Containers/Data/Application/CA974BC6-6943-4135-89DE-235BC593A54F/Library/Caches/ExponentExperienceData/%2540lb2020%252Fmy/ImagePicker/D7119C77-60D0-46CC-A194-4F1FDE0D9A3D.jpg',
    type: 'image/jpeg',
    name: 'hi.jpg'
  }
]
My backend also has the code below. Would setting file equal to the data above work? If not, suggestions would be appreciated.
const params = {
  Bucket: "myarrowbucket", // bucket you want to upload to
  Key: "filename" + ".png",
  Body: file,
  ContentType: 'image/png',
  ACL: "public-read",
};
I have tried uploading, but the image either doesn't open correctly on S3 or gives me Error: Unsupported body payload object.
Updated code (now getting a "no path found" error):
app.post("/upload", async (req, res) => {
const uri = (req.body._parts[0][1].uri)
const file = uri.substring(7);
const fileStream = fs.createReadStream(file);
const params = {
Bucket:"myarrowbucket", // bucket you want to upload to
Key: "filename"+".png",
Body: fileStream,
ContentType:'image/png',
ACL: "public-read",
};
const data = await client.upload(params).promise();
return data.Location; // returns the url location
});
You need to provide the actual file contents (a Buffer or a stream) to the S3 client, not the form-data metadata.
app.post("/upload", fileUpload(), async (req, res) => {
const uri = (req.body._parts[0][1].uri)
const file = uri.substring(7);
const params = {
Bucket:"myarrowbucket", // bucket you want to upload to
Key: "filename"+".png",
Body: Buffer.from(req.files[0].data, 'binary'), <-- PROVIDE DATA FROM FORM-DATA
ACL: "public-read",
};
const data = await client.upload(params).promise();
return data.Location; // returns the url location
});
You can use a library like form-data to handle the form data conversion.
I have uploaded an image using Node.js to an AWS S3 bucket and it uploads successfully, but I want to view the image in the browser instead of having it download, and I can't view the downloaded file. I have used the following code to upload the image to AWS S3:
var AWS = require('aws-sdk');
var config = require('../../server/config');

AWS.config.update({
  accessKeyId: config.aws.accessKeyId,
  secretAccessKey: config.aws.secretAccessKey,
  region: config.aws.region
});

var s3 = new AWS.S3();
var Busboy = require('busboy');
var busboyBodyParser = require('busboy-body-parser');
app.use(busboyBodyParser());

app.post('/upload', function(req, res) {
  var directory = req.body.directory;
  console.log(req.files.file);
  var image = req.files.file.name;
  var contenttype = req.files.file.mimetype;
  if (req.body.directory) {
    var file = directory + '/' + image;
  } else {
    var file = image;
  }
  var data = req.files.file.data;
  var keys = {
    Bucket: req.body.bucket,
    Key: file,
    Body: data,
    ACL: 'public-read',
    contentType: contenttype
  };
  s3.upload(keys, function(err, result) {
    if (err) {
      res.send({
        isError: true,
        status: 400,
        message: "File Not Uploaded",
        data: err
      });
    } else {
      var data = {
        Location: result.Location,
        key: result.key,
        Bucket: result.Bucket
      };
      res.send({
        isError: false,
        status: 200,
        message: "File Uploaded",
        data: data
      });
    }
  });
});
I was stuck with this as well, but the following works:
let params = {
  ACL: 'public-read',
  Bucket: process.env.BUCKET_NAME,
  Body: fs.createReadStream(req.file.path),
  ContentType: req.file.mimetype,
  Key: `avatar/${req.file.originalname}`
};
ContentType: req.file.mimetype is what fixed it. It is essentially the same as ContentType: 'image/jpeg', but it uses the MIME type of whatever file the user actually uploaded instead of hardcoding image/jpeg or image/png.
I hope your issue is fixed though.
I have found the answer:
Use ContentType: 'image/jpeg' (or ContentType: yourVariable) in the keys object when uploading the image to AWS S3.
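Applied to the question's handler, the keys object would look something like this (note the capital C in ContentType; S3 will not pick up a Content-Type from the lowercase contentType key):

var keys = {
  Bucket: req.body.bucket,
  Key: file,
  Body: data,
  ACL: 'public-read',
  ContentType: contenttype // e.g. 'image/jpeg', taken from req.files.file.mimetype
};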
I upload an image file using the following format:
var body = fs.createReadStream(tempPath).pipe(zlib.createGzip());
var s3obj = new AWS.S3({ params: { Bucket: myBucket, Key: myKey } });
var params = {
  Body: body,
  ACL: 'public-read',
  ContentType: 'image/png'
};
s3obj.upload(params, function(err, data) {
  if (err) console.log("An error occurred with S3 fig upload: ", err);
  console.log("Uploaded the image file at: ", data.Location);
});
The image successfully uploads to my S3 bucket (there are no error messages and I see it in the S3 console), but when I try to display it on my website it shows a broken image icon. When I download the image using the S3-console file downloader, I am unable to open it; I get an error saying the file is "damaged or corrupted".
If I upload a file manually using the S3-console, I can correctly display it on my website, so I'm pretty sure there's something wrong with how I'm uploading.
What is going wrong?
I eventually found the answer to my question. I needed to pass one more parameter because the file is gzipped (from using var body = ...zlib.createGzip()). This fixed my problem:
var params = {
  Body: body,
  ACL: 'public-read',
  ContentType: 'image/png',
  ContentEncoding: 'gzip'
};
There's a very nice Node module, s3-upload-stream, for uploading (and first compressing) images to S3. Here's their example code, which is very well documented:
var AWS = require('aws-sdk'),
    zlib = require('zlib'),
    fs = require('fs'),
    s3Stream = require('s3-upload-stream')(new AWS.S3());

// Set the client to be used for the upload.
AWS.config.loadFromPath('./config.json');
// or do AWS.config.update({accessKeyId: 'akid', secretAccessKey: 'secret'});

// Create the streams
var read = fs.createReadStream('/path/to/a/file');
var compress = zlib.createGzip();
var upload = s3Stream.upload({
  "Bucket": "bucket-name",
  "Key": "key-name"
});

// Optional configuration
upload.maxPartSize(20971520); // 20 MB
upload.concurrentParts(5);

// Handle errors.
upload.on('error', function (error) {
  console.log(error);
});

/* Handle progress. Example details object:
   { ETag: '"f9ef956c83756a80ad62f54ae5e7d34b"',
     PartNumber: 5,
     receivedSize: 29671068,
     uploadedSize: 29671068 }
*/
upload.on('part', function (details) {
  console.log(details);
});

/* Handle upload completion. Example details object:
   { Location: 'https://bucketName.s3.amazonaws.com/filename.ext',
     Bucket: 'bucketName',
     Key: 'filename.ext',
     ETag: '"bf2acbedf84207d696c8da7dbb205b9f-5"' }
*/
upload.on('uploaded', function (details) {
  console.log(details);
});

// Pipe the incoming filestream through compression, and up to S3.
read.pipe(compress).pipe(upload);
I'm trying to save a PDF file into S3 with the AWS SDK.
I'm getting the PDF through the body of a POST request (Application/PDF).
When saving the file to the local hard drive with fs.writeFile, the file looks OK. But when uploading it to S3, the file is corrupted (it's just a single-page PDF).
Any help or hint would be greatly appreciated!
var data = body; // body from a POST request.
var fileName = "test.pdf";

fs.writeFile(fileName, data, { encoding: "binary" }, function(err, data) {
  console.log('saved'); // File is OK!
});

s3.putObject({ Bucket: "bucketName", Key: fileName, Body: data }, function(err, data) {
  console.log('uploaded'); // File uploads incorrectly.
});
EDIT:
It works if I write and then read the file and then upload it.
fs.writeFile(fileName, data, { encoding: "binary" }, function(err, data) {
  fs.readFile(fileName, function(err, fileData) {
    s3.putObject({ Bucket: "bucketName", Key: fileName, Body: fileData }, function(err, data) {
      console.log('uploaded'); // File uploads correctly.
    });
  });
});
Try setting the ContentType and/or ContentEncoding on your put to S3, for example:
ContentType: 'binary', ContentEncoding: 'utf8'
See the code sample in "putObject makes object larger on server in Nodejs" for a working example.
I think it is because the data has already been consumed (i.e. it is a stream).
That would explain why, after writing the data, you send nothing to S3, and why reading the data back lets you send a valid PDF.
Try sending the data directly to S3 without writing it to disk and see if that works.
Yes, you forgot about the callback of the writeFile function, so when you started uploading to Amazon S3 your file hadn't been saved completely yet. Remember that Node.js is asynchronous: the app doesn't wait for fs.writeFile to finish its work; it simply runs s3.putObject at the same time.
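A minimal sketch of the ordering this answer describes: start the S3 upload only inside fs.writeFile's callback, so the file on disk is complete before it is read and sent (bucketName and fileName as in the question):

fs.writeFile(fileName, data, { encoding: "binary" }, function (err) {
  if (err) return console.log(err);
  // The file is fully written at this point, so read it back from disk for the upload.
  s3.putObject({
    Bucket: "bucketName",
    Key: fileName,
    Body: fs.createReadStream(fileName)
  }, function (err) {
    if (err) console.log(err);
    else console.log('uploaded');
  });
});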
My code is as below; it uses Promise.promisify from the bluebird JS library.

global.Promise = require('bluebird');
const aws = require('aws-sdk');
const awsAccessKey = {
  accessKeyId: 'your-accesskey-id',
  secretAccessKey: 'your-secret-access-key'
};
const fs = require('fs');
const path = require('path');
const uuidV4 = require('uuid/v4');

// Create S3 service object
// available apiVersion: '2006-03-01', '2013-04-01'
const s3 = new aws.S3(Object.assign(awsAccessKey, {
  apiVersion: '2013-04-01'
}));

function putObject(bucketName, file) {
  console.log('putObject into ', bucketName);
  /**
   * If we don't use a versioned bucket, we must not pass VersionId
   */
  const params = {
    Bucket: bucketName,
    Key: '',
    Body: 'Plain text',
    ACL: 'public-read',
    ContentType: 'binary',
    CacheControl: 'max-age=172800'
  };
  return Promise
    .promisify(fs.readFile, {
      context: fs
    })(file)
    .then((fileData) => {
      console.log(fileData);
      params.Body = fileData;
      params.Key = 'g01/' + uuidV4() + '-' + path.basename(file);
      return Promise
        .promisify(s3.putObject, {
          context: s3
        })(params)
        .then((data) => {
          console.log('successful');
          console.log(data);
        })
        .catch((err) => {
          console.log('Error', err);
        });
    })
    .catch(() => {
    });
}
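For completeness, a small usage sketch of the helper above (the bucket name and file path here are placeholders):

putObject('my-bucket-name', path.join(__dirname, 'test.png'))
  .then(() => console.log('putObject call finished'));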