I am new to AWS and trying to figure out how to upload a file using the AWS S3 API (http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html), which is incorporated into my own API.
I can create a bucket and get a list of all the buckets; however, I am struggling with the file upload.
This is my code:
router.post('/upload', function(req, res, next) {
  var params = {
    Bucket: req.body.bucketName,
    Key: req.body.key,
    Body: req.body.body
  };
  s3.putObject(params, function(err, data) {
    if (err) {
      return next(err);
    } else {
      res.json(data);
    }
  });
});
So when I run my server, I make a POST request with Postman to localhost:8080/upload, passing the key and body as form fields, and I also attach the file, but I think I am doing this part wrong.
My question is:
Do I understand correctly that Bucket is the name of the bucket I want to upload to, Key is the file name, and Body is the file contents?
If yes, how do I get this to upload to the S3 bucket? With the current code I get a file added to S3 called 'text.txt' with the contents 'heello', rather than my 'test.txt' file.
You are trying to upload a file, correct? Then you should use a multipart/form-data content type, and for Body you can point to your file's buffer.
In my case, I use it with Swagger:
upload: (req, res) => {
  const params = {
    Bucket: 'bucket-name',
    Key: req.swagger.params.file.value.originalname,
    ACL: 'public-read',
    Body: req.swagger.params.file.value.buffer
  };
  s3.putObject(params, function(err, data) {
    if (err) {
      console.log('Error uploading image: ', err);
      res.status(500).json().end();
    } else {
      res.status(200).json('File is uploaded').end();
    }
  });
}
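If you're not using Swagger, here is a minimal sketch of the same idea with plain Express and Multer's memory storage (the form field name 'file', the bucketName form field, and the route path are assumptions based on the question):

var express = require('express');
var multer = require('multer');
var AWS = require('aws-sdk');

var router = express.Router();
var upload = multer({ storage: multer.memoryStorage() }); // keep the upload in a buffer
var s3 = new AWS.S3();

// 'file' must match the form-data field name used in Postman
router.post('/upload', upload.single('file'), function (req, res, next) {
  var params = {
    Bucket: req.body.bucketName,  // bucket name from the other form fields
    Key: req.file.originalname,   // use the uploaded file's own name
    Body: req.file.buffer         // the actual file contents
  };
  s3.putObject(params, function (err, data) {
    if (err) return next(err);
    res.json(data);
  });
});

Note that with memory storage the whole file is buffered in RAM, which is fine for small uploads but worth reconsidering for large ones.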
Related
I'm trying to get an image as a response using the public URL of the file:
var request = require('request');
request('https://bucket-name.s3.amazonaws.com/file-name').pipe(res);
When I send the request, the response I get is raw file data rather than an image. I need to know how I can get an image file as the response instead.
Here's the upload function, which works fine:
const fs = require('fs');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const uploadFile = (fileName) => {
  const fileContent = fs.readFileSync(fileName);

  // Setting up S3 upload parameters
  const params = {
    Bucket: BUCKET_NAME,
    Key: '30.png', // File name you want to save as in S3
    Body: fileContent,
  };

  // Uploading files to the bucket
  s3.upload(params, function (err, data) {
    if (err) {
      throw err;
    }
    console.log(`File uploaded successfully. ${data.Location}`);
  });
};
I want to:
1. Choose an image from my filesystem and upload it to the server/locally.
2. Get its URL back using a Node.js service.
I managed to do step 1, and now I want to get the image URL back instead of the success message in res.end.
Here is my code:
app.post("/api/Upload", function(req, res) {
  upload(req, res, function(err) {
    if (err) {
      return res.end("Something went wrong!");
    }
    return res.end("File uploaded successfully!");
  });
});
I'm using Multer to upload the image.
You can do something like this using AWS S3; it returns a promise that resolves with the URL of the uploaded image:
const AWS = require('aws-sdk')
AWS.config.update({
  accessKeyId: <AWS_ACCESS_KEY>,
  secretAccessKey: <AWS_SECRET>
})
const uploadImage = file => new Promise((resolve, reject) => {
  // strip the data-URI prefix and decode the base64 payload
  const base64Data = file.data_uri.replace(/^data:image\/\w+;base64,/, '')
  const buf = Buffer.from(base64Data, 'base64') // Buffer.from replaces the deprecated new Buffer()
  const s3 = new AWS.S3()
  s3.upload({
    Bucket: <YOUR_BUCKET>,
    Key: <NAME_TO_SAVE>,
    Body: buf,
    ACL: 'public-read'
  }, (err, data) => {
    if (err) return reject(err)
    resolve(data.Location) // this is the URL
  })
})
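Usage from an Express handler might then look like this (the route and the shape of req.body.image are hypothetical):

app.post('/images', (req, res, next) => {
  uploadImage({ data_uri: req.body.image })
    .then(url => res.json({ url })) // send the S3 URL back to the client
    .catch(next)
})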
You can also check this Express generator, which has a route for uploading images to AWS S3: https://www.npmjs.com/package/speedbe
I am assuming that you are saving the image on the server's file system and not in a storage solution like AWS S3 or Google Cloud Storage, where you get the URL back after the upload.
Since you are storing it on the filesystem, you can rename the file with a unique identifier like a uuid.
Then you can make a GET route that takes that ID as a query or path parameter, read the file with that ID as its name, and send it back, as in the sketch below.
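A minimal sketch of that idea (the uploads/ directory, the field name 'image', and the uuid package are all assumptions):

const express = require('express');
const multer = require('multer');
const path = require('path');
const { v4: uuidv4 } = require('uuid');

const app = express();

// rename each upload to a uuid so it can be fetched back by ID later
const storage = multer.diskStorage({
  destination: 'uploads/',
  filename: (req, file, cb) => cb(null, uuidv4() + path.extname(file.originalname))
});
const upload = multer({ storage });

app.post('/api/Upload', upload.single('image'), (req, res) => {
  // return the ID-based URL instead of a plain success message
  res.json({ url: '/api/images/' + req.file.filename });
});

app.get('/api/images/:id', (req, res) => {
  // look the file up by the uuid it was saved under
  res.sendFile(path.resolve('uploads', req.params.id));
});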
I'm using NodeJS and Multer to upload files to S3.
On the surface, everything appears to be working: the files get uploaded, and I can see them in the bucket when I log into the AWS console. However, most of the time when I follow the link to the file, the file is broken, and often the file size is much smaller than the original.
When the file reaches the server, the file size is correct if I log it, but on S3 it is much smaller. For example, I just uploaded a file which is 151kb. The POST request logs the file size correctly, but on S3 the file says it's 81kb.
Client side:
uploadFile = (file) ->
  formData = new FormData()
  formData.append 'file', file
  xhr = new XMLHttpRequest()
  xhr.open "POST", "/upload-image", true
  # xhr.setRequestHeader("Content-Type","multipart/form-data");
  console.log 'uploadFile'
  xhr.onerror = ->
    alert 'Error uploading file'
  xhr.onreadystatechange = ->
    if xhr.readyState is 4
      console.log xhr.responseText
  xhr.send formData
Server:
app.use(multer({ // https://github.com/expressjs/multer
  inMemory: true,
  limits: { fileSize: 3000000 },
  rename: function (fieldname, filename) {
    var time = new Date().getTime();
    return filename.replace(/\W+/g, '-').toLowerCase() + '_' + time;
  },
  onFileUploadData: function (file, data, req, res) {
    var params = {
      Bucket: creds.awsBucket,
      Key: file.name,
      Body: data,
      ACL: 'public-read'
    };
    var s3 = new aws.S3();
    s3.putObject(params, function (perr, pres) {
      if (perr) {
        console.log("Error uploading data: ", perr);
      } else {
        console.log("Successfully uploaded data", pres);
      }
    });
  }
}));
app.post('/upload-image', function(req, res) {
  if (req.files.file === undefined) {
    res.end("error, no file chosen");
  } else if (req.files.file.truncated) {
    res.end("file too large");
  } else {
    console.log(req.files.file.size); // logs the correct file size
    var path = creds.awsPath + req.files.file.name;
    res.type('text/plain');
    res.write(path);
    res.end();
  }
});
EDIT:
Setting the Body to file.buffer in onFileUploadComplete seems to work, but I have a feeling that this isn't the proper way of doing things, and it may come back to bite me later. Is this approach okay, or are there issues I should be aware of?
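For what it's worth, onFileUploadData fires once for every chunk of the incoming file, so the handler above most likely uploads only the last chunk, which would explain the smaller files on S3. Uploading file.buffer from onFileUploadComplete (available because of inMemory: true) is the usual fix with this version of Multer; a rough sketch based on the handlers above:

onFileUploadComplete: function (file, req, res) {
  var params = {
    Bucket: creds.awsBucket,
    Key: file.name,
    Body: file.buffer, // the complete in-memory file, not a single chunk
    ACL: 'public-read'
  };
  new aws.S3().putObject(params, function (perr, pres) {
    if (perr) {
      console.log("Error uploading data: ", perr);
    } else {
      console.log("Successfully uploaded data", pres);
    }
  });
}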
I'm trying to figure out how to upload data to an Amazon S3 bucket via a RESTful API that I'm writing in Node.js/Restify. I think I've got the basic concepts all working, but when I go to connect to the body of my POST request, that's when things go awry. When I set up my callback function to simply pass a string to S3, it works just fine and the file is created in the appropriate S3 bucket:
function postPoint(req, res, next) {
  var point = [
    { "x": "0.12" },
    { "y": "0.32" }
  ];
  var params = { Bucket: 'myBucket', Key: 'myKey', Body: JSON.stringify(point) };
  s3.client.putObject(params, function (perr, pres) {
    if (perr) {
      console.log("Error uploading data: ", perr);
    } else {
      console.log("Successfully uploaded data to myBucket/myKey");
    }
  });
  res.send(200);
  return next();
}

server.post('/point', postPoint);
Obviously, I eventually need to stream/pipe the request body. I assumed all I needed to do was switch the Body of the params to the request stream:
function postPoint(req, res, next) {
  var params = { Bucket: 'myBucket', Key: 'myKey', Body: req };
  s3.client.putObject(params, function (perr, pres) {
    if (perr) {
      console.log("Error uploading data: ", perr);
    } else {
      console.log("Successfully uploaded data to myBucket/myKey");
    }
  });
  res.send(200);
  return next();
}
But that ends up logging "Error uploading data: [TypeError: path must be a string]", which gives me very little indication of what I need to do to fix the error. Ultimately, I want to be able to pipe the request, since the data being sent could be quite large (and I'm not sure whether the previous examples cause the body to be stored in memory), so I thought that something like this might work:
function postPoint(req, res, next) {
  var params = { Bucket: 'myBucket', Key: 'myKey', Body: req };
  req.pipe(s3.client.putObject(params));
  res.send(200);
  return next();
}
I've done something similar in a GET function that works just fine: s3.client.getObject(params).createReadStream().pipe(res). But that also did not work.
I'm at a bit of a loss at this point so any guidance would be greatly appreciated!
So, I finally discovered the answer after posting on the AWS Developer Forums. It turns out that the Content-Length header was missing from my S3 requests. Loren@AWS summed it up very well:
In order to upload any object to S3, you need to provide a Content-Length. Typically, the SDK can infer the content length from Buffer and String data (or any object with a .length property), and we have special detections for file streams to get file length. Unfortunately, there's no way the SDK can figure out the length of an arbitrary stream, so if you pass something like an HTTP stream, you will need to manually provide the content length yourself.
The suggested solution was to simply pass the content length from the headers of the http.IncomingMessage object:
var params = {
  Bucket: 'bucket',
  Key: 'key',
  Body: req,
  ContentLength: parseInt(req.headers['content-length'], 10)
};
s3.putObject(params, ...);
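As an aside, in later releases of aws-sdk the managed uploader, s3.upload, accepts streams of unknown length (it splits them into multipart chunks internally), so a sketch like the following should sidestep the ContentLength requirement entirely:

s3.upload({ Bucket: 'bucket', Key: 'key', Body: req }, function (err, data) {
  if (err) {
    console.log("Error uploading data: ", err);
  } else {
    console.log("Successfully uploaded data to", data.Location);
  }
});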
If anyone is interested in reading the entire thread, you can access it here.
I've tried using aws-sdk and knox, and I get status code 301 when trying to upload images, with the message 'The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.' This works in PHP.
AWS.config.loadFromPath(__dirname + '/config/config.json');

fs.readFile(source, function (err, data) {
  var s3 = new AWS.S3();
  s3.client.createBucket({ Bucket: 'mystuff' }, function() {
    var d = {
      Bucket: 'mystuff',
      Key: 'img/test.jpg',
      Body: data,
      ACL: 'public-read'
    };
    s3.client.putObject(d, function(err, res) {
      if (err) {
        console.log("Error uploading data: ", err);
        callback(err);
      } else {
        console.log("Successfully uploaded data to myBucket/myKey");
        callback(res);
      }
    });
  });
});
I actually solved this problem. In your config you have to have a region; since my bucket was in 'US Standard', I left my region blank and it worked.
config.json:

{
  "accessKeyId": "secretKey",
  "secretAccessKey": "secretAccessKey",
  "region": ""
}
Go to the S3 management console, select one of your files, and click on Properties, then look at the file link.

US Standard:
  file link: https://s3.amazonaws.com/yourbucket/
  host in your console window: yourbucket.s3.amazonaws.com/

us-west-1:
  file link: https://s3-us-west-1.amazonaws.com/yourbucket/
  host in your console window: yourbucket.s3-us-west-1.amazonaws.com/
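Once you've read the region off the file link, a minimal sketch of pointing the client at it (the region name here is an assumption; use whatever the link shows for your bucket):

var AWS = require('aws-sdk');
// construct the client with the region that matches your bucket
var s3 = new AWS.S3({ region: 'us-west-1' });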
Did you try .send()?
I can upload to S3 with the code below.
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/AWSRequest.html
var s3object = { Bucket: 'mystuff', Key: name, Body: data['data'] };

s3.client.putObject(s3object)
  .done(function(resp) {
    console.log("Successfully uploaded data");
  })
  .fail(function(resp) {
    console.log(resp);
  })
  .send();
I had the same problem with the new SDK and solved it by setting the endpoint option explicitly.
Reference : http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#constructor_details
Snippet:
var AWS = require('aws-sdk');
var s3 = new AWS.S3({ endpoint: 'https://s3-your-region-varies.amazonaws.com' }),
    myBucket = 'your-bucket-name';

var params = { Bucket: myBucket, Key: 'myUpload', Body: "Test" };

s3.putObject(params, function(err, data) {
  if (err) {
    console.log(err);
  } else {
    console.log("Successfully uploaded data to " + myBucket + "/myUpload");
  }
});
Alternatively, you can solve this by setting the region in your config file; you just have to be precise with your region name, as in the sketch below.
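For example, a config.json sketch with the region filled in (us-west-1 and the credential placeholders are assumptions; use your bucket's actual region and your own keys):

{
  "accessKeyId": "yourAccessKeyId",
  "secretAccessKey": "yourSecretAccessKey",
  "region": "us-west-1"
}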