Saving file to /tmp in lambda from inside of a folder - node.js

I have a task of downloading and uploading files to S3 using Lambda. The scenario is:
Download a file from S3 bucket1 (request folder) to Lambda
Upload the same file to S3 bucket2 (request folder) from Lambda
Both the downloadFiles and uploadFiles functions are inside utils/s3.js, under the Lambda root directory (/var/task/).
Here is my utils/s3.js downloadFiles function:
// Assumed requires at the top of utils/s3.js (aws-sdk v2)
const path = require('path');
const fs = require('fs');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.downloadFiles = async () => {
  try {
    const location = path.join(__dirname, `../tmp/text.txt`);
    console.log(location);  // prints /var/task/tmp/text.txt
    console.log(__dirname); // prints /var/task/utils
    const params = {
      Bucket: 'bucket1',
      Key: `request/text.txt`
    };
    const { Body } = await s3.getObject(params).promise();
    fs.writeFileSync(location, Body);
    return;
  } catch (e) {
    throw new Error(e.message);
  }
};
Now there are two cases:
If I create a tmp folder in the root directory, it gives this error:
"EROFS: read-only file system, open '/var/task/tmp/text.txt'"
If I don't, then:
"ENOENT: no such file or directory, open '/var/task/tmp/text.txt'"
Now, I have read most of the answers on Stack Overflow. I know I am supposed to save files to /tmp/filename, but how come I do the same and it doesn't work? Where am I going wrong?

As one commenter already stated, if you do not do anything with the file itself, it would be much better to just use the S3 API to copy the object instead of downloading and re-uploading it.
The relevant documentation can be found here: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#copyObject-property
Example:
var params = {
  CopySource: "/<source-bucket>/<source-key>",
  Bucket: "<destination-bucket>",
  Key: "<destination-key>"
};
s3.copyObject(params, function(err, data) {
  if (err) console.log(err, err.stack);
  else console.log(data);
});
Or if you want to use a promise, this should work as well:
var params = {
  CopySource: "/<source-bucket>/<source-key>",
  Bucket: "<destination-bucket>",
  Key: "<destination-key>"
};
try {
  const result = await s3.copyObject(params).promise();
} catch (error) {
  console.log(error);
}
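If you do need the file on disk inside the Lambda (for example, to modify it before re-uploading), the only writable location is /tmp; /var/task (where the deployment package, including any tmp folder you create in it, lives) is read-only. A minimal sketch of the download function writing to /tmp, assuming the same aws-sdk v2 client (s3), fs, and path imports as in the question:
const path = require('path');
const fs = require('fs');

exports.downloadFiles = async () => {
  try {
    // /tmp is the only writable directory in the Lambda filesystem
    const location = path.join('/tmp', 'text.txt');
    const params = {
      Bucket: 'bucket1',
      Key: 'request/text.txt'
    };
    const { Body } = await s3.getObject(params).promise();
    fs.writeFileSync(location, Body);
    return location;
  } catch (e) {
    throw new Error(e.message);
  }
};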

Related

Alternative to AWS' bucket.putObject() method in google-cloud/storage

I would like to mimic the following AWS call using the google-cloud/storage package
const params = {
  Body: data,
  Key: key,
  ContentType: type
};
return new Promise(function (resolve, reject) {
  bucket.putObject(params, function(error, data) {
    if (error) {
      console.log('ERROR: ', error);
      reject(error);
    }
    resolve(data);
  });
})
In the above call, if I pass some directory hierarchy in the Key param, the folder structure would be created and the file correctly placed.
For instance, if I pass the Key as
root/test_folder/input_file.json
Then the file would be placed as
S3:///root/test_folder/input_file.json
I am unable to find a similar call in google-cloud/storage.
If I use the
<bucket>.upload()
method, I can place the file under a directory, but I can ONLY upload files!
await storage.bucket(bucketName).upload(filename, {
  destination: 'abc/xyz',
});
If I use the
file.save()
method, I can put data into storage, but now I cannot put this under a specific directory!
await file.save(contents);
I need some way of putting content into a directory structure in google-storage and the directory structure may not exist.
Sorry I was wrong. This could simply be done with the file.save() method.
We just need to specify the path along with the filename.
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();
const myBucket = storage.bucket('bucket');
const file = myBucket.file('xxx/yyy/my-file', { generation: 0 });
const contents = 'This is the contents of the file.';
file.save(contents, function(err) {
  if (err) {
    file.deleteResumableCache();
  }
});
The above would store the file under
bucket/xxx/yyy
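As a side note, here is a small async/await sketch of the same idea, assuming a @google-cloud/storage version where file.save() returns a promise when no callback is passed; the bucket and object names are the same placeholders as above:
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();

async function saveUnderPath() {
  // The "directory" is just a prefix in the object name; it does not need to exist beforehand.
  const file = storage.bucket('bucket').file('xxx/yyy/my-file');
  await file.save('This is the contents of the file.');
}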

Why is the file created from S3 stream empty?

I am trying to access a file in a private S3 bucket from a lambda function identified by Cognito.
Reading the stream works outside a lambda but not inside a lambda
Creating a pre-signed url works inside a lambda
Waiting for the content to be ready as a string works inside a lambda
I've managed to get a pre-signed url to download the file. Using the same parameters, I've tried to write the read stream to a local file. A file gets created but it's empty. I couldn't catch any error in the process.
const s3 = new AWS.S3({ apiVersion: 'latest' });
const file = 's3Filename.csv'
const userId = event.requestContext.identity.cognitoIdentityId;
const s3Params = {
  Bucket: 'MY_BUCKET',
  Key: `private/${userId}/${file}`,
};
var fileStream = require('fs').createWriteStream('/path/to/my/file.csv');
var s3Stream = s3.getObject(s3Params).createReadStream();
// Try to print s3 stream errors
s3Stream
  .on('error', function (err) {
    console.error(err); // prints nothing
  });
// Try to print fs errors
s3Stream
  .pipe(fileStream)
  .on('error', function (err) {
    console.error('File Stream:', err); // prints nothing
  })
  .on('data', function (chunk) {
    console.log(chunk); // prints nothing
  })
  .on('end', function () {
    console.log('All the data in the file has been read'); // prints nothing
  })
  .on('close', function (err) {
    console.log('Stream has been Closed'); // prints nothing
  });
I am quite confident that my parameters are correct because I can get a pre-signed url that allows me to download the file.
console.log(s3.getSignedUrl('getObject', s3Params));
I can also read the file content using getObject().promise(). This could work but I'm parsing a CSV file and I'd rather go easy on the memory and parse the stream.
try
{
  const s3Response = await s3.getObject(s3Params).promise();
  let objectData = s3Response.Body.toString('utf-8');
  console.log(objectData);
}
catch (ex)
{
  console.error(ex);
}
Why is the file created from S3 stream empty? And why is there nothing that prints?
Could it be an access policy issue? If that's the case, why didn't I get any error when executing?
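One thing worth checking in this situation: if the Lambda handler returns before the pipe has finished, execution is frozen before any data or stream events arrive, which matches the symptoms of an empty file and nothing printed; also note that only /tmp is writable inside Lambda. A minimal sketch of awaiting the stream before the handler returns, assuming an async handler, aws-sdk v2, and the same s3 and s3Params as above:
const fs = require('fs');

// Inside the async Lambda handler
const s3Stream = s3.getObject(s3Params).createReadStream();
const fileStream = fs.createWriteStream('/tmp/file.csv'); // /tmp is the writable location in Lambda

await new Promise((resolve, reject) => {
  s3Stream.on('error', reject);
  fileStream.on('error', reject);
  fileStream.on('finish', resolve); // fires once all data has been flushed to /tmp/file.csv
  s3Stream.pipe(fileStream);
});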

Download image from S3 bucket to Lambda temp folder (Node.js)

Good day guys.
I have a simple question: How do I download an image from an S3 bucket to a Lambda function's temp folder for processing? Basically, I need to attach it to an email (this I can do when testing locally).
I have tried:
s3.download_file(bucket, key, '/tmp/image.png')
as well as (not sure which parameters will help me get the job done):
s3.getObject(params, (err, data) => {
  if (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}.`;
    console.log(message);
    callback(message);
  } else {
    console.log('CONTENT TYPE:', data.ContentType);
    callback(null, data.ContentType);
  }
});
Like I said, simple question, which for some reason I can't find a solution for.
Thanks!
You can get the image using the aws s3 api, then write it to the tmp folder using fs.
var params = { Bucket: "BUCKET_NAME", Key: "OBJECT_KEY" };
s3.getObject(params, function(err, data) {
  if (err) {
    console.error(err.code, "-", err.message);
    return callback(err);
  }
  fs.writeFile('/tmp/filename', data.Body, function(err) {
    if (err)
      console.log(err.code, "-", err.message);
    return callback(err);
  });
});
Out of curiosity, why do you need to write the file in order to attach it? It seems kind of redundant to write the file to disk so that you can then read it from disk.
If you're writing it straight to the filesystem you can also do it with streams. It may be a little faster/more memory friendly, especially in a memory-constrained environment like Lambda.
var fs = require('fs');
var path = require('path');
var params = {
  Bucket: "mybucket",
  Key: "image.png"
};
var tempFileName = path.join('/tmp', 'downloadedimage.png');
var tempFile = fs.createWriteStream(tempFileName);
s3.getObject(params).createReadStream().pipe(tempFile);
// Using NodeJS version 10.0 or later and promises
const fsPromise = require('fs').promises;
try {
  const params = {
    Bucket: 's3Bucket',
    Key: 'file.txt',
  };
  const data = await s3.getObject(params).promise();
  await fsPromise.writeFile('/tmp/file.txt', data.Body);
} catch (err) {
  console.log(err);
}
I was having the same problem, and the issue was that I was using Runtime.NODEJS_12_X in my AWS lambda.
When I switched over to NODEJS_14_X it started working for me :').
Also, the /tmp prefix is required; it will write the file directly to /tmp/file.ext.

Listing all the directories and all the files and uploading them to my bucket (S3 Amazon) with Node.JS

Code below:
I'm using the findit walker, documentation here -> https://github.com/substack/node-findit
With this package I'm listing all the directories and files of my application, and I'm trying to send them to my bucket on Amazon S3 (with my own code).
I'm not sure if the code is right, and I don't know what I need to put in the Body, inside the params object.
This part is listing all the directories of my app:
finder.on('directory', function (dir, stat, stop) {
  var base = path.basename(dir);
  if (base === '.git' || base === 'node_modules' || base === 'bower_components') {
    stop();
  }
  else {
    console.log(dir + '/');
  }
});
And this one is listing all the files of my app:
finder.on('file', function (file, stat) {
  console.log(file);
});
I updated it to send data to my bucket, like this:
finder.on('file', function (file, stat) {
  console.log(file);
  var params = {
    Bucket: BUCKET_NAME,
    Key: file,
    //Body:
  };
  //console.log(params.body);
  s3.putObject(params, function(err) {
    if (err) {
      console.log(err);
    }
    else {
      console.log("Success!");
    }
  });
});
I really don't know what I need to put inside the Body, and I don't know if the code is right. Could anyone help me?
Thanks.
To help, here is all the code:
var fs = require('fs');
var finder = require('findit')(process.argv[2] || '.');
var path = require('path');
var aws = require('aws-sdk');
var s3 = new aws.S3();
aws.config.loadFromPath('./AwsConfig.json');
var BUCKET_NAME = 'test-dev-2';

finder.on('directory', function (dir, stat, stop) {
  var base = path.basename(dir);
  if (base === '.git' || base === 'node_modules' || base === 'bower_components') {
    stop();
  }
  else {
    console.log(dir + '/');
  }
});

finder.on('file', function (file, stat) {
  console.log(file);
  var params = {
    Bucket: BUCKET_NAME,
    Key: file,
    //Body:
  };
  //console.log(params.body);
  s3.putObject(params, function(err) {
    if (err) {
      console.log(err);
    }
    else {
      console.log("Success");
    }
  });
});

finder.on('error', function (err) {
  console.log(err);
});

finder.on('end', function () {
  console.log('Done!');
});
Based on the documentation, the Body parameter of s3.putObject can take a Buffer, Typed Array, Blob, String, or ReadableStream. The best one of those to use in most cases would be a ReadableStream. You can create a ReadableStream from any file using the createReadStream() function in the fs module.
So, that part of your code would look something like:
finder.on('file', function (file, stat) {
  console.log(file);
  var params = {
    Bucket: BUCKET_NAME,
    Key: file,
    Body: fs.createReadStream(file) // NOTE: You might need to adjust "file" so that it's either an absolute path, or relative to your code's directory.
  };
  s3.putObject(params, function(err) {
    if (err) {
      console.log(err);
    }
    else {
      console.log("Success!");
    }
  });
});
I also want to point out that you might run into a problem with this code if you pass it a directory with a lot of files. putObject is an asynchronous function, which means it'll be called and then the code will move on to something else while it's doing its thing (ok, that's a gross simplification, but you can think of it that way). What that means in terms of this code is that you'll essentially be uploading all the files it finds at the same time; that's not good.
What I'd suggest is to use something like the async module to queue your file uploads so that only a few of them happen at a time.
Essentially you'd move the code you have in your file event handler to the queue's worker method, like so:
var async = require('async');
var uploadQueue = async.queue(function(file, callback) {
  var params = {
    Bucket: BUCKET_NAME,
    Key: file,
    Body: fs.createReadStream(file) // NOTE: You might need to adjust "file" so that it's either an absolute path, or relative to your code's directory.
  };
  s3.putObject(params, function(err) {
    if (err) {
      console.log(err);
    }
    else {
      console.log("Success!");
    }
    callback(err); // <-- Don't forget the callback call here so that the queue knows this item is done
  });
}, 2); // <-- This "2" is the maximum number of files to upload at once
Note the 2 at the end there, that specifies your concurrency which, in this case, is how many files to upload at once.
Then, your file event handler simply becomes:
finder.on('file', function (file, stat) {
  uploadQueue.push(file);
});
That will queue up all the files it finds and upload them 2 at a time until it goes through all of them.
An easier and arguably more efficient solution may be to just tar up the directory and upload that single tar file (also gzipped if you want). There are tar modules on npm, but you could also just spawn a child process for it too.
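For illustration, a rough sketch of that tar-and-upload idea, assuming the system tar binary is available, aws-sdk v2 is configured, and BUCKET_NAME is defined as above (s3.upload is used here because it handles streamed bodies):
var fs = require('fs');
var execFile = require('child_process').execFile;
var aws = require('aws-sdk');
var s3 = new aws.S3();

// Tar and gzip the current directory into a single archive, then upload that one object.
execFile('tar', ['-czf', '/tmp/app.tar.gz', '-C', '.', '.'], function (err) {
  if (err) return console.log(err);
  s3.upload({
    Bucket: BUCKET_NAME,
    Key: 'app.tar.gz',
    Body: fs.createReadStream('/tmp/app.tar.gz')
  }, function (err, data) {
    if (err) console.log(err);
    else console.log("Success!", data.Location);
  });
});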

Saving an image stored on s3 using node.js?

I'm trying to write an image server that uses node.js to store images on s3. Uploading the image works fine, and I can download and view it correctly using an s3 browser client (I'm using dragondisk, specifically, but I've successfully downloaded it with other ones too), but when I download it with node and try to write it to disk, I'm unable to open the file (it says it may be damaged or use a file format that Preview does not recognize). I'm using the amazon sdk for node and fs to write the file. I know that you can pass an optional encoding to fs.writeFile, but I've tried them all and it doesn't work. I've also tried setting ContentType on putObject and ResponseContentType on getObject, as well as ContentEncoding and ResponseContentEncoding (and all of these things in various combinations). Same result. Here's some code:
var AWS = require('aws-sdk')
  , gm = require('../lib/gm')
  , uuid = require('node-uuid')
  , fs = require('fs');
AWS.config.loadFromPath('./amazonConfig.json');
var s3 = new AWS.S3();
var bucket = 'myBucketName'; // There's other logic here to set the bucket name.
exports.upload = function(req, res) {
  var id = uuid.v4();
  gm.format("/path/to/some/image.jpg", function(format){
    var key = req.params.dir + "/" + id + "/default." + format;
    fs.readFile('/path/to/some/image.jpg', function(err, data){
      if (err) { console.warn(err); }
      else {
        s3.client.putObject({
          Bucket: bucket,
          Key: key,
          Body: data,
          ContentType: 'image/jpeg'
          // I've also tried adding ContentEncoding (in various formats) here.
        }).done(function(response){
          res.status(200).end(JSON.stringify({ok:1, id: id}));
        }).fail(function(response){
          res.status(response.httpResponse.statusCode).end(JSON.stringify(({err: response})));
        });
      }
    });
  });
};
exports.get = function(req, res) {
  var key = req.params.dir + "/" + req.params.id + "/default.JPEG";
  s3.client.getObject({
    Bucket: bucket,
    Key: key,
    ResponseContentType: 'image/jpeg'
    // Tried ResponseContentEncoding here in base64, binary, and utf8
  }).done(function(response){
    res.status(200).end(JSON.stringify({ok:1, response: response}));
    var filename = '/path/to/new/image/default.JPEG';
    fs.writeFile(filename, response.data.Body, function(err){
      if (err) console.warn(err);
      // This DOES write the file, just not as an image that can be opened.
      // I've tried pretty much every encoding as the optional third parameter
      // and I've matched the encodings to the ResponseContentEncoding and
      // ContentEncoding above (in case it needs to be the same)
    });
  }).fail(function(response){
    res.status(response.httpResponse.statusCode).end(JSON.stringify({err: response}));
  });
};
Incidentally, I'm using express for routing, so that's where req.params comes from.
For people who are still struggling with this issue, here is the approach I used with the native aws-sdk.
var AWS = require('aws-sdk');
AWS.config.loadFromPath('./s3_config.json');
var s3Bucket = new AWS.S3( { params: {Bucket: 'myBucket'} } );
Inside your router method:
ContentType should be set to the content type of the image file.
var buf = new Buffer(req.body.imageBinary.replace(/^data:image\/\w+;base64,/, ""), 'base64');
var data = {
  Key: req.body.userId,
  Body: buf,
  ContentEncoding: 'base64',
  ContentType: 'image/jpeg'
};
s3Bucket.putObject(data, function(err, data){
  if (err) {
    console.log(err);
    console.log('Error uploading data: ', data);
  } else {
    console.log('successfully uploaded the image!');
  }
});
s3_config.json file is:
{
  "accessKeyId": "xxxxxxxxxxxxxxxx",
  "secretAccessKey": "xxxxxxxxxxxxxx",
  "region": "us-east-1"
}
Ok, after significant trial and error, I've figured out how to do this. I ended up switching to knox, but presumably, you could use a similar strategy with aws-sdk. This is the kind of solution that makes me say, "There has to be a better way than this," but I'm satisfied with anything that works, at this point.
var imgData = "";
client.getFile(key, function(err, fileRes){
  fileRes.on('data', function(chunk){
    imgData += chunk.toString('binary');
  }).on('end', function(){
    res.set('Content-Type', pic.mime);
    res.set('Content-Length', fileRes.headers['content-length']);
    res.send(new Buffer(imgData, 'binary'));
  });
});
getFile() returns data chunks as buffers. One would think you could just pipe the results straight to front end, but for whatever reason, this was the ONLY way I could get the service to return an image correctly. It feels redundant to write a buffer to a binary string, only to write it back into a buffer, but hey, if it works, it works. If anyone finds a more efficient solution, I would love to hear it.
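On the piping point, here is a rough sketch of streaming the object straight to the Express response, assuming an aws-sdk v2 S3 client (where request.createReadStream() is available) and the same bucket, key, and pic.mime as above:
var stream = s3.getObject({ Bucket: bucket, Key: key }).createReadStream();
stream.on('error', function (err) {
  console.warn(err);
  res.status(500).end();
});
res.set('Content-Type', pic.mime);
// Pipe the S3 object bytes directly to the HTTP response, with no intermediate string or file.
stream.pipe(res);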
uploadfile(file, filename, folder) {
  const bucket = new S3({
    accessKeyId: 'enter your access key id here',
    secretAccessKey: 'enter your secret key here.',
    region: 'us-east-2'
  });
  const params = {
    Bucket: 'enter your bucket here.',
    Key: folder + '/' + filename + ".jpg",
    ACL: 'public-read',
    ContentEncoding: 'base64',
    Body: new Buffer(file.replace(/^data:image\/\w+;base64,/, ""), 'base64'),
    ContentType: 'image/jpeg'
  };
  bucket.upload(params, function (err, data) {
    if (err) {
      console.log('There was an error uploading your file: ', err);
      return false;
    }
    console.log('Successfully uploaded file.', data);
    return true;
  });
}
As another solution: I fixed mine by using Body: fs.createReadStream instead, and it worked like a charm.
const uploadFile = () => {
  fs.readFile(filename, (err, data) => {
    if (err) throw err;
    const params = {
      Bucket: `${process.env.S3_Bucket}/ProfilePics`, // pass your bucket name
      Key: `${decoded.id}-pic.${filetypeabbrv}`, // file will be saved under the ProfilePics prefix
      Body: fs.createReadStream(req.file.path),
      ContentType: filetype,
    };
    s3.upload(params, function (s3Err, data) {
      if (s3Err) throw s3Err;
      console.log(`File uploaded successfully at ${data.Location}`);
    });
  });
};
