download image on aws lambda nodejs - node.js

I need to download an image in my AWS Lambda function and use it later.
I have tried the http.get() method, but it requires a local file system to place the image, which I guess is not available in a Lambda function.
I have also tried the request.get method, which is also not returning the correct response.
Currently my function looks like this:
const request = require('request'); // the 'request' package is bundled with the function

function download_image(image_url) {
  return new Promise(resolve => {
    request.get(image_url, function (error, response, body) {
      if (!error && response.statusCode == 200) {
        // let data = "data:" + response.headers["content-type"] + ";base64," + new Buffer(body).toString('base64');
        resolve("Downloaded");
      } else {
        resolve("Failed to download");
      }
    });
  });
}
I am open to any way of storing the image on S3, or storing it in DynamoDB in some format.
Any help will be appreciated.

I'm fairly sure you can use http.get in a Lambda.
In concept, you'd make the request, save the response to a byte array or Buffer, and then write that to S3. S3 makes sense for files, and retrieving them is much easier than from DynamoDB; with DynamoDB you also pay for writes and reads. See also:
Saving an image stored on s3 using node.js?
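A minimal sketch of that buffer-then-upload approach, assuming the aws-sdk v2 package is available in (or bundled with) the Lambda runtime; the bucket name, object key and event field are placeholders:

// Sketch: download an image into a Buffer and upload it to S3.
// 'my-image-bucket', the key and event.image_url are placeholders.
const https = require('https');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

function downloadToBuffer(imageUrl) {
  return new Promise((resolve, reject) => {
    https.get(imageUrl, (res) => {
      if (res.statusCode !== 200) {
        reject(new Error('Request failed with status ' + res.statusCode));
        return;
      }
      const chunks = [];
      res.on('data', (chunk) => chunks.push(chunk));
      res.on('end', () => resolve(Buffer.concat(chunks)));
      res.on('error', reject);
    }).on('error', reject);
  });
}

exports.handler = async (event) => {
  const body = await downloadToBuffer(event.image_url);
  await s3.putObject({
    Bucket: 'my-image-bucket',             // placeholder bucket
    Key: 'images/' + Date.now() + '.jpg',  // placeholder key
    Body: body,
    ContentType: 'image/jpeg',
  }).promise();
  return { statusCode: 200, body: 'Uploaded' };
};

Nothing touches the local file system here; the image only ever exists in memory as a Buffer before being written to S3.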

Related

How to send a stream of data from a JS application to a Lambda running on Node.js and return concurrently

I have an AngularJS application; from this application I send data to my AWS API endpoint:
/**
 * Bulk sync with master
 */
async syncDataWithMaster(): Promise<AxiosResponse<any> | void> {
  try {
    axios.defaults.headers.post.Authorization = token;
    const url = this.endpoint;
    return axios.post(url, compressed, {
      onUploadProgress: progressEvent => {
        console.log('uploading');
      },
      onDownloadProgress: progressEvent => {
        console.log('downloading');
      },
    }).then((response) => {
      if (response.data.status == 'success') {
        return response;
      } else {
        throw new Error('Could not authenticate user');
      }
    });
  } catch (e) {
    // handle / log the error
  }
  return;
}
The API Gateway triggers my Lambda function (Node.js) with the data it received:
exports.handler = async (event) => {
  const localData = JSON.parse(event.body);
  /**
   * Here get data from master and compare with local data and send back any new data
   */
  const response = {
    statusCode: 200,
    body: JSON.stringify(newData),
  };
  return response;
};
The Lambda function calls the database and gets the master data for the user (not shown in the example); this data is then compared with the local data using various logic, which determines whether any new rows need to be sent back to the local device to be stored/updated. (Before anyone asks, the nature of the application needs the full data.)
This principle works great for about 90% of my users. However, some users have fairly large amounts of data, the current maximum being around 17 MB.
So my question is: is it possible to stream data to and from the Lambda function? Stream the data to the function, process it, and stream it back, so that it is not affected by the AWS payload limits?
Or is it possible to somehow begin sending data to the function as a stream and then, as data becomes available, have it start streaming data back at the same time?
(The data is in JSON format.)
I am wondering what the alternatives to this solution are (it also needs to be fairly quick, 30 seconds max).
(One other idea I had: for data above a certain size, the client first saves it to S3 using a signed URL, then calls the API Gateway endpoint for the Lambda. The Lambda gets the saved file and compares it to the master data. If the new data to be returned is over a certain size, it is saved to S3 and a signed URL is returned to the client; the client then downloads the new data and processes it.) However, I am not sure if this is cost effective, and it sounds like execution time may be long.
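A rough sketch of the signed-URL step I have in mind, assuming the aws-sdk v2; the bucket name and key here are placeholders:

// Sketch: Lambda that hands the client a pre-signed PUT URL so large sync
// payloads can be uploaded straight to S3. Bucket name and key are placeholders.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.getUploadUrl = async () => {
  const key = 'sync/' + Date.now() + '.json';
  const uploadUrl = s3.getSignedUrl('putObject', {
    Bucket: 'sync-staging-bucket',   // placeholder bucket
    Key: key,
    ContentType: 'application/json',
    Expires: 300,                    // URL valid for 5 minutes
  });
  return { statusCode: 200, body: JSON.stringify({ uploadUrl, key }) };
};

The client would PUT the compressed JSON to uploadUrl and then call the existing endpoint with just the key.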
Thanks for any help; I've been trying to figure this out for a while now.

Lambda stream encoded string

I stored the audio file in an array buffer and encoded it with base64. I need to send the data from the Lambda to a React client. For larger audio files, I'm hitting the Lambda payload limit error.
Is there any way to stream the data in chunks from the Lambda to the client?
function readFile(filepath, callback) {
  // Uint8Array
  fs.readFile(filepath, (err, data) => {
    // data is a Buffer object
    if (err) console.log(err);
    callback(data);
  });
}

readFile(`${outputFile}`, function (data) {
  try {
    let base64enc = base64.encode(data);
    responseBody.message = base64enc;
    status = statusCodes.OK;
    return sendResponse(status, responseBody); // Sending response
  } catch (err) {
    console.log("error " + err);
    reject(err);
  }
});
No, there isn't. Each Lambda invocation can return only one payload, and Lambda invocations are independent. I suggest two possible solutions.
Request a specific chunk. You can call the Lambda function multiple times, with each call requesting a specific chunk of the data, and merge the chunks back together into one piece of data (a file, in your case) on the frontend.
Use S3. Media files are usually handled with S3. Assuming you already have the audio file on S3, you generate and return a pre-signed GET URL for the object in the Lambda, and use that URL to fetch the object on the frontend. (You can refer to the code in "Presigned URL generation code times-out as Lambda, works locally" or other sources.) You can also upload audio files by getting a pre-signed PUT URL from the Lambda and using it on the frontend to upload.
I would suggest the second solution because it is a more standard way of dealing with media files.
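A minimal sketch of the second option, assuming the aws-sdk v2; the bucket name and the event field carrying the key are placeholders:

// Sketch: Lambda returning a pre-signed GET URL for an audio file already on S3.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const url = s3.getSignedUrl('getObject', {
    Bucket: 'my-audio-bucket',   // placeholder bucket
    Key: event.audioKey,         // placeholder, e.g. 'audio/output.mp3'
    Expires: 300,                // URL valid for 5 minutes
  });
  return {
    statusCode: 200,
    body: JSON.stringify({ url }),
  };
};

The frontend then fetches the audio directly from S3 with that URL, so the file bytes never pass through the Lambda response payload.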

AWS Lambda converting inbound PDF to JPGs

Currently I am doing a simple copy with a Lambda function in Node.js, where I copy an incoming PDF file to another bucket.
What I would like to do is copy that PDF and create a JPG of each page. I currently have a back-end process doing this with ImageMagick, but would like to move it into my Lambda function, maybe using gm?
Here is my current code:
var params = {
  CopySource: srcBucket + '/' + srcKey,
  Bucket: destinationbucket,
  Key: 'outfile.pdf'
};
s3.copyObject(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
  context.succeed('exit');
});
ImageMagick is available in the Node.js Lambda execution environment. From the documentation:
If you author your Lambda function code in Node.js, the following libraries are available in the AWS Lambda execution environment so you don't need to include them:
ImageMagick: Installed with default settings. For versioning information, see the imagemagick nodejs wrapper and the ImageMagick native binary (search for "ImageMagick").
So you should be able to move your current solution to Lambda fairly easily.
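A rough sketch of what the conversion could look like, assuming the gm package (deployed with the function) backed by the ImageMagick binary in the Lambda environment; the bucket variables and keys are the same placeholders as in your snippet:

// Sketch: fetch the PDF from S3, render the first page as a JPG with
// gm/ImageMagick, and upload the result. Loop over pages for a full convert.
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true });
var s3 = new AWS.S3();

exports.handler = function (event, context) {
  s3.getObject({ Bucket: srcBucket, Key: srcKey }, function (err, pdf) {
    if (err) return context.fail(err);

    // '[0]' selects the first page of the PDF.
    gm(pdf.Body, srcKey + '[0]')
      .density(150, 150) // render resolution for the PDF page
      .toBuffer('JPG', function (err, jpgBuffer) {
        if (err) return context.fail(err);

        s3.putObject({
          Bucket: destinationbucket,
          Key: srcKey.replace(/\.pdf$/, '-page0.jpg'),
          Body: jpgBuffer,
          ContentType: 'image/jpeg'
        }, function (err) {
          if (err) return context.fail(err);
          context.succeed('converted');
        });
      });
  });
};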

Serving out saved Buffer from Mongo

I'm trying to serve out images that I have stored in a Mongo document. I'm using express, express-resource and mongoose.
The data, which is a JPG, is stored in a Buffer field in my schema. It seems to be getting there correctly, as I can read the data using the CLI.
Then I run a find, grab the field and attempt to send it. See the code:
res.contentType('jpg');
res.send(img);
I don't think it's a storage issue because I'm performing the same action here:
var img = fs.readFileSync(
__dirname + '/../../img/small.jpg'
);
res.contentType('jpg');
res.send(img);
In the browser the image appears as a broken icon.
I'm wondering if it's an issue with express-resource, because I have the format set to json; however, I am indeed overriding the content type before sending the data.
scratches head
I managed to solve this myself. It seems I was using the right method to send the data from express, but wasn't storing it properly (tricky!).
For future reference for anyone handling image downloads and managing them in Buffers, here is some sample code using the request package:
request(
  {
    uri: uri,
    encoding: 'binary'
  },
  function (err, response, body) {
    if (!err && response.statusCode == 200) {
      var imgData = Buffer.from(body.toString(), 'binary').toString('base64');
      callback(null, Buffer.from(imgData, 'base64'));
    }
  }
);
Within Mongo, you need to set up a document property with type Buffer to store it successfully. It seems this issue was down to how I was saving it into Mongo.
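A minimal sketch of what the schema and the serving route might look like; the model and field names (Image, img, contentType) are placeholders:

// Sketch: Mongoose schema with a Buffer field, plus an express route that
// serves the stored JPG back to the browser.
var mongoose = require('mongoose');

var imageSchema = new mongoose.Schema({
  name: String,
  img: Buffer,        // raw image bytes
  contentType: String // e.g. 'image/jpeg'
});
var Image = mongoose.model('Image', imageSchema);

app.get('/images/:id', function (req, res, next) {
  Image.findById(req.params.id, function (err, doc) {
    if (err || !doc) return next(err);
    res.contentType(doc.contentType || 'jpg');
    res.send(doc.img);
  });
});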
Hopefully that saves someone time in the future. =)

Turn on Server-side encryption and Get Object Version in Amazon S3 with knox and nodejs

So far I've been able to successfully use node.js, express, and knox to add/update/delete/retrieve objects in Amazon S3. To take things to the next level, I'm trying to figure out how to use knox (if it's possible) to do two things:
1) Set the object to use server-side encryption when adding/updating the object.
2) Get a particular version of an object or get a list of versions of the object.
I know this is an old question, but it is possible to upload a file with knox using server-side encryption by specifying a header:
client.putFile('test.txt', '/test.txt', {"x-amz-server-side-encryption": "AES256"}, function (err, res) {
  // Do something here
});
Andy (who wrote AwsSum) here.
Using AwsSum, when you put an object, just set the 'ServerSideEncryption' to the value you want (currently S3 only supports 'AES256'). Easy! :)
e.g.
var body = ...; // a buffer, a string, a stream
var options = {
  BucketName : 'chilts',
  ObjectName : 'my-object.ext',
  ContentLength : Buffer.byteLength(body),
  Body : body,
  ServerSideEncryption : 'AES256'
};
s3.PutObject(options, function (err, data) {
  console.log("\nputting an object to pie-18 - expecting success");
  console.log(err, 'Error');
  console.log(data, 'Data');
});
