Getting EXIF data from images using ImageMagick inside AWS Lambda - node.js

I'm trying to extract EXIF data from images using ImageMagick inside AWS Lambda, but I can't find a way to do it.
I have a piece of code that resizes the image and it works fine, but I want to add a step that extracts the EXIF data.
Here is what I have right now to resize images:
var im = require("gm").subClass({imageMagick: true});

var operation = im(image.buffer).autoOrient().resize(width, height, '^');
operation.toBuffer(image.imageType, function(err, buffer) {
    if (err) {
        // do something with the error
    } else {
        // do something with the image
    }
});
Any idea how to extract the EXIF metadata from the image?
Thanks.
C.C.
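One direction that might work, purely as a sketch (gm's identify() is documented, but I have not verified its output inside Lambda, and where the EXIF fields end up in the result depends on the ImageMagick version the function uses):

var im = require("gm").subClass({imageMagick: true});

// identify() shells out to ImageMagick's `identify` and returns the parsed metadata.
im(image.buffer).identify(function(err, data) {
    if (err) {
        // handle the error
    } else {
        // EXIF values typically show up under keys such as data['Profile-EXIF']
        // or data.Properties, depending on the ImageMagick version.
        console.log(JSON.stringify(data, null, 2));
    }
});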

Related

Get Full Object From Oracle Object Storage using Node.js

I am working on a Node.js image server that reads and writes images on Oracle Object Storage.
The issue I am having is that I do not get the full image when using the getObject function from the oci-objectstorage JavaScript API.
I have successfully stored the following images: a 1x1 image of 70 bytes, and a 5120 x 3200 image of 2.9 MB.
When I use getObject I am able to retrieve the full 1x1 image, but when I try it with the 5120 x 3200 image I only get about 15 KB of the 2.9 MB.
I used the following example from Oracle
https://github.com/oracle/oci-typescript-sdk/blob/master/examples/javascript/objectstorage.js
Below is the code I am using to read the image from Oracle Object Storage; it lives inside an async function.
router.get('/data/', async function (req, res, next) {
    let path = req.query.image_data
    fs.access(imagePath, fs.F_OK, async (err) => {
        if (err) {
            const provider = new common.ConfigFileAuthenticationDetailsProvider();
            const client = new os.ObjectStorageClient({
                authenticationDetailsProvider: provider
            });
            const compartmentId = config.COMPARTMENTID
            const bucket = config.BUCKET
            const request = {};
            const response = await client.getNamespace(request);
            const namespace = response.value;
            const getObjectRequest = {
                objectName: imagePath,
                bucketName: bucket,
                namespaceName: namespace
            };
            const getObjectResponse = await client.getObject(getObjectRequest);
            const head = getObjectResponse.value._readableState.buffer.head.data.toString('base64')
            const tail = getObjectResponse.value._readableState.buffer.tail.data.toString('base64')
            await fs.writeFile(imagePath, completeImage, {encoding: 'base64'}, function(err) {
                if (err) return
                res.sendFile(path, {root: './imagefiles'}) // using express to serve the image file
            });
        }
        // file exists
        res.sendFile(path, {root: './imagefiles'});
    })
})
It seems to me that head and tail both contain the same data. I then try to write the image using fs.writeFile, which for the large image only writes a small portion of it, while for the small 1x1 image it writes the full image.
I am not sure if it is an issue with my use of async/await, or whether I need a better promise-based implementation that downloads the full image.
Any ideas on how to tackle this?
Another small issue I am having is serving the image after writing it. On the web page I get an error saying the image could not be displayed because it contains errors. But after I refresh the page, which now finds the image since it exists on disk, it displays the image correctly without the previous error.
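Not a full answer, but one thing worth noting: _readableState is a Node internal, and its head/tail entries only hold whatever happens to be buffered at that instant, which would explain why only about 15 KB of the large image comes through. A sketch of consuming the whole response instead (assuming getObjectResponse.value is a standard readable stream, as in the Oracle example linked above):

// Collect the entire response stream into a single Buffer.
function streamToBuffer(stream) {
    return new Promise((resolve, reject) => {
        const chunks = [];
        stream.on('data', (chunk) => chunks.push(chunk));
        stream.on('end', () => resolve(Buffer.concat(chunks)));
        stream.on('error', reject);
    });
}

// Inside the handler, after getObject:
// const getObjectResponse = await client.getObject(getObjectRequest);
// const imageBuffer = await streamToBuffer(getObjectResponse.value);
// await fs.promises.writeFile(imagePath, imageBuffer);
// res.sendFile(path, {root: './imagefiles'});

Writing the complete buffer before calling res.sendFile should also help with the "contains errors" message on first load, since that usually means the file was still partial when it was served.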

Cropping larger images to multiple sizes in node.js

I am creating an image uploading system (image sizes are usually between 20 MB and 50 MB), and I want to crop those images to various sizes (for viewing in mobile, web, and desktop applications). All images are stored in AWS S3.
Here is a snapshot of the crop sizes:
[{
    width: 200,
    height: 200,
    type: "small",
    platform: "web"
},
{
    width: 300,
    height: 400,
    type: "small",
    platform: "mobile-android"
}
....
....
]
Here is what I am planning to do:
1. First upload the image to S3.
2. Run all the crop operations as an async task.
upload: function(req, res) {
    // various crop sizes
    var cropSizes = [];
    // upload image to S3
    uploadImageToS3(req.file, function(err, result) {
        if (!err) {
            // create crops
            cropImage({
                'cropsizes': cropSizes,
                'file': req.file
            }, function(err, result) {
                console.log('all crop completed', result);
            });
            res.send('run crop in background');
        }
    });
}
But is this the correct method? Does anyone have a better approach?
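For what it's worth, here is a rough sketch of what a cropImage helper along these lines could look like. This is hypothetical, not the poster's actual implementation: it assumes gm with the ImageMagick backend (used elsewhere on this page), that req.file exposes a buffer, and a made-up uploadCroppedToS3 helper for storing each result:

var gm = require('gm').subClass({imageMagick: true});

// Hypothetical helper: produce one cropped/resized buffer per entry in
// options.cropsizes and upload each one, calling done when all finish.
function cropImage(options, done) {
    var remaining = options.cropsizes.length;
    if (remaining === 0) return done(null, 'nothing to crop');
    var failed = false;

    options.cropsizes.forEach(function(size) {
        gm(options.file.buffer)
            .resize(size.width, size.height, '^')   // fill the box, then crop
            .gravity('Center')
            .crop(size.width, size.height)
            .toBuffer('JPG', function(err, buffer) { // output format chosen arbitrarily here
                if (failed) return;
                if (err) { failed = true; return done(err); }
                // uploadCroppedToS3 is assumed: store the buffer under a key
                // derived from size.type and size.platform.
                uploadCroppedToS3(buffer, size, function(uploadErr) {
                    if (failed) return;
                    if (uploadErr) { failed = true; return done(uploadErr); }
                    if (--remaining === 0) done(null, 'all sizes processed');
                });
            });
    });
}

Uploading the original first and responding before the crops finish, as in the question, is fine as long as something (a queue, a retry, or at least logging) handles crop failures that happen after the response has already been sent.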
Since you are already using S3, I would recommend trying AWS Lambda to resize your images and add them back to the S3 bucket at the new sizes.
There is a detailed explanation at https://aws.amazon.com/blogs/compute/resize-images-on-the-fly-with-amazon-s3-aws-lambda-and-amazon-api-gateway/
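As a rough illustration of that pattern (a sketch only, not the code from the linked post; the "resized/" prefix and the 200x200 size are made up, and it reuses the gm/ImageMagick setup from the question at the top of this page):

var aws = require('aws-sdk');
var gm = require('gm').subClass({imageMagick: true});
var s3 = new aws.S3();

// S3-triggered Lambda: fetch the uploaded original, resize it, and write
// the result back to the same bucket under a "resized/" prefix.
exports.handler = function(event, context, callback) {
    var bucket = event.Records[0].s3.bucket.name;
    var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

    s3.getObject({Bucket: bucket, Key: key}, function(err, data) {
        if (err) return callback(err);
        gm(data.Body).resize(200, 200, '^').toBuffer('JPG', function(err, buffer) {
            if (err) return callback(err);
            s3.putObject({
                Bucket: bucket,
                Key: 'resized/' + key,
                Body: buffer,
                ContentType: 'image/jpeg'
            }, callback);
        });
    });
};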

How do I save a local image to S3 from my Lambda function in Node

I have a Node script that attempts to do some image manipulation and then save the results to S3. The script seems to work, but when I run it the resulting image in S3 is just a blank file. I've tried using the result image, the source image, etc., just to see if the image itself is the problem ... I tried Base64 encoding and just passing the image file. Not really sure what the issue is.
var base_image_url = '/tmp/inputFile.jpg';
var change_image_url = './images/frame.png';
var output_file = '/tmp/outputFile.jpg';

var params = {
    Bucket: 'imagemagicimages',
    Key: 'image_' + num + '.jpg',
    ACL: "public-read",
    ContentType: 'image/jpeg',
    Body: change_image_url
}

s3.putObject(params, function (err, data) {
    if (err) {
        // an error occurred
        console.log(err, err.stack);
    } else {
        callback("it");
        console.log(data);
    }
});
It looks like this line…
Body: change_image_url
…is saving the string './images/frame.png' to the object. You need to send image data to S3, not a string. You say you are doing image manipulation, but there's no code for that. If you are manipulating an image, you must have the image data in a buffer somewhere; that is what you should be sending to S3.
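As a sketch of the fix (assuming the manipulated result has been written to output_file from the question; num, s3, and callback are the question's own variables):

var fs = require('fs');

// Read the manipulated image from disk and send the raw bytes,
// not the path string.
fs.readFile(output_file, function (err, imageData) {
    if (err) return callback(err);
    s3.putObject({
        Bucket: 'imagemagicimages',
        Key: 'image_' + num + '.jpg',
        ACL: 'public-read',
        ContentType: 'image/jpeg',
        Body: imageData  // a Buffer with the actual image data
    }, function (err, data) {
        if (err) return console.log(err, err.stack);
        callback("it");
        console.log(data);
    });
});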

Resizing an image while saving its EXIF orientation with node-gm

I'm writing a Node.js 5.3.0 application using gm (http://aheckmann.github.io/gm/).
I know that it uses the GraphicsMagick library.
The problem I'm having is that after I resize an image, it loses its EXIF data; the code samples actually show that the EXIF profile is removed.
for example:
var fs = require('fs')
, gm = require('gm').subClass({imageMagick: true});
// resize and remove EXIF profile data
gm('/path/to/my/img.jpg')
.resize(240, 240)
In this example they say that the EXIF profile data is removed.
I know that I can get the orientation of an image before resizing using:
gm('/path/to/my/img.jpg').orientation(function(err, value) {
    var orientation = value;
});
The question is: can I preserve EXIF data when resizing? And if not, can I set the EXIF orientation data after resizing?
thanks
More specifically, in the following code only the noProfile() call removes the EXIF data, so if you remove it you can preserve the EXIF data:
// resize and remove EXIF profile data
gm('/path/to/my/img.jpg')
    .resize(240, 240)
    .noProfile()
    .write('/path/to/resize.png', function (err) {
        if (!err) console.log('done');
    });
Otherwise you can check the gm doc here
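In other words, the EXIF-preserving version is the same pipeline without noProfile(); a sketch (the JPEG output path here is my own choice, so the EXIF profile has a format that carries it):

// resize but keep the profile data (EXIF included)
gm('/path/to/my/img.jpg')
    .resize(240, 240)
    .write('/path/to/resized.jpg', function (err) {
        if (!err) console.log('done, profiles preserved');
    });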

Resize Image From S3 with Javascript

I am using the knox package to connect to my S3 account and pull an image, like this:
var picturestring;

knoxclient.get(key).on('response', function(res) {
    console.log(res.statusCode);
    console.log(res.headers);
    res.setEncoding('base64');
    res.on('data', function(chunk) {
        picturestring += chunk;
    });
    res.on('end', function () {
        console.log(picturestring);
        resizeimage(picturestring, done); // use the resize library in this function
    });
}).end();
After that, I want to use a library that can take in that string (picturestring), resize the image, and return a new base64 string that represents the resized image. At this point, I plan on uploading the resized image to S3.
I wrote a similar script in Golang that lets me resize images like this, but every JS resizing library I've reviewed only gives examples of resizing images from the local file system.
Is there any way I can avoid reading the image from S3 into the file system, and deal exclusively with the returned string?
***************UPDATE***********************
function pullFromS3(key, done) {
    console.log("This is the key being pulled from Amazon: ", key);
    var originalstream = new MemoryStream(null, {readable: false});
    var picturefile;
    client.get(key).on('response', function(res) {
        console.log("This is the res status code: ", res.statusCode);
        res.setEncoding('base64');
        res.pipe(originalstream);
        res.on('end', function () {
            resizeImage(originalstream, key, done);
        });
    }).end();
}

function resizeImage(originalstream, key, done) {
    console.log("This is the original stream: ", originalstream.toString());
    var resizedstream = new MemoryStream(null, {readable: false});
    var resize = im().resize('160x160').quality(90);
    // getting stuck here ******
    originalstream.pipe(resize).pipe(resizedstream);
    done();
}
I can't seem to get a grip on how the piping from originalstream, through the ImageMagick resize function, into resizedstream works. Ideally, resizedstream should hold the base64 string for the resized image, which I can then upload to S3.
1) How do I wait for the piping to finish, and THEN use the data in resizedstream?
2) Am I doing the piping correctly? I can't debug it because I am unsure how to wait for the piping to finish!
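For question 1, one approach is to stop calling done() right after setting up the pipes and instead listen for the writable end's 'finish' event. A sketch of resizeImage along those lines (it assumes MemoryStream is the memorystream module and im() is imagemagick-stream, as in the answer below; note also that ImageMagick expects raw binary on stdin, so the earlier setEncoding('base64') probably needs to go and the base64 conversion should happen after resizing):

function resizeImage(originalstream, key, done) {
    var resizedstream = new MemoryStream(null, {readable: false});
    var resize = im().resize('160x160').quality(90);

    // 'finish' fires once everything piped into resizedstream has been
    // written, i.e. only after the resized image data is complete.
    resizedstream.on('finish', function () {
        done(null, resizedstream);
    });
    resizedstream.on('error', done);

    originalstream.pipe(resize).pipe(resizedstream);
}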
I'm not using S3, but a local cloud provider in China, to store images and their thumbnails. In my case I was using ImageMagick via the imagemagick-stream and memorystream modules.
imagemagick-stream provides a way to process an image with ImageMagick through streams, so I don't need to save the image to local disk.
memorystream provides a way to store the source image and thumbnail binaries in memory, with the ability to read from and write to them as streams.
So the logic I have is:
1. Retrieve the image binaries from the client POST request.
2. Save the image into memory using memorystream.
3. Upload it to, in your case, S3.
4. Define the image processing action in imagemagick-stream, for example resize to 180x180.
5. Create a readable stream from the original image binaries from step 1 using memorystream, pipe it into the imagemagick-stream processor created in step 4, and then pipe that into a new writable memory stream created by memorystream, which holds the thumbnail (see the sketch at the end of this answer).
6. Upload the thumbnail from step 5 to S3.
The only problem with my solution is that your virtual machine might run out of memory if many huge images come in at once. I know this should not happen in my case, so that's OK, but you'd better evaluate it yourself.
Hope this helps a bit.
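A minimal sketch of steps 4 and 5, using the two packages named above (the 180x180 size is the example from step 4, and originalImageBuffer stands in for the binaries collected from the POST request in steps 1 and 2):

var im = require('imagemagick-stream');
var MemoryStream = require('memorystream');

// Step 4: define the processing action.
var resizer = im().resize('180x180');

// Step 5: readable in-memory stream over the original binaries, piped
// through imagemagick-stream; the output chunks are collected back
// into memory as the thumbnail.
var source = new MemoryStream();          // duplex in-memory stream
var thumbnailChunks = [];

resizer.on('data', function (chunk) {
    thumbnailChunks.push(chunk);
});
resizer.on('end', function () {
    var thumbnail = Buffer.concat(thumbnailChunks);
    // Step 6: upload `thumbnail` to S3 (upload code not shown here).
});

source.pipe(resizer);
source.end(originalImageBuffer);          // hypothetical Buffer from steps 1-2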
