Store generated ImageMagick image to s3 without temp files - node.js

I am generating a PNG on the server side of a node.js application, using ImageMagick and the gm library for node.js (GraphicsMagick for node.js).
// start with a blank image
var gmImage = gm(100, 100, "#000000ff");
// Draw the stuff on the new blank image
When I'm finished drawing stuff using the gm library, I am storing that image to the file system:
gmImage.write(imagePath, function (err) {
...
});
I am now moving to S3. I want to skip this step and write the image directly to S3 without using a temporary file.
Is there a way to write the gmImage to a buffer or something?

Take a look at the stream section of the API: https://github.com/aheckmann/gm#streams
You should be able to pipe stdout into S3:
var gmImage = gm(100, 100, "#000000ff");
gmImage.stream(function (err, stdout, stderr) {
  stdout.pipe(s3Stream);
});
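For example, with the AWS SDK for Node.js, s3.upload accepts a readable stream as the Body, so the gm output can go straight to S3 without touching disk. A minimal sketch, assuming the aws-sdk v2 module; the bucket and key names are illustrative:
var AWS = require('aws-sdk');
var gm = require('gm');

var s3 = new AWS.S3();
var gmImage = gm(100, 100, "#000000ff");

// Stream the PNG out of gm and hand the readable stream to s3.upload,
// which supports streaming bodies (s3.putObject does not)
gmImage.stream('png', function (err, stdout, stderr) {
  if (err) return console.error(err);
  s3.upload({
    Bucket: 'my-bucket',    // illustrative bucket name
    Key: 'generated.png',   // illustrative key
    Body: stdout,
    ContentType: 'image/png'
  }, function (err, data) {
    if (err) return console.error(err);
    console.log('Uploaded to', data.Location);
  });
});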

Related

Play audio directly from Lambda /tmp folder

I'm currently building an Alexa application in Node with Lambda. I need to convert and merge several audio files. I'm currently creating an audio file using Google text-to-speech (long story on the need for it), which I write to /tmp, and pulling an audio file from S3, which I also write to /tmp. I'm then using sox to merge the two files (see below) and write the result back to S3 (currently public), which I then have hard-coded to play that particular clip.
My question is if it is possible to play audio directly from the /tmp folder as opposed to having to write the file back to S3.
await lambdaAudio.sox('-m /tmp/google-formatted.mp3 /tmp/audio.mp3 /tmp/result.mp3');
// get data from the resulting mp3
const data = await readFile('/tmp/result.mp3');
const base64data = Buffer.from(data, 'binary');
// put the file back on AWS for playing
s3.putObject({
  Bucket: 'my-bucket',
  Key: 'result.mp3',
  Body: base64data,
  ACL: 'public-read'
}, function (err, data) {
  if (err) return console.log(err);
  console.log('Done');
});
return `<audio src="https://s3.amazonaws.com/my-bucket/result.mp3" />`;
I usually upload the Lambda function by zipping the code and modules, and in general all the files that my code requires.
https://developer.amazon.com/blogs/post/Tx1UE9W1NQ0GYII/Publishing-Your-Skill-Code-to-Lambda-via-the-Command-Line-Interface
So if you zip the /tmp directory and publish it as part of your Lambda code, the audio file will be accessible to your Lambda function.
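A minimal sketch of reading a bundled file at runtime; Lambda extracts the deployment package next to the handler, so the path is relative to __dirname (the file name here is illustrative):
const fs = require('fs');
const path = require('path');

// Files zipped alongside the handler are extracted with the function code,
// so they can be read directly without a round trip to S3
const audio = fs.readFileSync(path.join(__dirname, 'audio.mp3'));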

S3 video to audio file conversion using Node.js (Lambda function)

I am trying to convert an S3 video file to an audio file through a Lambda function. Whenever a video file is uploaded into an S3 bucket, I have to generate an audio file and save it back to the S3 bucket by triggering the AWS Lambda function. I can convert the video file to audio locally (convert video to an audio file using FFmpeg). But I am wondering how to do this conversion in a Lambda function every time a video file is uploaded into the S3 bucket. I have no idea how to write this AWS Lambda function. Please share your suggestions.
Sample code:
var ffmpeg = require('fluent-ffmpeg');
/**
* input - string, path of input file
* output - string, path of output file
* callback - function, node-style callback fn (error, result)
*/
function convert(input, output, callback) {
  ffmpeg(input)
    .output(output)
    .on('end', function () {
      console.log('conversion ended');
      callback(null);
    })
    .on('error', function (err) {
      console.log('error: ', err.message);
      callback(err);
    })
    .run();
}
convert('./df.mp4', './output.mp3', function (err) {
  if (!err) {
    console.log('conversion complete');
    //...
  }
});
Thanks,
You just need to set up an event on the S3 bucket (put object) to trigger the Lambda function; you will get access to the description of the uploaded object through the first parameter of the Lambda function.
If you can convert the video file to audio on your local machine using some external libraries, then you need to create a zip file containing your Lambda function (in the root of the zip file) as well as the dependencies.
This is pretty simple in the case of Node. Create a new folder, run npm init, install the needed modules, and create an index.js file where you put your Node code. Zip all the contents of this folder (not the folder itself). When you create a new Lambda function, choose to upload this zip file.
If you are wondering how to programmatically communicate with AWS resources and manipulate them, check the aws-sdk module, which you can import and use for that purpose.
So basically, what you will need to do inside your Lambda function is parse the event argument (the first parameter) to obtain the bucket and key of the uploaded object. Then call s3.getObject to get the data, process it with your custom logic, and call s3.putObject to store the newly transformed data to a new S3 location.
Lambda also has access to its own local file system, if your code needs to store some data there. You just need to specify an absolute path to the file, such as /tmp/output.mp3. To retrieve it, you can use the fs module. Then you can continue with s3.putObject, as in the sketch below.
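Putting those steps together, a minimal sketch of such a handler, assuming the aws-sdk and fluent-ffmpeg modules are bundled with the function and an ffmpeg binary is available in the Lambda environment (for example via a layer); the file names and output key are illustrative:
const AWS = require('aws-sdk');
const fs = require('fs');
const ffmpeg = require('fluent-ffmpeg');

const s3 = new AWS.S3();

exports.handler = async (event) => {
  // Extract bucket and key of the uploaded video from the S3 event
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

  // Download the video to Lambda's local file system
  const video = await s3.getObject({ Bucket: bucket, Key: key }).promise();
  fs.writeFileSync('/tmp/input.mp4', video.Body);

  // Convert to audio with ffmpeg
  await new Promise((resolve, reject) => {
    ffmpeg('/tmp/input.mp4')
      .output('/tmp/output.mp3')
      .on('end', resolve)
      .on('error', reject)
      .run();
  });

  // Upload the result next to the original, swapping the extension
  await s3.putObject({
    Bucket: bucket,
    Key: key.replace(/\.\w+$/, '.mp3'),
    Body: fs.readFileSync('/tmp/output.mp3')
  }).promise();
};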

Get 'sharp' metadata after piping through another module

I want to compress an image using the Sharp image processing library, pass it through an external quant library, and then get the sharp metadata for it. I want to do this so I can overlay the compressed image size onto the image (during development only).
For a WEBP this is easy because everything is in the sharp pipeline.
// specify the compression
myImage.webp({ quality: 80 });
// actually compress it
var tempBuffer = await myImage.toBuffer({ resolveWithObject: true });
// create a new sharp object for any further processing
var compressedImage = sharp(tempBuffer.data);
// the compressed size is now available in
console.log(tempBuffer.info.size / 1024);
But when using the quant library, I'm piping the image into a third-party library, so it's no longer a sharp object. I need to get the raw buffer out again in the most efficient way. I'm new to Node.js and don't know how to do this.
resizedImage.png()
  .pipe(new quant(['--quality=50-70', '--speed', '1', '-']));
Do I need to use something like https://www.npmjs.com/package/stream-to-array ?
That seems crazy to me! Am I missing something?
Figured it out. You can just pipe it back into sharp() like this:
resizedImage.png()
  .pipe(new quant(['--quality=50-70', '--speed', '1', '-']))
  .pipe(sharp());
Then you can call metadata() on it, do further resizing, etc. (not that you'd normally do that!).
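For example, to read the compressed size the same way as in the WEBP case above, a sketch (sharp instances are duplex streams, so the final sharp() can be awaited with toBuffer):
const pipeline = resizedImage.png()
  .pipe(new quant(['--quality=50-70', '--speed', '1', '-']))
  .pipe(sharp());

// resolveWithObject gives back both the buffer and an info object
const { info } = await pipeline.toBuffer({ resolveWithObject: true });
console.log(info.size / 1024);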

Pipe image output from Graphics Magic to response without base64 encoding

I have a Node server where, instead of storing cropped images, I want to crop them in response to an AJAX call and send them to the client that way. I'm storing the information about what to crop and how to crop it in cookies and the request body. On the server I crop the image, encode it in base64, and send it back to the client. Here is what my code looks like:
res.set('Content-Type', 'image/jpeg');
gm(request(body.URL))
  .crop(req.cookies[name + "Width"], req.cookies[name + "Height"], req.cookies[name + "X"], req.cookies[name + "Y"])
  .stream(function streamOut(err, stdout, stderr) {
    if (err) return next(err);
    stdout.pipe(base64encode()).pipe(res);
    stdout.on('error', next);
  });
This works, but I don't like it. I was only able to get this to work by encoding the image in base64, but on the client side it seems slow to decode this back into an image. I would rather just send the image directly, but I was unable to get that to work: piping the image without the base64 step resulted in a gibberish response from the server. Is there a better way to do this? Or does the unsaved image have to be encoded in order to be sent?
I have no idea how you write node.js - it all looks like a bunch of dots and parentheses to me, but using what I know about the GraphicsMagick command line, I tried this and it does what I think you want - which is to write a JPEG encoded result on stdout:
// Send header "Content-type: image/jpeg"...
var gm = require('gm');
var input = 'input.jpg';
gm(input).resize(350).toBuffer('JPG', function (err, buffer) {
  if (err) return handle(err);
  process.stdout.write(buffer);
});
Update
Have you considered ruling out the AJAX aspects and just using a static src for your image that refers to the Node script? As I said, I do not know Node and JavaScript, but if I generated a thumbnail via a PHP script, I would add this into the HTML:
<img src="/php/thumb.php"/>
So that just invokes a PHP script to generate an image. If you replace /php/thumb.php with whatever you have named the Node script I suggested above, it should tell you whether the problem is in the AJAX or the GraphicsMagick aspects...
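For completeness, a sketch of what that might look like as an Express route streaming the raw JPEG straight to the response, with no base64 step; the route path, query parameter, and cookie names are illustrative, and gm and request are the same modules as in the question:
app.get('/cropped', function (req, res, next) {
  res.set('Content-Type', 'image/jpeg');
  gm(request(req.query.url))
    .crop(req.cookies.cropWidth, req.cookies.cropHeight, req.cookies.cropX, req.cookies.cropY)
    .stream('jpeg', function (err, stdout, stderr) {
      if (err) return next(err);
      // raw JPEG bytes go straight to the client, no base64 round trip
      stdout.pipe(res);
      stdout.on('error', next);
    });
});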

Composite images in Graphicsmagick

I'm trying to request an image from an API and "paste" it on top of another image. In Photoshop, I would paste the image into a new layer and then merge the layers. I can accomplish this with Graphicsmagick using gm's composite().
gm().command("composite")
.in("path/to/topImg.png")
.in("path/to/bottomImg.png")
.toBuffer('PNG', function(err, buffer) {
if (!err) {return buffer;}
});
However, composite only takes file paths. So let's say I want to get the logo from http://www.google.com. I could save the image, use it in the code above, and then delete it. What I'm looking for is a way to accomplish this without having to save the image to disk first.
You can use a URL directly as the image path, without downloading and saving it first:
gm()
  .command("composite")
  .in("http://someurl...")
  .in("http://someurl...")
  .toBuffer('PNG', function (err, buffer) {
    if (!err) { return buffer; }
  });
But GraphicsMagick uses the HTTP support from libxml2, which does not currently support HTTPS. So if you want to fetch images over HTTPS, you will need an external program.
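A possible workaround, as a sketch: download the images over HTTPS with Node's built-in https module to temporary files first, then composite from those paths (the file names are illustrative):
const https = require('https');
const fs = require('fs');
const gm = require('gm');

// Download a URL to a local file and resolve when the write finishes
function download(url, dest) {
  return new Promise((resolve, reject) => {
    const file = fs.createWriteStream(dest);
    https.get(url, (res) => {
      res.pipe(file);
      file.on('finish', () => file.close(resolve));
    }).on('error', reject);
  });
}

async function compositeFromHttps(topUrl, bottomUrl) {
  await download(topUrl, '/tmp/top.png');
  await download(bottomUrl, '/tmp/bottom.png');
  return new Promise((resolve, reject) => {
    gm()
      .command('composite')
      .in('/tmp/top.png')
      .in('/tmp/bottom.png')
      .toBuffer('PNG', (err, buffer) => err ? reject(err) : resolve(buffer));
  });
}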
