NodeJS + AWS SDK + S3 - how do I successfully upload a zip file?

I've been trying to upload gzipped log files to S3 using the AWS Node.js SDK, and occasionally find that the uploaded file in S3 is corrupted/truncated. When I download the file and decompress it with gunzip in a bash terminal, I get:
01-log_2014-09-22.tsv.gz: unexpected end of file.
When I compare file sizes, the downloaded file comes up just a tiny bit short of the original file size (which unzips fine).
This doesn't happen consistently; roughly one out of every three files is truncated. Re-uploading can fix the problem, and uploading through the S3 web UI also works fine.
Here's the code I'm using...
var stream = fs.createReadStream(localFilePath);

this.s3 = new AWS.S3();
this.s3.putObject({
    Bucket: bucketName,
    Key: folderName + filename,
    ACL: "bucket-owner-full-control",
    Body: stream,
}, function(err) {
    // stream.close();
    callback(err);
});
I shouldn't have to close the stream, since it defaults to autoclose, but the problem occurs either way.
The fact that it's intermittent suggests some sort of timing or buffering issue, but I can't find any controls to fiddle with that might affect that. Any suggestions?
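For what it's worth, one workaround I'm considering is s3.upload(), which manages streaming bodies itself in the AWS SDK for JavaScript v2 (multipart under the hood). A sketch, untested, using the same variables as above:

var fs = require('fs');
var AWS = require('aws-sdk');

var s3 = new AWS.S3();
var stream = fs.createReadStream(localFilePath);

// s3.upload() accepts a readable stream and handles buffering/chunking
// internally, unlike putObject(), which needs a known content length.
s3.upload({
    Bucket: bucketName,
    Key: folderName + filename,
    ACL: "bucket-owner-full-control",
    Body: stream
}, function(err, data) {
    if (err) return callback(err);
    console.log('Uploaded to', data.Location); // data.Location is the object URL
    callback(null);
});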
Thanks.

Related

Play audio directly from Lambda /tmp folder

I'm currently building an Alexa application in Node with Lambda, and I need to convert and merge several audio files. I'm creating an audio file using Google text-to-speech (long story on why it's needed), which I write to /tmp, and pulling an audio file from S3, which I also write to /tmp. I then use sox to merge the two files (see below) and write the result back to S3 (currently public), and I have the skill hard-coded to play that particular clip.
My question is whether it is possible to play audio directly from the /tmp folder, as opposed to having to write the file back to S3.
await lambdaAudio.sox('-m /tmp/google-formatted.mp3 /tmp/audio.mp3 /tmp/result.mp3');

// get data from the resulting mp3
const data = await readFile('/tmp/result.mp3');
const base64data = Buffer.from(data); // new Buffer() is deprecated

// put the file back on S3 for playing
s3.putObject({
    Bucket: 'my-bucket',
    Key: 'result.mp3',
    Body: base64data,
    ACL: 'public-read'
}, function (err) { // the first callback argument is the error, not the response
    if (err) console.log(err);
    else console.log('Done');
});

return `<audio src="https://s3.amazonaws.com/my-bucket/result.mp3" />`;
I usually upload the Lambda function by zipping the code and modules, and in general all the files that my code requires:
https://developer.amazon.com/blogs/post/Tx1UE9W1NQ0GYII/Publishing-Your-Skill-Code-to-Lambda-via-the-Command-Line-Interface
So if you zip the audio files together with your Lambda code and publish them as part of the deployment package, the audio file will be accessible to your Lambda function.
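A minimal sketch of that idea (the file name is hypothetical; it assumes the audio was bundled at the root of the deployment zip):

const fs = require('fs');
const path = require('path');

// Files shipped in the deployment zip are unpacked alongside the code,
// so they can be resolved relative to __dirname (read-only).
const audioPath = path.join(__dirname, 'bundled-audio.mp3'); // hypothetical name
const audio = fs.readFileSync(audioPath);
console.log('Bundled audio is', audio.length, 'bytes');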

S3 Video to audio file convert using Node js (Lambda function)

I am trying to convert an S3 video file to an audio file through a Lambda function. Whenever a video file is uploaded to an S3 bucket, I have to generate an audio file and save it back to the S3 bucket by triggering the AWS Lambda function. I can convert the video file to audio locally (Convert video to an audio file using FFMPEG), but I am wondering how to do this conversion in the Lambda function every time a video file is uploaded to the S3 bucket. I have no idea how to write this AWS Lambda function. Please share your suggestions.
Sample code:
var ffmpeg = require('fluent-ffmpeg');

/**
 * input - string, path of input file
 * output - string, path of output file
 * callback - function, node-style callback fn (error, result)
 */
function convert(input, output, callback) {
    ffmpeg(input)
        .output(output)
        .on('end', function() {
            console.log('conversion ended');
            callback(null);
        })
        .on('error', function(err) {
            console.log('error: ', err.message); // was e.code/e.msg, but `e` was never defined
            callback(err);
        })
        .run();
}

convert('./df.mp4', './output.mp3', function(err) {
    if (!err) {
        console.log('conversion complete');
        //...
    }
});
Thanks,
You just need to set up an event on the S3 bucket (object created / PUT) to trigger the Lambda function; you will get access to a description of the uploaded object through the first parameter of the Lambda function.
If you can convert the video file to audio on your local machine using some external libraries, then you need to create a zip file containing your Lambda function (in the root of the zip file) as well as its dependencies.
This is pretty simple in the case of Node. Create a new folder, run npm init, install the needed modules, and create an index.js file containing your Node code. Zip all the contents of this folder (not the folder itself). When you create the new Lambda function, choose to upload this zip file.
If you are wondering how to programmatically communicate with AWS resources and manipulate them, check out aws-sdk, which you can import as a module and use for that purpose.
So basically what you will need to do inside your Lambda function is parse the event argument (the first parameter) to obtain the bucket and key of the uploaded object. Then call s3.getObject to get the data, process the data with your custom logic, and call s3.putObject to store the newly transformed data at a new S3 location.
Lambda has access to its own local file system, if your code needs to store some data there. You just need to specify an absolute path to the file, such as /tmp/output.mp3. To retrieve it, you can use the fs module. Then, you can continue with s3.putObject.
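A hedged sketch of such a handler (untested; assumes the fluent-ffmpeg layer from the question, an ffmpeg binary available to Lambda, e.g. via a layer, and that the destination key naming is up to you):

const AWS = require('aws-sdk');
const fs = require('fs');
const ffmpeg = require('fluent-ffmpeg');

const s3 = new AWS.S3();

exports.handler = async (event) => {
    // Bucket and key of the uploaded video come from the S3 event record.
    const bucket = event.Records[0].s3.bucket.name;
    const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

    // Download the video to Lambda's writable /tmp storage.
    const video = await s3.getObject({ Bucket: bucket, Key: key }).promise();
    fs.writeFileSync('/tmp/input.mp4', video.Body);

    // Convert with ffmpeg (requires an ffmpeg binary, e.g. from a Lambda layer).
    await new Promise((resolve, reject) => {
        ffmpeg('/tmp/input.mp4')
            .output('/tmp/output.mp3')
            .on('end', resolve)
            .on('error', reject)
            .run();
    });

    // Upload the audio next to the original (key naming is just an example).
    await s3.putObject({
        Bucket: bucket,
        Key: key.replace(/\.[^.]+$/, '') + '.mp3',
        Body: fs.readFileSync('/tmp/output.mp3'),
        ContentType: 'audio/mpeg'
    }).promise();
};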

aws-sdk not deploying image to s3 bucket

I am using AWS Lambda to resize images in Node.js using aws-sdk and sharp.
The issue I face is that it reads the file successfully and also applies the resize operations, but does not put the object after resizing.
It doesn't give any error either. I checked CloudWatch, where everything looks alright, but the image is not placed in the resize folder.
It only creates the key folders; the image isn't there.
return Promise.all(_croppedFiles.map(_cropFile => {
    // note: every iteration writes to the same dstKey
    return S3.putObject({
        Body: _cropFile.buffer,
        Bucket: dstBucket,
        ContentType: _cropFile.config.contentType,
        Key: dstKey
    }).promise();
}));
There is actually no extension in the key name, which makes it just a bare name that gets treated as a folder. Provide your key name as dstKey.jpeg (or whatever extension you want), and set your content type to image/jpeg.
No matter what the format of your input image is, the output image will always be stored in JPEG format.
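A minimal sketch of that suggestion (variable names come from the question; the per-file index suffix is an assumption, added so the cropped files don't overwrite one another):

return Promise.all(_croppedFiles.map((_cropFile, i) => {
    return S3.putObject({
        Body: _cropFile.buffer,
        Bucket: dstBucket,
        ContentType: 'image/jpeg',      // matches the JPEG output
        Key: `${dstKey}-${i}.jpeg`      // extension in the key; the -${i} suffix is hypothetical
    }).promise();
}));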

Not able to zip files present in an S3 bucket using aws-s3-zipper and Node.js

I need to zip files, specifically in .gz format, in an AWS S3 bucket. I didn't find any proper example for this. Does anyone have any suggestions? Thanks in advance.
You can write a simple bash script using the AWS CLI which will get the file from S3, zip it, and put it back on S3.
Depending on the size of the file, Lambda might not be the right tool. That said, if your file isn't too large, this should work:
const AWS = require('aws-sdk');   // added: the snippet assumes aws-sdk v2
const stream = require('stream'); // added: PassThrough comes from here
const zlib = require('zlib');

const s3 = new AWS.S3();
const passThrough = new stream.PassThrough();
const body = s3.getObject(s3Params)
    .createReadStream()
    .pipe(zlib.createGunzip())
    .pipe(passThrough);

const s3Out = {
    Key: key,
    Bucket: bucket,
    Body: body
};
await s3.upload(s3Out).promise(); // run inside an async function
zlib is an included module, no need to install anything, just require it.

Save a partial video file locally using NodeJS

I have a serverless web application deployed on AWS, and I have to take a screenshot from a video uploaded to S3. I am using ffmpeg to extract the screenshot, but the drawback is that I have to download the video file first in order for ffmpeg to work with it.
Given that I am using AWS Lambda and I don't limit the video length, users might upload large files, which can make the Lambda function hit its storage limit.
To overcome this, I thought of downloading a small chunk of the video and using it with ffmpeg to extract the thumbnail. Using the S3.getObject method with Range params I was able to download a chunk of the file, but ffmpeg couldn't understand it.
Here is my code:
s3.getObject({
    Bucket: bucketName,
    Key: key,
    Range: 'bytes=0-1048576'
}, (err, data) => {
    fs.writeFile(fileName, data.Body, error => {
        if (error)
            console.log(error);
        else
            console.log('File saved');
    });
});
And the code to extract the thumbnail:
const ffmpeg = require('fluent-ffmpeg');
new ffmpeg(fileName).screenshots({
    timestamps: [0],
    filename: 'thumb.png',
    folder: '.'
});
And I am getting this error from ffmpeg
Error: ffmpeg exited with code 1: ./test.mp4: Invalid data found when processing input
I know there is a problem with saving the file like this, but I couldn't find any solution that solves my problem. If anybody has one, that would be much appreciated.
UPDATE:
It turns out that ffmpeg does this for me. I just gave it the URL, and it downloaded what it needed to render the screenshot, without the need to download the file locally. The code looks like this:
const ffmpeg = require('fluent-ffmpeg');
new ffmpeg(url).screenshots({
    timestamps: [0],
    filename: 'thumb.png',
    folder: '.'
});
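If the object isn't public, a presigned URL can stand in for url above. A sketch (assumes the AWS SDK for JavaScript v2):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// getSignedUrl produces a time-limited HTTPS URL that ffmpeg can read
// directly, so this works for private objects too.
const url = s3.getSignedUrl('getObject', {
    Bucket: bucketName,
    Key: key,
    Expires: 300 // seconds until the URL expires
});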
To do that, you would need to understand the mp4 format, make sure you are fetching enough data to line up with a frame boundary, and then alter the headers so that ffmpeg can understand the partial data and doesn't think it just has a corrupted file.
While you're inside the AWS ecosystem, you could try using Elastic Transcoder to transcode the video and ask it to generate a thumbnail:
http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/preset-settings.html#preset-settings-thumbnails
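A hedged sketch of that approach (assumes an existing Elastic Transcoder pipeline and preset; all IDs and keys below are placeholders):

const AWS = require('aws-sdk');
const transcoder = new AWS.ElasticTranscoder();

transcoder.createJob({
    PipelineId: '1111111111111-abcd11',           // placeholder pipeline ID
    Input: { Key: 'uploads/video.mp4' },          // hypothetical source key
    Outputs: [{
        Key: 'transcoded/video-720p.mp4',
        PresetId: '1351620000001-000010',         // system preset "Generic 720p" (verify for your account)
        ThumbnailPattern: 'thumbs/video-{count}'  // {count} is required by the API
    }]
}, (err, data) => {
    if (err) console.log(err);
    else console.log('Job created:', data.Job.Id);
});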
