I have a serverless web application deployed on AWS, and I need to take a screenshot from a video uploaded to S3. I am using ffmpeg to extract the screenshot, but the drawback is that I have to download the video file first so that ffmpeg can work with it.
Since I am running on AWS Lambda and don't limit video length, users might upload large files, which can make the Lambda function hit its storage limit.
To overcome this I thought of downloading a small chunk of the video and using it with ffmpeg to extract the thumbnail. Using the S3.getObject method with the Range parameter I was able to download a chunk of the file, but ffmpeg couldn't understand it.
Here is my code:
// Download the first 1 MB of the object
s3.getObject({
  Bucket: bucketName,
  Key: key,
  Range: 'bytes=0-1048576'
}, (err, data) => {
  if (err) return console.log(err);
  fs.writeFile(fileName, data.Body, error => {
    if (error)
      console.log(error);
    else
      console.log('File saved');
  });
});
And the code to extract the thumbnail:
const ffmpeg = require('fluent-ffmpeg');

// Grab a single frame at the very start of the file
new ffmpeg(fileName).screenshots({
  timestamps: [0],
  filename: 'thumb.png',
  folder: '.'
});
And I am getting this error from ffmpeg
Error: ffmpeg exited with code 1: ./test.mp4: Invalid data found when processing input
I know there is a problem with saving the file like this, but I couldn't find any solution that solves my problem. If anybody has one, it would be much appreciated.
UPDATE:
It turns out that ffmpeg does this for me. I just gave it the URL, and it downloaded only what it needed to render the screenshot, without downloading the whole file locally. The code looks like this:
const ffmpeg = require('fluent-ffmpeg');

new ffmpeg(url).screenshots({
  timestamps: [0],
  filename: 'thumb.png',
  folder: '.'
});
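Note that the URL has to be reachable by ffmpeg over HTTP. If the bucket is private, a presigned URL should work; a minimal sketch, assuming the AWS SDK v2 getSignedUrl API:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Presigned GET URL, valid for 15 minutes, that ffmpeg can fetch from directly
const url = s3.getSignedUrl('getObject', {
  Bucket: bucketName,
  Key: key,
  Expires: 900
});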
To make the chunk approach work, you would need to understand the MP4 format and make sure you fetch enough data to line up with a frame boundary, and then rewrite the headers so that ffmpeg can understand the partial data and doesn't think it just has a corrupted file.
Since you're inside the AWS ecosystem, you could try using Elastic Transcoder to transcode the video and ask it to generate a thumbnail:
http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/preset-settings.html#preset-settings-thumbnails
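A rough sketch of such a job, assuming the AWS SDK v2 ElasticTranscoder.createJob API (the pipeline ID is a placeholder, and the preset ID shown is the generic 720p system preset):

const AWS = require('aws-sdk');
const transcoder = new AWS.ElasticTranscoder();

transcoder.createJob({
  PipelineId: '1111111111111-abcde1', // placeholder pipeline ID
  Input: { Key: 'uploads/video.mp4' },
  Output: {
    Key: 'transcoded/video.mp4',
    PresetId: '1351620000001-000010', // generic 720p system preset
    ThumbnailPattern: 'thumbnails/video-{count}' // the pattern must contain {count}
  }
}, (err, data) => {
  if (err) console.log(err);
  else console.log('Job created:', data.Job.Id);
});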
Related
I'm currently building an Alexa application in Node with Lambda. I need to convert and merge several audio files. I'm creating an audio file using Google text-to-speech (long story on why it's needed), which I write to /tmp, and pulling an audio file from S3, which I also write to /tmp. I'm then using sox to merge the two files (see below) and write the result back to S3 (currently public), which I then have hard-coded to play that particular clip.
My question is whether it is possible to play audio directly from the /tmp folder, as opposed to having to write the file back to S3.
// merge the two files with sox
await lambdaAudio.sox('-m /tmp/google-formatted.mp3 /tmp/audio.mp3 /tmp/result.mp3');

// get data from resulting mp3
const data = await readFile('/tmp/result.mp3');
const base64data = Buffer.from(data);

// put file back on AWS for playing
s3.putObject({
  Bucket: 'my-bucket',
  Key: 'result.mp3',
  Body: base64data,
  ACL: 'public-read'
}, function(err) {
  if (err) console.log(err);
  else console.log('Done');
});

return `<audio src="https://s3.amazonaws.com/my-bucket/result.mp3" />`;
I usually upload the Lambda function by zipping the code and the modules, and in general all the files that my code requires:
https://developer.amazon.com/blogs/post/Tx1UE9W1NQ0GYII/Publishing-Your-Skill-Code-to-Lambda-via-the-Command-Line-Interface
So if you zip the audio file together with your Lambda code and publish it, the file will be accessible to your Lambda function.
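A minimal sketch of reading such a bundled file (the file name is hypothetical; files shipped in the deployment zip land next to the function code, under __dirname):

const fs = require('fs');
const path = require('path');

// Read an audio file that was shipped inside the deployment package
const audio = fs.readFileSync(path.join(__dirname, 'audio.mp3'));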
I am trying to convert an S3 video file to an audio file with a Lambda function. Whenever a video file is uploaded to an S3 bucket, I have to generate an audio file and save it back to the S3 bucket by triggering the AWS Lambda function. I can convert the video file to audio locally (convert video to an audio file using FFmpeg). But I am wondering how to do this conversion in the Lambda function every time a video file is uploaded to the S3 bucket. I have no idea how to write this AWS Lambda function. Please share your suggestions.
Sample code:
var ffmpeg = require('fluent-ffmpeg');

/**
 * input - string, path of input file
 * output - string, path of output file
 * callback - function, node-style callback fn (error, result)
 */
function convert(input, output, callback) {
  ffmpeg(input)
    .output(output)
    .on('end', function() {
      console.log('conversion ended');
      callback(null);
    })
    .on('error', function(err) {
      console.log('error:', err.message);
      callback(err);
    })
    .run();
}

convert('./df.mp4', './output.mp3', function(err) {
  if (!err) {
    console.log('conversion complete');
    //...
  }
});
Thanks,
You just need to set up an event on the S3 bucket - put object - to trigger the Lambda function (you will get access to the description of the uploaded object through the first parameter of the Lambda function).
If you can convert the video file to audio on your local machine using some external libraries, then you need to create a zip file containing your Lambda function (in the root of the zip file) as well as its dependencies.
This is pretty simple in the case of Node: create a new folder, run npm init, install the needed modules, and create an index.js file where you put your Node code. Zip all the contents of this folder (not the folder itself), and when you create the new Lambda function, choose to upload this zip file.
If you are wondering how to programmatically communicate with AWS resources and manipulate them, check aws-sdk, which you can import as a module and use for that purpose.
So basically what you need to do inside your Lambda function is parse the event argument (the first parameter) to obtain the bucket and key of the uploaded object, call s3.getObject to get the data, process the data with your custom logic, and call s3.putObject to store the newly transformed data to a new S3 location.
Lambda also has access to its own local file system, if your code needs to store some data there. You just need to specify an absolute path to the file, such as /tmp/output.mp3. To retrieve it, you can use the fs module, and then continue with s3.putObject. A rough sketch of the whole flow is below.
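A minimal sketch, assuming the convert helper from the sample code above and a standard S3 put-object trigger (the /tmp paths and the output key naming are illustrative):

const AWS = require('aws-sdk');
const fs = require('fs');
const s3 = new AWS.S3();

exports.handler = function(event, context, callback) {
  // Bucket and key of the object that triggered the function
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

  s3.getObject({ Bucket: bucket, Key: key }, function(err, data) {
    if (err) return callback(err);

    // Lambda can only write under /tmp
    fs.writeFileSync('/tmp/input.mp4', data.Body);

    // convert() is the helper from the sample code above
    convert('/tmp/input.mp4', '/tmp/output.mp3', function(err) {
      if (err) return callback(err);

      s3.putObject({
        Bucket: bucket,
        Key: key.replace(/\.mp4$/, '.mp3'),
        Body: fs.readFileSync('/tmp/output.mp3')
      }, callback);
    });
  });
};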
I've been trying to upload gzipped log files to S3 using the AWS NodeJS sdk, and occasionally find that the uploaded file in S3 is corrupted/truncated. When I download the file and decompress using gunzip in a bash terminal, I get:
01-log_2014-09-22.tsv.gz: unexpected end of file.
When I compare file sizes, the downloaded file comes up just a tiny bit short of the original file size (which unzips fine).
This doesn't happen consistently; one out of every three files or so is truncated. Reuploading can fix the problem, and uploading through the S3 web UI also works fine.
Here's the code I'm using...
var stream = fs.createReadStream(localFilePath);

this.s3 = new AWS.S3();
this.s3.putObject({
  Bucket: bucketName,
  Key: folderName + filename,
  ACL: "bucket-owner-full-control",
  Body: stream,
}, function(err) {
  // stream.close();
  callback(err);
});
I shouldn't have to close the stream since it defaults to autoclose, but the problem seems to occur either way.
The fact that it's intermittent suggests it's some sort of timing or buffering issue, but I can't find any controls to fiddle with that might affect that. Any suggestions?
Thanks.
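One thing that may be worth trying (an assumption on my part, not something confirmed in this thread) is the SDK's managed uploader, s3.upload, which is built for streaming bodies: it splits the stream into parts, computes their lengths itself, and retries failed parts:

var AWS = require('aws-sdk');
var fs = require('fs');

var s3 = new AWS.S3();
var stream = fs.createReadStream(localFilePath);

// upload() handles streams of unknown length, unlike putObject,
// which has to know the content length up front
s3.upload({
  Bucket: bucketName,
  Key: folderName + filename,
  ACL: 'bucket-owner-full-control',
  Body: stream
}, function(err, data) {
  callback(err);
});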
I'm trying to use the module node-fluent-ffmpeg (https://github.com/schaermu/node-fluent-ffmpeg) to transcode and stream a video file. Since I'm on a Windows machine, I first downloaded FFmpeg from the official site (http://ffmpeg.zeranoe.com/builds/). Then I extracted the files into the folder C:/FFmpeg and added the path (to the bin folder, to be precise) to the system path. I checked that FFmpeg worked by typing ffmpeg -version in the command prompt, and it gave a successful response.
After that I went ahead and copied/altered the following code from the module (https://github.com/schaermu/node-fluent-ffmpeg/blob/master/examples/express-stream.js):
app.get('/video/:filename', function(req, res) {
  res.contentType('avi');
  console.log('Setting up stream');

  var stream = 'c:/temp/' + req.params.filename;
  var proc = new ffmpeg({
    source: configfileResults.moviepath + req.params.filename,
    nolog: true,
    timeout: 120
  })
    .usingPreset('divx')
    .withAspect('4:3')
    .withSize('640x480')
    .writeToStream(res, function(retcode, error) {
      if (!error) {
        console.log('file has been converted successfully', retcode);
      } else {
        console.log('file conversion error', error);
      }
    });
});
I've properly set up the client with Flowplayer and tried to get it running, but nothing happens. I checked the console and it said:
file conversion error timeout
After that I increased the timeout, but somehow it only starts when I reload the page, and then of course it immediately stops because of the page reload. Do I need to run a separate Node server just for transcoding files? Or is there some sort of event I need to trigger?
I'm probably missing something simple but I can't seem to get it to work.
Hopefully someone can point out what I've missed.
Thanks
I've fixed it by using Video.js instead of Flowplayer. The way Flowplayer was launched did not work properly in my case, so I init the stream and then initialize Video.js to show it, which works great.
I am generating a PNG on the server side of a node.js application, using ImageMagick and the gm library for node.js (GraphicsMagick for node.js).
// start with a blank image
var gmImage = gm(100, 100, "#000000ff");
// Draw the stuff on the new blank image
When I'm finished drawing stuff using the gm library, I am storing that image to the file system:
gmImage.write(imagePath, function (err) {
...
});
I am now moving to S3. I want to skip this previous step and write the image directly to S3 without using a temporary file.
Is there a way to write the gmImage to a buffer or something?
Take a look at the stream section of the API: https://github.com/aheckmann/gm#streams
You should be able to pipe stdout into S3:
var gmImage = gm(100, 100, "#000000ff");

gmImage.stream(function (err, stdout, stderr) {
  // s3Stream here stands for any writable stream that ends up in S3
  stdout.pipe(s3Stream);
});
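Since putObject needs a body of known length, a variant that may be simpler is to render to an in-memory buffer first; a sketch, assuming gm's toBuffer and the AWS SDK v2 (bucket and key are placeholders):

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var gmImage = gm(100, 100, "#000000ff");

// Render the image to a buffer instead of a temporary file
gmImage.toBuffer('PNG', function (err, buffer) {
  if (err) return console.log(err);
  s3.putObject({
    Bucket: 'my-bucket', // placeholder
    Key: 'image.png',    // placeholder
    Body: buffer,
    ContentType: 'image/png'
  }, function (err) {
    if (err) console.log(err);
    else console.log('Image uploaded');
  });
});

Alternatively, the SDK's s3.upload accepts a stream of unknown length, so stdout could be passed as the Body directly.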