AWS Lambda permission denied when trying to use ffmpeg - linux

I want to write a handler that responds to S3 put events and converts any AVI files that are uploaded to MP4. I am doing it in Java, in Eclipse, with the AWS Toolkit plugin. For video conversion, I am using ffmpeg with ffmpeg-cli-wrapper, and I have provided a static (Linux) binary of ffmpeg in the source tree.
I have found that when I upload the function, the binary gets put in /var/task, but when I try to use the test function I've written, I get a "permission denied" error.
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;

import net.bramp.ffmpeg.FFmpeg;

public class LambdaFunctionHandler implements RequestHandler<S3Event, String> {

    private static final String FFMPEG = "/var/task/ffmpeg";

    public String handleRequest(S3Event event, Context context) {
        try {
            FFmpeg ff = new FFmpeg(FFMPEG);
            System.out.println(ff.version());
        } catch (Exception e) {
            e.printStackTrace();
        }
        return "foo";
    }
}
And the first line of the stacktrace: java.io.IOException: Cannot run program "/var/task/ffmpeg": error=13, Permission denied.
How do I execute this binary? I have done as others have suggested and chmod 755 the binary before uploading, but it hasn't made a difference.

AWS Lambda runs on Amazon Linux, and this is a known issue: you do not have the privileges to chmod files in /var/task/. Try building ffmpeg as a static binary, check that it works on Amazon Linux, and upload that binary. Or try this workaround, which works:
Move ffmpeg to /tmp
chmod 755 /tmp/ffmpeg
Call /tmp/ffmpeg
See this discussion for more info.
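Since the question's handler is in Java, a minimal sketch of that workaround might look like the following (the FfmpegStager class name is illustrative, and it assumes the binary is packaged at /var/task/ffmpeg):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class FfmpegStager {
    public static String stageFfmpeg() throws IOException {
        Path packaged = Paths.get("/var/task/ffmpeg"); // read-only deployment package
        Path staged = Paths.get("/tmp/ffmpeg");        // writable scratch space
        // Copy once per container; /tmp persists across warm invocations.
        if (!Files.exists(staged)) {
            Files.copy(packaged, staged, StandardCopyOption.REPLACE_EXISTING);
            staged.toFile().setExecutable(true, false); // roughly the execute bits of chmod 755
        }
        return staged.toString(); // pass this path to new FFmpeg(...)
    }
}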

I ran into this issue recently, and after messing with various manual solutions, what really solved the issue was:
Create a Lambda Layer, with only the ffmpeg binary inside a bin/ folder
Create a Lambda function that uses that layer, and in the function code (Python in the linked example) run /opt/bin/ffmpeg
See https://aws.amazon.com/blogs/media/processing-user-generated-content-using-aws-lambda-and-ffmpeg/
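The linked walkthrough uses Python, but the same idea works from the Java handler in the question: once the layer is attached, the binary is available at /opt/bin/ffmpeg and can be invoked directly. A rough sketch (the class name and arguments are illustrative):
import java.io.IOException;

public class LayerFfmpegExample {
    private static final String FFMPEG = "/opt/bin/ffmpeg"; // provided by the Lambda layer

    public static void convert(String input, String output) throws IOException, InterruptedException {
        // Layers are mounted under /opt, so no copy or chmod step is needed.
        Process p = new ProcessBuilder(FFMPEG, "-i", input, output)
                .inheritIO()
                .start();
        if (p.waitFor() != 0) {
            throw new IOException("ffmpeg exited with a non-zero status");
        }
    }
}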

As helloV mentioned, you might have to include a static ffmpeg binary, copy it to a writable location (such as /tmp) at runtime, and execute it from there.
A detailed answer (with Node.js code) is given here.

Related

Write txt file in Linux for .NET Core (Docker)

I am new to Linux. My API was created in .NET Core and runs in Docker. The system I created writes/creates a txt file that records all errors logged in the API. My code to write it is this:
public class WriteLogs
{
    public void ErrorLogFile(string traceNo, string errorMsg)
    {
        DirectoryInfo dir = new DirectoryInfo(Startup.errorPath);
        if (!dir.Exists)
        {
            dir.Create();
        }

        using (StreamWriter swLog = File.AppendText(Startup.errorPath + Startup.errorFileName + DateTime.Now.ToString("MMddyyyy") + ".txt"))
        {
            swLog.WriteLine(DateTime.Now.ToString("yyyy-MM-dd hh:mm:ss.fff") + " - Trace Number : " + traceNo + " " + errorMsg + "\n");
        }
    }
}
The values used in my Startup come from my appSettings.json file:
"ErrorPath": "C:\\BP\\",
"ErrorFileName": "BP-ParamLogs_",
This works in a Windows environment, but when I transfer my program to Linux and change the ErrorPath to:
"ErrorPath": "/home/Logs/",
the file was not created.
My question is: does my syntax work on Linux for writing a txt file, or is my path wrong?
To answer your specific question about the syntax: \home\Logs\ should be /home/Logs/, as Linux uses the forward slash as its path separator.
There's a chance that your program does not have write access to the /home/ directory. I've personally tried a similar program and I ran into System.UnauthorizedAccessException during the logs directory creation step.
Try running your program from the terminal with dotnet run to see the exception you might be getting. If you are also running into System.UnauthorizedAccessException, then run as root with sudo dotnet run.
I just solved the problem by running the command:
find . -name BP-Param*
from the top-level directory. After executing it, I found the file: it was in the Docker directory.

Groovy: No such file exception but the file is there? Copying files cross-platform

I have an issue with Groovy/Jenkins when trying to copy files.
The code I use is the following:
public void copy(String sources, String destination) {
    Path source = Paths.get( join(this.script.WORKSPACE, sources) );
    Path target = Paths.get( join(this.script.WORKSPACE, destination) );
    Files.copy(source, target)
}
this.script.WORKSPACE is Jenkins workspace, and if this workspace is C:\Jenkins\Workspace\MyBranch and the sources are binaries\mybinary.dll then the join function will return:
C:\Jenkins\Workspace\MyBranch\mybinary.dll
At execution I receive the following error:
java.nio.file.NoSuchFileException: Y:\Jenkins\workspace\MyBranch\mybinary.dll
However the file is there, on the agent.
The thing is, I was previously using xcopy because I only had to copy to Windows targets (and that worked without any issue; I isolated the change to the copy function, and now the Windows copy is failing).
But now I also have to copy to Red Hat platforms.
So I am looking for a cross-platform solution.
Thank you !
So I found out this is a Jenkins-related issue. The pipeline is actually executed on the master, not the agent, so the file is looked for on the master, where it does not exist.
I will have to use either sh scripts or the Jenkins stash function, but it does not seem like I can have cross-platform code here.

AWS Lambda function - convert PDF to Image

I am developing an application where users can upload drawings in PDF format. Uploaded files are stored on S3. After uploading, the files have to be converted to images. For this purpose I have created a Lambda function which downloads the file from S3 to the /tmp folder in the Lambda execution environment, and then I call the convert command from ImageMagick:
convert sourceFile.pdf targetFile.png
The Lambda runtime is Node.js 4.3. Memory is set to 128 MB and the timeout to 30 seconds.
Now the problem is that some files are converted successfully while others are failing with the following error:
{ [Error: Command failed: /bin/sh -c convert /tmp/sourceFile.pdf
/tmp/targetFile.png convert: %s' (%d) "gs" -q -dQUIET -dSAFER -dBATCH
-dNOPAUSE -dNOPROMPT -dMaxBitmap=500000000 -dAlignToPixels=0 -dGridFitTT=2 "-sDEVICE=pngalpha" -dTextAlphaBits=4 -dGraphicsAlphaBits=4 "-r72x72" "-sOutputFile=/tmp/magick-QRH6nVLV--0000001" "-f/tmp/magick-B610L5uo"
"-f/tmp/magick-tIe1MjeR" # error/utility.c/SystemCommand/1890.
convert: Postscript delegate failed/tmp/sourceFile.pdf': No such
file or directory # error/pdf.c/ReadPDFImage/678. convert: no images
defined `/tmp/targetFile.png' #
error/convert.c/ConvertImageCommand/3046. ] killed: false, code: 1,
signal: null, cmd: '/bin/sh -c convert /tmp/sourceFile.pdf
/tmp/targetFile.png' }
At first I did not understand why this happens, then I tried to convert problematic files on my local Ubuntu machine with the same command. This is the output from terminal:
**** Warning: considering '0000000000 XXXXX n' as a free entry.
**** This file had errors that were repaired or ignored.
**** The file was produced by:
**** >>>> Mac OS X 10.10.5 Quartz PDFContext <<<<
**** Please notify the author of the software that produced this
**** file that it does not conform to Adobe's published PDF
**** specification.
So the message was very clear, but the file gets converted to PNG anyway. If I run convert source.pdf target.pdf and after that convert target.pdf image.png, the file is repaired and converted without any errors. This doesn't work with Lambda.
Since the same thing works on one environment but not on the other, my best guess is that the version of Ghostscript is the problem. The installed version on the AMI is 8.70. On my local machine the Ghostscript version is 9.18.
My questions are:
Is the version of Ghostscript the problem? Is this a bug in the older version of Ghostscript? If not, how can I tell Ghostscript (with or without using ImageMagick) to repair or ignore errors like it does in my local environment?
If the old version is the problem, is it possible to build Ghostscript from source, create a Node.js module, and then use that version of Ghostscript instead of the one that is installed?
Is there an easier way to convert PDF to image without using ImageMagick and Ghostscript?
UPDATE
Relevant part of lambda code:
var exec = require('child_process').exec;
var AWS = require('aws-sdk');
var fs = require('fs');
...
var localSourceFile = '/tmp/sourceFile.pdf';
var localTargetFile = '/tmp/targetFile.png';

var writeStream = fs.createWriteStream(localSourceFile);
writeStream.write(body);
writeStream.end();

writeStream.on('error', function (err) {
    console.log("Error writing data from s3 to tmp folder.");
    context.fail(err);
});

writeStream.on('finish', function () {
    var cmd = 'convert ' + localSourceFile + ' ' + localTargetFile;
    exec(cmd, function (err, stdout, stderr) {
        if (err) {
            console.log("Error executing convert command.");
            context.fail(err);
        }
        if (stderr) {
            console.log("Command executed successfully but returned error.");
            context.fail(stderr);
        } else {
            // file converted successfully - do something...
        }
    });
});
You can find a compiled version of Ghostscript for Lambda in the following repository.
You should add the files to the zip file that you are uploading as the source code to AWS Lambda.
https://github.com/sina-masnadi/lambda-ghostscript
This is an npm package to call Ghostscript functions:
https://github.com/sina-masnadi/node-gs
After copying the compiled Ghostscript files to your project and adding the npm package, you can use the executablePath('path to ghostscript') function to point the package to the compiled Ghostscript files that you added earlier.
It's almost certainly a bug, or perhaps a limitation, in the older version of Ghostscript.
Many PDF producers create PDF files which do not conform to the specification, and yet open without complaint in Adobe Acrobat. Ghostscript endeavours to do the same, but obviously we can't know what Acrobat is going to allow, so we are continually chasing this nebulous target. (FWIW, that warning relates to a genuinely out-of-spec PDF file.)
There's nothing you can do with the old version other than replace it.
Yes, you can build Ghostscript from source; I have no idea about a Node.js module, and I'm not sure why that's relevant.
There are numerous other applications which will render a PDF file; MuPDF is another one I know of. And, of course, you can use Ghostscript directly without using ImageMagick. Of course, if you can load another application, then you should simply be able to replace your Ghostscript installation too.
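Just for illustration, invoking Ghostscript directly from the shell could look something like this (the output device and resolution are arbitrary choices, not taken from the question):
gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=png16m -r150 -sOutputFile=/tmp/targetFile.png /tmp/sourceFile.pdf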
The version of Ghostscript on AWS is old and has known bugs. We can get around this by uploading an x64 Ghostscript binary compiled specifically for Linux and attaching it via the new AWS Lambda layers. I have written a Node function that does just this here:
https://github.com/rcastoro/PDFImagine
Make sure your Lambda has that Ghostscript layer attached, however!

Unable to load .so file from Java in Eclipse on Ubuntu

I have some code that tries to load a C library as follows:
public ThreadAffinity() {
    ctest = (CTest) Native.loadLibrary("ctest", CTest.class);
}
However, I get the following error when trying to build the project:
UnsatisfiedLinkError: Unable to load library 'libctest': liblibctest.so: cannot open shared object file: No such file or directory
at com.sun.jna.NativeLibrary.loadLibrary(NativeLibrary.java:166)
at com.sun.jna.NativeLibrary.getInstance(NativeLibrary.java:239)
at com.sun.jna.Library$Handler.<init>(Library.java:140)
at com.sun.jna.Native.loadLibrary(Native.java:393)
at com.sun.jna.Native.loadLibrary(Native.java:378)
at com.threads.ThreadAffinity.<init>(ThreadAffinity.java:11)
at com.threads.ThreadAffinity.main(ThreadAffinity.java:45)
The current working directory is the root of the project, and that's where the .so file is located. I also tried modifying the LD_PRELOAD variable to point to my .so file; however, the error persists.
It works just fine on OS X, where the dylib is located exactly where the .so file is now (the project root).
What am I doing wrong?
From the exception:
UnsatisfiedLinkError: Unable to load library 'libctest': liblibctest.so: cannot open shared object file: No such file or directory
It implies you used something like:
public ThreadAffinity() {
    ctest = (CTest) Native.loadLibrary("libctest", CTest.class);
}
and not:
public ThreadAffinity() {
    ctest = (CTest) Native.loadLibrary("ctest", CTest.class);
}
hence you see JNA's added lib prefix and .so suffix applied to libctest (liblibctest.so).
LD_PRELOAD is used when you want to prefer one particular version of the same shared library over another, which doesn't apply here.
Define jna.library.path to point to your project root, and JNA should be able to find it.
Also make sure your library has been built as libctest.so and wasn't inadvertently named libctest.dylib.
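Putting that together, a minimal sketch (the interface body and the path used for jna.library.path are illustrative) could look like:
import com.sun.jna.Library;
import com.sun.jna.Native;

public class ThreadAffinity {
    public interface CTest extends Library {
        // declare the native functions exported by libctest.so here
    }

    private final CTest ctest;

    public ThreadAffinity() {
        // Point JNA at the directory containing libctest.so (here, the project root).
        System.setProperty("jna.library.path", System.getProperty("user.dir"));
        // Pass only the base name; JNA adds the "lib" prefix and ".so" suffix itself.
        ctest = (CTest) Native.loadLibrary("ctest", CTest.class);
    }
}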

Eclipse: impossible to import Git project

I have a problem with my Eclipse, on Debian.
When I try to import a Git project from GitHub using EGit, I get a
Couldn't create temporary repository.
error after having set my project properties.
However, it works fine when running Eclipse with sudo.
I think it is related to wrong permissions somewhere, but I cannot figure out where.
I would appreciate some help. Thanks in advance!
Considering that the source of org.eclipse.egit.ui.internal.clone.SourceBranchPage.java mentions /tmp, it is probably related to a permission issue around /tmp.
try {
    final URIish uri = newRepoSelection.getURI();
    final Repository db = new Repository(new File("/tmp"));
    listRemoteOp = new ListRemoteOperation(db, uri);
    getContainer().run(true, true, listRemoteOp);
} catch (IOException e) {
    transportError(UIText.SourceBranchPage_cannotCreateTemp);
    return;
}
The OP jlengrand actually reports in the comments:
The problem was simple in fact, but quite tricky to track down:
My .gitconfig file had been corrupted during my Debian upgrade, which caused EGit to crash.
