ffmpeg - ffmpeg AWS Lambda function error - node.js

I am getting this error:
Execution result: failed(logs)
Details
The area below shows the result returned by your function execution.
{
"errorMessage": "RequestId: eb7906af-f46d-11e8-ae3b-45487c02a68e Process exited before completing request"
}
Summary
Code SHA-256
ca50xloHl4xLOSWox2xidHxC1VHyNqwq3kECKraw7/c=
Request ID
eb7906af-f46d-11e8-ae3b-45487c02a68e
Duration
38.73 ms
Billed duration
100 ms
Resources configured
128 MB
Max memory used
19 MB
Log output
The section below shows the logging calls in your code. These correspond to a single row within the CloudWatch log group corresponding to this Lambda function.
START RequestId: eb7906af-f46d-11e8-ae3b-45487c02a68e Version: $LATEST
2018-11-30T07:02:38.509Z eb7906af-f46d-11e8-ae3b-45487c02a68e TypeError: Cannot create property 'stack' on string 'Could not find ffmpeg executable, tried "/var/task/node_modules/@ffmpeg-installer/linux-x64/ffmpeg" and "/var/task/node_modules/@ffmpeg-installer/ffmpeg/node_modules/@ffmpeg-installer/linux-x64/ffmpeg"'
END RequestId: eb7906af-f46d-11e8-ae3b-45487c02a68e
REPORT RequestId: eb7906af-f46d-11e8-ae3b-45487c02a68e Duration: 38.73 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 19 MB
RequestId: eb7906af-f46d-11e8-ae3b-45487c02a68e Process exited before completing request

I had a similar issue on EC2; try this package.
Install the package:
npm install --save @ffmpeg-installer/ffmpeg
In your code, use it as below:
const ffmpegPath = require('@ffmpeg-installer/ffmpeg').path;
const ffmpeg = require('fluent-ffmpeg');
ffmpeg.setFfmpegPath(ffmpegPath);

Related

aws-serverless-express connection error - EPIPE

We have a Node.js 10.16.3 Express API. We recently switched from AWS Elastic Beanstalk/EC2 to Lambda/Serverless. Our DB is Postgres (PostgreSQL) 12.2.
Seemingly all of a sudden, I started getting this error on my local server when making requests from the client:
offline: ANY /dev/inventory/inventory (λ: app)
ERROR: aws-serverless-express connection error
{ Error: write EPIPE
at WriteWrap.afterWrite (net.js:788:14) errno: 'EPIPE', code: 'EPIPE', syscall: 'write' }
offline: (λ: app) RequestId: ckazracm0001emds69068drtu Duration: 2.58 ms Billed Duration: 100 ms
I can't seem to find much on this issue and I'm hoping someone can help.
Notes:
My local postgres is running
My .env.json file is correct
Found the error. Hopefully this will be helpful for others.
The error was due to exceeding the max cookie size of 4096 bytes in the application request headers. We solved it by stripping out extraneous cookies that had been set by some third-party services we were using, which had long encoded strings as cookie values.
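A minimal sketch of that fix, assuming an Express-style setup; `stripLargeCookies` is a hypothetical helper name, and the 4096-byte budget mirrors the common per-cookie limit mentioned above:

```javascript
// Hedged sketch: rebuild the Cookie header, dropping any individual cookie
// whose encoded "name=value" pair exceeds a byte budget (e.g. long
// third-party tokens). Not a library API, just an illustration.
function stripLargeCookies(cookieHeader, maxCookieBytes = 4096) {
  if (!cookieHeader) return cookieHeader;
  return cookieHeader
    .split(/;\s*/)
    .filter((pair) => Buffer.byteLength(pair, 'utf8') <= maxCookieBytes)
    .join('; ');
}

// As Express middleware, sanitize the header before aws-serverless-express
// proxies the request:
// app.use((req, res, next) => {
//   req.headers.cookie = stripLargeCookies(req.headers.cookie);
//   next();
// });
```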

Failed to upload report - HTTP code 413: The page was not displayed because the request entity is too large

I'm setting up a fresh installation of SonarQube Version 7.9.1 (build 27448).
It's running behind a reverse-proxy in IIS using URL Rewrite.
I can login just fine into SonarQube.
It fails when running analysis for my .NET solution (a big one) using sonar-scanner-msbuild-4.7.1.2311-net46. It fails at the end, after compilation and analysis have finished.
INFO: Analysis report generated in 33994ms, dir size=196 MB
INFO: Analysis report compressed in 191901ms, zip size=71 MB
INFO: ------------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ------------------------------------------------------------------------
INFO: Total time: 1:46:46.636s
INFO: Final Memory: 20M/746M
ERROR: Error during SonarQube Scanner execution
INFO: ------------------------------------------------------------------------
ERROR: Failed to upload report - HTTP code 413: The page was not displayed because the request entity is too large.
ERROR:
The SonarQube Scanner did not complete successfully
00:52:36.207 Post-processing failed. Exit code: 1
Googling HTTP 413 turned up some solutions suggesting enabling <serverRuntime> and setting uploadReadAheadSize to its maximum.
This did not fix my problem.
I figured it out.
It was the Request Filtering feature: I had to raise the Maximum allowed content length (Bytes) setting.
See Large File Upload in IIS
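For reference, both settings live in web.config. A 71 MB report zip exceeds Request Filtering's default maxAllowedContentLength of 30,000,000 bytes (about 28.6 MB), which is what produces the 413. A sketch with illustrative values; adjust the sizes to your report:

```xml
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- Max request body in bytes; default is 30,000,000 (~28.6 MB).
             Raised here to 200 MB to accommodate large analysis reports. -->
        <requestLimits maxAllowedContentLength="209715200" />
      </requestFiltering>
    </security>
    <!-- uploadReadAheadSize: bytes IIS buffers before handing the request
         to the rewrite/proxy pipeline; mainly relevant with SSL or ARR. -->
    <serverRuntime uploadReadAheadSize="10485760" />
  </system.webServer>
</configuration>
```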

Is dynamodb-geo supported in lambda running Node?

I'm trying to create a simple application following this article, but I cannot get my Node Lambda function to find the dynamodb-geo package.
Here is what I have:
const AWS = require('aws-sdk');
const ddbGeo = require('dynamodb-geo');
exports.handler = async (event, context) => {
// Rest of the code here
};
And the error Lambda throws is:
START RequestId: 5d40d132-040f-447d-bd76-35c4cec0236a Version: $LATEST
2019-10-05T10:04:24.719Z undefined ERROR Uncaught Exception {"errorType":"Runtime.ImportModuleError","errorMessage":"Error: Cannot find module 'dynamodb-geo'","stack":["Runtime.ImportModuleError: Error: Cannot find module 'dynamodb-geo'","    at _loadUserApp (/var/runtime/UserFunction.js:100:13)","    at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)","    at Object.<anonymous> (/var/runtime/index.js:45:30)","    at Module._compile (internal/modules/cjs/loader.js:778:30)","    at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)","    at Module.load (internal/modules/cjs/loader.js:653:32)","    at tryModuleLoad (internal/modules/cjs/loader.js:593:12)","    at Function.Module._load (internal/modules/cjs/loader.js:585:3)","    at Function.Module.runMain (internal/modules/cjs/loader.js:831:12)","    at startup (internal/bootstrap/node.js:283:19)"]}
END RequestId: 5d40d132-040f-447d-bd76-35c4cec0236a
REPORT RequestId: 5d40d132-040f-447d-bd76-35c4cec0236a Duration: 1146.75 ms Billed Duration: 1200 ms Memory Size: 512 MB Max Memory Used: 35 MB
Unknown application error occurred
Runtime.ImportModuleError
Any clue on what could be happening?
The only included package on AWS Lambda is the aws-sdk package. Everything else (except standard node packages) needs to be packaged and uploaded with your code.
There are many tools to achieve this:
the AWS CLI (see https://stackoverflow.com/questions/34437900...)
the Serverless framework
AWS Amplify
the AWS CDK
Have you installed the package?
Using npm or yarn: npm install --save dynamodb-geo or yarn add dynamodb-geo.
Doc: https://www.npmjs.com/package/dynamodb-geo
You can also use a Lambda layer to easily import external packages.
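Whichever tool you choose, the underlying requirement is the same: the module must be installed locally and shipped inside the deployment artifact. A minimal hand-rolled sketch (function and file names are placeholders):

```shell
# Install the dependency so it lands in ./node_modules
npm install --save dynamodb-geo

# Bundle the handler and node_modules together; Lambda unpacks this to /var/task
zip -r function.zip index.js node_modules

# Upload the new bundle (function name is a placeholder)
aws lambda update-function-code --function-name geo-demo --zip-file fileb://function.zip
```

If you go the layer route instead, note that Node.js layers expect the modules under nodejs/node_modules/ inside the layer zip.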

Error: spawn EACCES on AWS Lambda using html-to-pdf package

I'm using html-pdf and trying to convert HTML to PDF on AWS Lambda using Node.js, but I get the "Error: spawn EACCES" message:
START RequestId: 8bc188e7-8249-41d7-b8f7-8a2585ea8e46 Version: $LATEST
2019-06-07T20:44:44.824Z 8bc188e7-8249-41d7-b8f7-8a2585ea8e46 ************** start
2019-06-07T20:44:45.025Z 8bc188e7-8249-41d7-b8f7-8a2585ea8e46 Error: spawn EACCES
at _errnoException (util.js:1022:11)
at ChildProcess.spawn (internal/child_process.js:323:11)
at Object.exports.spawn (child_process.js:502:9)
at PDF.PdfExec [as exec] (/var/task/node_modules/html-pdf/lib/pdf.js:87:28)
at PDF.PdfToBuffer [as toBuffer] (/var/task/node_modules/html-pdf/lib/pdf.js:44:8)
at exports.handler (/var/task/index.js:17:35)
END RequestId: 8bc188e7-8249-41d7-b8f7-8a2585ea8e46
REPORT RequestId: 8bc188e7-8249-41d7-b8f7-8a2585ea8e46 Duration: 345.46 ms Billed Duration: 400 ms Memory Size: 128 MB Max Memory Used: 39 MB
RequestId: 8bc188e7-8249-41d7-b8f7-8a2585ea8e46 Process exited before completing request
A couple of ideas:
How do you package and upload your code? Lambda requires the files to be readable by all users, particularly "other"; if that permission is missing you will receive a non-obvious error when trying to invoke the function. The fix is simple enough: run chmod a+r * before creating your zip file. If the code is visible in the inline editor, adding an empty line and saving will also fix the problem, presumably by rewriting the file with the correct permissions.
Where are you saving the converted file? Are you using Lambda's /tmp directory? It might be a wrong path.
The Lambda timeout may not allow enough time to execute your function. Less likely, but since the log mentions the process exited before completing the request, I would double-check the timeout setting on your function.
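The permissions fix from the first idea, sketched as the packaging step (the zip name is a placeholder; run this from the directory you are about to package):

```shell
# Make every file world-readable (and directories traversable via the X bit)
# before zipping; Lambda executes your code as a non-owner user, so each
# file needs the "other" read bit set.
chmod -R a+rX .
# Then rebuild the deployment package with the corrected permissions:
#   zip -r function.zip .
```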

Emulator is not running after AOSP build on Ubuntu 14.04

After building successfully with make -j, when I run emulator it shows the problem below.
I am using Ubuntu 14.04.
sh: 1: glxinfo: not found
emulator: WARNING: system partition size adjusted to match image file (1792 MB > 200 MB)
emulator: WARNING: data partition size adjusted to match image file (550 MB > 200 MB)
sh: 1: glxinfo: not found
emulator: WARNING: encryption is off
X Error of failed request: BadAlloc (insufficient resources for operation)
  Major opcode of failed request: 149 ()
  Minor opcode of failed request: 2
  Serial number of failed request: 35
  Current serial number in output stream: 36
QObject::~QObject: Timers cannot be stopped from another thread
