imagemin plugin throwing ENOENT error on aws lambda - node.js

I created a Lambda function on AWS using the imagemin and imagemin-optipng Node.js plugins, but it returns the error below:
error: spawn /var/task/node_modules/optipng-bin/vendor/optipng ENOENT
var aws = require('aws-sdk');
var s3 = new aws.S3();
var Imagemin = require('imagemin');
var optipng = require('imagemin-optipng');

exports.handler = function(event, context, callback) {
    var srcBucket = event.Records[0].s3.bucket.name;
    var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
    var params = { Bucket: srcBucket, Key: srcKey };

    s3.getObject(params).promise()
        .then(data => Imagemin.buffer(data.Body, {
            plugins: [
                optipng({ optimizationLevel: 7 })
            ]
        }))
        .then(buffer => console.log('done ', buffer))
        .catch(err => callback(err));
};

I just had a similar issue yesterday on AWS Lambda. In case someone else is facing it and the development environment is Windows, I believe this is the solution for you. (Note that in my example I'm using the Serverless Framework for building and deploying; however, the principle should apply regardless of whether you use Serverless.)
I tried a few different solutions, but the easiest and fastest was to install the Windows Subsystem for Linux and run serverless deploy from the Ubuntu terminal on Windows.
The issue is that some packages are OS-dependent: the same package installed on different OSs produces different installations. Your local build/run works fine because you both installed and ran the packages in a Windows environment. When you deploy to AWS, however, the code runs on Amazon Linux, and your OS-dependent packages (like mozjpeg, jpegtran, etc.) fail at runtime. So your best bet is to install the packages, build, and deploy your project from a Linux environment (not sure if all Linux distros fit this statement, but Ubuntu certainly does).
Here's the timeline for what I did:
Install and enable WSL with Ubuntu (no big deal at all, 10 minutes tops; just follow Microsoft's docs)
Open the Ubuntu terminal as an administrator (if you don't run as administrator it won't allow you to properly run "npm install" in the next steps)
Make sure everything is up to date: run "sudo apt update && sudo apt upgrade"
Create a folder by running "mkdir your-folder-name" (or just cd directly into your project's original folder; you can do this by Shift+Right-Click on the folder and choosing "Open Linux shell here". I preferred a separate folder to avoid messing with my original files)
Get into the newly created folder by running "cd your-folder-name"
Clone your repository into that folder or just copy/paste it manually (to open your current folder from Ubuntu terminal on Windows just run "explorer.exe .")
Run the good and old "npm install" from the Ubuntu terminal
Now here's a pitfall: if you keep your AWS keys/secrets in a .env file and your serverless.yml reads environment variables from that .env file, the next step will fail unless the .env file is in place (and you will only see the real error in CloudWatch, because the error logged in the browser console will be a CORS error)
Run "Serverless Deploy" to deploy your project
That's it.
This solution took me around 20 minutes, while the others, even though some are effective (like CodeBuild), were more confusing and therefore more time-consuming.

Try reinstalling the optipng-bin module, or run node ./node_modules/optipng-bin/lib/install.js

Related

Problem when using process.cwd() in published npm package

I'm dipping my toes into CLI tooling by building a simple program for automating Gerrit commits. Everything works locally, but after publishing the package to npm and installing it globally, it looks like process.cwd() behaves differently. The program exits, but no console.log() output appears. Even a simple console.log(process.cwd()) is ignored (again, it works locally). Is it possible to use process.cwd() when running a globally installed npm package?
const fs = require("fs"); // needed for existsSync/readFileSync

console.log(process.cwd());

const getCurrentBranchName = (p = process.cwd()) => {
    const gitHeadPath = `${p}/.git/HEAD`;
    return fs.existsSync(gitHeadPath)
        ? fs.readFileSync(gitHeadPath, "utf-8").trim().split("/").pop()
        : (console.log("not a git repo"), process.exit(0));
};

const currentBranch = getCurrentBranchName();
When run locally (with node index):
$ /Users/jpap/.npm-packages/lib/node_modules/gittest
$ not a git repo
You haven't proved the problem is with process.cwd()
The code in your question and the results you describe only indicate that the console.log() calls aren't executing.
You could easily test this by adding the following to the top of your code:
console.log('My name is Inigo Montoya. You killed my father. Prepare to die!')
What you are publishing is likely not the same as what you are running locally
For example, you could be using a bundler (e.g. Rollup) configured to strip console.log calls.
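For illustration, a Rollup config like the following would produce exactly this symptom (a sketch, assuming the @rollup/plugin-terser plugin; file names are placeholders):
// rollup.config.js -- hypothetical config that strips console.* calls
import terser from "@rollup/plugin-terser";

export default {
    input: "index.js",
    output: { file: "dist/index.js", format: "cjs" },
    plugins: [
        // drop_console: true removes every console.log from the bundled output
        terser({ compress: { drop_console: true } }),
    ],
};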
This is easy to confirm. Simply look at the code of the npm-installed version:
Use npm root -g to find out where your global packages are installed. For non-global packages, look in node_modules.
Find your package's subdir.
Look at the code, or diff it with the source code.
I suspect you will see all console.log statements removed.
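For example, to compare the installed copy against your source (the package name and paths here are placeholders):
$ diff -r ./my-package-src "$(npm root -g)/my-package"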

Node Executable in AWS Lambda

I am attempting to build an AWS Lambda (Node) function that utilizes the Sentry CLI. So far I have something like:
const CLI = require("#sentry/cli");
const cli = new CLI(null, {
org: '...',
authToken: '...',
});
exports.handler = async (event) => {
const response = await cli.execute(["releases", "list"]);
// ...create a release/deploy/etc...
};
This however fails with:
/var/task/node_modules/@sentry/cli/sentry-cli: cannot execute binary file
There seems to have been a similar issue reported and the suggestion is to change the permission of the executable.
How can I ensure that the permissions on the executable are not stripped when zipping/uploading the function to AWS?
TL;DR
chmod 644 $(find . -type f)
chmod 755 $(find . -type d)
chmod +x ./node_modules/@sentry/cli/sentry-cli   # same command for other binaries as well
Then re-upload the function code with update-function-code, following the steps below. (644 restores read permission on files, 755 makes directories traversable, and +x marks the binary itself executable.)
Deep dive
This answer is a summary of the steps outlined in the docs, with additional explanations of why they are needed and prerequisite/debugging workflows in case anything goes wrong. The docs suggest the following steps for uploading Node.js projects with additional dependencies. I have designed the steps around an existing AWS Lambda instance that is already running, to help narrow the scope of the error when debugging (to AWS or to Sentry).
(Recommended) Steps with existing project
1.1 Install Node w/NPM locally (I assume you've done this). Make a note of your local Node version and check that it matches the AWS Lambda runtime version.
$ node -v
1.2 Install the AWS CLI (must be version 2!).
1.3 Configure the AWS CLI with:
$ aws configure
Note: You can configure this manually as well if you need to; there are separate guides for each platform. I will leave those details out since they are straightforward.
1.4 Try deploying a hello-world Lambda first and see if that works without the sentry-cli package. If it does work, you know Sentry is probably the issue and NOT AWS.
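For example, a minimal handler is enough to validate the deploy pipeline (a sketch; the handler name must match your Lambda configuration):
// index.js -- hello-world handler used only to isolate deployment issues
exports.handler = async () => {
    return { statusCode: 200, body: "hello world" };
};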
1.5 Install Sentry CLI:
$ npm install @sentry/cli
1.6 Automatic sentry-cli configuration:
$ sentry-cli login
1.7 Verify your sentry-cli config is valid with $ sentry-cli info. If not, you need to follow the steps recommended in the console output.
$ sentry-cli info
1.8 Install the aws-xray-sdk dependency:
$ npm install aws-xray-sdk
1.8.1 (Optional) Navigate to your project root folder. This is just for illustration: the current version of the AWS SDK is pre-installed in Lambda, but you could use this technique to load other pre-built JavaScript packages, or if you actually need an earlier version of the AWS SDK for compatibility reasons (not applicable here).
$ npm install --prefix=. aws-sdk
1.8.2 (Sanity Check) Check that the files in the subfolders of the root directory have the expected permissions. Try running the project locally to see whether the executable permission is in place:
$ ls -l && node function.js
1.9 Zip the project:
$ zip -r function.zip .   # The .zip file must be less than 50 MB!
1.10 Upload the function code using the aws command-line tool update-function-code (this is important because it is what fixes the permissions issue).
$ aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip
1.11 If the operation was successful, you will get an output like the following:
{
    "FunctionName": "my-function",
    "FunctionArn": "arn:aws:lambda:us-east-2:123456789012:function:my-function",
    "Runtime": "nodejs12.x",
    "Role": "arn:aws:iam::123456789012:role/lambda-role",
    "Handler": "index.handler",
    "CodeSha256": "Qf0hMc1I2di6YFMi9aXm3JtGTmcDbjniEuiYonYptAk=",
    "Version": "$LATEST",
    "TracingConfig": {
        "Mode": "Active"
    },
    "RevisionId": "983ed1e3-ca8e-434b-8dc1-7d72ebadd83d",
    ...
}
1.12 If you get an error during the upload, you can follow the docs here. Check the AWS logs on AWS CloudWatch if you need to.
1.13 Test the running Lambda once you are sure the update-function-code call was successful.
$ aws lambda invoke --function-name my-function --payload '{"key1": "value1", "key2": "value2", "key3": "value3"}' output.txt
Another potential solution (not ideal)
Make the sentry-cli binary executable from inside the handler, before you run any CLI commands, using child_process:
var exec = require('child_process').exec;

// exec returns a ChildProcess; the callback fires when chmod finishes
var child = exec('chmod +x /var/task/node_modules/@sentry/cli/sentry-cli',
    function (error, stdout, stderr) {
        console.log('stdout: ' + stdout);
        console.log('stderr: ' + stderr);
        if (error !== null) {
            console.log('exec error: ' + error);
        }
    });
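A simpler equivalent, sketched with the built-in fs module instead of spawning a shell. Note that /var/task is typically mounted read-only in Lambda, so either variant can fail with EROFS/EPERM, which is part of why this whole approach is not ideal:
const fs = require('fs');
// 0o755 = rwxr-xr-x; restores execute permission on the bundled binary
fs.chmodSync('/var/task/node_modules/@sentry/cli/sentry-cli', 0o755);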
Alternative: You can also try using this package.
Refactoring with Sentry Node NPM package
If, in the steps above, you find that the Sentry CLI is the issue, you can try to refactor your code to remove it. Use the Sentry Node NPM package instead: it was built for Node.js and may be easier to get running, but it doesn't have functions for deployment/release. From their Usage page:
Sentry's SDK hooks into your runtime environment and automatically reports errors, exceptions, and rejections.
(Note) Sentry with AWS Lambda Docs
The Sentry docs recommend using the @sentry/serverless package for integration with AWS Lambda. If you don't want to refactor your code, use this guide; a minimal wrapper sketch follows the list below.
With the AWS Lambda integration enabled, the Node SDK will:
Automatically report all events from your Lambda Functions.
Allow you to modify the transaction sample rate using tracesSampleRate.
Issue reports automatically include:
A link to the cloudwatch logs
Function details
sys.argv for the function
AWS Request ID
Function execution time
Function version
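For reference, the basic wiring looks something like this (a sketch; the DSN value is a placeholder):
const Sentry = require("@sentry/serverless");

Sentry.AWSLambda.init({
    dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
    tracesSampleRate: 1.0,
});

// wrapHandler automatically reports errors and rejections thrown by the handler
exports.handler = Sentry.AWSLambda.wrapHandler(async (event, context) => {
    // ...your existing handler logic...
});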
Caveats:
The .zip file must be less than 50 MB. If it's larger than 50 MB, Amazon recommends uploading it to an Amazon Simple Storage Service (Amazon S3) bucket.
The .zip file can't contain libraries written in C or C++. If your .zip file contains C-extension libraries, such as the Pillow (PIL) or numpy libraries, we recommend using the AWS Serverless Application Model (AWS SAM) command line interface (CLI) to build a deployment package.
The .zip file must contain your function's code and any dependencies used to run your function's code (if applicable) on Lambda. If your function depends only on standard libraries, or AWS SDK libraries, you don't need to include these libraries in your .zip file. These libraries are included with the supported Lambda runtime environments.
If any of the libraries use native code, use an Amazon Linux environment to create the deployment package. In other words, make sure any native code is packaged on the same platform the Lambda runs on!
If your deployment package contains native libraries, you can build the deployment package with AWS Serverless Application Model (AWS SAM). You can use the AWS SAM CLI sam build command with the --use-container to create your deployment package. This option builds a deployment package inside a Docker image that is compatible with the Lambda execution environment.

PATH variables empty in electron?

I'm trying to access items in my PATH environment variable from my Electron instance. When I run it with npm start while developing through Node.js, I get all the expected variables, but when I run the packaged Electron application with my resources inside, I'm left with only /usr/bin.
(Screenshots omitted: running via npm shows the full set of PATH entries, while the precompiled Electron Mac application shows only /usr/bin.)
Does anyone know why this could be the case, and whether I can do anything to get my normal PATH variables?
UPDATE:
After a lot of research I found out that GUI applications launched from Finder or the Dock on macOS use different environment variables than applications launched from the terminal.
This can be edited through plist files, either globally or per application.
You can use the fix-path package. Works perfectly!
const fixPath = require('fix-path');
console.log(process.env.PATH);
//=> '/usr/bin'
fixPath();
console.log(process.env.PATH);
//=> '/usr/local/bin:/usr/bin...'

serverless - node.js crypto package is not working

Trying to generate RSA keys with the crypto package and deploy it to AWS Lambda, I get an error that the crypto package is undefined. Is there an easy way to deploy this package to Lambda without building Docker containers?
Yes, I've read that Node.js native packages have different binaries on Mac (my current OS) and Linux, so there is an approach of building in Docker and deploying from there, but it wasn't very clear to me. If that is the only way to do it, maybe there are good resources to read about it as well.
Thanks!
I tried to avoid Docker as well, but it's actually pretty easy to set up. Install the Docker Community Edition, then pull the lambci/lambda image:
docker pull lambci/lambda
To mount your dev folder, run:
docker run -v ~/[mydev-folder]:/var/task lambci/lambda:nodejs8.10
Open Kitematic from the Docker app. You should see the container you pulled; select it and start it if it's not already running. Then click "Exec" and you should get a bash prompt opened in /var/task, pointing at your dev folder.
I usually delete node_modules and then run npm install from inside the Docker container. I also run sls deploy from there as well.
You need to import the package as require("crypto"). It is just not defined on the global object.
const handler = () => {
    console.log(crypto);             // undefined
    console.log(global.crypto);      // undefined
    console.log(require("crypto"));  // Bingo! :D
};
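And since crypto is a core module, RSA key generation works on Lambda without any native add-ons. A minimal sketch (the modulus length and encodings here are assumptions):
const crypto = require("crypto");

// Generates a 2048-bit RSA key pair as PEM strings
const { publicKey, privateKey } = crypto.generateKeyPairSync("rsa", {
    modulusLength: 2048,
    publicKeyEncoding: { type: "spki", format: "pem" },
    privateKeyEncoding: { type: "pkcs8", format: "pem" },
});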
If you arrived here because you are bundling a Node.js Lambda with Rollup and using uuid version 7+, then you need to add
external: ["crypto"]
to your rollup.config.js so that Rollup does not attempt to replace the require statement with whatever it thinks is better.
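In context, the option sits at the top level of the config (a sketch; the entry and output paths are placeholders):
// rollup.config.js
export default {
    input: "src/handler.js",
    output: { file: "dist/handler.js", format: "cjs" },
    // keep Node's built-in module external so require("crypto") survives bundling
    external: ["crypto"],
};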

Mocha test that runs locally won't run on Jenkins (environment issue)

I'm currently trying to implement CI with Jenkins for an Ember.js Node project that uses Mocha for unit testing. I'm running Jenkins on an Amazon EC2 server.
When I run mocha locally (both on my desktop, AND on the ec2 server) I get this:
./node_modules/mocha/bin/mocha
Initializing server on port 8090
Unit Test for /test
test API call incoming
key res value is: test!
✓ gives a json object with res: test!
1 passing (35ms)
However, when I have Jenkins set up to run this same command:
01:44:46 + ./node_modules/mocha/bin/mocha
01:44:46
01:44:46 /var/lib/jenkins/workspace/Rekindle2_Node/server/routes/test/getTest.js:4
01:44:46 const getTest = (req, res) => {
01:44:46 ^
01:44:46 SyntaxError: Unexpected token >
etc
I've double-checked that the package.json has everything I need, and I know I don't have anything globally installed that changes things (as I managed to git clone, npm install, and run mocha from the EC2 instance). All I know is that Jenkins can sometimes have issues consuming environment variables from the server it's running on. Does anyone know what the issue could be? I've tried uninstalling and reinstalling Node as well; it could be that Jenkins is looking at an older Node installation while the EC2 instance is not. Is there any way to tell? How do I look at the environment variables of a specific build?
That looks like Jenkins is using an older version of Node.js: arrow functions are not supported before Node 4, which fits the SyntaxError on the => token. To confirm this, add logging to your Jenkins job: node --version will print the version of Node.js that Jenkins is aware of. If you are interested in the environment, there is a view in the Jenkins web UI that shows the environment for a particular build, or you can add env as a line in your Jenkins build script to print the current environment variables. If it turns out to be a Node.js version issue, look into using a plugin to manage your Node.js versions: the Node.js Jenkins plugin.
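For example, adding these as the first lines of the Jenkins build step makes a version or PATH mismatch obvious:
$ node --version
$ which node
$ env | sort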
