I am attempting to build an AWS Lambda (Node) function that utilizes the Sentry CLI. So far I have something like:
const CLI = require("@sentry/cli");
const cli = new CLI(null, {
org: '...',
authToken: '...',
});
exports.handler = async (event) => {
const response = await cli.execute(["releases", "list"]);
// ...create a release/deploy/etc...
};
This however fails with:
/var/task/node_modules/@sentry/cli/sentry-cli: cannot execute binary file
There seems to have been a similar issue reported and the suggestion is to change the permission of the executable.
How can I ensure that the permissions on the executable are not stripped when zipping/uploading the function to AWS?
TL;DR
chmod 644 $(find . -type f)
chmod 755 $(find . -type d)
chmod +x ./node_modules/@sentry/cli/sentry-cli  # same command for any other binaries
# Then re-upload the function code with `aws lambda update-function-code` (steps below).
Deep dive
This answer is a summary of the steps outlined in the docs, with additional explanations for why they are needed and prerequisite/debugging workflows in case anything goes wrong. The docs suggest the steps below for uploading NodeJS projects with additional dependencies. I have designed the steps around an existing, already-running AWS Lambda instance to help narrow the scope of the error when debugging (to AWS or to Sentry).
(Recommended) Steps with existing project
1.1 Install Node w/NPM locally (I assume you've done this). Make a note of your local node version and check this matches AWS Lambda instance.
$ node -v
1.2 Install the AWS CLI (must be version 2!).
1.3 Configure the AWS CLI with:
$ aws configure
Note: You can configure this manually as well if you need to with different guides for each platform. I will leave these details out since they are straightforward.
1.4 Try deploying a hello-world Lambda first and see if that works without the sentry-cli package. If it does work, you know sentry is probably the issue and NOT AWS.
1.5 Install Sentry CLI:
$ npm install @sentry/cli
1.6 Automatic sentry-cli configuration:
$ sentry-cli login
1.7 Verify your sentry-cli config is valid with $ sentry-cli info. If not, you need to follow the steps recommended in the console output.
$ sentry-cli info
1.8 Install the aws-xray-sdk dependency:
$ npm install aws-xray-sdk
1.8.1 (Optional) Navigate to your project root folder. This is just for illustration: the current version of the AWS SDK is pre-installed in Lambda, but you could use this technique to load other pre-built JavaScript packages, or an earlier version of the AWS SDK if you actually needed one for compatibility reasons.
$ npm install --prefix=. aws-sdk
1.8.2 (Sanity Check) Check that all files in the subfolders of the root directory have the permissions they need, and try running the project locally to confirm the executable permission exists:
$ ls -l && node function.js
1.9 Zip the project:
$ zip -r function.zip .  # The .zip file must be **less than 50 MB**!
1.10 Upload the function code using the aws command-line tool's update-function-code (this is important because it is what fixes the permissions issue).
$ aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip
1.11 If the operation was successful, you will get an output like the following:
{
"FunctionName": "my-function",
"FunctionArn": "arn:aws:lambda:us-east-2:123456789012:function:my-function",
"Runtime": "nodejs12.x",
"Role": "arn:aws:iam::123456789012:role/lambda-role",
"Handler": "index.handler",
"CodeSha256": "Qf0hMc1I2di6YFMi9aXm3JtGTmcDbjniEuiYonYptAk=",
"Version": "$LATEST",
"TracingConfig": {
"Mode": "Active"
},
"RevisionId": "983ed1e3-ca8e-434b-8dc1-7d72ebadd83d",
...
}
1.12 If you get an error with the upload, you can follow the troubleshooting docs. Check the logs on AWS CloudWatch if you need to.
1.13 Test the running lambda, after you are sure that the update-function-code was successful.
$ aws lambda invoke --function-name my-function --payload '{"key1": "value1", "key2": "value2", "key3": "value3"}' output.txt
Another potential solution (not ideal)
Make the sentry-cli binary executable before you run the CLI config command, using child_process. The synchronous variant guarantees the chmod has finished before the first cli.execute() call:
const { execSync } = require("child_process");
// Make the bundled binary executable before using the CLI.
execSync("chmod +x /var/task/node_modules/@sentry/cli/sentry-cli");
Alternative: You can also try using this package.
Refactoring with Sentry Node NPM package
If, in the steps above, you find that the Sentry CLI is the issue, you can try to refactor your code without that package and use the Sentry Node NPM package instead. It was built for NodeJS and may be easier to get running, but it doesn't have functions for deployment/release. From their Usage page:
Sentry's SDK hooks into your runtime environment and automatically reports errors, exceptions, and rejections.
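For illustration, a minimal sketch of manual capture with @sentry/node; the DSN is a placeholder, and the try/catch wrapper is my own pattern, not from the docs:

```javascript
const Sentry = require("@sentry/node");

// Placeholder DSN - use your project's DSN from the Sentry dashboard.
Sentry.init({ dsn: "https://examplePublicKey@o0.ingest.sentry.io/0" });

exports.handler = async (event) => {
  try {
    // ...your function logic...
  } catch (err) {
    Sentry.captureException(err);
    await Sentry.flush(2000); // give the event time to send before the runtime freezes
    throw err;
  }
};
```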
(Note) Sentry with AWS Lambda Docs
The Sentry docs recommend using the @sentry/serverless package for integration with AWS Lambda. If you don't want to refactor your code, use this guide.
With the AWS Lambda integration enabled, the Node SDK will:
Automatically report all events from your Lambda Functions.
Allow you to modify the transaction sample rate using tracesSampleRate.
Issue reports automatically include:
A link to the cloudwatch logs
Function details
sys.argv for the function
AWS Request ID
Function execution time
Function version
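The integration from that guide boils down to wrapping your handler. A minimal sketch, assuming @sentry/serverless is installed and with a placeholder DSN:

```javascript
const Sentry = require("@sentry/serverless");

Sentry.AWSLambda.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder
  tracesSampleRate: 1.0,
});

// Wrapping the handler is what enables the automatic reporting listed above.
exports.handler = Sentry.AWSLambda.wrapHandler(async (event) => {
  // ...your function logic...
});
```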
Caveats:
The .zip file must be less than 50 MB. If it's larger than 50 MB, Amazon recommends uploading it to an Amazon Simple Storage Service (Amazon S3) bucket.
The .zip file can't contain libraries written in C or C++. If your .zip file contains C-extension libraries, such as the Pillow (PIL) or numpy libraries, we recommend using the AWS Serverless Application Model (AWS SAM) command line interface (CLI) to build a deployment package.
The .zip file must contain your function's code and any dependencies used to run your function's code (if applicable) on Lambda. If your function depends only on standard libraries, or AWS SDK libraries, you don't need to include these libraries in your .zip file. These libraries are included with the supported Lambda runtime environments.
If any of the libraries use native code, use an Amazon Linux environment to create the deployment package. Also, ensure you package the native code (if you have some) locally on the same platform as the Lambda!
If your deployment package contains native libraries, you can build the deployment package with AWS Serverless Application Model (AWS SAM). You can use the AWS SAM CLI sam build command with the --use-container to create your deployment package. This option builds a deployment package inside a Docker image that is compatible with the Lambda execution environment.
Related
I have reinstalled npm and Node multiple times on my PC.
(npm version 7.4.3)
(node version v15.7.0)
I followed the procedure for configuring the Firebase CLI with:
npm install -g firebase-tools
and then ran firebase init and firebase deploy, and the configuration seems to work fine.
The problem I'm facing happens when I open the index.js file and I uncomment the stock helloWorld function which looks like this:
exports.helloWorld = functions.https.onRequest((request, response) => {
functions.logger.info("Hello logs!", {structuredData: true});
response.send("Hello from Firebase!");
});
I run firebase deploy and receive this error:
functions[helloWorld(us-central1)]: Deployment error.
Build failed: Build error details not available. Please check the logs at https://console. {urlStuff}
Functions deploy had errors with the following functions:
helloWorld
To try redeploying those functions, run:
firebase deploy --only "functions:helloWorld"
To continue deploying other features (such as database), run:
firebase deploy --except functions
Error: Functions did not deploy properly.
I honestly don't know what to do now.
I tried multiple times to reinstall Node and npm and to redo the Firebase CLI procedure, but nothing seems to solve this problem; I still receive this error when deploying.
The log error I receive is this:
textPayload: "ERROR: error fetching storage source: generic::unknown: retry budget exhausted (3 attempts): fetching gcs source: unpacking source from gcs: source fetch container exited with non-zero status: 1"
As suggested by this link provided by @Muthu Thavamani:
GCP Cloud Function - ERROR fetching storage source during build/deploy
Firebase CLI uses NodeJS version 12 while on my device I had version 15 installed.
Just use this guide to downgrade your version of NodeJS and everything works fine.
For me, it was because I was using an older version of Firebase CLI.
So I ran the upgrade command as suggested, and it worked.
sudo npm i -g firebase-tools
(My Node version is v15.6.0)
I had a similar problem that wasn't solved by changing the Node version. What I had to do was actually enter the Container Registry and delete both the worker & cache images. Then I got it running (using node v12.22.1 and npm v6.14.12).
It's much easier to find and fix the issue by examining the actual logs. Use this command to open them:
firebase functions:log
The specific issue will be visible there. I sometimes had errors as simple as a missing package in package.json.
I wish they would show better info on the errors directly, but at least we can find them here.
A few days back we received a notification regarding 'Lambda operational notification' to update our Node.js 8.10 runtime to Node.js 10.x runtime.
In response to this notification, we installed Node.js version v10.16.3 in our development system and tested our existing code.
We found the code was running fine in our development system, but when we tested this same code in AWS Lambda with Node.js 10.x runtime we get this following error:
2019-10-28T12:03:31.771Z 8e2472b4-a838-4ede-bc70-a53aa41d9b79 INFO Error: Server terminated early with status 127
at earlyTermination.catch.e (/var/task/node_modules/selenium-webdriver/remote/index.js:251:52)
at process._tickCallback (internal/process/next_tick.js:68:7)
'aws-sdk', 'selenium-webdriver' npm packages and google chrome binaries are the only dependencies used in our project.
Our project has the following file structure.
/var/task/
├── index.js
├── lib
│ ├── chrome
│ ├── chromedriver
│ ├── libgconf-2.so.4
│ ├── libORBit-2.so.0
│ └── libosmesa.so
└── node_modules
├── selenium-webdriver
├── ...
Since this code is not throwing any error in our development system, we suspect it has to do with the new runtime.
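One quick way to check for an environment difference (a sketch; nothing here is specific to selenium) is to log the runtime details from inside the handler and compare them against the dev machine:

```javascript
// Returns the runtime details worth comparing between Lambda and a dev box.
function runtimeInfo() {
  return {
    node: process.version,                  // e.g. "v10.16.3"
    platform: process.platform,             // "linux" on Lambda
    execEnv: process.env.AWS_EXECUTION_ENV, // e.g. "AWS_Lambda_nodejs10.x"; unset locally
  };
}

// Log it from inside the handler and read it back in CloudWatch.
console.log(JSON.stringify(runtimeInfo()));
```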
We tried setting the binary path using setChromeBinaryPath().
This is the code we are using. The error occurs when the build() method is called.
var webdriver = require('selenium-webdriver');
var chrome = require('selenium-webdriver/chrome');
var builder = new webdriver.Builder().forBrowser('chrome');
var chromeOptions = new chrome.Options();
const defaultChromeFlags = [
'--headless',
'--disable-gpu',
'--window-size=1280x1696', // Letter size
'--no-sandbox',
'--user-data-dir=/tmp/user-data',
'--hide-scrollbars',
'--enable-logging',
'--log-level=0',
'--v=99',
'--single-process',
'--data-path=/tmp/data-path',
'--ignore-certificate-errors',
'--homedir=/tmp',
'--disk-cache-dir=/tmp/cache-dir'
];
chromeOptions.setChromeBinaryPath("/var/task/lib/chrome");
chromeOptions.addArguments(defaultChromeFlags);
builder.setChromeOptions(chromeOptions);
var driver = await builder.build();
We recently faced the exact same issue. After upgrading from AWS Lambda Node v8.x to Node v10.x, chrome and chromedriver stopped working. In short, the root cause is that Lambda Node 10.x runs on Amazon Linux 2, whereas Lambda Node v8 runs on Amazon Linux. Amazon Linux 2 lacks a number of packages compared to its predecessor, making it more lightweight but at the same time a pain if you want to set up a custom runtime environment. Before I give you the steps to resolve this, let me first highlight a few useful links that helped me find the right set of binaries I had to include in my lambda deployment package.
Just remember! The way to resolve this is to figure out which binaries are missing from your Lambda deployment package and add them in.
How to use Amazon Linux native binary packages in an AWS Lambda deployment package. Once you know you are missing some binaries in your Lambda environment, this link from AWS will help you include them into your package. For my purpose I used an EC2 Amazon Linux 64 bit AMI to download the packages and extract them. Detailed steps follow... https://aws.amazon.com/premiumsupport/knowledge-center/lambda-linux-binary-package
Besides binaries missing from Amazon Linux 2, there are also no fonts installed. This link will tell you how to install fonts on AWS Lambda. One of the reasons Chrome fails to run on Lambda is the lack of fonts.
https://forums.aws.amazon.com/thread.jspa?messageID=776307
This is a nice issue thread on github that taught me that the order of paths in the LD_LIBRARY_PATH environment variable matters. This is the environment variable that holds the paths where your binaries are in. https://github.com/alixaxel/chrome-aws-lambda/issues/37
Now this is a game changer. Without the amazing docker container lambci created simulating AWS Lambda to as close as it can be, I would have never figured it out. After trying all sorts of things between an Amazon Linux 2 EC2 server and AWS Lambda, this ended up being my playground, where I could iterate trying different packages very quickly. https://hub.docker.com/r/lambci/lambda/
Running Arbitrary Executables in AWS Lambda. Some helpful link if you want to run an executable directly on lambda and see how it behaves. The error messages you see from selenium-webdriver package are actually not surfacing the real error that chrome or chromedriver throws. Trying to directly run chrome or chromedriver in the lambci docker container is how I managed to debug this and figure out which binaries were missing. https://aws.amazon.com/blogs/compute/running-executables-in-aws-lambda/
So, here is what you need to do:
Launch an Amazon Linux 2 64 bit server. A t3.micro should be enough.
SSH to the machine and install rpmdevtools: sudo yum install -y yum-utils rpmdevtools
Create a temporary directory for downloading the missing packages:
cd /tmp
mkdir lib
cd lib
Download the RPM packages missing in AWS Lambda node v10.x: yumdownloader --resolve GConf2 glibc glib2 libblkid libffi libgcc libmount libsepol libstdc++ libuuid pcre zlib libselinux dbus-glib mozjs17 polkit polkit-pkla-compat libX11 libX11-common libXau libxcb fontconfig expat fontpackages-filesystem freetype stix-fonts gnu-free-sans-fonts fontpackages-filesystem gnu-free-fonts-common nss nspr nss-softokn nss-softokn-freebl nss-util dbus-libs audit-libs bzip2-libs cracklib elfutils-libelf elfutils-libs libattr libcap libcap-ng libcrypt libdb libgcc libgcrypt libgpg-error libsepol lz4 pam systemd-libs xz-libs mesa-libOSMesa-devel mesa-libOSMesa mesa-libglapi sqlite
Extract the RPM packages: rpmdev-extract *rpm
Create some temporary location for copying the binaries from the extracted RPM artifacts:
sudo mkdir -p /var/task
sudo chown ec2-user:ec2-user /var/task
cd /var/task
mkdir lib
mkdir fonts
Copy the extracted binaries to the new temporary location:
/bin/cp /tmp/lib/*/usr/lib64/* /var/task/lib
/bin/cp /tmp/lib/*/lib64/* /var/task/lib
/bin/cp /tmp/lib/*/usr/share/fonts/*/*.ttf /var/task/fonts
Zip the artifacts: zip -r ./lib.zip ./*
Download them from the server, extract the zip, and include your lambda handler. At this point you should have a structure very similar to the one you had before, with some more binaries in your lib folder and a new fonts folder.
Include the following config file "fonts.conf" in your /var/task/fonts folder:
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<dir>/var/task/fonts/</dir>
<cachedir>/tmp/fonts-cache/</cachedir>
<config></config>
</fontconfig>
Add the following code snippet in your lambda handler. This will set the right order of include paths for the LD_LIBRARY_PATH environment variable and will also set the FONTCONFIG_PATH to the new /var/task/fonts directory.
process.env.FONTCONFIG_PATH = `${process.env.LAMBDA_TASK_ROOT}/fonts`;
if (process.env.LD_LIBRARY_PATH.startsWith("/var/task/lib:") !== true) {
process.env.LD_LIBRARY_PATH = [...new Set(["/var/task/lib", ...process.env.LD_LIBRARY_PATH.split(':')])].join(':');
}
Download locally lambci/lambda image.
docker pull lambci/lambda
Debug your lambda handler by running a lambci image like this:
docker run --rm -v "$THE_LOCAL_DIR_OF_YOUR_UNCOMPRESSED_LAMDA_PACKAGE":/var/task lambci/lambda:nodejs10.x index.handler
Iterate steps 7 to 14 until you get it working on the lambci container. With the given RPM packages it should work, but in case it does not, you can debug locally what is going on by trying to launch chrome in your lambda like this:
const childProcess = require('child_process');
childProcess.execFileSync(`${process.env.LAMBDA_TASK_ROOT}/lib/chrome`);
This is a cumbersome process, but at the end of the day, all you are doing is just adding some more binaries into your package and 3 lines of code in your handler to update the lib and fonts environment variables.
Just in case, adding below as well the chrome flags we are using:
const defaultChromeFlags = [
"--headless",
"--disable-gpu",
"--window-size=1280x1024",
"--no-sandbox",
"--user-data-dir=/tmp/user-data",
"--hide-scrollbars",
"--enable-logging",
"--v=99",
"--single-process",
"--data-path=/tmp/data-path",
"--ignore-certificate-errors",
"--homedir=/tmp",
"--disk-cache-dir=/tmp/cache-dir"
];
Good luck!
I've been trying to get node gm to work on aws lambda.
I installed the ImageMagick and GraphicsMagick libraries on an EC2 instance created from a Lambda execution environment, and pointed gm's appPath at these libraries.
I still get the following error -
Error: Could not execute GraphicsMagick/ImageMagick: /var/task/graphicsmagick/bin/identify "-ping" "-format" "%wx%h" "./resultant-file.jpg" this most likely means the gm/convert binaries can't be found
Can anyone suggest the right folder structure for such an app, or any pointers as to where I'm going wrong?
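For reference, gm exposes subClass for pointing at custom binaries. A sketch using the paths from the question (gm needs to be installed, and appPath conventionally ends with a trailing slash to the bin directory):

```javascript
// Sketch only: the appPath below comes from the question's folder layout.
const gm = require("gm").subClass({
  appPath: "/var/task/graphicsmagick/bin/", // trailing slash matters
});

// Equivalent of the failing `identify -ping -format %wx%h` call.
gm("./resultant-file.jpg").size((err, size) => {
  if (err) return console.error(err);
  console.log(size.width + "x" + size.height);
});
```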
I created a lambda function on AWS using the imagemin and imagemin-optipng NodeJS plugins, but it returns the error below:
error: spawn /var/task/node_modules/optipng-bin/vendor/optipng ENOENT
var aws = require('aws-sdk');
var s3 = new aws.S3();
var Imagemin = require('imagemin');
var optipng = require('imagemin-optipng');

exports.handler = function (event, context, callback) {
  var srcBucket = event.Records[0].s3.bucket.name;
  var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
  var params = { Bucket: srcBucket, Key: srcKey };

  s3.getObject(params).promise()
    .then(data => Imagemin.buffer(data.Body, {
      plugins: [
        optipng({ optimizationLevel: 7 })
      ]
    }))
    .then(buffer => console.log('done ', buffer))
    .catch(err => callback(err));
};
I just had a similar issue yesterday, on AWS Lambda. In case someone is also facing it, and the development environment is Windows, then I believe this is the solution for you. (note that here in my example I'm using Serverless Framework for building and deploying, however, the principle should work regardless of the use of Serverless)
I tried a few different solutions, but the easiest and fastest solution was to install the Windows Subsystem for Linux and run Serverless Deploy from the Ubuntu terminal on windows.
The issue is that some packages are OS-dependent, meaning that the same package installed on different OSs are going to produce different installations. Therefore your locally build/run works fine because you installed the packages on Windows environment and you are running the packages code on Windows environment, however, when you deploy to AWS it is now running on Amazon Linux and your packages that are OS-dependent (like mozjpeg, jpegtran, etc) are going to fail during the run. So your best shot is to just install the packages, build and deploy your project from a Linux environment (not sure if all Linux distros fit in this statement, but Ubuntu certainly does).
Here's the timeline for what I did:
Install and enable the WSL with Ubuntu (no big deal at all, 10 min top, just follow Microsoft's doc)
Open the Ubuntu terminal as an administrator (if you don't run as administrator it won't allow you to properly run "npm install" in the next steps)
Make sure everything is updated by running "sudo apt update && sudo apt upgrade"
Create a folder by running "mkdir your-folder-name" (or just cd directly into your project's original folder, you can do it by Shift+RightClick on the given folder and choosing "Open Linux Shell Here". I preferred to separate it to avoid messing with my original stuff)
Get into the newly created folder by running "cd your-folder-name"
Clone your repository into that folder or just copy/paste it manually (to open your current folder from Ubuntu terminal on Windows just run "explorer.exe .")
Run the good and old "npm install" from the Ubuntu terminal
Now here's a pitfall: if you have your AWS keys/secrets in your .env file, and your serverless.yml is set to read environment variables from the .env file, the next step will fail if the .env file is not in place (and you will only see the real error on CloudWatch, because the error logged in the browser will be a CORS error)
Run "Serverless Deploy" to deploy your project
That's it.
Took me around 20 min to perform this solution, while the others, even though some were effective (like CodeBuild), were more confusing and therefore more time-consuming.
Try reinstalling the optipng-bin module, or run node ./node_modules/optipng-bin/lib/install.js to re-fetch the binary.
Trying to generate RSA keys with the crypto package and deploy it on AWS Lambda, I get an error that the crypto package is undefined. Are there easy ways to deploy this package to Lambda without building docker containers?
Yes, I have read that node.js native packages have different binaries on mac (my current OS) and linux, so one approach is to build in docker and deploy that, but that approach is not very clear to me; if it is the only way to do it, maybe there are good resources to read about it as well.
Thanks!
I tried to avoid docker as well, but it's actually pretty easy to set up. Install the Docker Community Edition.
Pull this image with this:
docker pull lambci/lambda
To mount your dev folder run this:
docker run -v ~/[mydev-folder]:/var/task lambci/lambda:nodejs8.10
Open Kitematic from the Docker app. You should see the container you pulled. Select it and start it if it's not started. Then click "Exec" and you should get a bash prompt opened in /var/task which should be pointing at your dev folder.
I usually delete node_modules and then run npm install from inside the docker container. I also run sls deploy from there as well.
You need to import the package as require("crypto"). It is just not defined on the global object.
const handler = () => {
  console.log(crypto); // undefined
  console.log(global.crypto); // undefined
  console.log(require("crypto")); // Bingo! :D
};
If you arrived here because you are bundling a Nodejs lambda with rollup and using a version of uuid 7+, then you need to add
external: ["crypto"]
to your rollup.config.js so that rollup does not attempt to replace the require statement with whatever it finds better.
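A minimal rollup.config.js sketch with that setting (the input/output paths are placeholders for your project):

```javascript
// rollup.config.js - paths below are placeholders.
export default {
  input: "src/handler.js",
  output: { file: "dist/handler.js", format: "cjs" },
  // Keep Node's built-in module external instead of letting rollup resolve it.
  external: ["crypto"],
};
```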