A while ago AWS announced nodejs18.x runtime support for the serverless Lambda and Lambda@Edge services.
I tried to upgrade the whole project to nodejs18.x.
At first there was an issue with sam not recognizing the nodejs18.x runtime, so I upgraded sam-cli to the latest version. Now it seems to look for an emulation image that doesn't exist anywhere, not even in the Amazon ECR public registry.
Our service depends heavily on integration testing, which uses sam local invoke, and sam local invoke returns the following:
Building image.....................
Failed to build Docker Image
NoneType: None
Error: Error building docker image: The command '/bin/sh -c mv /var/rapid/aws-lambda-rie-x86_64 /var/rapid/aws-lambda-rie && chmod +x /var/rapid/aws-lambda-rie' returned a non-zero code: 1
Does anyone know of a workaround, such as a custom Dockerfile or something similar?
Update:
It sounds like there is a nodejs18.x emulation image for sam now, but I still get the same error on sam local invoke.
Update 2:
I found the issue. I was on a macOS machine, and it turned out that I needed to declare the architecture in my template file, like the following, under the Properties section of the serverless Lambda function configuration:
Type: 'AWS::Serverless::Function'
Properties:
  Architectures:
    - arm64
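For reference, here is what a minimal function resource looks like with the architecture declared (the resource name and handler below are illustrative):

```yaml
MyFunction:
  Type: 'AWS::Serverless::Function'
  Properties:
    Handler: index.handler    # illustrative
    Runtime: nodejs18.x
    Architectures:
      - arm64                 # use x86_64 on Intel machines
```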
I have a Node.js application deployed on Amazon ECS. I have set up CodePipeline to build and deploy the application automatically. The application is dockerized and deployed on an Ubuntu machine, and it works fine. However, there is a requirement to run a shell script from within the application. I am calling the shell script using await exec(path/to/shellscript), but I keep getting the following error:
2021-02-08 17:19:48 FAILED: undefined, 4e9d8424-3cfd-4f35-93cb-fac886b1c4918fc9f680cfea45ec813db787f8b8380a
2021-02-08 17:19:48 /bin/sh: src/myApp/myScript.bash: not found
I have tried granting it permission using chmod, but I still keep getting errors.
Any help is appreciated.
I realized that the script I am running calls an executable built to run on Ubuntu, while the Docker image is an Alpine Linux distribution. The issue has been explained in detail here:
Why can't I run a C program built on alpine on ubuntu?
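Assuming the image is built from an alpine-based Node tag, one way to resolve the mismatch is to switch to a glibc (Debian) based tag. A sketch, where the image tags, paths, and entrypoint are all illustrative:

```dockerfile
# Before: musl-based image that cannot run glibc-linked executables
# FROM node:14-alpine

# After: Debian-based image that ships glibc
FROM node:14-slim

WORKDIR /app
COPY . .
RUN chmod +x src/myApp/myScript.bash   # path from the question
CMD ["node", "index.js"]               # illustrative entrypoint
```

Alpine uses musl libc, so a binary compiled against glibc asks for a dynamic loader that doesn't exist in the image, and the shell reports the misleading "not found".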
Aside from the obvious "no such file or directory", which is true (there is in fact no aws file in that location), this error comes up after attempting the final verification step (aws --version) of the AWS CLI v2 installation routine found here.
Now, I have two Ubuntu systems side by side. I ran the same install on both; one succeeded, the other did not. On the one that succeeded, there is also no aws file in the path suggested by the error.
Furthermore, on both systems, the folder structure and symlinks of the installation appear to be identical. I'm using the root user on both, and comparing the file permissions on the system that failed with the one that works yields identical setups.
I suspect that aws has been set up to point to the wrong path. Both $PATH environments are also identical on each machine.
The ONLY difference between the two machines is that one is Ubuntu 18.04 and the other is 20.04.
Any ideas what would be different about that and why I am unable to run AWS commands on this stubborn box?
Short answer:
Run hash aws in your shell.
Details:
awscli v1 installs aws at /usr/bin/aws.
awscli v2 installs it at /usr/local/bin/aws.
After uninstalling awscli v1 and installing awscli v2, aws was still pointing to /usr/bin/aws, while which aws returned /usr/local/bin/aws.
It seems bash had cached the path /usr/bin/aws for the aws executable.
$ which aws
/usr/local/bin/aws
$ aws
-bash: /usr/bin/aws: No such file or directory
So running any aws command would look for the non-existent /usr/bin/aws:
-bash: /usr/bin/aws: No such file or directory
hash aws clears this cache entry. After this, aws commands use the correct path:
$ aws --version
aws-cli/2.2.32 Python/3.8.8 Linux/5.4.0-77-generic exe/x86_64.ubuntu.18 prompt/off
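As an aside, the hash builtin can also inspect or reset the cache wholesale; a quick sketch:

```shell
hash        # list the shell's cached command-to-path mappings
hash -r     # forget every cached entry
hash aws 2>/dev/null || true   # re-look up aws and cache its new path
                               # (error ignored in case aws is absent)
```

`hash -r` is POSIX, so it also works in shells other than bash; `hash -d name` (forget a single entry) is a bash extension.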
Follow these steps from https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html:
In your browser, download the macOS pkg file:
https://awscli.amazonaws.com/AWSCLIV2.pkg
Run your downloaded file and follow the on-screen instructions.
To verify:
$ which aws
/usr/local/bin/aws
$ aws --version
aws-cli/2.7.24 Python/3.8.8 Darwin/18.7.0 botocore/2.4.5
I am trying to set up AWS IoT for an ESP32 using MSYS2. I followed the instructions but cannot set up the policy and certificates using the following command:
python SetupAWS.py setup
But I get this error message, even though I have installed and configured AWS:
AWS not configured. Please run `aws configure`.
All the paths seem to be correct, but I cannot work out what the problem could be.
To fix the problem, I deleted everything and re-installed the complete development suite, including the AWS CLI tools.
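If a full re-install is overkill, note that the error message itself points at the usual cause: SetupAWS.py shells out to the aws CLI, which needs credentials in place first. A sketch of configuring them non-interactively (all values are placeholders):

```shell
aws configure set aws_access_key_id     EXAMPLEKEYID      # placeholder
aws configure set aws_secret_access_key EXAMPLESECRETKEY  # placeholder
aws configure set region                us-east-1
aws sts get-caller-identity   # verifies the CLI can authenticate
```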
I'm setting up an Elastic Beanstalk environment for a Node server. When running the eb create command I'm getting the following error:
ERROR: NotFoundError - Platform Node.js running on 64bit Amazon Linux does not appear to be valid
I can't find much about it online, and "Node.js running on 64bit Amazon Linux" is how my instance is configured, so I'm not sure why it is marked as invalid.
(From my comment): This sounds like a bug in the EBCLI. The EBCLI somehow overwrote the platform name in the .config.yml file in an incorrect form.
You can easily fix this by replacing the default_platform field in the .elasticbeanstalk/config.yml file with "node.js", or "64bit Amazon Linux 2017.09 v4.4.5 running Node.js".
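For reference, the relevant part of .elasticbeanstalk/config.yml would then look something like this (application name and region are illustrative):

```yaml
global:
  application_name: my-node-app   # illustrative
  default_platform: node.js
  default_region: us-east-1       # illustrative
```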
I'm building a Node.js app on OSX and trying to deploy it to Kubernetes using rules_k8s.
It almost works, except that node doesn't start because of this error:
/app/examples/hellohttp/nodejs/nodejs-hellohttp_image.binary: line 147: /app/examples/hellohttp/nodejs/nodejs-hellohttp_image.binary.runfiles/com_github_yourbase_yourbase/external/nodejs/bin/node: cannot execute binary file: Exec format error
/app/examples/hellohttp/nodejs/nodejs-hellohttp_image.binary: line 147: /app/examples/hellohttp/nodejs/nodejs-hellohttp_image.binary.runfiles/com_github_yourbase_yourbase/external/nodejs/bin/node: Success
It looks like the node binary being deployed wasn't built for the platform where k8s is running (Linux on amd64, normal Google GKE nodes).
The command I'm using to make the deployment:
bazel run --cpu=k8 //examples/hellohttp/nodejs:nodejs-hellohttp_deploy.apply
I tried with other combinations of --platforms or --cpu but I couldn't get it to work.
The argument that felt like the closest was this:
$ bazel run --platforms=@bazel_tools//platforms:linux //examples/hellohttp/nodejs:nodejs-hellohttp_deploy.apply
ERROR: While resolving toolchains for target //examples/hellohttp/nodejs:nodejs-hellohttp_deploy.apply: Target constraint_value rule @bazel_tools//platforms:linux was found as the target platform, but does not provide PlatformInfo
ERROR: Analysis of target '//examples/hellohttp/nodejs:nodejs-hellohttp_deploy.apply' failed; build aborted: Target constraint_value rule @bazel_tools//platforms:linux was found as the target platform, but does not provide PlatformInfo
INFO: Elapsed time: 0.255s
FAILED: Build did NOT complete successfully (0 packages loaded)
ERROR: Build failed. Not running target
This says there was a toolchain problem and mentions missing PlatformInfo, but it doesn't say anything about how to fix it. :-(
For what it's worth, I had similar problems with Go, and they were solved by passing --experimental_platforms=@io_bazel_rules_go//go/toolchain:linux_amd64 to the bazel run command.
How do I make something similar work for nodejs?
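One generic approach (not confirmed for the nodejs rules specifically) is to define a target platform with Bazel's built-in platform rule and pass it via --platforms; whether the nodejs toolchain actually resolves against it depends on the rules version in use. A sketch, with the target name illustrative:

```python
# BUILD file at the workspace root (Starlark)
platform(
    name = "linux_amd64",
    constraint_values = [
        "@platforms//os:linux",
        "@platforms//cpu:x86_64",
    ],
)
```

Then: bazel run --platforms=//:linux_amd64 //examples/hellohttp/nodejs:nodejs-hellohttp_deploy.apply. Unlike @bazel_tools//platforms:linux, which is a bare constraint_value (hence the "does not provide PlatformInfo" error above), a platform target does provide PlatformInfo.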