boto3 python SetupAWS.py setup "aws configure" - python-3.x

I am trying to set up AWS IoT for an ESP32 under MSYS2. I followed the instructions, but cannot get past setting up the policy and certificates using the following command:
python SetupAWS.py setup
But I get this error message, even though I have installed the AWS CLI and completed its configuration:
AWS not configured. Please run `aws configure`.
All the paths seem to be correct, but I cannot work out what the problem could be.

To fix the problem, I deleted everything and re-installed the complete development suite, including the AWS CLI tools.
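Before resorting to a full reinstall, it can help to confirm that the credentials written by aws configure are actually visible from the environment that runs SetupAWS.py. A minimal check, using only standard AWS CLI commands:

$ aws configure list
$ aws sts get-caller-identity

If these fail, the script's "AWS not configured" error is expected; make sure ~/.aws/config and ~/.aws/credentials exist for the same user and environment (e.g. inside MSYS2 versus native Windows) that the script runs under.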

Related

sam local invoke failing in nodejs18.x environment

A while ago AWS announced that they had released nodejs18.x runtime support for serverless Lambda and Lambda@Edge services.
I tried to upgrade the whole project to node18.x.
At first there was an issue with SAM not recognizing the nodejs18.x runtime, which upgrading the SAM CLI to the latest version fixed. Now it seems to look for an emulation image that is not out there, not even in the Amazon ECR public registry.
Our service depends heavily on integration testing that uses sam local invoke, and sam local invoke returns the following:
Building image.....................
Failed to build Docker Image
NoneType: None
Error: Error building docker image: The command '/bin/sh -c mv /var/rapid/aws-lambda-rie-x86_64 /var/rapid/aws-lambda-rie && chmod +x /var/rapid/aws-lambda-rie' returned a non-zero code: 1
Does anyone know a workaround, such as a custom Dockerfile or something similar?
Update:
It sounds like there is a nodejs18.x emulation image for SAM, but I still get the same error on sam local invoke.
Update 2:
Found the issue. I was on a macOS machine, and it turned out that I needed to declare the architecture in my template file, like the following, under the Properties section of the serverless Lambda configuration:
Type: 'AWS::Serverless::Function'
Properties:
  Architectures:
    - arm64
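If you are unsure which architecture your machine reports, uname -m prints it; Apple Silicon Macs report arm64, which is why the template has to match:

$ uname -m
arm64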

Running a bash script in nodejs application deployed on amazon ECS

I have a Node.js application that is deployed on Amazon ECS. I have set up CodePipeline to automatically build and deploy the application. The application is dockerized and deployed on an Ubuntu machine, and it is working fine. However, there is a requirement to run a shell script from within the application. I am calling the shell script using await exec(path/to/shellscript). However, I keep getting the following error:
2021-02-08 17:19:48 FAILED: undefined, 4e9d8424-3cfd-4f35-93cb-fac886b1c4918fc9f680cfea45ec813db787f8b8380a
2021-02-08 17:19:48 /bin/sh: src/myApp/myScript.bash: not found
I have tried giving it permissions using chmod, but I still keep getting errors.
Any help is appreciated.
I realized that the script I am running calls an executable built to run on Ubuntu, whereas the Docker image is based on Alpine Linux. The issue is explained in detail here:
Why can't I run a C program built on alpine on ubuntu?
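For anyone hitting the same thing: "not found" for a file that clearly exists usually means the binary's interpreter or loader is missing, not the file itself. Two possible Dockerfile fixes, sketched here with illustrative image and package names rather than anything taken from the question:

# Option 1: stay on Alpine, add bash plus basic glibc compatibility
RUN apk add --no-cache bash libc6-compat

# Option 2: switch to a Debian-based image so glibc-linked binaries just work
FROM node:18-slim

Option 2 is the more reliable fix when the executable was compiled against glibc, since libc6-compat only covers simple cases.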

What would be causing -bash: /usr/bin/aws: No such file or directory?

Aside from the obvious "no such file or directory", which is true (there is in fact no aws file in that location), this comes up after attempting the final verification step of the AWS CLI v2 installation routine found here: aws --version.
Now, I have two Ubuntu systems side by side. I ran the same install on both; one succeeded, but the other did not. On the one that succeeded, there is also no aws file at the path suggested by the error.
Furthermore, on both systems the folder structure and symlinks of the installation appear to be identical. I'm using the root user on both, and comparing the file permissions on the system that failed with the one that works yields identical setups.
I suspect that aws has been set up to point to the wrong path? Both $PATH environments are also identical on each machine.
The ONLY difference between the two machines is that one is Ubuntu 18.04 and the other is 20.04.
Any ideas what would be different about that and why I am unable to run AWS commands on this stubborn box?
Short answer:
Run hash aws in your shell.
Details:
awscli v1 points to /usr/bin/aws.
awscli v2 points to /usr/local/bin/aws.
After uninstalling awscli v1 and installing awscli v2, aws was still pointing to /usr/bin/aws, while which aws returned /usr/local/bin/aws.
It seems bash had cached the path /usr/bin/aws for the aws executable.
$ which aws
/usr/local/bin/aws
$ aws
-bash: /usr/bin/aws: No such file or directory
So running any aws command would look for the non-existent /usr/bin/aws:
-bash: /usr/bin/aws: No such file or directory
hash aws clears this cache entry. After that, aws commands use the correct path:
$ aws --version
aws-cli/2.2.32 Python/3.8.8 Linux/5.4.0-77-generic exe/x86_64.ubuntu.18 prompt/off
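If you are unsure what bash has cached, type aws shows what will actually be executed, and hash -r clears the whole cache rather than a single entry:

$ type aws
aws is hashed (/usr/bin/aws)
$ hash -r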
Follow these steps from https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html:
In your browser, download the macOS pkg file:
https://awscli.amazonaws.com/AWSCLIV2.pkg
Run your downloaded file and follow the on-screen instructions.
To verify:
$ which aws
/usr/local/bin/aws
$ aws --version
aws-cli/2.7.24 Python/3.8.8 Darwin/18.7.0 botocore/2.4.5

AWS IoT basicPubSub.py example - clarification on certificates (CLI)

I have been trying to get AWS IoT working and just keep hitting problems and errors without getting anywhere. I am trying to use the AWS IoT basicPubSub.py script to test the connection, but I get an error:
ssl.SSLError: unknown error (_ssl.c:3946)
I have been through all the certificates several times, but want to check/fully understand whether I can pull the rootCAFile, certFile and privateKey from the command line utility and/or from the IAM interface. I have downloaded each piece of information and stored it in local files.
python basicPubSub.py -e <endpoint> -r <rootCAFilePath> -c <certFilePath> -k <privateKeyFilePath>
The main aim is just to confirm that everything is correct, or whether my problem lies somewhere else. Is there a way to test each certificate, to ensure each file is correct and contains the right information?
I am not sure how I managed to fix this problem; I tried the following to fix it:
Re-created all the certificates
Re-installed the CLI using sudo
Installed SSL (sudo apt-get install -y libssl-dev)
I am going to do a fresh installation on my RPi and repeat the steps to understand how this was resolved.
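On the question of testing each file: openssl can sanity-check the certificate and the private key individually (the file names below are placeholders for wherever you stored the downloads, and the key is assumed to be RSA, the AWS IoT default):

$ openssl x509 -in certificate.pem.crt -noout -text
$ openssl rsa -in private.pem.key -check -noout

To confirm that the certificate and private key belong together, compare their moduli; the two digests should match:

$ openssl x509 -in certificate.pem.crt -noout -modulus | openssl md5
$ openssl rsa -in private.pem.key -noout -modulus | openssl md5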

Google Cloud SDK - Is there a way to manually install google cloud sdk on Linux without internet access?

I am trying to install Google Cloud SDK on a Linux machine without any Internet access.
I am following the instructions at: https://cloud.google.com/sdk/?hl=en
I downloaded the tar file on my local machine and transferred it to the Linux machine using scp. I then ran the install.sh file and got the following error:
[me#user google-cloud-sdk]$ ./install.sh
Welcome to the Google Cloud SDK!
To help improve the quality of this product, we collect anonymized data on how
the SDK is used. You may choose to opt out of this collection now (by choosing
'N' at the below prompt), or at any time in the future by running the following
command:
gcloud config set --scope=user disable_usage_reporting true
Do you want to help improve the Google Cloud SDK (Y/n)? n
This will install all the core command line tools necessary for working with
the Google Cloud Platform.
/home/me/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py:661: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
exc_message = getattr(exc, 'message', None)
/home/me/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py:664: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
msg = u'({0}) {1}'.format(command_path_string, exc.message)
ERROR: (gcloud.components.update) Failed to fetch component listing from server. Check your network settings and try again.
I have a proxy server that I can use to access the internet from this Linux machine. I tried running the installer as 'sh install.sh --proxy host:port', but obviously there is no --proxy parameter to install.sh.
How can I work around this problem?
Thanks in advance.
I exported my proxy details with export https_proxy='...' before running the install.sh file. This worked for me.
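For reference, the variable takes an ordinary proxy URL; the host and port below are placeholders:

$ export https_proxy='http://proxy.example.com:3128'
$ ./install.sh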
On Windows: go to Advanced System Settings and create an HTTPS_PROXY variable, then restart CMD. Run
echo %HTTPS_PROXY%
to make sure it has taken the change into account, then launch install.bat.
