How to configure Alexa ask-cli with a valid AWS profile? - alexa-skill

After playing around with the nice browser GUIs of developer.amazon.com and aws.amazon.com, things are getting serious, and now I want to use ask-cli to initialize Alexa skills and their Lambda functions.
When I run ask init, it tells me I should select a profile or create a new one. Both options jump to the browser and use OAuth to authenticate my ask installation.
~ ask init
? Please create a new profile or overwrite the existing profile.
(Use arrow keys)
──────────────
❯ Create new profile
──────────────
Profile Associated AWS Profile
[default] ** NULL **
[aws_profile] ** NULL **
But the ASK profile does not get associated with my AWS profile.
My Lambda function will not load/deploy if I don't connect the profile with AWS.
No AWS credential setup for profile: [default].
Lambda clone skipped. CLI lambda functionalities can be enabled
by running `ask init` again to add 'aws_profile' to ASK cli_config
How could I connect my ask-cli correctly?

You first need to download and install the AWS CLI on your local machine. You can download the AWS CLI from the link below.
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
If you are working with your root account user, you have to generate an access key ID and secret access key; otherwise, create a new IAM user with Lambda execute permissions.
You can generate the root user access key ID and secret key from the link below.
https://console.aws.amazon.com/iam/home
Then click on Manage Security Credentials, proceed past the warnings, open the access keys section, generate a new access key ID and secret key, and copy them.
Run the aws configure command to configure your AWS account; it will ask for the access key ID and secret key that you generated and will set up your AWS account.
aws configure
After aws configure, you can run the ask init command again to configure your account.
ask init
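If you want to double-check that the keys you entered actually work before running ask init again, here is a quick sanity check (a boto3 sketch; the aws CLI would work just as well):
import boto3

# Uses the default profile written by `aws configure`; prints the ARN of the
# configured user if the keys are valid.
print(boto3.client("sts").get_caller_identity()["Arn"])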

Well, you don't need to set up the aws-cli when you already have ask-cli installed.
Instead, you can run the command below to set up AWS credentials and link them to an ASK profile, if for some reason AWS credentials are not set up yet.
ask init --aws-setup
Then you will be prompted to enter your profile name, your access key ID, and your secret access key. You can use the profile name default if you have not created multiple ASK profiles. This will automatically create an AWS credentials file at %USERPROFILE%\.aws\credentials. Every time you try to deploy or access Lambda code through ask-cli, it will read the credentials from this file.
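As a quick check (a boto3 sketch, not part of ask-cli): the file written by ask init --aws-setup is a standard shared credentials file, so other AWS tooling can list the profiles it contains.
import boto3

# Lists the profile names found in the shared credentials/config files,
# including the one created by `ask init --aws-setup`.
print(boto3.Session().available_profiles)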

Related

MongoAtlas AWS IAM Role Authentication

I am setting up a deployment for my company's API server and it fails to connect to MongoDB Atlas via an IAM role.
What I've done:
Set up an IAM Role on AWS (APIServer)
Set up a Database User on Atlas with the role (arn:aws:iam::< my aws acc id >:role/APIServer)
Configure a launch template and an auto scaling group, launching Amazon Linux EC2 instances with the role
Have my NodeJS application connect to my Atlas with the following settings:
URL: mongodb+srv://dev.< cluster >.mongodb.net/< DB >
authSource: '$external'
authMechanism: 'MONGODB-AWS'
I ended up receiving the following error message
MongoServerError: bad auth : user arn:aws:sts::<my aws acc id>:assumed-role/APIServer/* is not found
Note: the info enclosed in <> is intentionally replaced; I have already found several solutions pointing out the mistake of leaving <> as part of the password, which is not my case here.
I have the same problem. The only solution I have found so far is to create an AWS IAM user (not a role, so you can generate security credentials), set that user up in Atlas, and put the user's security credentials in a .env file in your Node application. This way MongoDB picks up that user automatically, and it works because you are not assuming a role, so the ARN is correct.
This is still not a good solution to me, because I do not want to manage those keys. The best thing you can probably do is store those credentials in AWS Secrets Manager, give the EC2 role permission to retrieve that secret, and have a start-up script on the instance automatically retrieve the secret and create the .env file.
(Still a disappointing solution to me)
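A minimal sketch of that start-up step, assuming the IAM user's keys are stored as a JSON secret (the secret name below is made up) and the EC2 instance role is allowed to read it:
import json
import boto3

SECRET_ID = "mongodb/iam-user-keys"  # placeholder secret name

secrets = boto3.client("secretsmanager")
value = secrets.get_secret_value(SecretId=SECRET_ID)["SecretString"]
keys = json.loads(value)  # e.g. {"AWS_ACCESS_KEY_ID": "...", "AWS_SECRET_ACCESS_KEY": "..."}

# Write the .env file the Node application loads at start-up
with open(".env", "w") as env_file:
    for name, val in keys.items():
        env_file.write(f"{name}={val}\n")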
Edit:
As I self answered in this post: AWS EC2 connection to MongoDB Atlas failing, could not find user
The solution for me was to change the scope of the Atlas user.
You need AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN to connect. You can get/create them from the AWS IAM > Users dashboard. I recommend creating a new secret key just for MongoDB.
After creating it, you can get your connection string from the MongoDB website if you already have a cluster.
Example URL:
mongodb+srv://<AWS access key>:<AWS secret key>@cluster0.zphy22p.mongodb.net/?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority&authMechanismProperties=AWS_SESSION_TOKEN:<session token (for AWS IAM Roles)>
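The question is about a Node app, but for reference the same URI works from Python; here is a sketch with pymongo (needs the aws and srv extras), where the driver falls back to the AWS_* environment variables when the keys are not embedded in the URI (cluster host taken from the example above):
from pymongo import MongoClient  # pip install "pymongo[aws,srv]"

# With MONGODB-AWS, credentials not present in the URI are read from
# AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN.
client = MongoClient(
    "mongodb+srv://cluster0.zphy22p.mongodb.net/"
    "?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority"
)
print(client.admin.command("ping"))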

Added value of using Secret Manager

I have a pretty standard application written in Java which also runs queries against a DB. The application resides on GCP and the DB on Atlas.
For understandable reasons, I don't want to keep the username and password for the DB in the code.
So option number 1 that I had in mind, is to pass the username and password as environment variables to the application container in GCP.
Option number 2 is using Secret Manager in GCP and store my username and password there, and pass the GCP Credentials as an environment variable to the application container in GCP.
My question is: what is the added value of option number 2, if any? It seems that option 2 is even worse from a security standpoint, since if a hacker gets the Google credentials, they have access to all of the secrets stored in Secret Manager.
I don't know what the best practices are or what is advised in such cases. Thank you for your help.
Having credentials in GCP Secret Manager helps you keep track of all the secrets and changes in a centralized location and access them globally from any of your apps.
For a standard application where a single Java app connects to a DB, it may not add much value.
You may look into Kubernetes Secrets for that reason.
If your application resides in GCP, you don't need a service account key file (which is your security concern, and you are right; I wrote an article on this).
TL;DR: use ADC (Application Default Credentials) to automatically pick up the service account credential provided on Google Cloud components (look at the metadata server for more details).
Then grant this component's identity (default or user-defined, where supported), i.e. the service account email, access to your secrets.
And that's all! You have no secrets in your code or your environment variables: no login/password and no service account key file.
If you have difficulties using ADC in Java, don't hesitate to share your code; I will be able to help you achieve this.
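The question is about Java, but as a rough illustration of the same ADC flow, here is a Python sketch (the project and secret names are made up); on GCP the client picks up the attached service account automatically, with no key file anywhere:
from google.cloud import secretmanager  # pip install google-cloud-secret-manager

client = secretmanager.SecretManagerServiceClient()  # uses ADC, no key file
name = "projects/my-project/secrets/db-password/versions/latest"
response = client.access_secret_version(request={"name": name})
db_password = response.payload.data.decode("utf-8")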
To use Secret Manager on Google Cloud you need to install the Secret Manager Java SDK libraries. This documentation shows how to get started with the Cloud Client Libraries for the Secret Manager API; you only need to go to the Java section.
These libraries help you access your secrets so they can be used by your app.
The following link shows how to get details about a secret by viewing its metadata. Keep in mind that viewing a secret's metadata requires the Secret Viewer role (roles/secretmanager.viewer) on the secret, project, folder, or organization.
I recommend creating a dedicated service account with the proper permissions for your app, because if you don't have a service account defined, the default service account is what is going to make the request, and that is not secure. You can learn more about how to create a service account in this link.
On the other hand, the following guide contains a good example of finding your credentials automatically, which is more convenient and secure than passing credentials manually.

Authenticating EC2 on EB for AWS Elastic Search HTTP requests

I'm trying to make HTTP requests from an EC2 instance running Node.js inside elastic beanstalk to AWS Elastic Search (for insertions/index deletions/queries etc.). My issue is with the way AWS handles authentication.
There is no SDK for querying/updating etc. the documents inside elastic search indices. (There is one for managing the domain). Their recommended way to sign the requests is given here. In short, they use the AWS.Signers.V4 class to add the credentials to the HTTP headers, and they require the access key, secret key and session token.
The EC2 instance I'm working with does not store credentials in the environment (a decision not in my hands) or in the credentials file, which is how I was getting the credentials on my machine. It already has the correct role to access the Elasticsearch node; I need the best way to extract the three credentials (access key, secret key and session token), since they are passed as arguments to the addAuthorization method. I tried logging the CredentialProviderChain but none of the providers had any stored credentials. Logging this locally shows both the environment variables and the shared credentials file with the correct credentials, as expected. I was told I should not use the assume role API (which does return the credentials), and it didn't make sense to me either, since I was assuming a role the EC2 already had lol
I came across this method for retrieving instance metadata, including the security credentials. Is this my best option? Or is there an alternative I haven't considered? I'm not too thrilled about this, since I'd have to add some logic to check whether the process is running in the EC2 instance (so I can test locally when it's not), so it's not as clean a solution as I was expecting, and I want to make sure I've explored all possibilities.
P.S. How do AWS SDKs handle authentication? I think I'd have the best chance of getting my changes approved if I use the same approach AWS uses, since elastic search is the only service we have to manually sign requests for. All the others get handled by the SDK.
The easiest approach, and a very good practice, is to use SSM. Systems Manager has a Parameter Store which lets you save encrypted credentials. Then all you need to do is assign an IAM role to the EC2 instance with a policy that allows access to SSM, or just edit the existing role and give it full SSM access to get going, then lock it down to least privilege.
but wouldn't that be an issue when our credentials change often? Wouldn't we have to update those every time our credentials expired?
IAM users have rotating passwords, you need a service account password.
By default the EC2 instance has access to some things, because when you spin one up you have to assign it an IAM role. Also, most EC2 AMIs come with the AWS CLI & SDK installed, so you can fetch SSM Parameter Store values straight away. Here is some Python to demo:
import boto3
from botocore.config import Config

# Proxy settings are only needed if your network requires them
ssm = boto3.client('ssm', region_name='ap-southeast-2', config=Config(proxies={'http': 'proxy:123', 'https': 'proxy:123'}))
key = "thePasswordKeyToGetTheValue"
response = ssm.get_parameter(Name=key, WithDecryption=True)  # decrypt the SecureString parameter
value = response['Parameter']['Value']
The answer was shockingly simple and is actually documented here. The AWS.config object has a getCredentials method that loads the default credentials into AWS.config.credentials, where they can be accessed, inside EC2 as well.
My guess is that it's using the EC2 instance metadata, since that is indeed supposed to contain credentials, but I'm not sure why, when I tried logging the EC2 instance metadata provider in the CredentialProviderChain, I got an empty object, whereas logging the same on my machine showed both the SharedIniFileCredentials and the EnvironmentCredentials as containing the creds.
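For comparison, here is a boto3 sketch of the same default-chain lookup (the answer itself is about the NodeJS SDK): it resolves credentials from environment variables, the shared file, or the instance metadata, and exposes the three values needed for signing.
import boto3

creds = boto3.Session().get_credentials().get_frozen_credentials()
# Access key, secret key and session token for SigV4 signing; don't log these in real code.
print(creds.access_key, creds.secret_key, creds.token)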

How not to hard code Boto3 client credentials in to Python3 / Kivy script

I'm trying to hide the credentials for my boto3 client in a Kivy app (Python 3). boto3 is being used with Secrets Manager, which holds other credentials for RDS DB access.
client = session.client(
    service_name='secretsmanager',
    region_name=region_name,
    aws_access_key_id='AK########.....'
I don't want to hard-code my access key, secret, etc.
I've thought about assigning a specific IAM role to this client, which would in theory give boto3 the role/access required, but I don't know exactly how to go about this.
I also use Cognito for login (auth); I could possibly set up a group which is attached to these users and then get the creds/access to the boto3 client via this (which I think would work).
Is there a better solution to this or is my workflow all wrong?!
Many thanks!
An alternative to hard-coding your access key ID and secret access key would be to use Amazon Cognito Identity Pools, which vend temporary credentials for roles with a set of permissions. I would recommend looking into the GetId and GetCredentialsForIdentity API calls in Boto3.
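A rough sketch of that flow with boto3 (the region, identity pool ID, login provider and token below are placeholders):
import boto3

REGION = "eu-west-1"  # placeholder
IDENTITY_POOL_ID = "eu-west-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # placeholder
LOGIN_PROVIDER = "cognito-idp.eu-west-1.amazonaws.com/<user pool id>"  # placeholder
id_token = "<JWT from the Cognito login>"

cognito = boto3.client("cognito-identity", region_name=REGION)
identity = cognito.get_id(IdentityPoolId=IDENTITY_POOL_ID, Logins={LOGIN_PROVIDER: id_token})
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"], Logins={LOGIN_PROVIDER: id_token}
)["Credentials"]

# Temporary credentials scoped to the identity pool's authenticated role
client = boto3.client(
    "secretsmanager",
    region_name=REGION,
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)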
According to the docs, there are multiple ways to do this: you could read the keys from environment variables, from a file, or from the shared config (the .aws/credentials file, ...).
I would recommend using a secret from a vault to get those keys.
If you're looking for a fast way, using the shared credentials file (item 4) inside the host would not be a big problem (IMO).
If any problem occurs, you just disable those creds and generate new ones.
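In other words, leave the keys out of the code entirely and let boto3's default chain find them (environment variables, ~/.aws/credentials, or an attached role); a minimal sketch:
import boto3

# No keys in the source: boto3 resolves credentials from its default chain.
client = boto3.client("secretsmanager", region_name="eu-west-1")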

AWS: How to get and use credentials for an IAM Role in NodeJS/ExpressJS

For my ExpressJS/NodeJS app server, which uses its own AWS account, I need to access resources in a different AWS account. I have set up an IAM Role that should allow my server to access this other AWS account.
But the AWS documentation on how to get the credentials using this IAM Role is a little thin.
It seems like I might want to use AWS.STS's assumeRole() to get back the credentials. Is that the best way to get credentials for this Role?
And if I use assumeRole(), then once I receive the credentials in its callback, how do I make use of them so that subsequent calls to DynamoDB and S3, for example, will operate on this different AWS account? Would I set the credentials into AWS.config.credentials, for example?
Suggestions and code examples would be most welcome!
Thanks.
-Allan
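For what it's worth, here is a rough boto3 sketch of the flow described in the question (the role ARN, session name and region are placeholders): call AssumeRole, then hand the temporary credentials to the clients that must act in the other account.
import boto3

sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/CrossAccountAppRole",  # placeholder
    RoleSessionName="api-server",
)["Credentials"]

# Clients built with these credentials operate in the other account
dynamodb = boto3.client(
    "dynamodb",
    region_name="us-east-1",
    aws_access_key_id=assumed["AccessKeyId"],
    aws_secret_access_key=assumed["SecretAccessKey"],
    aws_session_token=assumed["SessionToken"],
)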
