I am setting up a deployment for my company's API server, and it fails to connect to MongoDB Atlas via an IAM role.
What I've done:
Set up an IAM role on AWS (APIServer)
Set up a database user on Atlas with the role (arn:aws:iam::< my aws acc id >:role/APIServer)
Configured a launch template and an Auto Scaling group, launching Amazon Linux EC2 instances with the role
Had my Node.js application connect to Atlas with the following settings:
key            value
URL            mongodb+srv://dev.< cluster >.mongodb.net/< DB >
authSource     '$external'
authMechanism  'MONGODB-AWS'
I ended up receiving the following error message:
MongoServerError: bad auth : user arn:aws:sts::<my aws acc id>:assumed-role/APIServer/* is not found
Note: the info enclosed in <> is intentionally replaced; I have already found several answers pointing out problems with <> being part of the password, which is not the case here.
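For reference, here is a minimal Python sketch of the same settings assembled into a connection URI (the host and database names are placeholders; the Node.js driver accepts the identical URI). One detail worth checking: '$external' must be percent-encoded as %24external when it appears in the URI query string.

```python
from urllib.parse import quote_plus

# Hypothetical placeholders; substitute your own cluster host and database name.
host = "dev.example-cluster.mongodb.net"
db = "mydb"

# For role-based MONGODB-AWS auth, no username/password goes in the URI;
# the driver picks up temporary credentials from the EC2 instance metadata.
uri = (
    f"mongodb+srv://{host}/{db}"
    f"?authSource={quote_plus('$external')}"  # '$external' -> '%24external'
    "&authMechanism=MONGODB-AWS"
)
print(uri)
```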
I have the same problem. The only workaround I've found so far is to create an AWS IAM user (not a role, so you can generate security credentials), set that user up on Atlas, and put the user's credentials in a .env file in your Node application. MongoDB then picks up that user automatically, and it works because you are not assuming a role, so the ARN matches.
This is still not a good solution for me, because I do not want to manage those keys. The best you can probably do is store the credentials in AWS Secrets Manager, give the EC2 role permission to retrieve that secret, and have a startup script retrieve the secret and create the .env file.
(Still a disappointing solution to me.)
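A rough Python sketch of that startup-script idea, assuming the secret is stored as a JSON object of key/value pairs (the secret name and path here are illustrative, not from the original post):

```python
import json

def render_env(secret: dict) -> str:
    """Render a dict of key/value pairs as .env file contents."""
    return "\n".join(f"{k}={v}" for k, v in secret.items()) + "\n"

def fetch_and_write_env(secret_id: str, path: str = ".env") -> None:
    """Fetch a JSON secret from Secrets Manager and write it as a .env file."""
    # boto3 is imported here so render_env stays usable without it.
    import boto3
    client = boto3.client("secretsmanager")
    secret = json.loads(client.get_secret_value(SecretId=secret_id)["SecretString"])
    with open(path, "w") as f:
        f.write(render_env(secret))

# Usage on instance startup (hypothetical secret name):
# fetch_and_write_env("prod/api-server/mongo-iam-user")
```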
Edit:
As I self answered in this post: AWS EC2 connection to MongoDB Atlas failing, could not find user
The solution for me was to change the scope of the Atlas user.
You need AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN to connect. You can get/create them from the AWS IAM > Users dashboard. I recommend creating a new secret key just for Mongo.
After creation you can get your connection string from the MongoDB website if you already have a cluster.
Example URL:
mongodb+srv://<AWS access key>:<AWS secret key>@cluster0.zphy22p.mongodb.net/?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority&authMechanismProperties=AWS_SESSION_TOKEN:<session token (for AWS IAM Roles)>
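One gotcha with this URL: AWS secret keys and session tokens routinely contain `/` and `+`, so each credential must be percent-encoded before being placed in the connection string. A small Python sketch with hypothetical credentials and cluster host:

```python
from urllib.parse import quote_plus

# Hypothetical credentials; real ones come from IAM or STS.
access_key = "AKIAEXAMPLE"
secret_key = "abc/def+ghi"
session_token = "token+with/specials"

host = "cluster0.example.mongodb.net"
uri = (
    f"mongodb+srv://{quote_plus(access_key)}:{quote_plus(secret_key)}@{host}/"
    "?authSource=%24external&authMechanism=MONGODB-AWS"
    "&retryWrites=true&w=majority"
    f"&authMechanismProperties=AWS_SESSION_TOKEN:{quote_plus(session_token)}"
)
print(uri)
```

Without the encoding, a `/` inside the secret key would be parsed as a path separator and authentication would fail.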
Related
I have an Elasticsearch domain placed inside a VPC. I can connect to the domain from the VPC with no issues, but for more security I want authentication to be username/password based. I am using elasticsearch-dsl to make the connection. Any idea how to set up a username- and password-based connection to the domain?
I tried updating the domain config in order to set MasterUser and MasterPassword (not sure if this is the right process):
aws es update-elasticsearch-domain-config --domain-name test-domain --advanced-security-options Enabled=true,InternalUserDatabaseEnabled=true
I get this error:
An error occurred (BaseException) when calling the UpdateElasticsearchDomainConfig operation: You don't have permissions to enable Advanced Security options.
Is this the right thing to do? If not, How can we enable password based authentication?
An error occurred (BaseException) when calling the UpdateElasticsearchDomainConfig operation: You don't have permissions to enable Advanced Security options.
The above error indicates that you don't have permission to update the advanced security configuration. You need to use the master user credentials when calling the update API.
Refer to the documentation here for more info on the master user:
https://docs.aws.amazon.com/opensearch-service/latest/developerguide/fgac.html
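For illustration, the same update can be issued from boto3 once you are calling with suitable credentials. A hedged sketch; the domain name, username and password below are placeholders, not values from the original question:

```python
def advanced_security_options(user: str, password: str) -> dict:
    """Build the AdvancedSecurityOptions payload for an internal master user."""
    return {
        "Enabled": True,
        "InternalUserDatabaseEnabled": True,
        "MasterUserOptions": {
            "MasterUserName": user,
            "MasterUserPassword": password,
        },
    }

def enable_internal_user_database(domain_name: str, user: str, password: str):
    # boto3 is imported here so the payload helper stays usable without it.
    import boto3
    es = boto3.client("es")
    return es.update_elasticsearch_domain_config(
        DomainName=domain_name,
        AdvancedSecurityOptions=advanced_security_options(user, password),
    )

# Usage (hypothetical values):
# enable_internal_user_database("test-domain", "admin", "Str0ngPass!")
```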
I'm trying to make HTTP requests from an EC2 instance running Node.js inside Elastic Beanstalk to AWS Elasticsearch (for insertions, index deletions, queries, etc.). My issue is with the way AWS handles authentication.
There is no SDK for querying/updating the documents inside Elasticsearch indices (there is one for managing the domain). Their recommended way to sign the requests is given here. In short, they use the AWS.Signers.V4 class to add the credentials to the HTTP headers, and it requires the access key, secret key and session token.
The EC2 instance I'm working with does not store credentials in the environment (a decision not in my hands) or in the shared credentials file, which is how I was getting the credentials on my machine. It already has the correct role to access the Elasticsearch node; I need the best way to extract the three credentials (access key, secret key and session token), since they are passed as an argument to the addAuthorization method. I tried logging the CredentialProviderChain, but none of the providers had any stored credentials. Logging the same thing locally shows both the environment variables and the shared credentials file with the correct credentials, as expected. I was told I should not use the assume-role API (which does return the credentials), and that didn't make sense to me anyway, since I'd be assuming a role the EC2 instance already had.
I came across this method for retrieving instance metadata, including the security credentials. Is this my best option? Or is there an alternative I haven't considered? I'm not thrilled about it, since I'd have to add logic to check whether the process is running on the EC2 instance (so I can still test locally when it's not). It's not as clean a solution as I was expecting, and I want to make sure I've explored all the possibilities.
P.S. How do the AWS SDKs handle authentication? I think I'd have the best chance of getting my changes approved if I use the same approach AWS uses, since Elasticsearch is the only service we have to manually sign requests for; all the others are handled by the SDK.
The easiest approach, and a very good practice, is to use SSM. Systems Manager has a Parameter Store that lets you save encrypted credentials. Then all you need to do is assign an IAM role to the EC2 instance with a policy to access SSM, or just edit the existing role: give it full SSM access to get going, then lock it down to least privilege.
but wouldn't that be an issue when our credentials change often? Wouldn't we have to update those every time our credentials expired?
IAM users have rotating passwords; you need a service-account password.
By default the EC2 instance has access to some things, because when you spin one up you assign it an IAM role. Also, most EC2 AMIs come with the AWS CLI and SDK installed, so you can fetch SSM Parameter Store values straight away. Here is some Python to demo:
import boto3
from botocore.config import Config

# Proxy config is optional; shown here for hosts behind a corporate proxy.
ssm = boto3.client('ssm', region_name='ap-southeast-2',
                   config=Config(proxies={'http': 'proxy:123', 'https': 'proxy:123'}))
key = "thePasswordKeyToGetTheValue"
response = ssm.get_parameter(Name=key, WithDecryption=True)
value = response['Parameter']['Value']
The answer was shockingly simple and is actually documented here. The AWS.config object has a getCredentials method that loads the default credentials into AWS.config.credentials and can be accessed from there, inside EC2 as well.
My guess is that it uses the EC2 instance metadata, since that is indeed supposed to contain credentials. I'm still not sure why, when I logged the EC2 instance metadata provider in the CredentialProviderChain, I got an empty object, whereas logging the same on my machine showed both the SharedIniFileCredentials and the environment credentials as containing the creds.
I'm trying to hide the credentials to my boto3 client inside a Kivy app (Python 3). boto3 is being used with Secrets Manager to hold other credentials for RDS database access.
client = session.client(
    service_name='secretsmanager',
    region_name=region_name,
    aws_access_key_id='AK########.....'
)
I don't want to hard-code my access key, secret, etc.
I've thought about assigning a specific IAM role to this client, which in theory would give boto3 the role/access required, but I don't know exactly how to go about this.
I also use Cognito for login (auth); I could possibly set up a group attached to these users and then get the creds/access for the boto3 client that way (which I think would work).
Is there a better solution to this or is my workflow all wrong?!
Many thanks!
An alternative to hard-coding your access key ID and secret access key would be to use Amazon Cognito identity pools, which vend temporary credentials for IAM roles with a set of permissions. I would recommend looking into the GetId and GetCredentialsForIdentity API calls in boto3.
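A rough boto3 sketch of that flow; the identity pool ID, region and token are placeholders. The Logins key for a Cognito user pool follows the cognito-idp provider-name format:

```python
def cognito_login_key(region: str, user_pool_id: str) -> str:
    """Provider name used as the key in the Logins map for a Cognito user pool."""
    return f"cognito-idp.{region}.amazonaws.com/{user_pool_id}"

def get_aws_credentials(identity_pool_id: str, region: str, logins: dict) -> dict:
    """Exchange a user-pool login for temporary AWS credentials via an identity pool."""
    # boto3 is imported here so cognito_login_key stays usable without it.
    import boto3
    ci = boto3.client("cognito-identity", region_name=region)
    identity_id = ci.get_id(IdentityPoolId=identity_pool_id, Logins=logins)["IdentityId"]
    resp = ci.get_credentials_for_identity(IdentityId=identity_id, Logins=logins)
    return resp["Credentials"]  # AccessKeyId, SecretKey, SessionToken, Expiration

# Usage (all values hypothetical):
# logins = {cognito_login_key("us-east-1", "us-east-1_AbC123"): id_token_from_login}
# creds = get_aws_credentials("us-east-1:1234abcd-...", "us-east-1", logins)
```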
According to the docs, there are multiple ways to do so: you could read from environment variables, a file, or the shared credentials file (the .aws/credentials file...).
I would recommend using a secret from a vault to get those keys.
If you're looking for a fast way, using the shared creds (item 4) inside the host would not be a big problem (IMO).
If any problem occurs, you just disable those creds and generate a new one.
After playing around with the nice browser GUIs of developer.amazon.com and aws.amazon.com, things are getting serious, and now I want to use ask-cli to initialize Alexa skills and their Lambda functions.
When I run ask init, it tells me I should select a profile or create a new one. Both options jump to the browser and use OAuth to authenticate my ask installation.
~ ask init
? Please create a new profile or overwrite the existing profile.
(Use arrow keys)
──────────────
❯ Create new profile
──────────────
Profile Associated AWS Profile
[default] ** NULL **
[aws_profile] ** NULL **
But the ASK profile will not associate with my AWS profile.
My Lambda function will not load/deploy if I don't connect the profile with AWS:
No AWS credential setup for profile: [default].
Lambda clone skipped. CLI lambda functionalities can be enabled
by running `ask init` again to add 'aws_profile' to ASK cli_config
How could I connect my ask-cli correctly?
You need to first download and install the AWS CLI on your local machine. You can download it from the link below.
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
If you are working with your root account user, then you have to generate your access key ID and secret key; otherwise, create a new IAM user with Lambda execute permission.
You can generate the root user access key ID and secret key from the link below.
https://console.aws.amazon.com/iam/home
Then click Manage Security Credentials, ignore the warnings, proceed to the Access Keys section, generate a new access key ID and secret key, and copy them.
Run the aws configure command to configure your AWS account; it will ask for the access key ID and secret key you generated and set up your account.
aws configure
After aws configure, you can run the ask init command again to configure your account.
ask init
Well, you don't need to set up the AWS CLI when you already have ask-cli installed.
Instead, you can run the command below to set up AWS credentials and link them to an ASK profile, if the AWS credentials are not already set up.
ask init --aws-setup
Then you will be prompted to enter your profile name, your access key, and your secret access key. You can use the profile name default if you have not created multiple ASK profiles. This will automatically create an AWS credentials file at %USERPROFILE%\.aws\credentials. Now every time you try to deploy/access Lambda code through ask-cli, it will read the credentials from this file.
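The generated file follows the standard shared-credentials format, with your real keys in place of the placeholders:

```ini
[default]
aws_access_key_id = <your access key ID>
aws_secret_access_key = <your secret access key>
```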
For my ExpressJS/NodeJS app server, which uses its own AWS account, I need to access resources in a different AWS account. I have set up an IAM Role that should allow my server to access this other AWS account.
But the AWS documentation on how to get the credentials using this IAM Role are a little thin.
It seems like I might want to use AWS.STS's assumeRole() to get back the credentials. Is that the best way to get credentials for this Role?
And if I use assumeRole(), then once I receive the credentials in its callback, how do I make use of them so that subsequent calls to DynamoDB and S3, for example, will operate on this different AWS account? Would I set the credentials into AWS.config.credentials, for example?
Suggestions and code examples would be most welcome!
Thanks.
-Allan
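A sketch of the assumeRole() flow described above, written with boto3 for brevity (the JS SDK's AWS.STS.assumeRole() mirrors these calls; the role ARN and session name below are placeholders). The temporary credentials returned by STS are used to build a new session, and any clients created from that session operate on the other account:

```python
def session_kwargs_from_sts(creds: dict) -> dict:
    """Map the Credentials dict in an STS AssumeRole response onto Session kwargs."""
    return {
        "aws_access_key_id": creds["AccessKeyId"],
        "aws_secret_access_key": creds["SecretAccessKey"],
        "aws_session_token": creds["SessionToken"],
    }

def cross_account_session(role_arn: str):
    """Assume a role in another account and return a session scoped to it."""
    # boto3 is imported here so the mapper above stays usable without it.
    import boto3
    sts = boto3.client("sts")
    resp = sts.assume_role(RoleArn=role_arn, RoleSessionName="cross-account")
    return boto3.Session(**session_kwargs_from_sts(resp["Credentials"]))

# Usage (hypothetical role ARN); clients from this session hit the other account:
# other = cross_account_session("arn:aws:iam::123456789012:role/OtherAccountRole")
# dynamodb = other.client("dynamodb")
```

Note that the STS credentials expire, so long-running servers need to refresh them before the expiry returned in the response.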