I'm trying to make HTTP requests from an EC2 instance running Node.js inside Elastic Beanstalk to the AWS Elasticsearch Service (for insertions, index deletions, queries, etc.). My issue is with the way AWS handles authentication.
There is no SDK for querying/updating documents inside Elasticsearch indices (there is one for managing the domain itself). Their recommended way to sign the requests is given here. In short, you use the AWS.Signers.V4 class to add the credentials to the HTTP headers, and it requires the access key, secret key, and session token.
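For reference, the pattern from that documentation looks roughly like this (the domain, region, and index are placeholders, and credentials is exactly the object I can't get hold of):

const AWS = require('aws-sdk');

const endpoint = new AWS.Endpoint('search-my-domain.us-east-1.es.amazonaws.com');
const request = new AWS.HttpRequest(endpoint, 'us-east-1');
request.method = 'GET';
request.path = '/my-index/_search';
request.headers['Host'] = endpoint.host;

// 'es' is the SigV4 service name; addAuthorization needs an object holding
// the access key, secret key, and session token.
const signer = new AWS.Signers.V4(request, 'es');
signer.addAuthorization(credentials, new Date());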
The EC2 instance I'm working with does not store credentials in the environment (a decision not in my hands) or in the shared credentials file, which is how I was getting the credentials on my machine. It already has the correct role to access the Elasticsearch node; I need the best way to extract the three credentials (access key, secret key, and session token), since they are passed as an argument to the addAuthorization method. I tried logging the CredentialProviderChain, but none of the providers had any stored credentials. Logging the same thing locally shows both the environment variables and the shared credentials file with the correct credentials, as expected. I was told not to use the AssumeRole API (which does return the credentials), and it didn't make sense to me anyway, since I would be assuming a role the EC2 instance already has.
I came across this method for retrieving instance metadata, including the security credentials. Is this my best option, or is there an alternative I haven't considered? I'm not thrilled about it, since I'd have to add logic to check whether the process is running on the EC2 instance (so I can still test locally when it's not). It's not as clean a solution as I was expecting, and I want to make sure I've explored all the possibilities.
P.S. How do the AWS SDKs handle authentication? I think I'd have the best chance of getting my changes approved if I use the same approach AWS uses, since Elasticsearch is the only service we have to manually sign requests for; all the others are handled by the SDK.
The easiest approach, and a very good practice, is to use SSM. Systems Manager has a Parameter Store that lets you save encrypted credentials. Then all you need to do is assign an IAM role to the EC2 instance with a policy that allows access to SSM, or just edit the existing role: give it full SSM access to get things going, then lock it down to least privilege.
But wouldn't that be an issue when our credentials change often? Wouldn't we have to update those parameters every time our credentials expired?
IAM users have rotating passwords; you need a service-account password.
By default the EC2 instance has access to some things, because when you spin one up you have to assign it an IAM role. Also, most EC2 AMIs come with the AWS CLI and SDK installed, so you can fetch SSM Parameter Store values straight away. Here is some Python to demo:
import boto3
from botocore.config import Config

# Proxy settings are only needed if the instance reaches AWS through an HTTP proxy.
ssm = boto3.client('ssm', region_name='ap-southeast-2',
                   config=Config(proxies={'http': 'proxy:123', 'https': 'proxy:123'}))
key = "thePasswordKeyToGetTheValue"
response = ssm.get_parameter(Name=key, WithDecryption=True)  # decrypts SecureString values
value = response['Parameter']['Value']
The answer was shockingly simple and is actually documented here. The AWS.config object has a getCredentials method that loads the default credentials into AWS.config.credentials, where they can be accessed, inside EC2 as well.
My guess was that it uses the EC2 instance metadata, since that is indeed supposed to contain credentials. I'm still not sure why logging the EC2 instance metadata provider in the CredentialProviderChain gave me an empty object, whereas logging the same thing on my machine showed both the SharedIniFileCredentials and the EnvironmentCredentials as containing the creds.
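A minimal sketch with the v2 aws-sdk package:

const AWS = require('aws-sdk');

// getCredentials resolves the default provider chain: environment
// variables, the shared credentials file, and (on EC2) the instance
// metadata service.
AWS.config.getCredentials((err) => {
  if (err) throw err;
  const { accessKeyId, secretAccessKey, sessionToken } = AWS.config.credentials;
  // these three can now be passed to signer.addAuthorization(...)
});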
I am setting up a deployment for my company's API server, and it fails to connect to MongoDB Atlas via an IAM role.
What I've done:
Set up an IAM role on AWS (APIServer)
Set up a Database User on Atlas with the role (arn:aws:iam::< my aws acc id >:role/APIServer)
Configure a launch template and an auto scaling group, launching Amazon Linux EC2 instances with the role
Have my NodeJS application connect to Atlas with the following settings:
key             value
URL             mongodb+srv://dev.< cluster >.mongodb.net/< DB >
authSource      '$external'
authMechanism   'MONGODB-AWS'
I ended up receiving the following error message:
MongoServerError: bad auth : user arn:aws:sts::<my aws acc id>:assumed-role/APIServer/* is not found
Note: the info enclosed in <> is intentionally redacted. I have already found several solutions pointing out problems with having <> as part of the password, which is not the case here.
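For completeness, the connection is made roughly like this (Node driver v4+; placeholders in <> as above). Note there is no username/password in the URI, so the driver resolves the instance role's credentials itself:

const { MongoClient } = require('mongodb');

// With MONGODB-AWS and no credentials in the URI, the driver reads them
// from environment variables or the EC2 instance metadata service.
const uri = 'mongodb+srv://dev.<cluster>.mongodb.net/<DB>'
  + '?authSource=%24external&authMechanism=MONGODB-AWS';

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  await client.close();
}

main().catch(console.error);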
I have the same problem. The only solution I have found so far is to create an AWS IAM user (not a role, so you can generate security credentials), set that user up in Atlas, and put the user's security credentials in a .env file in your Node application. MongoDB will pick up that user automatically, and it works because you are not assuming a role, so the ARN is correct.
This is still not a good solution to me, because I do not want to manage those keys. The best thing you can probably do is store those credentials in AWS Secrets Manager, give the EC2 role permission to retrieve that secret, and have a startup script retrieve the secret and create the .env file when the instance boots.
(Still a disappointing solution to me)
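A sketch of that startup step, assuming a secret named atlas/iam-user-keys that stores the key pair as JSON (the secret name and field names are made up):

const AWS = require('aws-sdk');
const fs = require('fs');

const sm = new AWS.SecretsManager({ region: 'us-east-1' });

// The EC2 role only needs secretsmanager:GetSecretValue on this one secret.
sm.getSecretValue({ SecretId: 'atlas/iam-user-keys' }, (err, data) => {
  if (err) throw err;
  const { accessKeyId, secretAccessKey } = JSON.parse(data.SecretString);
  fs.writeFileSync('.env',
    `AWS_ACCESS_KEY_ID=${accessKeyId}\nAWS_SECRET_ACCESS_KEY=${secretAccessKey}\n`);
});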
Edit:
As I self-answered in this post: AWS EC2 connection to MongoDB Atlas failing, could not find user
The solution for me was to change the scope of the Atlas user.
You need AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN to connect. You can get/create them from the AWS IAM > Users dashboard. I recommend creating a new access key just for Mongo.
After creating them, you can get your connection string from the MongoDB website if you already have a cluster.
Example URL:
mongodb+srv://<AWS access key>:<AWS secret key>@cluster0.zphy22p.mongodb.net/?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority&authMechanismProperties=AWS_SESSION_TOKEN:<session token (for AWS IAM roles)>
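Note that access keys, secrets, and session tokens often contain characters like /, + or =, so they must be percent-encoded before being embedded in the URI, e.g.:

// Assuming the three values are already in the standard environment variables.
const accessKeyId = process.env.AWS_ACCESS_KEY_ID;
const secretAccessKey = process.env.AWS_SECRET_ACCESS_KEY;
const sessionToken = process.env.AWS_SESSION_TOKEN;

const uri = `mongodb+srv://${encodeURIComponent(accessKeyId)}`
  + `:${encodeURIComponent(secretAccessKey)}@cluster0.zphy22p.mongodb.net/`
  + `?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority`
  + `&authMechanismProperties=AWS_SESSION_TOKEN:${encodeURIComponent(sessionToken)}`;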
I have a pretty standard application written in Java which also runs queries against a DB. The application resides on GCP and the DB on Atlas.
For understandable reasons, I don't want to keep the username and password for the DB in the code.
So option number 1 that I had in mind is to pass the username and password as environment variables to the application container in GCP.
Option number 2 is to use Secret Manager in GCP, store my username and password there, and pass the GCP credentials as an environment variable to the application container.
My question is: what is the added value of option number 2, if any? Option 2 even seems worse from a security standpoint, since if a hacker gets the Google credentials, they have access to all of the secrets stored in Secret Manager.
I don't know what the best practices are or what is advised in such cases. Thank you for your help.
Having credentials in GCP Secret Manager helps you keep track of all the secrets and their changes in a centralized location and access them globally from any of your apps.
For a standard application where one Java service connects to a DB, it may not add much value.
You may look into Kubernetes Secrets for that use case.
If your application resides in GCP, you don't need a service account key file (which is your security concern, and you are right; I wrote an article on this).
TL;DR: use ADC (Application Default Credentials) to automatically pick up the service account credential provided on Google Cloud components (look at the metadata server for more details).
Then grant this component's identity (the default one, or a user-defined one where supported), i.e. the service account email, access to your secrets.
And that's all! You have no secrets in your code or your environment variables: neither the login/password nor the service account key file.
If you have difficulty using ADC in Java, don't hesitate to share your code; I will be able to help you achieve this.
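A minimal sketch of ADC in action (shown in Node.js to match the other examples on this page; the Java client libraries resolve credentials the same way):

const { GoogleAuth } = require('google-auth-library');

// On Google Cloud, ADC finds the attached service account through the
// metadata server; locally it falls back to gcloud's application-default
// credentials. No key file is involved either way.
const auth = new GoogleAuth({
  scopes: ['https://www.googleapis.com/auth/cloud-platform'],
});

async function main() {
  const client = await auth.getClient();
  const projectId = await auth.getProjectId();
  console.log('authenticated against project', projectId);
}

main().catch(console.error);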
To use Secret Manager on Google Cloud you need to install the Secret Manager client libraries. This documentation shows how to get started with the Cloud Client Libraries for the Secret Manager API; you only need to go to the Java section.
These libraries help you access your keys so they can be used by your app.
The following link shows how to get details about a secret by viewing its metadata. Keep in mind that viewing a secret's metadata requires the Secret Viewer role (roles/secretmanager.viewer) on the secret, project, folder, or organization.
I recommend you create a dedicated service account to hold the proper permissions for your app, because if you don't have one defined, the default service account is what will make the request, and that is not secure. You can learn more about how to create a service account at this link.
On the other hand, the following guide contains a good example of finding your credentials automatically, which is more convenient and secure than passing credentials manually.
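For example, reading a secret through a client library relying on ADC (Node.js client shown for consistency with the rest of this page; the project and secret names are placeholders):

const { SecretManagerServiceClient } = require('@google-cloud/secret-manager');

const client = new SecretManagerServiceClient(); // picks up ADC automatically

async function getDbPassword() {
  const [version] = await client.accessSecretVersion({
    name: 'projects/my-project/secrets/db-password/versions/latest',
  });
  return version.payload.data.toString('utf8');
}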
I'm trying to hide the credentials to my boto3 client, which sits in a Kivy app (Python 3). boto3 is being used with Secrets Manager, which holds other credentials for RDS DB access.
import boto3

session = boto3.session.Session()
client = session.client(
    service_name='secretsmanager',
    region_name=region_name,
    aws_access_key_id='AK########.....',  # hard-coded credentials (truncated here)
)
I don't want to hard-code my access key, secret, etc.
I've thought about assigning a specific IAM role to this client, which would in theory give boto3 the role/access required, but I don't know exactly how to go about it.
I also use Cognito for login (auth); I could possibly set up a group attached to these users and then get the creds/access for the boto3 client via that (which I think would work).
Is there a better solution to this or is my workflow all wrong?!
Many thanks!
An alternative to hard-coding your access key ID and secret access key would be to use Amazon Cognito identity pools, which vend temporary credentials for roles with a set of permissions. I would recommend you look into the GetId and GetCredentialsForIdentity API calls in Boto3.
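Sketched with the JavaScript SDK for consistency with the other examples here (Boto3's get_id and get_credentials_for_identity take the same parameters; the identity pool ID is a placeholder):

const AWS = require('aws-sdk');

// Both calls are unsigned, so no credentials have to ship with the app.
const ci = new AWS.CognitoIdentity({ region: 'us-east-1' });

ci.getId({ IdentityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000' }, (err, idData) => {
  if (err) throw err;
  ci.getCredentialsForIdentity({ IdentityId: idData.IdentityId }, (err2, data) => {
    if (err2) throw err2;
    // data.Credentials holds AccessKeyId, SecretKey, SessionToken, Expiration
  });
});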
According to the docs, there are multiple ways to provide credentials: you can read them from environment variables, from a file, or from the shared config (the .aws/credentials file), among others.
I would recommend using a secret from a vault to get those keys.
If you're looking for a fast way, using the shared credentials file (item 4 in that list) on the host would not be a big problem (IMO).
If any problem occurs, you just disable those creds and generate new ones.
For my ExpressJS/NodeJS app server, which uses its own AWS account, I need to access resources in a different AWS account. I have set up an IAM Role that should allow my server to access this other AWS account.
But the AWS documentation on how to get credentials using this IAM role is a little thin.
It seems like I might want to use AWS.STS's assumeRole() to get back the credentials. Is that the best way to get credentials for this Role?
And if I use assumeRole(), once I receive the credentials in its callback, how do I make use of them so that subsequent calls to DynamoDB and S3, for example, operate on this different AWS account? Would I set the credentials on AWS.config.credentials, for example?
Suggestions and code examples would be most welcome!
Thanks.
-Allan
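A minimal sketch of that flow (the role ARN and session name are placeholders); building clients with the returned credentials keeps the rest of the app on the default chain, rather than mutating the global AWS.config:

const AWS = require('aws-sdk');

const sts = new AWS.STS();
sts.assumeRole({
  RoleArn: 'arn:aws:iam::123456789012:role/CrossAccountRole',
  RoleSessionName: 'express-server',
}, (err, data) => {
  if (err) throw err;
  const creds = new AWS.Credentials(
    data.Credentials.AccessKeyId,
    data.Credentials.SecretAccessKey,
    data.Credentials.SessionToken
  );
  // Calls made with this client hit the other account; everything else
  // keeps using the default credentials.
  const dynamodb = new AWS.DynamoDB({ credentials: creds });
});

The v2 SDK can also refresh the assumed-role credentials for you via AWS.ChainableTemporaryCredentials({ params: { RoleArn: '...' } }), which saves re-calling assumeRole when the session expires.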
We have a Java product that may put and get data from Amazon S3. We have already successfully implemented a class responsible for that using the AWS SDK for Java.
But, as far as I know, to interact with Amazon S3 through the library you have to instantiate an AmazonS3Client object by providing an access key and a secret key.
Even if this technique can be made relatively safe by using Amazon IAM to restrict the keys' access to the specific S3 bucket you want to pull and put data from, it is still a security hole in the sense that someone could decompile your application, extract the keys, and use them to access your bucket's data.
Is there a safer way to do this? Can we avoid embedding the AWS credentials in the application? Can we make a REST call to our server to sign the requests (and keep the keys secret)?
Take a look at the AWS Security Token Service (JavaDoc) as well as the Token Vending Machine to see if these help resolve your issue.
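The token-vending idea, roughly (an Express sketch for illustration; the route, bucket name, and policy are all invented): the server keeps the long-lived keys and hands clients short-lived, bucket-scoped credentials instead.

const express = require('express');
const AWS = require('aws-sdk');

const app = express();
const sts = new AWS.STS();

app.get('/s3-credentials', (req, res) => {
  sts.getFederationToken({
    Name: 'app-client',
    DurationSeconds: 3600,
    Policy: JSON.stringify({
      Version: '2012-10-17',
      Statement: [{
        Effect: 'Allow',
        Action: ['s3:GetObject', 's3:PutObject'],
        Resource: 'arn:aws:s3:::my-app-bucket/*',
      }],
    }),
  }, (err, data) => {
    if (err) return res.status(500).send(err.message);
    res.json(data.Credentials); // AccessKeyId, SecretAccessKey, SessionToken
  });
});

app.listen(3000);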