We have a Java product which puts and gets data from Amazon S3. We already successfully implemented a class responsible for that by using the AWS SDK for Java.
But, as far as I know, to interact with Amazon S3 through the library, you have to instantiate an AmazonS3Client object by providing an access key and a secret key.
Even though this technique can be made relatively safe by using Amazon IAM to restrict the keys' access to the specific S3 bucket you want to pull and put data from, it is still a security hole in the sense that someone could decompile your application, extract the keys, and use them to access your bucket's data.
Is there a safer way to do this? Can we avoid embedding the AWS credentials in the application? Can we make a REST call to our server to sign the requests (and keep the keys secret)?
Take a look at the AWS Security Token Service (JavaDoc) as well as the Token Vending Machine to see whether they help resolve your issue.
Related
We're running a server on AWS that will be using a few constants. These constants may include confidential details such as API tokens, client secrets, and even DB credentials. We have been saving these details in a file on the server itself (say Credentials.js). So:
What is the best way to store these credentials securely?
We were also planning to switch to the AWS SSM Parameter Store. Is it worth considering? It also provides KMS encryption for confidential parameters.
Even if we do switch to the AWS SSM Parameter Store, we will have to call it multiple times whenever we make requests to third-party application servers (as we'll need the API tokens for those apps). Does this justify the cost we'll pay for SSM (considering we take the standard store with high throughput)?
Also, please let me know if there are alternatives for storing these parameters securely.
Thanks.
Secrets Manager
Secrets Manager enables you to replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically. This helps ensure the secret can't be compromised by someone examining your code, because the secret no longer exists in the code. Also, you can configure Secrets Manager to automatically rotate the secret for you according to a specified schedule. This enables you to replace long-term secrets with short-term ones, significantly reducing the risk of compromise.
For an overview of how it works, see AWS Secrets Manager: Store, Distribute, and Rotate Credentials Securely.
Cost
See Pricing: $0.40 per secret per month and $0.05 per 10,000 API calls.
Documents
Tutorials - Start here to get the ideas
secrets_getsecretvalue.js - Example to get secrets in JS
JS SDK for Secret Manager - Look further here to know the JS SDK
CreateSecret API - AWS API to create a secret, for the detailed reference
Create a secret via the AWS console or using the SDK. See Creating a secret. A secret holds key/value pairs stored as a JSON string.
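To show the shape of what comes back, here is a minimal sketch of parsing a GetSecretValue response. The secret name "prod/db" and the stand-in response dict are hypothetical; in real code the response would come from a live Secrets Manager client call (shown in the comments).

```python
import json

def parse_secret(response):
    """Extract the key/value pairs from a GetSecretValue response.

    `response` is the dict shape returned by get_secret_value();
    the secret value itself is a JSON string under 'SecretString'.
    """
    return json.loads(response["SecretString"])

# In real code the response would come from:
#   client = boto3.client("secretsmanager")
#   response = client.get_secret_value(SecretId="prod/db")
# Here a stand-in response of the same shape is used:
fake_response = {
    "Name": "prod/db",
    "SecretString": json.dumps({"username": "admin", "password": "s3cr3t"}),
}
creds = parse_secret(fake_response)
print(creds["username"])  # -> admin
```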
Alternatives
Hashicorp Vault
Static Secrets: Key/Value Secrets Engine
Vault JS client
Lambda
Use a Lambda that only accepts access from callers with a specific IAM role/permission attached to the instance profile of the EC2 instance that runs your app.
Others
Just Googling "parameter store for secret management" shows a bunch of articles and how-tos. Please do the research first.
I'm trying to make HTTP requests from an EC2 instance running Node.js inside Elastic Beanstalk to the AWS Elasticsearch Service (for insertions, index deletions, queries, etc.). My issue is with the way AWS handles authentication.
There is no SDK for querying/updating the documents inside Elasticsearch indices (there is one for managing the domain). Their recommended way to sign the requests is given here. In short, they use the AWS.Signers.V4 class to add the credentials to the HTTP headers, and it requires the access key, secret key, and session token.
The EC2 instance I'm working with does not store credentials in the environment (a decision not in my hands) or in a credentials file, which is how I was getting the credentials on my machine. It already has the correct role to access the Elasticsearch node; I need the best way to extract the three credentials (access key, secret key, and session token), since they are passed as an argument to the addAuthorization method. I tried logging the CredentialProviderChain, but none of the providers had any stored credentials. Logging this locally shows both the environment variables and the shared credentials file with the correct credentials, as expected. I was told I should not use the AssumeRole API (which does return the credentials), and it didn't make sense to me either, since I would be assuming a role the EC2 instance already had.
I came across this method for retrieving instance metadata, including the security credentials. Is this my best option, or is there an alternative I haven't considered? I'm not thrilled about it, since I'd have to add logic to check whether the process is running on the EC2 instance (so I can still test locally when it's not). It's not as clean a solution as I was expecting, and I want to make sure I've explored all the possibilities.
P.S. How do AWS SDKs handle authentication? I think I'd have the best chance of getting my changes approved if I use the same approach AWS uses, since elastic search is the only service we have to manually sign requests for. All the others get handled by the SDK.
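For intuition about what AWS.Signers.V4 does with those three credentials, here is a minimal sketch (in Python, using only the standard library) of the key-derivation step of Signature Version 4: the secret key is never sent; it is run through a fixed chain of HMAC-SHA256 operations to produce a per-date, per-region, per-service signing key. The key and parameter values below are illustrative placeholders, and a real signer would go on to sign the canonical request with this key and put the session token in the X-Amz-Security-Token header.

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: a fixed chain of HMAC-SHA256 steps."""
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)   # date is YYYYMMDD
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Placeholder inputs; the derived key is what actually signs the request.
key = sigv4_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                        "20120215", "us-east-1", "es")
print(len(key))  # 32-byte HMAC-SHA256 output
```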
The easiest approach, and a very good practice, is to use SSM. Systems Manager has a Parameter Store that lets you save encrypted credentials. Then all you need to do is assign an IAM role to the EC2 instance with a policy that allows access to SSM, or just edit the existing role: give it full SSM access to get going, then lock it down to least privilege.
But wouldn't that be an issue when our credentials change often? Wouldn't we have to update those parameters every time our credentials expired?
IAM users have rotating passwords; you need a service-account password.
By default the EC2 instance has access to some things, because when you spin one up you have to assign it an IAM role. Also, most EC2 AMIs come with the AWS CLI and SDK installed, so you can fetch SSM Parameter Store values straight away. Here is some Python to demo:
import boto3
from botocore.config import Config

# The proxy endpoints below are placeholders; drop the config argument
# if the instance reaches SSM directly or via a VPC endpoint.
ssm = boto3.client('ssm', region_name='ap-southeast-2',
                   config=Config(proxies={'http': 'proxy:123', 'https': 'proxy:123'}))
key = "thePasswordKeyToGetTheValue"
response = ssm.get_parameter(Name=key, WithDecryption=True)  # decrypts SecureString values
value = response['Parameter']['Value']
The answer was shockingly simple and is actually documented here. The AWS.config object has a getCredentials method that loads the default credentials into AWS.config.credentials, where they can be accessed, inside EC2 as well.
My guess is that it uses the EC2 instance metadata, since that is indeed supposed to contain credentials. I'm not sure, though, why logging the EC2 instance-metadata provider in the CredentialProviderChain gave me an empty object, whereas logging the same on my machine showed both the SharedIniFileCredentials and the environment credentials as containing the creds.
I'm trying to hide the credentials for my boto3 client, which lives in a Kivy app (Python 3). boto3 is being used with Secrets Manager to hold other credentials for RDS DB access.
import boto3

session = boto3.session.Session()
client = session.client(
    service_name='secretsmanager',
    region_name=region_name,
    aws_access_key_id='AK########.....',
    aws_secret_access_key='...'
)
I don't want to hard-code my access key, secret key, etc.
I've thought about assigning a specific IAM role to this client, which would in theory give me the role/access that boto3 needs, but I don't know exactly how to go about it.
I also use Cognito for login (auth); I could possibly set up a group attached to these users and then get the creds/access for the boto3 client via that (which I think would work).
Is there a better solution to this or is my workflow all wrong?!
Many thanks!
An alternative to hard-coding your access key ID and secret access key would be to use Amazon Cognito Identity Pools, which vend temporary credentials for roles with a scoped set of permissions. I would recommend looking into the GetId and GetCredentialsForIdentity API calls in boto3.
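As a rough sketch of that flow, assuming an identity pool already exists: GetId resolves an identity ID, and GetCredentialsForIdentity returns temporary keys under a Credentials map. The pool ID and response values below are stand-ins of the documented response shape, so the extraction logic can be shown without live AWS calls (the real boto3 calls are in the comments).

```python
def extract_temporary_credentials(response):
    """Pull the temporary keys out of a GetCredentialsForIdentity response."""
    creds = response["Credentials"]
    return {
        "aws_access_key_id": creds["AccessKeyId"],
        "aws_secret_access_key": creds["SecretKey"],
        "aws_session_token": creds["SessionToken"],
    }

# Real calls would look like (the identity pool ID is hypothetical):
#   cognito = boto3.client("cognito-identity", region_name="us-east-1")
#   identity = cognito.get_id(IdentityPoolId="us-east-1:example-pool-id")
#   response = cognito.get_credentials_for_identity(IdentityId=identity["IdentityId"])
fake_response = {
    "IdentityId": "us-east-1:00000000-0000-0000-0000-000000000000",
    "Credentials": {
        "AccessKeyId": "ASIAEXAMPLE",
        "SecretKey": "example-secret",
        "SessionToken": "example-token",
    },
}
print(extract_temporary_credentials(fake_response)["aws_access_key_id"])  # -> ASIAEXAMPLE
```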
According to the docs, there are multiple ways to do so: you could read from environment variables, a file, or the shared credentials file (~/.aws/credentials).
I would recommend using a secret from a vault to get those keys.
If you're looking for a fast way, using the shared credentials file (item 4 in the docs' list) inside the host would not be a big problem (IMO).
If any problem occurs, you just disable those credentials and generate new ones.
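For reference, the shared credentials file mentioned above is a plain INI file at ~/.aws/credentials; the key values here are placeholders:

```ini
[default]
aws_access_key_id = AKIA................
aws_secret_access_key = ....................
```

The SDK picks up the `[default]` profile automatically; named profiles can be selected with the AWS_PROFILE environment variable.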
I am building an application that will run on Google App Engine (GAE). It will need access to data stored by the user in other systems (e.g., the user's Nest thermostat or Yahoo mail). The application running on GAE will allow the user to provide credentials for the other system and will store those credentials in Google Cloud Datastore for later use by an application running on Google Compute Engine on the user's behalf. It will also support OAuth, so the user can grant the application access to the external system. The application will therefore need to store user credentials (usernames and passwords) or OAuth access tokens in Google Cloud.
The application will need to encrypt the secrets before they are stored and be able to decrypt the data before sending it to the external systems. That is, the system will need to use symmetric encryption and therefore needs to manage keys securely.
How can the application store these secrets in Google Cloud Datastore securely? I think I am looking for something like AWS CloudHSM for Google Cloud. That is, I would like to store each secret with a seed and a key ID, and use the key ID to fetch the key from a key-management system. This implementation would also allow for key rotation and other standard security practices.
I think I am looking for a Google Cloud service or Google API that provides secrets management and only allows an app with the proper Google app identifier to access the secrets.
Is there a service within Google Cloud or Google APIs that will manage secrets? Is there another architecture that I should be considering?
By the way, the application uses the Google Identity Toolkit (GitKit) to authenticate and authorize users of the GAE-hosted application. It allows users to create accounts using either federated identities or usernames and passwords via GitKit.
Thanks,
chris
In the meantime, Google also added a Key Management Service: https://cloud.google.com/kms/
You could, e.g., use it to encrypt your data before storing it in a database. Or use KMS to encrypt an AES key that in turn encrypts your data (envelope encryption), and possibly keep a backup of the AES key somewhere in case you lose access to KMS.
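The envelope-encryption flow mentioned above can be sketched as follows. Everything here is illustrative only: the kms_wrap/kms_unwrap stubs stand in for Cloud KMS encrypt/decrypt calls, and the XOR "cipher" is a toy placeholder, not real cryptography (production code would use AES-GCM and never hold the master key itself).

```python
import hashlib
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR against a SHA-256-derived keystream."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def kms_wrap(master_key: bytes, data_key: bytes) -> bytes:
    # Placeholder for a Cloud KMS encrypt call on the data key.
    return xor_cipher(master_key, data_key)

def kms_unwrap(master_key: bytes, wrapped_key: bytes) -> bytes:
    # Placeholder for a Cloud KMS decrypt call.
    return xor_cipher(master_key, wrapped_key)

master = secrets.token_bytes(32)    # in real life this never leaves KMS
data_key = secrets.token_bytes(32)  # per-secret data encryption key

ciphertext = xor_cipher(data_key, b"db-password")
stored = (kms_wrap(master, data_key), ciphertext)  # what goes in Datastore

# Later: unwrap the data key via KMS, then decrypt the secret.
recovered = xor_cipher(kms_unwrap(master, stored[0]), stored[1])
print(recovered)  # b'db-password'
```

The point of the pattern is that the database only ever holds a wrapped key plus ciphertext, so compromising the datastore alone reveals nothing.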
App Identity Service might be what you are looking for https://cloud.google.com/appengine/docs/java/appidentity/#Java_Asserting_identity_to_other_systems
It lets you sign content with an application-specific private key, and provides rotating certificates to validate signed content.
So as far as I can tell, the answer is that you can't. What you are looking for is an equivalent of KMS. That service lets you create and manage keys and do a bunch of your own crypto operations; it allows you to implement strong crypto quickly with just a few simple lines of code. Azure has a similar service called Key Vault. It lacks automated key generation and rotation as far as I can tell, but other than that it's good. At the time of this response there was no equivalent service for Google. They have an internal KMS which they use for crypto operations, and you can provide your own keys, but that's pretty much it. Not quite the same thing you get with Key Vault, and nothing like KMS.
That said there is hope. You can do one of two things:
Create a VPC and use an HSM from somewhere else. You could use Rackspace, or you could simply use AWS KMS. That sounds crazy, but it's actually a good idea, and the extra management is worth it. In general, the most secure solution separates the keys from the encrypted data, particularly at rest: keys in one data center and encrypted data in another. That sounds hard, but thankfully I've made an open-source project called KeyStor which makes it very easy. With KeyStor you can get a data center that handles encryption services set up in a day, no problem, and you can use AWS very cost-effectively.
Set up your own crypto service, skip the HSM integration, and simply be careful about who has access to the machines that hold your keys. You can do this with KeyStor as well, and if KeyStor doesn't quite do what you want, that's why it's open-source: take the code and build what you need.
You could store secrets in storage (e.g., in Datastore, Google Cloud Storage, or another storage system of your choice) and encrypt those with a key from Google's Cloud KMS.
Here's some documentation from Google on secret management, and here's a codelab on specifically encrypting data in Google Cloud Storage at the application layer using Cloud KMS.
For the Google Cloud managed service that provides the API for secure storage of secrets, see Google Cloud Secret Manager for more details.
Secret Manager is a secure and convenient storage system for API keys, passwords, certificates, and other sensitive data. Secret Manager provides a central place and single source of truth to manage, access, and audit secrets across Google Cloud.
I am building an app that needs to upload files to S3. Initially, I had my secret key in the web.config file. Since the key has access to my entire account, I am realizing that I should instead rely on IAM to generate a user that can only access one bucket.
However, this doesn't solve the problem of storing the key in plain text. How can I manage it otherwise?
Actually, IAM permissions on S3 do solve your problem, because the user you create will only be allowed to access that specific bucket: it can't do any harm to the rest of your account, and you don't have to store your account's access/secret keys on the machine.
Further, you can restrict access to a bucket to a specific IP.
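A policy combining both restrictions might look like the sketch below; the bucket name and CIDR range are placeholders to adapt to your setup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-example-bucket/*",
      "Condition": {
        "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
      }
    }
  ]
}
```

Attach it to the IAM user (or better, a role) so the credentials are useless for anything outside that bucket and address range.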