I would like to use SSM Parameters in Serverless Variables.
Following the docs, I ran this command:
aws ssm put-parameter --name foo --value bar --type SecureString
And I added this to my serverless.yml:
custom:
  foo: ${ssm:foo}
However, when I deploy, I get this warning:
Serverless Warning --------------------------------------
A valid SSM parameter to satisfy the declaration 'ssm:foo' could not be found.
How do I access this variable? Thanks!
I needed to use the same region for both the Serverless service and the ssm put-parameter call:
aws ssm put-parameter --name foo --value bar --type SecureString --region us-east-1
If the parameter is a SecureString, you need to add ~true after the path to the parameter in the serverless.yml file, as explained here: https://serverless.com/framework/docs/providers/aws/guide/variables#reference-variables-using-the-ssm-parameter-store
This tells the framework to decrypt the value. Make sure that you have permission to use the key that encrypted the parameter.
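For example, the declaration from the question would become (a minimal sketch of the syntax from the linked docs):
custom:
  foo: ${ssm:foo~true}  # ~true tells the framework to decrypt the SecureString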
Check your IAM policy. To read the parameters, the user doing the deployment needs access to SSM. The statement below grants full access; see the docs to narrow it down (e.g. GetParameters, GetParameter).
"Effect": "Allow",
"Action": [
"ssm:*"
],
"Resource": [
"*"
]
Add this to the provider section of your serverless.yml file:
iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "ssm:GetParameters"
    Resource: "*"
To reference Secrets Manager secrets through SSM variables, you need to prefix the secret name with /aws/reference/secretsmanager/.
Example:
${ssm:/aws/reference/secretsmanager/${self:provider.stage}/service/mysecret~true}
I used a 'null_resource' and passed an AWS CLI command to 'local-exec' to update a Step Function:
resource "null_resource" "enable_step_function_xray" {
triggers = {
state_machine_arn = xxxxxxx
}
provisioner "local-exec" {
command = "aws stepfunctions update-state-machine --state-machine-arn ${self.triggers.state_machine_arn} --tracing-configuration enabled=true"
}
}
This works fine when I test via local Terraform; my question is whether it will also work if I apply Terraform on Concourse.
It depends entirely on whether the Concourse job is configured to use a container image that has the AWS CLI installed. If the AWS CLI is installed and on the path, the local-exec should succeed. If not, it will obviously fail.
My assumption is that on your local machine you've already set up the required credentials, so if you simply try it on Concourse CI it will fail with an authentication error.
To set it up in Concourse:
AWS Console - Create a new IAM user cicd with programmatic access only and the relevant permissions. For testing purposes you can use the AdministratorAccess policy, but make sure to make it least-privilege later on.
AWS Console - Create AWS security credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) for the cicd user (save them in a safe place)
Concourse CI - Create the secrets AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
Concourse CI - Add ((AWS_ACCESS_KEY_ID)) and ((AWS_SECRET_ACCESS_KEY)) environment variables to your Concourse CI task
I'm sure there are many tutorials about this subject, but the above steps will probably appear in most of them. Concourse CI should now be able to apply changes to your AWS account.
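For illustration, a minimal task config along these lines would wire those secrets through to the job (a sketch; the image, input name and run command are assumptions, and the image must contain both terraform and the AWS CLI for the local-exec provisioner to work):
platform: linux
image_resource:
  type: registry-image
  source:
    repository: my-registry/terraform-awscli  # assumed image bundling terraform + AWS CLI
inputs:
  - name: repo  # assumed resource holding the Terraform code
params:
  AWS_ACCESS_KEY_ID: ((AWS_ACCESS_KEY_ID))
  AWS_SECRET_ACCESS_KEY: ((AWS_SECRET_ACCESS_KEY))
  AWS_DEFAULT_REGION: us-east-1  # assumed region
run:
  path: sh
  args: ["-ec", "cd repo && terraform init && terraform apply -auto-approve"]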
I am deploying a Node.js app to EC2 using CodeDeploy. I am storing credentials in AWS Systems Manager Parameter Store; however, I cannot find a way to expose them to my application.
I am using PM2 for process management. I can successfully retrieve the parameter from the Parameter Store on the target machine, so there are no permission issues. For example:
aws ssm get-parameters --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value
...successfully returns the correct string. I attempt to use this in my applicationStart.sh CodeDeploy hook and start the app:
#!/bin/bash
export LOCAL_CACHE_PATH=$(aws ssm get-parameters --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value)
pm2 start ecosystem.config.js --env production
LOCAL_CACHE_PATH returns undefined in my app when accessing process.env.LOCAL_CACHE_PATH.
So the environment variable is available within the applicationStart.sh script and yet undefined when the app starts from that script.
I am looking for a recommended approach to use environment variables from the Parameter Store with CodeDeploy.
I have read literally dozens of posts on similar topics but cannot resolve it. Very much appreciate any guidance.
The solution I am using is to write the environment variables to a .env file and use that in my app:
afterInstall.sh:
echo LOCAL_CACHE_PATH=$(aws ssm get-parameters --output text --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value) >> /home/ubuntu/foo/.env
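If you would rather keep the values in the process environment than have the app read the file, applicationStart.sh can source the same file before starting PM2 (a sketch, assuming the .env path used above):
#!/bin/bash
set -a                     # auto-export every variable defined from here on
. /home/ubuntu/foo/.env    # loads LOCAL_CACHE_PATH=... into the environment
set +a
pm2 start ecosystem.config.js --env production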
I am trying to create an SFTP user with the help of the AWS CLI on my Linux box.
Below is the AWS CLI command I am running in my bash script (my SSH public key is in a file; I pass it into the CLI options via a variable):
customer_name_pub_value=$(cat /home/developer/naman/dir/$customer_name.pub)
aws transfer create-user --user-name $customer_name --home-directory script-test/power-archive-ireland/$customer_name/ --server-id s-aaabbbccc --ssh-public-key-body $customer_name_pub_value --tags 'Key=Product,Value="demo",Key=Environment,Value=dev,Key=Contact,Value="dev.user@domain.com",Key=Service,Value="sftp"' --role customer-sftp-role
Below is the error I am facing while executing the above command:
[developer@dev-lin demo]$ aws transfer create-user --user-name $customer_name --home-directory script-test/power-archive-ireland/$customer_name/ --server-id s-aaabbbccc --ssh-public-key-body $customer_name_pub_value --tags 'Key=Product,Value="demo",Key=Environment,Value=dev,Key=Contact,Value="dev.user@domain.com",Key=Service,Value="sftp"' --role customer-sftp-role
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: developer@dev-lin.domain.com, XXXXXXXXXXAB3NzaC1yc2EAAAADAQABAAABAQCm2hI3Y33K1GVbdQV0lfkm/klZRJS7Kcz8+53e/BoIbVMFH0jqm1aejELDFgPnN7HvIZ/csYGzF/ssTx5lXVaHQh/qkYwfqQBg8WvXVB0Jmogj1hr6z5M8Qy/3oCx0fSmh6e/Ekfk8vHhiHQlGZV3o8a2AW5SkP8IH/OgT6Bq+SMuB+xtSciVBZqSLI0OgYtOZ0MyxBzfLau1Tyegu5lVFevZDVjecnIaS4l+v2VIQ/OgaZ40oAI3NuRZ2EdnLqEqFyLjasx4kcuwNzD5oaXAU6T9UsqKN2rVLMKrXXXXXXXXXXX
Am I missing some bash syntax while passing the option value?
UPDATE 30-March-2020
As per the suggestions in the comments below, I have added the IAM role ARN to the command; now I am facing a different issue than before.
CODE:
customer_name='demo'
customer_name_pub_value=$(cat /home/developer/naman/dir/$customer_name.pub)
aws transfer create-user --user-name $customer_name --home-directory script-test/power-archive-ireland/$customer_name/ --server-id s-aaabbbccc --ssh-public-key-body "$customer_name_pub_value" --tags 'Key=Product,Value="demo",Key=Environment,Value=dev,Key=Contact,Value="dev.user@domain.com",Key=Service,Value="sftp"' --role "arn:aws:iam::8XXXXXXXXX2:role/customer-sftp-role"
ERROR
An error occurred (ValidationException) when calling the CreateUser operation: 1 validation error detected: Value 'script-test/power-archive-ireland/demo/' at 'homeDirectory' failed to satisfy constraint: Member must satisfy regular expression pattern: ^$|/.*
Yes, for the final bug, you should feed it as a list of objects:
--tags [{Key="Product", Value="demo"}, {Key="Environment", Value="dev"}, {Key="Contact", Value="dev.user@domain.com"}, {Key="Service", Value="sftp"}]
You may need to put "Key" and "Value" in quotes, or perhaps try key:value pairs (i.e. {"Product": "demo"}), but this should be the general syntax.
Below is the final working CLI command.
Changes:
- Added the role ARN (thanks @user1394 for the suggestion)
- Biggest issue resolved by placing a leading / in the --home-directory value (bad AWS documentation (https://docs.aws.amazon.com/cli/latest/reference/transfer/create-user.html) and its out-dated regex ^$|/.*)
- Switched the broken shorthand --tags syntax to JSON to fix the final bug (not all of the tags were attached with the old command)
#!/bin/bash
customer_name='demo'
customer_name_pub_value=$(cat /home/developer/naman/dir/$customer_name.pub)
aws transfer create-user \
--user-name $customer_name \
--server-id s-aaabbbccc \
--role "arn:aws:iam::8XXXXXXXXX2:role/customer-sftp-role" \
--ssh-public-key-body "$customer_name_pub_value" \
--home-directory /script-test/power-archive-ireland/$customer_name \
--tags '[
{"Key": "Product", "Value": "demo"},
{"Key": "Environment", "Value": "dev"},
{"Key": "Contact", "Value": "dev.user#domain.com"},
{"Key": "Service", "Value": "sftp"}
]'
My IAM account supposedly has "admin" privileges; I can perform all operations, as far as I can tell, in the web console.
Recently I downloaded the AWS CLI and quickly configured it by supplying access keys, a default region and an output format. I then tried to issue some commands and found that most of them, but not all, hit permission issues. For example:
$ aws --version
aws-cli/1.16.243 Python/3.7.4 Windows/10 botocore/1.12.233
$ aws s3 ls s3://test-bucket
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
$ aws ec2 describe-instances
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
$ aws iam get-user
{
    "User": {
        "Path": "/",
        "UserName": "xxx@xxx.xxx",
        "UserId": "xxxxx",
        "Arn": "arn:aws:iam::nnnnnnnnnn:user/xxx@xxx.xxx",
        "CreateDate": "2019-08-21T17:09:25Z",
        "PasswordLastUsed": "2019-09-21T16:11:34Z"
    }
}
It appears to me that the CLI, which is authenticated using an access key, has a different permission set from the web console, which is authenticated using MFA.
Why are permissions inconsistent between the CLI and the GUI, and how do I make them consistent?
It turns out the following statement in one of my policies blocked CLI access because MFA was not present:
{
    "Condition": {
        "BoolIfExists": {
            "aws:MultiFactorAuthPresent": "false"
        }
    },
    "Resource": "*",
    "Effect": "Deny",
    "NotAction": [
        "iam:CreateVirtualMFADevice",
        "iam:EnableMFADevice",
        "iam:GetUser",
        "iam:ListMFADevices",
        "iam:ListVirtualMFADevices",
        "iam:ResyncMFADevice",
        "sts:GetSessionToken"
    ],
    "Sid": "DenyAllExceptListedIfNoMFA"
}
If you replace BoolIfExists with Bool, it should work: with plain Bool the condition does not match when the aws:MultiFactorAuthPresent key is absent, as it is for requests signed with long-term access keys, so your CLI requests would no longer be denied for not using MFA.
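That is, the condition block becomes:
"Condition": {
    "Bool": {
        "aws:MultiFactorAuthPresent": "false"
    }
}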
This is the opposite of https://aws.amazon.com/premiumsupport/knowledge-center/mfa-iam-user-aws-cli/
To stay really secure, check this good explanation: MFA token for AWS CLI
In a few steps:
Get a temporary 36-hour session token:
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/user --token-code code-from-token
{
    "Credentials": {
        "SecretAccessKey": "secret-access-key",
        "SessionToken": "temporary-session-token",
        "Expiration": "expiration-date-time",
        "AccessKeyId": "access-key-id"
    }
}
Save these values in an mfa profile configuration (in ~/.aws/credentials):
[mfa]
aws_access_key_id = example-access-key-as-in-returned-output
aws_secret_access_key = example-secret-access-key-as-in-returned-output
aws_session_token = example-session-Token-as-in-returned-output
Call the CLI with the profile:
aws <command> --profile mfa
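A small helper can automate the two steps above whenever the token expires (a sketch; the MFA device ARN, account id and profile name are placeholders/assumptions):
#!/bin/bash
# Prompt for the current MFA code, fetch 36-hour temporary credentials,
# and write them into the 'mfa' profile with 'aws configure set'.
read -r -p "MFA code: " code
creds=$(aws sts get-session-token \
  --serial-number arn:aws:iam::123456789012:mfa/user \
  --token-code "$code" \
  --duration-seconds 129600 \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
read -r key_id secret token <<< "$creds"
aws configure set aws_access_key_id "$key_id" --profile mfa
aws configure set aws_secret_access_key "$secret" --profile mfa
aws configure set aws_session_token "$token" --profile mfa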
PS: Don't do the cron job as suggested there; it goes against the point of the security.
I had this same issue and I fixed it by adding my user to a new group with administrator access in IAM.
To do this, go to IAM > Users, click on your user and then [Add permissions].
In the next screen click [Create group] and then pick AdministratorAccess.
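If you prefer the CLI, the equivalent is roughly (a sketch; the group name is an assumption):
# Create an admin group, attach the AWS-managed AdministratorAccess policy, add your user
aws iam create-group --group-name admins
aws iam attach-group-policy --group-name admins \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam add-user-to-group --group-name admins --user-name xxx@xxx.xxx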
I'm deploying a Django based project on AWS Elastic Beanstalk.
I have been following the Amazon example, where I add my credentials (ACCESS_KEY/SECRET) to my app.config under the .ebextensions directory.
The same config file has:
container_commands:
  01_syncdb:
    command: "django-admin.py migrate --noinput"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"
    leader_only: true
The problem is that this forces me to store my credentials under version control, and I would like to avoid that.
I tried to remove the credentials and then add them with eb setenv, but the problem is that the two Django commands require these settings to be set in the environment.
I'm using the v3 cli:
eb create -db -c foo bar --profile foobar
where foobar is the name of the profile under ~/.aws/credentials, and where I want to keep my secret credentials.
What are the best security practices for AWS credentials when using EB?
One solution is to keep the AWS credentials but create a policy that ONLY allows them to POST objects to the one bucket used for /static.
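A minimal sketch of such a policy (the bucket name and prefix are assumptions):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::my-static-bucket/static/*"
        }
    ]
}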
I ended up removing the collectstatic step from the config file and simply taking care of uploading statics on the build side.
After that, all credentials can be removed, and all other boto commands will pick up credentials from the IAM role attached to the EC2 instance.