StackSets with SERVICE_MANAGED permission model can only have OrganizationalUnit as target - aws-cli

I am writing an AWS CLI delete script, shown below, to delete stack instances from a service-managed StackSet for specific accounts:
$opsResponse = aws cloudformation --region $region.RegionName delete-stack-instances --call-as DELEGATED_ADMIN --stack-set-name $stackDetails.StackSetName --accounts $accountIds --regions $multiDeleteRegions --no-retain-stacks --operation-preferences MaxConcurrentCount=1 | ConvertFrom-Json
The error I am getting:
An error occurred (ValidationError) when calling the DeleteStackInstances operation: StackSets with SERVICE_MANAGED permission model can only have OrganizationalUnit as target
I tried adding --organizational-unit-id $ouId but that did not work.
Any idea how to delete service-managed stack instances for specific accounts via the CLI?

Found an answer - https://docs.aws.amazon.com/cli/latest/reference/cloudformation/delete-stack-instances.html
Used --deployment-targets instead of --accounts:
$opsResponse = aws cloudformation --region $region.RegionName delete-stack-instances --call-as DELEGATED_ADMIN --stack-set-name $stackDetails.StackSetName --deployment-targets Accounts=$accountIds,OrganizationalUnitIds=$ouId,AccountFilterType=UNION --regions $multiDeleteRegions --no-retain-stacks --operation-preferences MaxConcurrentCount=1 | ConvertFrom-Json
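For reference, --deployment-targets takes a structure; the same value in JSON form (the account and OU IDs here are placeholders, not values from the script above) would look like:
--deployment-targets '{"Accounts":["111111111111"],"OrganizationalUnitIds":["ou-xxxx-xxxxxxxx"],"AccountFilterType":"UNION"}'
Pairing the account list with OU IDs is what satisfies the SERVICE_MANAGED requirement that the target be an organizational unit, while AccountFilterType controls how the Accounts list is combined with the OUs (UNION is what worked here).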

Related

Why am I having AWS credential errors in my AWS terminal setup?

Wanted to learn AWS and found the tutorial Build a Serverless Web Application. In my research the closest Q&A I could find for my issue was Unable to locate credentials aws cli.
My process has been:
Created a repo in Github
Navigated to IAM and created a user trainer. The tutorial didn't specify policies, so I chose AdministratorAccess. Per the instructions, went to Security credentials and Create access key. Downloaded the file locally.
Went to Configuration basics and did Importing a key pair via .CSV file with the command:
aws configure import --csv file:///Users/path/to/file/aws-training.csv
params:
User name: trainer
Access key ID: ****57
Secret access key: *****1b
but then found that the file didn't contain region or format so did:
aws configure --profile trainer
and re-did all values based on the CSV (Quick Setup):
AWS Access Key ID: ****57
AWS Secret Access Key: *****1b
Default region name: us-east-1
Default output format: json
Made sure to restart my terminal, and locally in a directory I ran the command:
aws s3 cp s3://wildrydes-us-east-1/WebApplication/1_StaticWebHosting/website ./ --recursive
The terminal has a delay then throws:
fatal error: Unable to locate credentials
Research
Q&As I've read through to try and see if I could diagnose the problem:
aws cli with shell script: upload failed: Unable to locate credentials
Bash with AWS CLI - unable to locate credentials
Unable to locate credentials aws cli
Unable to locate credentials in boto3 AWS
Get "fatal error: Unable to locate credentials" when I'm copying file from S3 to EC2 using aws cli
Unable to locate credentials when trying to copy files from s3-bucket to my ec2-instance
How can I resolve my error of Unable to locate credentials and what am I doing wrong or misunderstanding?
Per the comment:
Check the content of ~/.aws/credentials and ~/.aws/config
credentials
command:
nano ~/.aws/credentials
renders:
[training]
aws_access_key_id = *****57
aws_secret_access_key = ***1b
[trainer]
aws_access_key_id = *****57
aws_secret_access_key = ***1b
config
command:
nano ~/.aws/config
renders:
[profile training]
region = us-east-1
output = json
[profile trainer]
region = us-east-1
output = json
You've configured the profile with the name trainer. You didn't create a default profile, you created a named profile. You're getting the current error because the CLI tool is looking for a default profile, and you don't have one configured.
In order to use the trainer profile you either have to add --profile trainer to every aws command you run in the command line, or you need to set the AWS_PROFILE environment variable inside your command line environment:
export AWS_PROFILE=trainer
It looks like you also tagged this with nodejs, so I recommend going the environment variable route, which will also work with the nodeJS AWS SDK.
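For example, with the copy command from the tutorial, the per-command option looks like:
aws s3 cp s3://wildrydes-us-east-1/WebApplication/1_StaticWebHosting/website ./ --recursive --profile trainer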

Setting EC2 Environment Variables with CodeDeploy, Parameter Store and PM2

I am deploying a Node.js app to EC2 using CodeDeploy. I am storing credentials in AWS Systems Manager Parameter Store, but cannot find a method to expose these to my application.
I am using PM2 for process management. I can successfully retrieve the parameter from the Parameter Store on the target machine, so there are no permission issues. For example:
aws ssm get-parameters --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value
...successfully returns the correct string. I attempt to use this in my applicationStart.sh CodeDeploy file and start the app:
#!/bin/bash
export LOCAL_CACHE_PATH=$(aws ssm get-parameters --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value)
pm2 start ecosystem.config.js --env production
LOCAL_CACHE_PATH returns undefined in my app when accessing process.env.LOCAL_CACHE_PATH.
So the environment variable is available within the applicationStart.sh script and yet undefined when the app starts from that script.
I am looking for a recommended approach to use environment variables from the Parameter Store with CodeDeploy.
I have read literally dozens of posts on similar topics but cannot resolve it. Very much appreciate any guidance.
The solution I am using is to write the environment variables to a .env file and use that in my app:
afterInstall.sh:
echo LOCAL_CACHE_PATH=$(aws ssm get-parameters --output text --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value) >> /home/ubuntu/foo/.env
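If you would rather have the variables in the PM2 process environment instead of reading the .env file from the app, one variant (a sketch, not part of the original answer; note that PM2 can hold on to an older environment across restarts) is to export everything from the generated file in applicationStart.sh before starting PM2:
#!/bin/bash
# export every variable written to the .env file by afterInstall.sh, then start the app
set -a
. /home/ubuntu/foo/.env
set +a
pm2 start ecosystem.config.js --env production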

how to provide a file content as an aws cli option value

I am trying to create an SFTP user with the help of the AWS CLI on my Linux box.
Below is the AWS CLI command which I am passing in my bash script (my SSH public key is in a file; with the help of a variable I am passing it into the AWS CLI options section):
customer_name_pub_value=$(cat /home/developer/naman/dir/$customer_name.pub)
aws transfer create-user --user-name $customer_name --home-directory script-test/power-archive-ireland/$customer_name/ --server-id s-aaabbbccc --ssh-public-key-body $customer_name_pub_value --tags 'Key=Product,Value="demo",Key=Environment,Value=dev,Key=Contact,Value="dev.user@domain.com",Key=Service,Value="sftp"' --role customer-sftp-role
Below is the ERROR which I am facing while executing the above command:
[developer@dev-lin demo]$ aws transfer create-user --user-name $customer_name --home-directory script-test/power-archive-ireland/$customer_name/ --server-id s-aaabbbccc --ssh-public-key-body $customer_name_pub_value --tags 'Key=Product,Value="demo",Key=Environment,Value=dev,Key=Contact,Value="dev.user@domain.com",Key=Service,Value="sftp"' --role customer-sftp-role
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: developer@dev-lin.domain.com, XXXXXXXXXXAB3NzaC1yc2EAAAADAQABAAABAQCm2hI3Y33K1GVbdQV0lfkm/klZRJS7Kcz8+53e/BoIbVMFH0jqm1aejELDFgPnN7HvIZ/csYGzF/ssTx5lXVaHQh/qkYwfqQBg8WvXVB0Jmogj1hr6z5M8Qy/3oCx0fSmh6e/Ekfk8vHhiHQlGZV3o8a2AW5SkP8IH/OgT6Bq+SMuB+xtSciVBZqSLI0OgYtOZ0MyxBzfLau1Tyegu5lVFevZDVjecnIaS4l+v2VIQ/OgaZ40oAI3NuRZ2EdnLqEqFyLjasx4kcuwNzD5oaXAU6T9UsqKN2rVLMKrXXXXXXXXXXX
Am I missing some bash syntax while passing the option value?
UPDATE 30-March-2020
As per the suggestions in the comments below, I have added the AWS role ARN to the command (and quoted the public-key variable), and am now facing a different issue than the previous one.
CODE:
customer_name='demo'
customer_name_pub_value=$(cat /home/developer/naman/dir/$customer_name.pub)
aws transfer create-user --user-name $customer_name --home-directory script-test/power-archive-ireland/$customer_name/ --server-id s-aaabbbccc --ssh-public-key-body "$customer_name_pub_value" --tags 'Key=Product,Value="demo",Key=Environment,Value=dev,Key=Contact,Value="dev.user@domain.com",Key=Service,Value="sftp"' --role "arn:aws:iam::8XXXXXXXXX2:role/customer-sftp-role"
ERROR
An error occurred (ValidationException) when calling the CreateUser operation: 1 validation error detected: Value 'script-test/power-archive-ireland/demo/' at 'homeDirectory' failed to satisfy constraint: Member must satisfy regular expression pattern: ^$|/.*
Yes, for the final bug, you should feed it as a list of objects:
--tags [{Key="Product", Value="demo"}, {Key="Environment", Value="dev"}, {Key="Contact", Value="dev.user@domain.com"}, {Key="Service", Value="sftp"}]
You may need to put "Key" and "Value" in quotes, or perhaps even try key:value pairs (i.e. {"Product": "demo"}), but this should be the general syntax.
Below is the final working CLI command:
Changes
Added the role ARN (thanks @user1394 for the suggestion)
Biggest issue resolved by placing / before the --home-directory value (bad AWS documentation (https://docs.aws.amazon.com/cli/latest/reference/transfer/create-user.html) and their outdated regex ^$|/.*)
Transformed the broken CLI tags into JSON-based syntax to fix the final bug (not all the tags were being attached with the old command)
#!/bin/bash
customer_name='demo'
customer_name_pub_value=$(cat /home/developer/naman/dir/$customer_name.pub)
aws transfer create-user \
--user-name $customer_name \
--server-id s-aaabbbccc \
--role "arn:aws:iam::8XXXXXXXXX2:role/customer-sftp-role" \
--ssh-public-key-body "$customer_name_pub_value" \
--home-directory /script-test/power-archive-ireland/$customer_name \
--tags '[
{"Key": "Product", "Value": "demo"},
{"Key": "Environment", "Value": "dev"},
{"Key": "Contact", "Value": "dev.user#domain.com"},
{"Key": "Service", "Value": "sftp"}
]'
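To double-check that the user and all four tags were actually created, something like this should work (not part of the original answer; it assumes the same server ID and user name):
aws transfer describe-user --server-id s-aaabbbccc --user-name $customer_name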

Accessing SSM variables with Serverless

I would like to use SSM Parameters in Serverless Variables.
Following the docs, I ran this command:
aws ssm put-parameter --name foo --value bar --type SecureString
And I added this to my serverless.yml:
custom:
  foo: ${ssm:foo}
When I deploy, I get this warning however:
Serverless Warning --------------------------------------
A valid SSM parameter to satisfy the declaration 'ssm:foo' could not be found.
How do I access this variable? Thanks!
I needed to set the same region for both the serverless function and the SSM parameter:
aws ssm put-parameter --name foo --value bar --type SecureString --region us-east-1
If the parameter is a SecureString, you need to add ~true after the path to the parameter in the serverless.yml file, as explained here: https://serverless.com/framework/docs/providers/aws/guide/variables#reference-variables-using-the-ssm-parameter-store
This will tell the framework to decrypt the value. Make sure that you have permissions to use the key used to encrypt the parameter.
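For the parameter created earlier in the question, the serverless.yml declaration would then become (a small sketch of that one change):
custom:
  foo: ${ssm:foo~true}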
Check your IAM policy. To get the parameters, the user doing the deployment needs access to SSM. The statement below grants full access; see the docs to narrow it down (e.g. GetParameters, GetParameter), as in the sketch that follows it.
"Effect": "Allow",
"Action": [
"ssm:*"
],
"Resource": [
"*"
]
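A narrowed-down statement might look like this (a sketch; the parameter ARN is a placeholder you would scope to your own parameters):
{
    "Effect": "Allow",
    "Action": [
        "ssm:GetParameter",
        "ssm:GetParameters"
    ],
    "Resource": [
        "arn:aws:ssm:us-east-1:*:parameter/foo"
    ]
}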
Add this to the provider section in serverless.yml file
iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "ssm:GetParameters"
    Resource: "*"
To reference a Secrets Manager secret through SSM, you need to prefix the name with /aws/reference/secretsmanager/.
Example:
${ssm:/aws/reference/secretsmanager/${self:provider.stage}/service/mysecret~true}

AWS EC2 spot instance --block-duration-minutes not working

I was trying to request a spot instance using the CLI. I used the below command to request a spot instance:
aws ec2 request-spot-instances --spot-price "0.050" --instance-count 1 --block-duration-minutes 120 --type "one-time" --launch-specification file://Spot_P2_request.json --query 'SpotInstanceRequests[*].SpotInstanceRequestId' --output text
I get the below error:
Unknown options: --block-duration-minutes, 120
Is block-duration-minutes not supported by CLI?
