How to run AWS CLI commands inside a Jenkinsfile in a loop - Groovy

I am trying to create a Jenkins job that, when triggered, will update all the ElastiCache Redis replication groups that carry a certain tag.
The main workflow is that I first find all the Redis replication groups in a region, for example us-east-1:
def findAllRedisReplicationGroups(label, region) {
    query = "ReplicationGroups[*].{arn: ARN}"
    output = sh(
        script: "aws --region ${region} elasticache describe-replication-groups --query '${query}' --output text",
        label: label,
        returnStdout: true
    )
    return output
}
The output will be a string, for example:
String a = """
arn:aws:elasticache:us-west-2:AccountID:replicationgroup:application-2-test
arn:aws:elasticache:us-west-2:AccountID:replicationgroup:application-1-test
"""
Then I split the string into a list, with each ARN being an element.
Using a for loop I iterate through all the Redis replication groups and get their tags; if the tag matches, e.g. Environment: test, then the ARN of that replication group is added to a list of ARNs:
def findCorrectEnvReplicationGroups(label, region, environment, redis_arns) {
    def arn_list = redis_arns.split();
    def correct_env_arn_list = [];
    for (def arn : arn_list) {
        def redisTags = getRedisTags(label, region, arn)
        def jsonSlurper = new groovy.json.JsonSlurper()
        def object = jsonSlurper.parseText(redisTags)
        EnvironmentFromTag = object.TagList.find { it.Key == "Environment" }
        if (EnvironmentFromTag.Value == environment) {
            correct_env_arn_list.add(arn)
        }
        break
    }
    return correct_env_arn_list
}
def getRedisTags(label, region, arn) {
    output = sh(
        script: "aws --region ${region} elasticache list-tags-for-resource --resource-name ${arn} --output json",
        label: label,
        returnStdout: true
    )
    return output
}
I get through one iteration (tested by printing out the ARN for each cycle), but the pipeline crashes when it tries to run the script inside the getRedisTags method again.
The output should be a list of ARNs whose tags match.
Has anyone come across such an error, or does anyone with Groovy experience know why the Jenkinsfile crashes when running the AWS CLI command in a loop?
Many thanks.

I'm still not totally sure why it didn't work, but by using Groovy's built-in iterators I got it working.
First I get a list of all the Redis replication groups.
Then I pass that into findEnvironmentReplicationGroups, where .findAll feeds the ARNs one by one into getRedisTags, which returns a map (for example [Environment: "test"]) each time.
I store that output in the variable tags and check whether it matches the environment given to the method.
If it matches, the ARN is added to correctArns, which is ultimately returned.
def findAllRedisReplicationGroups(region) {
    def query = "ReplicationGroups[*].ARN"
    def output = sh(
        script: "aws --region ${region} elasticache describe-replication-groups --query '${query}' --output json",
        label: "Find all Redis Replication groups in the region",
        returnStdout: true
    )
    readJSON(text: output)
}

def findEnvironmentReplicationGroups(region, environment, listOfRedisArns) {
    def correctArns = listOfRedisArns.findAll { arn ->
        def tags = getRedisTags(region, arn).collectEntries { [it.Key, it.Value] }
        tags.Environment == environment
    }
    correctArns
}

def getRedisTags(region, arn) {
    def query = "TagList[?Key==`Environment`]"
    def output = sh(
        script: "aws --region ${region} elasticache list-tags-for-resource --resource-name ${arn} --query '${query}' --output json",
        label: "Get tags for a Redis replication group",
        returnStdout: true
    )
    readJSON(text: output)
}
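For comparison, the same filtering workflow can be sketched outside Jenkins with plain boto3. This is only an illustrative, untested sketch; the region, the Environment tag key and the function name are assumptions carried over from the example above.

import boto3

# Sketch: mirror the pipeline's logic (list replication groups, keep the ones
# whose Environment tag matches) using boto3 instead of the AWS CLI.
def find_environment_replication_groups(region, environment):
    client = boto3.client("elasticache", region_name=region)
    matching_arns = []
    paginator = client.get_paginator("describe_replication_groups")
    for page in paginator.paginate():
        for group in page["ReplicationGroups"]:
            arn = group["ARN"]
            tags = client.list_tags_for_resource(ResourceName=arn)["TagList"]
            tag_map = {t["Key"]: t["Value"] for t in tags}
            if tag_map.get("Environment") == environment:
                matching_arns.append(arn)
    return matching_arns

print(find_environment_replication_groups("us-east-1", "test"))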

Related

Get all tags for resources retrieved with AWS Config

I'm using an AWS Custom Config Rule created with Lambda. I'm using an example from the official AWS docs (Example Function for Periodic Evaluations).
resource_identifiers = []
for resource_type in resource_types:
    resource_identifiers.append(AWS_CONFIG_CLIENT.list_discovered_resources(resourceType=resource_type))
The above works fine. I end up with a list of dicts (all resources, with different resource types). A dict looks like this (source):
{
    'resourceIdentifiers': [
        {
            # resourceType can be any of the resource types AWS Config supports,
            # e.g. 'AWS::EC2::Instance', 'AWS::EC2::SecurityGroup', 'AWS::S3::Bucket',
            # 'AWS::RDS::DBInstance', 'AWS::Lambda::Function', ... (list truncated)
            'resourceType': 'AWS::EC2::CustomerGateway'|'AWS::EC2::Instance'|'...',
            'resourceId': 'string',
            'resourceName': 'string',
            'resourceDeletionTime': datetime(2015, 1, 1)
        },
    ],
    'nextToken': 'string'
}
Now how can I retrieve the tags for each resource? The resource type can differ. There is a method list_tags_for_resource, but it requires the resource ARN, which I don't know; I only know the id, type and name. I could try to construct an ARN for each type, but that would take too long and be too complex, and then I would have to instantiate a client for each resource_type and request the tags.
Is there a clean way to retrieve the tags for a resource?
You will just need to find the ARN of each resource; at times they will not show in the AWS console. Here are examples of three types of resources and their ARNs:
arn:aws:ec2:us-west-2:xxx:ec2/vol-xxx
arn:aws:ec2:us-west-2:xxx:snapshot/snap-xxx
arn:aws:ec2:us-west-2:xxx:instance/i-xxx
Then get the tags like below:
aws resourcegroupstaggingapi get-resources --profile xxx --region us-east-1
import boto3

AWS_REGION = "us-east-1"
AWS_PROFILE = "xxx"
session = boto3.session.Session(profile_name=AWS_PROFILE)

client = session.client('resourcegroupstaggingapi', region_name=AWS_REGION)
client.get_resources(
    TagFilters=[
        {
            'Key': 'Owner',
            'Values': [
                'xxxx'
            ]
        },
    ],
    ResourceTypeFilters=[
        's3'
    ]
)
To get a list of all tag values:
The following get-tag-values example displays all of the values used for the specified key for all resources in the region:
aws resourcegroupstaggingapi get-tag-values \
    --key=Environment
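If you prefer to stay in Python rather than shelling out to the CLI, here is a hedged boto3 sketch of the same idea; the region and the printed output are illustrative assumptions, not taken from your Config rule.

import boto3

# Sketch: the Resource Groups Tagging API returns ARNs together with their tags,
# so there is no need to construct ARNs per resource type.
client = boto3.client("resourcegroupstaggingapi", region_name="us-east-1")

arn_to_tags = {}
paginator = client.get_paginator("get_resources")
for page in paginator.paginate():
    for mapping in page["ResourceTagMappingList"]:
        arn_to_tags[mapping["ResourceARN"]] = {
            t["Key"]: t["Value"] for t in mapping.get("Tags", [])
        }

for arn, tags in arn_to_tags.items():
    print(arn, tags)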

Terraform aws_lambda_function Requires Docker Image In ECR

I have a module that creates all the infrastructure needed for a Lambda, including the ECR repository that stores the image:
resource "aws_ecr_repository" "image_storage" {
name = "${var.project}/${var.environment}/lambda"
image_tag_mutability = "MUTABLE"
image_scanning_configuration {
scan_on_push = true
}
}
resource "aws_lambda_function" "executable" {
function_name = var.function_name
image_uri = "${aws_ecr_repository.image_storage.repository_url}:latest"
package_type = "Image"
role = aws_iam_role.lambda.arn
}
The problem with this, of course, is that it fails: when aws_lambda_function runs, the repository exists but the image does not, because the image is uploaded by my CI/CD.
So this is a chicken-and-egg problem. Terraform is supposed to be used only for infrastructure, so I cannot/should not use it to upload an image (even a dummy one), but I cannot instantiate the infrastructure unless an image is uploaded between the repository and Lambda creation steps.
The only solution I can think of is to create the ECR repository separately from the Lambda and then somehow reference it as an existing AWS resource in my Lambda module, but that seems kind of clumsy.
Any suggestions?
I ended up using the following solution, where a dummy image is uploaded as part of resource creation.
resource "aws_ecr_repository" "listing" {
name = "myLambda"
image_tag_mutability = "MUTABLE"
image_scanning_configuration {
scan_on_push = true
}
provisioner "local-exec" {
command = <<-EOT
docker pull alpine
docker tag alpine dummy_container
docker push dummy_container
EOT
}
}
Building off @przemek-lach's answer plus @halloei's comment, I wanted to post a fully working ECR repository that gets provisioned with a dummy image:
data "aws_ecr_authorization_token" "token" {}
resource "aws_ecr_repository" "repository" {
name = "lambda-${local.name}-${local.environment}"
image_tag_mutability = "MUTABLE"
tags = local.common_tags
image_scanning_configuration {
scan_on_push = true
}
lifecycle {
ignore_changes = all
}
provisioner "local-exec" {
# This is a 1-time execution to put a dummy image into the ECR repo, so
# terraform provisioning works on the lambda function. Otherwise there is
# a chicken-egg scenario where the lambda can't be provisioned because no
# image exists in the ECR
command = <<EOF
docker login ${data.aws_ecr_authorization_token.token.proxy_endpoint} -u AWS -p ${data.aws_ecr_authorization_token.token.password}
docker pull alpine
docker tag alpine ${aws_ecr_repository.repository.repository_url}:SOME_TAG
docker push ${aws_ecr_repository.repository.repository_url}:SOME_TAG
EOF
}
}
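If you would rather seed the placeholder image from a script instead of a local-exec provisioner, the same steps can be expressed in Python. This is a hedged sketch only; the repository name, region and tag are illustrative assumptions, and Docker must be available locally.

import base64
import subprocess

import boto3

# Sketch: push a placeholder image into an existing ECR repository, mirroring
# what the local-exec provisioners above do, so the Lambda can be provisioned.
def seed_dummy_image(repository_name, region="us-east-1", tag="bootstrap"):
    ecr = boto3.client("ecr", region_name=region)
    repo = ecr.describe_repositories(repositoryNames=[repository_name])["repositories"][0]
    repo_uri = repo["repositoryUri"]

    auth = ecr.get_authorization_token()["authorizationData"][0]
    user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":", 1)
    registry = auth["proxyEndpoint"]

    subprocess.run(["docker", "login", "-u", user, "--password-stdin", registry],
                   input=password.encode(), check=True)
    subprocess.run(["docker", "pull", "alpine"], check=True)
    subprocess.run(["docker", "tag", "alpine", f"{repo_uri}:{tag}"], check=True)
    subprocess.run(["docker", "push", f"{repo_uri}:{tag}"], check=True)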

Terraform - multi-line JSON to single line?

I've created a JSON string via template/interpolation.
I need to pass that to local-exec, which in turn uses a PowerShell template to make a CLI call.
Originally I tried just referencing the JSON template in the PowerShell command itself:
--cli-input-json file://lfsetup.tpl
... however, the template does not get interpolated.
Next, I tried setting the JSON to a local. However, this is multi-line and the CLI does not like that. Maybe I could convert it to a single line?
Any suggestions or guidance welcome!
Thanks
JSON (.tpl or variable)
{
    "CatalogId": "${account_id}",
    "DataLakeSettings": {
        "DataLakeAdmins": [
            {
                "DataLakePrincipalIdentifier": "arn:aws:iam::${account_id}:role/Role1"
            },
            {
                "DataLakePrincipalIdentifier": "arn:aws:iam::${account_id}:role/Role2"
            }
        ],
        "CreateDatabaseDefaultPermissions": [],
        "CreateTableDefaultPermissions": []
    }
}
.tf
locals {
  assume_role_arn  = "arn:aws:iam::${local.account_id}:role/role_to_assume"
  lf_json_settings = templatefile("${path.module}/lfsetup.tpl", { account_id = local.account_id })
  cli_region       = "region"
}

resource "null_resource" "settings" {
  provisioner "local-exec" {
    command     = templatefile("${path.module}/scripts/settings.ps1", { role_arn = local.assume_role_arn, json_settings = local.lf_json_settings, region = local.cli_region })
    interpreter = ["pwsh", "-Command"]
  }
}
.ps1
$ErrorActionPreference = "Stop"
$json = aws sts assume-role --role-arn ${role_arn} --role-session-name sessionname
$accessTokens = ConvertFrom-Json (-join $json)
$env:AWS_ACCESS_KEY_ID = $accessTokens.Credentials.AccessKeyId
$env:AWS_SECRET_ACCESS_KEY = $accessTokens.Credentials.SecretAccessKey
$env:AWS_SESSION_TOKEN = $accessTokens.Credentials.SessionToken
aws lakeformation put-data-lake-settings --cli-input-json file://lfsetup.tpl --region ${region}
$env:AWS_ACCESS_KEY_ID = ""
$env:AWS_SECRET_ACCESS_KEY = ""
$env:AWS_SESSION_TOKEN = ""
For these attempts I put the template output into a local and passed the local to PowerShell, then tried variations with and without jsonencode and with replacing '\n'. I got strange results in some cases.
Use a file provisioner to create a .json file from the rendered .tpl file:
locals {
  ...
  settings_json_file = "/tmp/lfsetup.json"
}

resource "null_resource" "settings" {
  provisioner "file" {
    content     = templatefile("${path.module}/lfsetup.tpl", { account_id = local.account_id })
    destination = local.settings_json_file
  }

  provisioner "local-exec" {
    command     = templatefile("${path.module}/scripts/settings.ps1", { role_arn = local.assume_role_arn, json_settings = local.settings_json_file, region = local.cli_region })
    interpreter = ["pwsh", "-Command"]
  }
}
Update your .ps1 file:
Replace file://lfsetup.tpl with file://${json_settings}:
aws lakeformation put-data-lake-settings --cli-input-json file://${json_settings} --region ${region}
You may also use the jsonencode function.
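As an aside, "convert to a single line" is just JSON minification, which is effectively what jsonencode gives you. A small Python sketch of the idea (the document content below is an illustrative placeholder):

import json

# Sketch: collapse a multi-line JSON document into one compact line,
# the form that is easiest to pass through a single CLI argument.
multi_line = """
{
    "CatalogId": "123456789012",
    "DataLakeSettings": {
        "DataLakeAdmins": []
    }
}
"""

single_line = json.dumps(json.loads(multi_line), separators=(",", ":"))
print(single_line)  # {"CatalogId":"123456789012","DataLakeSettings":{"DataLakeAdmins":[]}}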

Terraform AWS API Gateway: configure method throttling for each API key

I want to configure usage plans, API keys and methods like below.
Basically, one AWS API Gateway has 10 methods, and I want to configure a different rate for each resource:

usage plan    api key   Resource  Method  Rate (requests per second)
usage plan1   apiKey1   /a        POST    1 qps
usage plan1   apiKey1   /b        POST    2 qps
usage plan2   apiKey2   /a        POST    4 qps
usage plan2   apiKey2   /b        POST    6 qps

But in aws_api_gateway_usage_plan I can only find usage plan settings per stage.
What Terraform resource can I use to configure the usage plan this way?
I want to achieve the feature described in Configure Method Throttling.
After checking, I think that, as of now, Terraform does not support this feature.
However, there is a workaround using an AWS CLI command.
Refer to this link:
https://github.com/terraform-providers/terraform-provider-aws/issues/5901
I have quoted the workaround here:
variable "method_throttling" {
type = "list"
description = "example method throttling"
default = [
"\\\"/<RESOURCE1>/<METHOD1>\\\":{\\\"rateLimit\\\":400,\\\"burstLimit\\\":150}",
"\\\"/<RESOURCE2>/<METHOD2>\\\":{\\\"rateLimit\\\":1000,\\\"burstLimit\\\":303}"
]
}
# locals
locals {
# Delimiter for later usage
delimiter = "'"
# Base aws cli command
base_command = "aws apigateway update-usage-plan --usage-plan-id ${aws_api_gateway_usage_plan.usage_plan.id} --patch-operations op"
# Later aws cli command
base_path = "path=/apiStages/${var.api_gateway_rest_api_id}:${var.api_gateway_stage_name}/throttle,value"
# Join method throttling variable to string
methods_string = "${local.delimiter}\"{${join(",", var.method_throttling)}}\"${local.delimiter}"
}
resource "null_resource" "method_throttling" {
count = "${length(var.method_throttling) != 0 ? 1 : 0}"
# create method throttling
provisioner "local-exec" {
when = "create"
command = "${local.base_command}=add,${local.base_path}=${local.methods_string}"
on_failure = "continue"
}
# edit method throttling
provisioner "local-exec" {
command = "${local.base_command}=replace,${local.base_path}=${local.methods_string}"
on_failure = "fail"
}
# delete method throttling
provisioner "local-exec" {
when = "destroy"
command = "${local.base_command}=remove,${local.base_path}="
on_failure = "fail"
}
triggers = {
usage_plan_change = "${aws_api_gateway_usage_plan.usage_plan.id}"
methods_change = "${local.methods_string}"
}
depends_on = [
"aws_api_gateway_usage_plan.usage_plan"
]
}
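For reference, the API call this workaround drives through the CLI can also be made from Python with boto3. A hedged sketch; the usage plan ID, REST API ID, stage name and limits are placeholders, not values from the configuration above.

import json

import boto3

# Sketch: set per-method throttling on a usage plan with a patch operation,
# the same operation the local-exec provisioners issue via "aws apigateway update-usage-plan".
client = boto3.client("apigateway", region_name="us-east-1")

method_throttle = {
    "/a/POST": {"rateLimit": 1.0, "burstLimit": 2},
    "/b/POST": {"rateLimit": 2.0, "burstLimit": 4},
}

client.update_usage_plan(
    usagePlanId="usage-plan-id",  # placeholder
    patchOperations=[
        {
            "op": "add",
            "path": "/apiStages/rest-api-id:stage-name/throttle",  # placeholder API id and stage
            "value": json.dumps(method_throttle),
        }
    ],
)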

Boto3: How to check if VPC already exists before creating it

I can create a VPC really quickly like this:
import boto3 as boto

inst = boto.Session(profile_name='myprofile').resource('ec2')

def createVpc(nid, az='us-west-2'):
    '''Create the VPC'''
    vpc = inst.create_vpc(CidrBlock='10.' + str(nid) + '.0.0/16')
    vpc.create_tags(
        Tags=[{'Key': 'Name', 'Value': 'VPC-' + nid}, ]
    )
    vpc.wait_until_available()

createVpc('111')
How can I check whether a VPC with CidrBlock: 10.111.0.0/16 or Name: VPC-111 already exists before it gets created? I actually want to do the same check prior to any AWS resource creation, but VPC is a start. Best!
EDIT:
I found that vpcs.filter can be used to query a given VPC's tags, e.g.:
fltr = [{'Name':'tag:Name', 'Values':['VPC-'+str(nid)]}]
list(inst.vpcs.filter(Filters=fltr))
which returns a list object like this: [ec2.Vpc(id='vpc-43e56b3b')]. A list of length 0 (zero) is a good indication of a non-existent VPC, but I was wondering if there is a more boto/AWS-native way of detecting that.
Yes, you need to use filters with the describe_vpcs API.
The code below will list all VPCs that match both the Name tag value and the CIDR block:
import boto3

client = boto3.client('ec2', region_name='us-east-1')

response = client.describe_vpcs(
    Filters=[
        {
            'Name': 'tag:Name',
            'Values': [
                '<Enter your VPC name here>',
            ]
        },
        {
            'Name': 'cidr-block-association.cidr-block',
            'Values': [
                '10.0.0.0/16',  # Enter your CIDR block here
            ]
        },
    ]
)

resp = response['Vpcs']
if resp:
    print(resp)
else:
    print('No vpcs found')
The CIDR block is the primary check for a VPC. I would suggest using the CIDR filter alone instead of combining it with the Name tag, as that way you can prevent creating VPCs with the same CIDR block.
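Putting the check and the creation together, here is a hedged sketch of a "create only if absent" helper; the function name and region default are assumptions, and it uses the low-level client rather than the resource interface from the question.

import boto3

# Sketch: create the VPC only when no VPC with the same CIDR block already exists.
def create_vpc_if_absent(nid, region='us-west-2'):
    ec2 = boto3.client('ec2', region_name=region)
    cidr = '10.' + str(nid) + '.0.0/16'

    existing = ec2.describe_vpcs(
        Filters=[{'Name': 'cidr-block-association.cidr-block', 'Values': [cidr]}]
    )['Vpcs']
    if existing:
        return existing[0]['VpcId']  # reuse the VPC that is already there

    vpc = ec2.create_vpc(CidrBlock=cidr)['Vpc']
    ec2.create_tags(Resources=[vpc['VpcId']],
                    Tags=[{'Key': 'Name', 'Value': 'VPC-' + str(nid)}])
    return vpc['VpcId']

print(create_vpc_if_absent('111'))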
