I can create a VPC quickly like this:
import boto3 as boto

inst = boto.Session(profile_name='myprofile').resource('ec2')

def createVpc(nid, az='us-west-2'):  # note: az is unused, and 'us-west-2' is a region, not an AZ
    '''Create the VPC'''
    vpc = inst.create_vpc(CidrBlock='10.' + str(nid) + '.0.0/16')
    vpc.create_tags(
        Tags=[{'Key': 'Name', 'Value': 'VPC-' + nid}]
    )
    vpc.wait_until_available()

createVpc('111')
How can I check whether a VPC with CidrBlock 10.111.0.0/16 or Name VPC-111 already exists before it gets created? I actually want to do the same check prior to creating any AWS resource, but the VPC is a start. Best!
EDIT:
I found that vpcs.filter can be used to query a given VPC's tags, e.g.:
fltr = [{'Name':'tag:Name', 'Values':['VPC-'+str(nid)]}]
list(inst.vpcs.filter(Filters=fltr))
which returns a list object like this: [ec2.Vpc(id='vpc-43e56b3b')]. A list of length 0 (zero) is a good indication of a non-existent VPC, but I was wondering whether there is a more boto/AWS way of detecting that.
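A minimal sketch of wrapping that filter in an existence check, reusing the inst resource from above (the helper name vpc_exists and the VPC-<nid> tag scheme are my assumptions, not a boto3 convention):

def vpc_exists(nid):
    '''Return True if a VPC tagged VPC-<nid> already exists.'''
    fltr = [{'Name': 'tag:Name', 'Values': ['VPC-' + str(nid)]}]
    # any() stops at the first match instead of materializing the whole list
    return any(inst.vpcs.filter(Filters=fltr))

if not vpc_exists('111'):
    createVpc('111')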
Yes, you need to use filters with the describe_vpcs API.
The code below will list all VPCs that match both the Name tag value and the CIDR block:
import boto3

client = boto3.client('ec2', region_name='us-east-1')

response = client.describe_vpcs(
    Filters=[
        {
            'Name': 'tag:Name',
            'Values': [
                '<Enter your VPC name here>',
            ]
        },
        {
            'Name': 'cidr-block-association.cidr-block',
            'Values': [
                '10.0.0.0/16',  # Enter your CIDR block here
            ]
        },
    ]
)

resp = response['Vpcs']
if resp:
    print(resp)
else:
    print('No vpcs found')
The CIDR block is the primary check for a VPC. I would suggest using the CIDR filter alone instead of combining it with the Name tag, since that prevents creating VPCs with the same CIDR block.
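Putting the check and the creation together, a minimal create-if-absent sketch reusing the client from above (the function name and the 10.<nid>.0.0/16 scheme follow the question and are assumptions, not AWS conventions):

def create_vpc_if_absent(nid):
    cidr = '10.' + str(nid) + '.0.0/16'
    existing = client.describe_vpcs(
        Filters=[{'Name': 'cidr-block-association.cidr-block', 'Values': [cidr]}]
    )['Vpcs']
    if existing:
        print('VPC with CIDR %s already exists: %s' % (cidr, existing[0]['VpcId']))
        return existing[0]['VpcId']
    # No match, so create the VPC
    vpc = client.create_vpc(CidrBlock=cidr)
    return vpc['Vpc']['VpcId']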
I would like to skip adding a VPC to a Lambda function in a certain environment. The current Terraform code to update the VPC is below:
data "aws_subnet" "lambda-private-subnet_1" {
availability_zone = var.environment_type_tag != "prd" ? "us-east-1a" : null
dynamic "filter" {
for_each = var.environment_type_tag == "prd" ? [] : [1]
content {
name = "tag:Name"
values = [var.subnet_value]
}
}
}
resource "aws_lambda_function" "tests" {
dynamic "vpc_config" {
for_each = var.environment_type_tag == "prd" ? [] : [1]
content {
subnet_ids = [data.aws_subnet.lambda-private-subnet_1.id]
security_group_ids = [var.security_group]
}
}
}
During 'terraform plan', the output is:
Error: multiple EC2 Subnets matched; use additional constraints to reduce matches to a single EC2 Subnet
I would like to skip the 'data "aws_subnet"' block if it's the 'prd' environment type.
So there are four different questions in this question. We can attempt to answer each one:
dynamic block to skip vpc config to lambda
This is already occurring: the dynamic blocks are "skipped" in prd with the current code.
I would like to skip adding a VPC to a Lambda function in a certain environment.
If you mean "subnet" instead of "vpc", then this is also already occurring with the given code. Otherwise, please update with the vpc config.
Error: multiple EC2 Subnets matched; use additional constraints to reduce matches to a single EC2 Subnet
The error message means that your filters match multiple subnets outside of prd, and therefore you need to constrain the filter conditions further.
I would like to skip the 'data "aws_subnet"' block if it's the 'prd' environment type.
You just need to extend your current code to make the data source optional:

data "aws_subnet" "lambda-private-subnet_1" {
  for_each = var.environment_type_tag == "prd" ? toset([]) : toset(["this"])
  ...
}
Keep the for_each on the dynamic block (it still controls whether vpc_config is emitted in prd), and update the attribute references with instance keys accordingly:

subnet_ids = [data.aws_subnet.lambda-private-subnet_1["this"].id]
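Put together, a sketch of the whole pattern (variable names follow the question; function_name, role, and the other required aws_lambda_function arguments are placeholders I've assumed). Note that once the whole data source is conditional, the availability_zone conditional and the dynamic filter block inside it are no longer needed:

data "aws_subnet" "lambda-private-subnet_1" {
  # Zero instances in prd, one instance (keyed "this") everywhere else.
  for_each          = var.environment_type_tag == "prd" ? toset([]) : toset(["this"])
  availability_zone = "us-east-1a"

  filter {
    name   = "tag:Name"
    values = [var.subnet_value]
  }
}

resource "aws_lambda_function" "tests" {
  function_name = "tests"             # placeholder
  role          = var.lambda_role_arn # placeholder
  # ... other required arguments ...

  dynamic "vpc_config" {
    # The content (and its ["this"] reference) is only evaluated outside prd.
    for_each = var.environment_type_tag == "prd" ? [] : [1]
    content {
      subnet_ids         = [data.aws_subnet.lambda-private-subnet_1["this"].id]
      security_group_ids = [var.security_group]
    }
  }
}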
I'm using an AWS custom Config rule created with Lambda, based on an example from the official AWS docs (Example Function for Periodic Evaluations).
resource_identifiers = []
for resource_type in resource_types:
    resource_identifiers.append(AWS_CONFIG_CLIENT.list_discovered_resources(resourceType=resource_type))
The above works fine. I end up with a list of dicts (all resources, with different resource types). A dict looks like this (source):
{
    'resourceIdentifiers': [
        {
            'resourceType': 'AWS::EC2::CustomerGateway' | 'AWS::EC2::EIP' | 'AWS::EC2::Instance' | ...,  # truncated; any of the many resource types supported by AWS Config
            'resourceId': 'string',
            'resourceName': 'string',
            'resourceDeletionTime': datetime(2015, 1, 1)
        },
    ],
    'nextToken': 'string'
}
Now how can I retrieve the tags for each resource? The resource types differ. There is a method list_tags_for_resource, but it requires the resource ARN, which I don't know; I only know the id, type, and name. I could construct an ARN for each type, but that would take too long and be too complex, and I would then have to instantiate a client for each resource type to request the tags.
Is there a clear way on how to retrieve the tags for a resource?
You will just need to find the ARN of each resource; at times they will not show in the AWS console. Here are examples of three resource types and their ARNs:
arn:aws:ec2:us-west-2:xxx:volume/vol-xxx
arn:aws:ec2:us-west-2:xxx:snapshot/snap-xxx
arn:aws:ec2:us-west-2:xxx:instance/i-xxx
Then get the tags like below:

aws resourcegroupstaggingapi get-resources --profile xxx --region us-east-1
import boto3

AWS_REGION = "us-east-1"
AWS_PROFILE = "xxx"

session = boto3.session.Session(profile_name=AWS_PROFILE)
client = session.client('resourcegroupstaggingapi', region_name=AWS_REGION)

client.get_resources(
    TagFilters=[
        {
            'Key': 'Owner',
            'Values': [
                'xxxx'
            ]
        },
    ],
    ResourceTypeFilters=[
        's3'
    ]
)
To get a list of all tag values:
The following get-tag-values example displays all of the values used for the specified key for all resources in the region:
aws resourcegroupstaggingapi get-tag-values \
--key=Environment
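If you want to go from the Config identifiers directly to tags, a sketch of building ARNs and querying the Resource Groups Tagging API in bulk (the ARN templates cover only two EC2 types and are assumptions you would extend per resource type):

import boto3

session = boto3.session.Session(profile_name="xxx")
account_id = session.client("sts").get_caller_identity()["Account"]
tagging = session.client("resourcegroupstaggingapi", region_name="us-east-1")

# Hypothetical resourceType-to-ARN templates; extend for the types you discover.
ARN_TEMPLATES = {
    "AWS::EC2::Instance": "arn:aws:ec2:us-east-1:{acct}:instance/{rid}",
    "AWS::EC2::Volume":   "arn:aws:ec2:us-east-1:{acct}:volume/{rid}",
}

def tags_for(resource_type, resource_id):
    arn = ARN_TEMPLATES[resource_type].format(acct=account_id, rid=resource_id)
    resp = tagging.get_resources(ResourceARNList=[arn])
    mappings = resp["ResourceTagMappingList"]
    return mappings[0]["Tags"] if mappings else []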
I have a Terraform module that creates an S3 bucket based on whether a variable creates3bucket is true or false.
The resource block looks like this.
# CodePipeline S3 bucket artifact store
resource "aws_s3_bucket" "LambdaCodePipelineBucket" {
  count  = var.creates3bucket ? 1 : 0
  bucket = var.lambdacodepipelinebucketname
}
I output the bucket ARN in the outputs.tf file like this:
output "codepipelines3bucketarn"{
description = "CodePipeline S3 Bucket arn"
value = aws_s3_bucket.LambdaCodePipelineBucket[*].arn
}
From the calling module I want to pass this ARN value into the bucket policy. This works fine when the bucket is not an indexed resource, but terraform plan complains when there is a count associated with the bucket.
From the calling module I pass the bucket policy like this:
cps3bucketpolicy = jsonencode({
  Version = "2012-10-17"
  Id      = "LambdaCodePipelineBucketPolicy"
  Statement = [
    {
      Sid    = "AllowPipelineRoles"
      Effect = "Allow"
      Principal = {
        AWS = ["${module.lambdapipeline.codepipelinerolearn}"]
      }
      Action = "s3:*"
      Resource = [
        "${module.lambdapipeline.codepipelines3bucketarn}",
        "${module.lambdapipeline.codepipelines3bucketarn}/*",
      ]
    },
    {
      Sid : "AllowSSLRequestsOnly",
      Effect : "Deny",
      Principal : "*",
      Action : "*",
      Resource : [
        "${module.lambdapipeline.codepipelines3bucketarn}",
        "${module.lambdapipeline.codepipelines3bucketarn}/*",
      ],
      Condition : {
        Bool : {
          "aws:SecureTransport" : "false"
        }
      }
    }
  ]
})
Terraform plan error: for some reason, once I added the count to the S3 bucket resource, Terraform does not like "${module.lambdapipeline.codepipelines3bucketarn}/*" in the policy.
How do I pass the bucket ARN into the policy from the calling module?
Like Marko E. wrote, you need to use the indexed resource; with the [*] splat, the output is a list, which cannot be interpolated into a string. In your case, you should use this:
output "codepipelines3bucketarn"{
description = "CodePipeline S3 Bucket arn"
value = aws_s3_bucket.LambdaCodePipelineBucket[0].arn
}
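If you are on Terraform 0.15 or later, a slightly safer variant of that output is one(), which returns null instead of failing when count is 0 (a sketch, not part of the original answer):

output "codepipelines3bucketarn" {
  description = "CodePipeline S3 Bucket arn"
  # one() returns the single ARN, or null when the bucket is not created
  value       = one(aws_s3_bucket.LambdaCodePipelineBucket[*].arn)
}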
But in your case, with var.creates3bucket set to false, there would be no bucket instance for the output to reference. So I conclude that either the bucket already exists or you will create it. If that is the case, use a data source for your policy:
data "aws_s3_bucket" "LambdaCodePipelineBucket" {
bucket = var.lambdacodepipelinebucketname
}
and change in your policy
"${module.lambdapipeline.codepipelines3bucketarn}"
to
"${data.aws_s3_bucket.LambdaCodePipelineBucket.arn"
Now, the only "error" will be, if the bucket is not available (then justs set your variable to true and the data source will find a bucket.
I would like to exclude a given string from a list of strings in Terraform.
Example:
I have the following data source as a variable:
region_list = data.oci_identity_region_subscriptions.region_subscriptions.region_subscriptions.*.region_name
Now I would like to exclude a region from it: exclude "us-ashburn-1" from region_list. Any thoughts on how to do that?
The easiest way to remove a set of values from another set is setsubtract():
locals {
  regions = ["us-west-2", "us-west-1", "us-east-2", "us-ashburn-1"]
}

output "excluded" {
  value = setsubtract(local.regions, ["us-ashburn-1"])
}
Outputs:

excluded = [
  "us-east-2",
  "us-west-1",
  "us-west-2",
]
If you want to keep the order or duplicates of a list, then using a for expression, as mentioned in another answer, is preferred.
You can do this with a for expression and an if condition in Terraform.
Example Terraform configuration:
variable "regions" {
type = list
default = ["us-west-2", "us-west-1", "us-east-2", "us-east-1"]
}
output "excluded" {
value = [for region in var.regions : region if region != "us-east-1"]
}
The above config will output all the regions except us-east-1.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
excluded = [
  "us-west-2",
  "us-west-1",
  "us-east-2",
]
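To exclude several regions at once, the same pattern generalizes with contains() (a sketch; the excluded_regions variable is an assumption):

variable "excluded_regions" {
  type    = list(string)
  default = ["us-east-1", "us-east-2"]
}

output "excluded" {
  # Keep only the regions not present in the exclusion list
  value = [for region in var.regions : region if !contains(var.excluded_regions, region)]
}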
Question
Is there a way to get the assigned IP address of an aws_lb resource at the time the aws_lb is created by Terraform?
As in the AWS documentation (NLB - To find the private IP addresses to whitelist), we can find the IP addresses associated with the ELB:
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
In the navigation pane, choose Network Interfaces.
In the search field, type the name of your Network Load Balancer.
There is one network interface per load balancer subnet.
On the Details tab for each network interface, copy the address from Primary private IPv4 IP.
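Those console steps can also be scripted; a minimal boto3 sketch of the same lookup (the region and the NLB name my-nlb are placeholders):

import boto3

ec2 = boto3.client('ec2', region_name='us-west-2')  # placeholder region
response = ec2.describe_network_interfaces(
    # NLB-owned ENIs carry a description of the form "ELB net/<nlb-name>/<id>"
    Filters=[{'Name': 'description', 'Values': ['ELB net/my-nlb/*']}]
)
print([eni['PrivateIpAddress'] for eni in response['NetworkInterfaces']])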
Background
This is to be able to set up a security group that whitelists the ELB IP addresses, since a Network Load Balancer cannot have a security group of its own (as in Network Load Balancers don't have Security Groups).
I considered aws_network_interface, but it does not work and fails with an error:
Error: no matching network interface found
Also, I think a data source assumes the resource already exists, so it cannot be used for a resource that is yet to be created by Terraform.
A more elegant solution using only HCL in Terraform:
data "aws_network_interface" "lb" {
for_each = var.subnets
filter {
name = "description"
values = ["ELB ${aws_lb.example_lb.arn_suffix}"]
}
filter {
name = "subnet-id"
values = [each.value]
}
}
resource "aws_security_group" "lb_sg" {
vpc_id = var.vpc_id
ingress {
from_port = 0
to_port = 0
protocol = "tcp"
cidr_blocks = formatlist("%s/32", [for eni in data.aws_network_interface.lb : eni.private_ip])
description = "Allow connection from NLB"
}
}
Source: https://github.com/terraform-providers/terraform-provider-aws/issues/3007
Hope this helps.
The solution from #user1297406 leads to an exception: data.aws_network_interface.lb is a tuple with 2 elements. The correct syntax is:
data "aws_network_interface" "lb" {
count = length(var.vpc_private_subnets)
filter {
name = "description"
values = ["ELB ${aws_alb.lb.arn_suffix}"]
}
filter {
name = "subnet-id"
values = [var.vpc_private_subnets[count.index]]
}
}
resource "aws_security_group_rule" "lb_sg" {
from_port = 0
protocol = "TCP"
to_port = 0
type = "ingress"
cidr_blocks = formatlist("%s/32", data.aws_network_interface.lb.*.private_ip)
}
Using the external provider
Get the NLB IPs using Python/boto3, invoked from the external provider.
nlb_private_ips.tf
variable "nlb_name" {
}
variable "vpc_id" {
}
variable "region" {
}
data "external" "get_nlb_ips" {
program = ["python", "${path.module}/get_nlb_private_ips.py"]
query = {
aws_nlb_name = "${var.nlb_name}"
aws_vpc_id = "${var.vpc_id}"
aws_region = "${var.region}"
}
}
output "aws_nlb_ip_decoded" {
value = "${jsondecode(data.external.get_nlb_ips.result.private_ips)}"
}
output "aws_nlb_ip_encoded" {
value = "${data.external.get_nlb_ips.result.private_ips}"
}
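A sketch of consuming the decoded IPs in a security group rule (the rule and the security_group_id variable are assumptions, not part of the original snippet):

resource "aws_security_group_rule" "from_nlb" {
  type              = "ingress"
  from_port         = 0
  to_port           = 0
  protocol          = "tcp"
  cidr_blocks       = formatlist("%s/32", jsondecode(data.external.get_nlb_ips.result.private_ips))
  security_group_id = var.security_group_id # assumed variable
}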
get_nlb_private_ips.py
import boto3
import json
import sys


def json_serial(obj):
    """JSON serializer for objects not serializable by default json code
    Args:
        obj: object to serialize into JSON
    """
    _serialize = {
        "int": lambda o: int(o),
        "float": lambda o: float(o),
        "decimal": lambda o: float(o) if o % 1 > 0 else int(o),
        "date": lambda o: o.isoformat(),
        "datetime": lambda o: o.isoformat(),
        "str": lambda o: o,
    }
    return _serialize[type(obj).__name__.lower()](obj)


def pretty_json(dictionary):
    """Pretty print a Python dictionary
    Args:
        dictionary: Python dictionary
    Returns:
        Pretty JSON
    """
    return json.dumps(dictionary, indent=2, default=json_serial, sort_keys=True)


def get_nlb_private_ips(data):
    ec2 = boto3.client('ec2', region_name=data['aws_region'])
    response = ec2.describe_network_interfaces(
        Filters=[
            {
                'Name': 'description',
                'Values': [
                    "ELB net/{AWS_NLB_NAME}/*".format(AWS_NLB_NAME=data['aws_nlb_name'])
                ]
            },
            {
                'Name': 'vpc-id',
                'Values': [data['aws_vpc_id']]
            },
            {
                'Name': 'status',
                'Values': ["in-use"]
            },
            {
                'Name': 'attachment.status',
                'Values': ["attached"]
            }
        ]
    )
    interfaces = response['NetworkInterfaces']
    # --------------------------------------------------------------------------------
    # Private IP addresses associated to an interface (ENI).
    # Each association has the format:
    # {
    #     "Association": {
    #         "IpOwnerId": "693054447076",
    #         "PublicDnsName": "ec2-52-88-47-177.us-west-2.compute.amazonaws.com",
    #         "PublicIp": "52.88.47.177"
    #     },
    #     "Primary": true,
    #     "PrivateDnsName": "ip-10-5-1-205.us-west-2.compute.internal",
    #     "PrivateIpAddress": "10.5.1.205"
    # }
    # --------------------------------------------------------------------------------
    associations = [
        association for interface in interfaces
        for association in interface['PrivateIpAddresses']
    ]
    # Get the IP from each IP association
    private_ips = [
        association['PrivateIpAddress'] for association in associations
    ]
    return private_ips


def load_json():
    return json.load(sys.stdin)


def main():
    data = load_json()
    ips = get_nlb_private_ips(data)
    # The external provider expects a JSON object whose values are strings
    print(json.dumps({"private_ips": json.dumps(ips)}))


if __name__ == '__main__':
    main()
Using the aws_network_interfaces data source
After the aws_lb has been created:
data "aws_network_interfaces" "this" {
filter {
name = "description"
values = ["ELB net/${aws_lb.this.name}/*"]
}
filter {
name = "vpc-id"
values = ["${var.vpc_id}"]
}
filter {
name = "status"
values = ["in-use"]
}
filter {
name = "attachment.status"
values = ["attached"]
}
}
locals {
nlb_interface_ids = "${flatten(["${data.aws_network_interfaces.this.ids}"])}"
}
data "aws_network_interface" "ifs" {
count = "${length(local.nlb_interface_ids)}"
id = "${local.nlb_interface_ids[count.index]}"
}
output "aws_lb_network_interface_ips" {
value = "${flatten([data.aws_network_interface.ifs.*.private_ips])}"
}
I would suggest using the dns_a_record_set data source to get the IPs:
data "dns_a_record_set" "nlb_ips" {
host = aws_lb.<your_alb>.dns_name
}
You can find the documentation under https://registry.terraform.io/providers/hashicorp/dns/latest/docs/data-sources/dns_a_record_set
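For example, to expose the resolved addresses (addrs is the attribute this data source documents; note the NLB must already exist for the DNS lookup to resolve):

output "nlb_private_ips" {
  value = data.dns_a_record_set.nlb_ips.addrs
}

These addresses can then be turned into /32 CIDR blocks with formatlist("%s/32", ...) for a security group rule, as in the other answers.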