aws_directory_service_directory in Terraform leaves SG wide open

I've built an AD directory with Terraform in AWS, but Security Hub recently pointed out that the SG it created has a bunch of ports wide open to 0.0.0.0/0. Thankfully, I have it in a VPC with internal subnets only, but this is definitely not a great practice and I'd rather set the SG inbound CIDRs to my local VPC network range. Is that possible to change? I don't see a way to get to the SG, other than getting its ID.
This is how I created it:
resource "aws_directory_service_directory" "ad" {
name = local.ad_hostname
short_name = "CORP"
password = random_password.ad_admin_password.result
edition = "Standard"
type = "MicrosoftAD"
vpc_settings {
vpc_id = local.vpc_id
subnet_ids = slice(local.pvt_subnets, 0, 2)
}
}
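For anyone hitting the same Security Hub finding, a minimal sketch of one direction to try: the aws_directory_service_directory resource exports a security_group_id attribute for the SG it creates, so you can at least attach additional, tighter rules to it from Terraform. The local.vpc_cidr value below is an assumption for illustration, and as far as I know Terraform will not remove the existing 0.0.0.0/0 rules unless you import the group and manage its rules yourself.

# Sketch only: adds one extra rule scoped to the VPC CIDR on the SG the
# directory service created; it does not remove the wide-open rules.
resource "aws_security_group_rule" "ad_dns_from_vpc" {
  type              = "ingress"
  security_group_id = aws_directory_service_directory.ad.security_group_id
  cidr_blocks       = [local.vpc_cidr]  # assumed local holding the VPC range
  from_port         = 53                # one AD port, purely as an example
  to_port           = 53
  protocol          = "tcp"
}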

Related

What does `cluster_security_group_id` do in the Terraform EKS module?

From the documentation:
Description: Existing security group ID to be attached to the cluster. Required if create_cluster_security_group = false
I have set create_cluster_security_group = false and assigned a security group (let's call it data-sg) that I've created elsewhere to the cluster_security_group_id input. This security group is also attached to a couple of RDS instances, and data-sg has a rule allowing all ingress within itself:
module "data_sg" {
source = "terraform-aws-modules/security-group/aws"
version = "4.9.0"
name = "data-sg"
vpc_id = module.vpc.vpc_id
ingress_with_self = [
{
description = "All ingress within data-sg"
rule = "all-all"
self = true
}
]
...
}
Yet, pods in my EKS cluster cannot reach the RDS instances.
It seems that the pods are not within the data-sg security group, which raises the first question:
What is cluster_security_group_id actually used for?
Assuming this security group isn't actually being used in the way I thought, that raises a second question:
What security group do I need to allow RDS-type ingress from in the data-sg security group?
It seems there's an output from the EKS module called cluster_primary_security_group_id, but adding this
ingress_with_source_security_group_id = [
  {
    description              = "RDS ingress from EKS cluster"
    from_port                = 5432
    to_port                  = 5432
    source_security_group_id = var.cluster_primary_security_group_id
  }
]
to the module "data_sg" block above also does not allow me to connect to RDS from the EKS pods.
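A hedged sketch of the direction that usually works here: unless the cluster uses "security groups for pods", pod traffic leaves through the worker nodes' ENIs, so it is the node security group (not the cluster/control-plane SG) that needs to be allowed into data-sg. The output name node_security_group_id is what recent versions of the terraform-aws-modules/eks module expose; older versions call it worker_security_group_id, so treat the exact name as an assumption.

# Added inside the data_sg module block; assumes module.eks is the EKS module call.
ingress_with_source_security_group_id = [
  {
    description              = "Postgres from EKS worker nodes"
    from_port                = 5432
    to_port                  = 5432
    protocol                 = "tcp"
    source_security_group_id = module.eks.node_security_group_id
  }
]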

terraform - how to make aws_subnet resource take next available cidr_block?

I am using this code:
resource "aws_subnet" "subnet" {
vpc_id = var.vpc_id
cidr_block = var.subnet_cidr_block
availability_zone = var.availability_zone
}
And in variables.tf:
variable "subnet_cidr_block" {
type = string
default = "<some ip address>/<some number>"
description = "the block for your subnet, please check on AWS which address is available in the VPC"
}
Since this is the default VPC, there are several subnets inside it already, so every time I use this code I need to log into AWS and check what the next available address is.
Is there any way that Terraform can pick the next available address and take it? Any function?
Thank you.
Terraform can only work with what the underlying AWS API provides, and as far as I know there is no mechanism in EC2 for explicitly tracking allocations of subnets in a way that allows you to request to be assigned a new one without deciding ahead of time what address it will have. If the EC2 API had such a feature then in principle you could omit cidr_block entirely when declaring a subnet and have the remote system determine automatically which range to use, in a similar manner to what happens if you declare an aws_instance without specifying a specific private_ip. However, there is no such feature in the API.
With Terraform itself, the best we can do is to make a single Terraform configuration that itself defines the whole address allocation scheme and then use that resulting data structure to populate all of the aws_subnet blocks. That does require you to centrally manage the address space all in one place, because otherwise there is nothing to keep track of what has already been allocated and what has not.
Because this is a common situation, HashiCorp offers a utility module hashicorp/subnets/cidr which takes a list of all of your subnet allocations and encapsulates the calculations needed to produce the CIDR block for each one:
module "subnet_addrs" {
source = "hashicorp/subnets/cidr"
base_cidr_block = "10.0.0.0/16"
networks = [
{
name = "us-west-2a"
new_bits = 8
},
{
name = "us-west-2b"
new_bits = 8
},
]
}
resource "aws_vpc" "example" {
cidr_block = module.subnet_addrs.base_cidr_block
}
resource "aws_subnet" "example" {
for_each = module.subnet_addrs.network_cidr_blocks
vpc_id = aws_vpc.example.id
availability_zone = each.key
cidr_block = each.value
}
The README for the module explains what rules you need to abide by in order to change the address space allocations over time without disrupting existing networks, so if you do choose to use this module be sure to read its documentation in addition to my example above (which I copied from the README).
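For a concrete sense of what that for_each iterates over: with the inputs above, module.subnet_addrs.network_cidr_blocks should evaluate to roughly the following map, since each child carved out of a /16 with 8 new bits is a /24:

{
  "us-west-2a" = "10.0.0.0/24"
  "us-west-2b" = "10.0.1.0/24"
}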

Terraform: creation of resources with nested loop

I have created a bunch of VPC endpoints in one AWS account and need to share them with other VPCs across different accounts. A breakdown of my Terraform code is provided below.
I created the Route 53 private hosted zones (Resource #1, for easy reference) as below:
resource "aws_route53_zone" "private" {
for_each = local.vpc_endpoints
name = each.value.phz
vpc {
vpc_id = var.vpc_id
}
lifecycle {
ignore_changes = [vpc]
}
tags = {
Name = each.key
}
}
Then I created the VPC association authorizations (Resource #2). The ID of the VPC that the endpoints are to be shared with is passed in as a variable, as shown below:
resource "aws_route53_vpc_association_authorization" "example" {
for_each = local.vpc_endpoints
vpc_id = var.vpc
zone_id = aws_route53_zone.private[each.key].zone_id
}
Finally, I created the VPC associations (Resource #3):
resource "aws_route53_zone_association" "target" {
for_each = aws_route53_vpc_association_authorization.example
provider = aws.target-account
vpc_id = aws_route53_vpc_association_authorization.example[each.key].vpc_id
zone_id = aws_route53_vpc_association_authorization.example[each.key].zone_id
}
Everything works fine for the first VPC (vpc-A). But now I need to share the same hosted zones (Resource #1) with a different VPC (vpc-B), and with more VPCs in the future. This means I need to repeat the creation of aws_route53_vpc_association_authorization (Resource #2) for every new VPC as well, ideally by looping through a list of VPC IDs.
But I am unable to do that, as nested for_each loops are not supported. I tried other options like count + for_each, but nothing helped.
Could you provide some guidance on how to achieve this?
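A common pattern for this is to flatten the two dimensions (endpoint × target VPC) into a single map and drive one for_each from it. A minimal sketch, reworking Resource #2 with a hypothetical variable target_vpc_ids holding the list of VPC IDs to share with:

variable "target_vpc_ids" {
  type    = list(string)
  default = []
}

locals {
  # One entry per (endpoint, target VPC) combination, keyed by a stable string.
  zone_vpc_pairs = {
    for pair in setproduct(keys(local.vpc_endpoints), var.target_vpc_ids) :
    "${pair[0]}-${pair[1]}" => {
      endpoint_key = pair[0]
      vpc_id       = pair[1]
    }
  }
}

resource "aws_route53_vpc_association_authorization" "example" {
  for_each = local.zone_vpc_pairs

  vpc_id  = each.value.vpc_id
  zone_id = aws_route53_zone.private[each.value.endpoint_key].zone_id
}

The downstream aws_route53_zone_association (Resource #3) can keep its for_each over aws_route53_vpc_association_authorization.example unchanged, since it already iterates over whatever instances that resource produces.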

How to create subnets inside virtual network and security rules inside nsg using loop concept in terraform

I'm trying to create a network security group with multiple security rules, and a virtual network along with five subnets, each in one script.
For that, I have referred to the azurerm_virtual_network and azurerm_subnet_network_security_group_association documentation.
The above documentation contains code with hardcoded values. But I want to use a loop to create the subnets inside the virtual network and the security rules inside the network security group, and then associate each subnet with a network security group.
Thanks in advance for the help!
In order to "loop" you can use for_each over a variable, and instead of placing the values in main.tf you can supply them from a .tfvars file to drive the number of resources created.
As this is quite advanced, you would be better off dissecting/reusing something that's already available. Take a look at the Azurerm subnet modules from Claranet, available on the modules page of the Terraform Registry (and there are a ton more to explore!). Here's how you would define the NSGs, vnet and subnets in locals, at a glance:
locals {
  network_security_group_names = ["nsg1", "nsg2", "nsg3"]
  vnet_cidr                    = "10.0.1.0/24"

  subnets = [
    {
      name              = "subnet1"
      cidr              = ["10.0.1.0/26"]
      service_endpoints = ["Microsoft.Storage", "Microsoft.KeyVault", "Microsoft.ServiceBus", "Microsoft.Web"]
      nsg_name          = local.network_security_group_names[0]
      vnet_name         = module.azure-network-vnet.virtual_network_name
    },
    {
      name              = "subnet2"
      cidr              = ["10.0.1.64/26"]
      service_endpoints = ["Microsoft.Storage", "Microsoft.KeyVault", "Microsoft.ServiceBus", "Microsoft.Web"]
      nsg_name          = local.network_security_group_names[2]
      vnet_name         = module.azure-network-vnet.virtual_network_name
    }
  ]
}
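To show the loop itself, here is a minimal sketch of how locals like these could drive plain azurerm resources with for_each (the Claranet modules wrap the same idea). The resource group reference azurerm_resource_group.rg, and the use of azurerm_subnet directly rather than the vnet module's own subnet handling, are assumptions for illustration:

# One NSG per name in the list.
resource "azurerm_network_security_group" "nsg" {
  for_each            = toset(local.network_security_group_names)
  name                = each.value
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

# One subnet per entry in local.subnets, keyed by subnet name.
resource "azurerm_subnet" "subnet" {
  for_each             = { for s in local.subnets : s.name => s }
  name                 = each.value.name
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = each.value.vnet_name
  address_prefixes     = each.value.cidr
  service_endpoints    = each.value.service_endpoints
}

# Associate each subnet with the NSG named in its definition.
resource "azurerm_subnet_network_security_group_association" "assoc" {
  for_each                  = { for s in local.subnets : s.name => s }
  subnet_id                 = azurerm_subnet.subnet[each.key].id
  network_security_group_id = azurerm_network_security_group.nsg[each.value.nsg_name].id
}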

Is there a way to merge terraform variables to use same module across multiple AWS regions?

I'm brand new to Terraform, and I'm utilizing Terragrunt to help me get things rolling. I have a decent amount of infrastructure to migrate and get set up with Terraform, but I'm getting my feet underneath me first. We have multiple VPCs in different regions with a lot of the same security group rules (web, db, etc.) that I want to replicate across each region.
I have a simple example of how I currently have an EC2 module set up to recreate the security group rules, and I was wondering if there's a better way to organize this code so I don't have to create a new module for the same SG rule in each region, i.e. some smart way to utilize lists for my VPCs, providers, etc.
Since this is just one SG rule across two regions, I'm trying to avoid this growing ugly as we scale up to even more regions and I add multiple SG rules.
My state is currently stored in S3, and in this setup I pull the state so I can access the VPC outputs from another module I used to create the VPCs:
terraform {
  backend "s3" {}
}

provider "aws" {
  version = "~> 1.31.0"
  region  = "${var.region}"
  profile = "${var.profile}"
}

provider "aws" {
  version = "~> 1.31.0"
  alias   = "us-west-1"
  region  = "us-west-1"
  profile = "${var.profile}"
}

#################################
# Data sources to get VPC details
#################################
data "terraform_remote_state" "vpc" {
  backend = "s3"

  config {
    bucket  = "${var.vpc_remote_state_bucket}"
    key     = "${var.vpc_remote_state_key}"
    region  = "${var.region}"
    profile = "${var.profile}"
  }
}

#####################
# Security group rule
#####################
module "east1_vpc_web_server_sg" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "2.5.0"

  name        = "web-server"
  description = "Security group for web-servers with HTTP ports open within the VPC"
  vpc_id      = "${data.terraform_remote_state.vpc.us_east_vpc1_id}"

  # Allow VPC public subnets to talk to each other for API's
  ingress_cidr_blocks = ["${data.terraform_remote_state.vpc.us_east_vpc1_public_subnets_cidr_blocks}"]
  ingress_rules       = ["https-443-tcp", "http-80-tcp"]

  # List of maps
  ingress_with_cidr_blocks = "${var.web_server_ingress_with_cidr_blocks}"

  # Allow egress all protocols to outside
  egress_rules = ["all-all"]

  tags = {
    Terraform   = "true"
    Environment = "${var.environment}"
  }
}

module "west1_vpc_web_server_sg" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "2.5.0"

  providers = {
    aws = "aws.us-west-1"
  }

  name        = "web-server"
  description = "Security group for web-servers with HTTP ports open within the VPC"
  vpc_id      = "${data.terraform_remote_state.vpc.us_west_vpc1_id}"

  # Allow VPC public subnets to talk to each other for API's
  ingress_cidr_blocks = ["${data.terraform_remote_state.vpc.us_west_vpc1_public_subnets_cidr_blocks}"]
  ingress_rules       = ["https-443-tcp", "http-80-tcp"]

  ingress_with_cidr_blocks = "${var.web_server_ingress_with_cidr_blocks}"

  # Allow egress all protocols to outside
  egress_rules = ["all-all"]

  tags = {
    Terraform   = "true"
    Environment = "${var.environment}"
  }
}
Your current setup uses the same module twice, differing only in the provider. You can pass multiple providers down to the module (see the documentation). Then, within the module, you can use the same variables you specified once in your root configuration to create all the instances you need.
However, since each resource still has to be tied to a specific provider, you will have at least some code duplication down the line.
Your code could then look something like this:
module "vpc_web_server_sg" {
source = "terraform-aws-modules/security-group/aws"
version = "2.5.0"
providers {
aws.main = "aws"
aws.secondary = "aws.us-west-1"
}
name = "web-server"
description = "Security group for web-servers with HTTP ports open within the VPC"
vpc_id = "${data.terraform_remote_state.vpc.us_west_vpc1_id}"
# Allow VPC public subnets to talk to each other for API's
ingress_cidr_blocks = ["${data.terraform_remote_state.vpc.us_west_vpc1_public_subnets_cidr_blocks}"]
ingress_rules = ["https-443-tcp", "http-80-tcp"]
ingress_with_cidr_blocks = "${var.web_server_ingress_with_cidr_blocks}"
# Allow engress all protocols to outside
egress_rules = ["all-all"]
tags = {
Terraform = "true"
Environment = "${var.environment}"
}
}
Inside your module you can then use the aws.main and aws.secondary providers to deploy all your required resources.
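Sketching the inside of such a module in current Terraform syntax (the configuration_aliases declaration and the variable names east_vpc_id / west_vpc_id are assumptions for illustration):

# The module declares which provider aliases it expects the caller to pass in.
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.main, aws.secondary]
    }
  }
}

# One copy of the security group per region, each bound to one of the aliases.
resource "aws_security_group" "east" {
  provider = aws.main
  name     = "web-server"
  vpc_id   = var.east_vpc_id
}

resource "aws_security_group" "west" {
  provider = aws.secondary
  name     = "web-server"
  vpc_id   = var.west_vpc_id
}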
