Currently I am using Terraform workspaces to deploy the same code into multiple environments, but right now I am stuck on referring to a resource in a specific workspace.
Example code:
resource "aws_security_group" "testing-ec2" {
name = "${local.env}-testing-ec2"
vpc_id = "${aws_vpc.vpc.id}"
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
security_groups = ["${local.security-groups}"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["${local.bastion_ip}"]
}
egress {
from_port = 0
to_port = 0
protocol = -1
cidr_blocks = ["0.0.0.0/0"]
}
}
Workspace security group lookup:
locals {
  tf_security-groups = {
    dev  = ""
    stg  = "${aws_security_group.test-private-alb.id}"
    qa   = "${aws_security_group.test1-private-alb.id}"
    prod = "${aws_security_group.test2-private-alb.id}"
  }

  security-groups = "${lookup(local.tf_security-groups, local.env)}"
}
When I try to apply in the stg workspace, this error appears:
* local.tf_security-groups: local.tf_security-groups: Resource 'aws_security_group.test1-private-alb' not found for variable 'aws_security_group.test1-private-alb.id'
You could use the data source terraform_remote_state to sift through the state but you'd also have to make each of your security group ids into outputs.
data "terraform_remote_state" "this" {
backend = "s3"
workspace = "stg"
config {
bucket = ""
key = ""
region = ""
}
}
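For that to work, the configuration whose state you are reading would need to expose the security group id as an output, for example like this (the output name is illustrative); the consuming configuration could then reference it as data.terraform_remote_state.this.outputs.private_alb_sg_id:
# in the configuration that owns the ALB security group (output name is illustrative)
output "private_alb_sg_id" {
  value = aws_security_group.test-private-alb.id
}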
It would be cleaner to use the aws_security_group data source instead.
locals {
  env = "qa"

  security_group_map = {
    stg  = data.aws_security_group.test_private_alb.id
    qa   = data.aws_security_group.test1_private_alb.id
    prod = data.aws_security_group.test2_private_alb.id
  }

  security_groups = lookup(local.security_group_map, local.env, "")
}

data "aws_security_group" "test_private_alb" {
  name = "test_private_alb"
}

data "aws_security_group" "test1_private_alb" {
  name = "test1_private_alb"
}

data "aws_security_group" "test2_private_alb" {
  name = "test2_private_alb"
}
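With that in place, the original resource can reference the looked-up id; in practice local.env would usually come from terraform.workspace rather than being hardcoded. A minimal sketch of the usage; guarding against an empty value for workspaces without an ALB group is my own assumption:
# inside resource "aws_security_group" "testing-ec2" from the question
ingress {
  from_port = 8080
  to_port   = 8080
  protocol  = "tcp"
  # attach the looked-up group only when one exists for this workspace
  security_groups = local.security_groups != "" ? [local.security_groups] : []
}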
Related
I am creating a security group that has some standard ingress rules. I also want to add additional ingress rules based on a variable.
variable "additional_ingress" {
type = list(object({
protocol = string
from_port = string
to_port = string
cidr_blocks = list(string)
}))
default = []
}
resource "aws_security_group" "ec2" {
name = "my-sg"
description = "SG for ec2"
vpc_id = data.aws_vpc.this.id
egress {
to_port = 0
from_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
protocol = "tcp"
from_port = 22
to_port = 22
cidr_blocks = ["10.0.0.0/8"]
}
# rdp
ingress {
protocol = "tcp"
from_port = 3389
to_port = 3389
cidr_blocks = ["10.0.0.0/8"]
}
# additional ingress rules
ingress {
for_each = var.additional_ingress
protocol = each.value.protocol
from_port = each.value.from_port
to_port = each.value.to_port
cidr_blocks = each.value.cidr_blocks
}
}
I am getting this error:
A reference to "each.value" has been used in a context in which it is
unavailable, such as when the configuration no longer contains the
value in its "for_each" expression. Remove this reference to
each.value in your configuration to work around this error.
How do I add ingress rules based on a variable?
This is most easily managed with the aws_security_group_rule resource and the for_each meta-argument:
resource "aws_security_group_rule" "ec2" {
for_each = var.additional_ingress
type = each.value.type
from_port = each.value.from_port
to_port = each.value.to_port
protocol = each.value.protocol
cidr_blocks = each.value.cidr_blocks
security_group_id = aws_security_group.ec2.id
}
Note that the variable declaration for additional_ingress is missing the type key in its object constructor definition, so that would need to be added:
variable "additional_ingress" {
type = list(object({
type = string
...
}))
default = []
}
You can use dynamic blocks like this:
dynamic "ingress" {
for_each = var.additional_ingress
protocol = ingress.value.protocol
from_port = ingress.value.from_port
to_port = ingress.value.to_port
cidr_blocks = ingress.value.cidr_blocks
}
This works provided additional_ingress is a collection (list or map) of ingress entries, such as the list of objects declared above.
Of course, I would advise using the aws_security_group_rule resource instead, but if this is an already-live project, I understand wanting to stay with inline rules. Have fun!
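For reference, a value for additional_ingress matching the variable declared in the question might look like this (ports and CIDRs are illustrative):
additional_ingress = [
  {
    protocol    = "tcp"
    from_port   = 443
    to_port     = 443
    cidr_blocks = ["10.0.0.0/8"]
  }
]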
I have Terraform code that creates an EC2-type Batch job, and my AWS Batch job downloads about 50 GB of data in total. How do I add that storage space to my instance in Terraform? And is there another way to add that storage?
This is my Terraform code:
resource "aws_batch_compute_environment" "pipeline" {
compute_environment_name = "${var.product}-${var.application}-pipeline-batch-compute-environment-${var.env}"
compute_resources {
instance_role = aws_iam_instance_profile.pipeline_batch.arn
instance_type = var.pipeline_instance_type
max_vcpus = var.pipeline_max_vcpus
min_vcpus = 0
security_group_ids = [
aws_security_group.pipeline_batch.id
]
subnets = var.subnets
type = "EC2"
}
service_role = aws_iam_role.pipeline_batch_service_role.arn
type = "MANAGED"
tags = {
environment = var.env
}
}
resource "aws_batch_job_queue" "pipeline" {
depends_on = [aws_batch_compute_environment.pipeline]
name = "${var.product}-${var.application}-pipeline-batch-job-queue-${var.env}"
state = "ENABLED"
priority = 1
compute_environments = [
aws_batch_compute_environment.pipeline.arn
]
tags = {
environment = var.env
}
}
resource "aws_batch_job_definition" "pipeline" {
depends_on = [aws_ecr_repository.pipeline]
name = "${var.product}-${var.application}-pipeline-batch-job-definition-${var.env}"
type = "container"
container_properties = <<CONTAINER_PROPERTIES
{
"image": "${aws_ecr_repository.pipeline.repository_url}:latest",
"command": [ "--s3_bucket", "${var.input_bucket}", "--s3_upload_bucket", "${var.upload_bucket}"],
"executionRoleArn": "${aws_iam_role.pipeline_batch_instance_role.arn}",
"memory": ${var.pipeline_memory},
"vcpus": ${var.pipeline_vcpus}
}
CONTAINER_PROPERTIES
tags = {
environment = var.env
}
}
If you want to mount a shared EFS drive, you could try something like this. Keep in mind I have not tested this and you will need to replace certain parameters with your subnet ids, VPC id, etc.:
resource "aws_batch_compute_environment" "pipeline" {
compute_environment_name = "${var.product}-${var.application}-pipeline-batch-compute-environment-${var.env}"
compute_resources {
instance_role = aws_iam_instance_profile.pipeline_batch.arn
instance_type = var.pipeline_instance_type
max_vcpus = var.pipeline_max_vcpus
min_vcpus = 0
security_group_ids = [
aws_security_group.pipeline_batch.id
]
subnets = var.subnets
type = "EC2"
}
service_role = aws_iam_role.pipeline_batch_service_role.arn
type = "MANAGED"
tags = {
environment = var.env
}
}
resource "aws_batch_job_queue" "pipeline" {
depends_on = [aws_batch_compute_environment.pipeline]
name = "${var.product}-${var.application}-pipeline-batch-job-queue-${var.env}"
state = "ENABLED"
priority = 1
compute_environments = [
aws_batch_compute_environment.pipeline.arn
]
tags = {
environment = var.env
}
}
resource "aws_batch_job_definition" "pipeline" {
depends_on = [aws_ecr_repository.pipeline]
name = "${var.product}-${var.application}-pipeline-batch-job-definition-${var.env}"
type = "container"
container_properties = <<CONTAINER_PROPERTIES
{
"image": "${aws_ecr_repository.pipeline.repository_url}:latest",
"command": [ "--s3_bucket", "${var.input_bucket}", "--s3_upload_bucket", "${var.upload_bucket}"],
"executionRoleArn": "${aws_iam_role.pipeline_batch_instance_role.arn}",
"memory": ${var.pipeline_memory},
"vcpus": ${var.pipeline_vcpus},
"mountPoints": [
{
readOnly = null,
containerPath = "/var/batch"
sourceVolume = "YOUR-FILE-SYSTEM-NAME"
}
]
}
CONTAINER_PROPERTIES
tags = {
environment = var.env
}
}
resource "aws_efs_file_system" "general" {
creation_token = "YOUR-FILE-SYSTEM-NAME"
#kms_key_id = module.kms.arn
#encrypted = true
encrypted = false
performance_mode = "generalPurpose"
throughput_mode = "provisioned"
provisioned_throughput_in_mibps = 8
tags = {Name = "YOUR-FILE-SYSTEM-NAME"}
}
resource "aws_efs_access_point" "general" {
tags = {Name = "YOUR-FILE-SYSTEM-NAME"}
file_system_id = aws_efs_file_system.general.id
root_directory {
path = "/YOUR-FILE-SYSTEM-NAME"
creation_info {
owner_gid = "1000"
owner_uid = "1000"
permissions = "755"
}
}
posix_user {
uid = "1000"
gid = "1000"
}
}
## FOR REDUNDANCY
## It is a good idea to add a mount target per AZ you use
resource "aws_efs_mount_target" "a" {
source = "app.terraform.io/popreach/efs-mount-target/aws"
version = "1.0.0"
file_system_id = aws_efs_file_system.general.id
subnet_id = PUBLIC-SUBNET-A
security_groups = [aws_security_group.general.id]
}
resource "aws_efs_mount_target" "b" {
file_system_id = aws_efs_file_system.general.id
subnet_id = PUBLIC-SUBNET-B
security_groups = [aws_security_group.general.id]
}
resource "aws_security_group" "general" {
name = YOUR-SECURITY-GROUP-NAME
vpc_id = YOUR-VPC-ID
tags = {Name = YOUR-SECURITY-GROUP-NAME}
}
resource "aws_security_group_rule" "ingress" {
type = "ingress"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
from_port = "2049"
to_port = "2049"
protocol = "tcp"
security_group_id = aws_security_group.general.id
}
resource "aws_security_group_rule" "egress" {
type = "egress"
description = "egress"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
from_port = "0"
to_port = "0"
protocol = "all"
security_group_id = aws_security_group.general.id
}
You'll be able to mount your EFS drive on any default Amazon Linux EC2 instance like this: mkdir -p /data/efs && mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport REPLACE_WITH_EFS_DNS:/ /data/efs
I have written a terraform configuration file for a bastion entry point on an application.
resource "aws_instance" "instance" {
ami = var.ami
ebs_optimized = var.ebs_optimized
iam_instance_profile = aws_iam_instance_profile.iam_instance_profile
instance_type = var.instance_type
key_name = "quadops"
subnet_id = var.subnet_id
user_data = var.user_data
tags = {
Name = "${var.name}"
Business = "Infrastracture"
app_name = "infra"
app_env = "${var.env}"
}
volume_tags = {
Name = "${var.name}"
Business = "Infrastracture"
app_name = "infra"
app_env = "${var.env}"
}
vpc_security_group_ids = [aws_security_group.security_group.id]
}
resource "aws_security_group" "security_group" {
name = "${var.name}-security-group"
vpc_id = var.vpc_id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.name}"
Business = "Infrastracture"
app_name = "infra"
app_env = "${var.env}"
}
}
resource "aws_iam_instance_profile" "iam_instance_profile" {
name = "${var.name}-iam-instance-profile"
role = aws_iam_role.iam_role
tags = {
Name = "${var.name}"
Business = "Infrastracture"
app_name = "infra"
app_env = "${var.env}"
}
}
resource "aws_iam_role" "iam_role" {
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})
name = "${var.name}-iam-role"
tags = {
Name = "${var.name}-iam-role"
Business = "Infrastracture"
app_name = "infra"
app_env = "${var.env}"
}
}
resource "aws_eip" "eip" {
vpc = true
instance = aws_instance.instance.id
tags = {
Name = "${var.name}-eip"
Business = "Infrastracture"
app_name = "infra"
app_env = "${var.env}"
}
}
resource "cloudflare_record" "record" {
zone_id = var.zone_id
name = "bastion.${var.env}"
type = "A"
value = "aws_eip.eip.public_ip"
}
Upon running plan, I'm getting this error:
on .terraform/modules/bastion/main.tf line 49, in resource "aws_iam_instance_profile" "iam_instance_profile":
49: role = aws_iam_role.iam_role
|----------------
| aws_iam_role.iam_role is object with 15 attributes
Inappropriate value for attribute "role": string required.
I can't seem to get over this hurdle. I think I'm calling the resource correctly, but Terraform 0.12 says that it requires a string. Am I passing the values incorrectly? Thanks.
You are passing the entire aws_iam_role object to the role argument which is causing the error. Instead, try passing the name of the role like so:
resource "aws_iam_instance_profile" "iam_instance_profile" {
role = aws_iam_role.iam_role.name
}
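The same kind of fix likely applies to the instance itself, where the question passes the whole aws_iam_instance_profile object; that argument also expects a name string. A sketch against the question's config, not tested:
resource "aws_instance" "instance" {
  # ... other arguments as in the question ...
  iam_instance_profile = aws_iam_instance_profile.iam_instance_profile.name
}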
My requirement is to get the CIDR blocks for vpc-foo and vpc-bar and pass them to the aws_security_group_rule "ingress" resource.
I tried with the below code:
data "aws_vpcs" -> Get the ID for a given VPC
data "aws_vpc" -> make a list with VPC ids
resource "aws_security_group_rule" "ingress" -> pass VPC CIRDs as an ingress
variable "list_of_vps"{
type = "list"
default = ["vpc-foo", "vpc-bar"]
}
variable "sg_name" {
default = "sg-test"
}
data "aws_vpcs" "get_vpc"{
count = "$length(var.list_of_vps)"
filter {
name = "tag:Name"
values = ["vpc-${element(var.list_of_vps, count.index)}"]
}
}
data "aws_vpc" "get_vpc_ids" {
count = "${length(data.aws_vpcs.get_vpc.ids)}"
id = "${tolist(data.aws_vpcs.prod.ids)[count.index]}"
}
resource "aws_security_group_rule" "ingress" {
count = "${length(var.list_of_vps)}"
type = "ingress"
from_port = 22
to_port = 22
protocol = "TCP"
cidr_blocks = ["${element(data.aws_vpc.get_vpc_ids.*.cidr_block, count.index)}"]
security_group_id = "${var.sg_name}
}
Can someone help with this, please?
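A minimal sketch of that pattern in Terraform 0.13+ syntax, assuming each VPC carries a Name tag equal to the list entry and that a var.sg_id variable holds the id (not the name) of the target security group; all names here are illustrative:
variable "list_of_vpcs" {
  type    = list(string)
  default = ["vpc-foo", "vpc-bar"]
}

variable "sg_id" {
  type = string
}

# Look up each VPC by its Name tag
data "aws_vpc" "selected" {
  for_each = toset(var.list_of_vpcs)

  filter {
    name   = "tag:Name"
    values = [each.value]
  }
}

# One SSH ingress rule per VPC CIDR block
resource "aws_security_group_rule" "ingress" {
  for_each          = data.aws_vpc.selected
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = [each.value.cidr_block]
  security_group_id = var.sg_id
}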
I'm receiving this curious error message
PlatformTaskDefinitionIncompatibilityException: The specified platform does not satisfy the task definition’s required capabilities
I suspect it's something to do with this line, although I'm not quite sure:
file_system_id = aws_efs_file_system.main.id
This is my script:
provider "aws" {
region = "us-east-1"
profile = var.profile
}
### Network
# Fetch AZs in the current region
data "aws_availability_zones" "available" {}
resource "aws_vpc" "main" {
cidr_block = "172.17.0.0/16"
}
# Create var.az_count private subnets, each in a different AZ
resource "aws_subnet" "private" {
count = "${var.az_count}"
cidr_block = "${cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)}"
availability_zone = "${data.aws_availability_zones.available.names[count.index]}"
vpc_id = "${aws_vpc.main.id}"
}
# Create var.az_count public subnets, each in a different AZ
resource "aws_subnet" "public" {
count = "${var.az_count}"
cidr_block = "${cidrsubnet(aws_vpc.main.cidr_block, 8, var.az_count + count.index)}"
availability_zone = "${data.aws_availability_zones.available.names[count.index]}"
vpc_id = "${aws_vpc.main.id}"
map_public_ip_on_launch = true
}
# IGW for the public subnet
resource "aws_internet_gateway" "gw" {
vpc_id = "${aws_vpc.main.id}"
}
# Route the public subnet traffic through the IGW
resource "aws_route" "internet_access" {
route_table_id = "${aws_vpc.main.main_route_table_id}"
destination_cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.gw.id}"
}
# Create a NAT gateway with an EIP for each private subnet to get internet connectivity
resource "aws_eip" "gw" {
count = "${var.az_count}"
vpc = true
depends_on = ["aws_internet_gateway.gw"]
}
resource "aws_nat_gateway" "gw" {
count = "${var.az_count}"
subnet_id = "${element(aws_subnet.public.*.id, count.index)}"
allocation_id = "${element(aws_eip.gw.*.id, count.index)}"
}
# Create a new route table for the private subnets
# And make it route non-local traffic through the NAT gateway to the internet
resource "aws_route_table" "private" {
count = "${var.az_count}"
vpc_id = "${aws_vpc.main.id}"
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = "${element(aws_nat_gateway.gw.*.id, count.index)}"
}
}
# Explicitly associate the newly created route tables to the private subnets (so they don't default to the main route table)
resource "aws_route_table_association" "private" {
count = "${var.az_count}"
subnet_id = "${element(aws_subnet.private.*.id, count.index)}"
route_table_id = "${element(aws_route_table.private.*.id, count.index)}"
}
### Security
# ALB Security group
# This is the group you need to edit if you want to restrict access to your application
resource "aws_security_group" "lb" {
name = "tf-ecs-alb"
description = "controls access to the ALB"
vpc_id = "${aws_vpc.main.id}"
ingress {
protocol = "tcp"
from_port = 80
to_port = 80
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
# Traffic to the ECS Cluster should only come from the ALB
resource "aws_security_group" "ecs_tasks" {
name = "tf-ecs-tasks"
description = "allow inbound access from the ALB only"
vpc_id = "${aws_vpc.main.id}"
ingress {
protocol = "tcp"
from_port = "${var.app_port}"
to_port = "${var.app_port}"
security_groups = ["${aws_security_group.lb.id}"]
}
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
}
### ALB
resource "aws_alb" "main" {
name = "tf-ecs-chat"
subnets = aws_subnet.public.*.id
security_groups = ["${aws_security_group.lb.id}"]
}
resource "aws_alb_target_group" "app" {
name = "tf-ecs-chat"
port = 80
protocol = "HTTP"
vpc_id = "${aws_vpc.main.id}"
target_type = "ip"
}
# Redirect all traffic from the ALB to the target group
resource "aws_alb_listener" "front_end" {
load_balancer_arn = "${aws_alb.main.id}"
port = "80"
protocol = "HTTP"
default_action {
target_group_arn = "${aws_alb_target_group.app.id}"
type = "forward"
}
}
### ECS
resource "aws_ecs_cluster" "main" {
name = "tf-ecs-cluster"
}
resource "aws_ecs_task_definition" "app" {
family = "app"
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
cpu = "${var.fargate_cpu}"
memory = "${var.fargate_memory}"
task_role_arn = "${aws_iam_role.ecs_task_role_role.arn}"
execution_role_arn = "${aws_iam_role.ecs_task_role_role.arn}"
container_definitions = <<DEFINITION
[
{
"cpu": ${var.fargate_cpu},
"image": "${var.app_image}",
"memory": ${var.fargate_memory},
"name": "app",
"networkMode": "awsvpc",
"portMappings": [
{
"containerPort": ${var.app_port},
"hostPort": ${var.app_port}
}
]
}
]
DEFINITION
volume {
name = "efs-html"
efs_volume_configuration {
file_system_id = aws_efs_file_system.main.id
root_directory = "/opt/data"
}
}
}
resource "aws_ecs_service" "main" {
name = "tf-ecs-service"
cluster = "${aws_ecs_cluster.main.id}"
task_definition = "${aws_ecs_task_definition.app.arn}"
desired_count = "${var.app_count}"
launch_type = "FARGATE"
network_configuration {
security_groups = ["${aws_security_group.ecs_tasks.id}"]
subnets = aws_subnet.private.*.id
}
load_balancer {
target_group_arn = "${aws_alb_target_group.app.id}"
container_name = "app"
container_port = "${var.app_port}"
}
depends_on = [
"aws_alb_listener.front_end",
]
}
# ECS roles & policies
# Create the IAM task role for ECS Task definition
resource "aws_iam_role" "ecs_task_role_role" {
name = "test-ecs-task-role"
assume_role_policy = "${file("ecs-task-role.json")}"
tags = {
Terraform = "true"
}
}
# Create the AmazonECSTaskExecutionRolePolicy managed role
resource "aws_iam_policy" "ecs_task_role_policy" {
name = "test-ecs-AmazonECSTaskExecutionRolePolicy"
description = "Provides access to other AWS service resources that are required to run Amazon ECS tasks"
policy = "${file("ecs-task-policy.json")}"
}
# Assign the AmazonECSTaskExecutionRolePolicy managed role to ECS
resource "aws_iam_role_policy_attachment" "ecs_task_policy_attachment" {
role = "${aws_iam_role.ecs_task_role_role.name}"
policy_arn = "${aws_iam_policy.ecs_task_role_policy.arn}"
}
resource "aws_efs_file_system" "main" {
tags = {
Name = "ECS-EFS-FS"
}
}
resource "aws_efs_mount_target" "main" {
count = "${var.subnets-count}"
file_system_id = "${aws_efs_file_system.main.id}"
subnet_id = "${element(var.subnets, count.index)}"
}
variables.tf
variable "az_count" {
description = "Number of AZs to cover in a given AWS region"
default = "2"
}
variable "app_image" {
description = "Docker image to run in the ECS cluster"
default = "xxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/test1:nginx"
}
variable "app_port" {
description = "Port exposed by the docker image to redirect traffic to"
# default = 3000
default = 80
}
variable "app_count" {
description = "Number of docker containers to run"
default = 2
}
variable "fargate_cpu" {
description = "Fargate instance CPU units to provision (1 vCPU = 1024 CPU units)"
default = "256"
}
variable "fargate_memory" {
description = "Fargate instance memory to provision (in MiB)"
default = "512"
}
################
variable "subnets" {
type = "list"
description = "list of subnets to mount the fs to"
default = ["subnet-xxxxxxx","subnet-xxxxxxx"]
}
variable "subnets-count" {
type = "string"
description = "number of subnets to mount to"
default = 2
}
You simply need to upgrade your ECS service to the latest platform version:
resource "aws_ecs_service" "service" {
platform_version = "1.4.0"
launch_type = "FARGATE"
...
}
The EFS feature is only available on platform version 1.4.0. When you don't specify platform_version, it defaults to LATEST, which at the time was set to 1.3.0 and doesn't allow EFS volumes.
UPDATE: As of 1/21/22, it seems that the LATEST ECS service version is 1.4.0, so explicitly specifying the ECS platform version is no longer necessary to have EFS mounts work. Per:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform-linux-fargate.html