I'm getting the error below from the command AWS_PROFILE=myprofile AWS_REGION=sa-east-1 terraform apply -target=module.saopaulo_service_dev_kubernetes:
Error authorizing security group rule type ingress: InvalidGroup.NotFound: The security group 'sg-something' does not exist
The target I'm applying is as below.
module "saopaulo_service_dev_kubernetes" {
  source         = "./modules/regional-kubernetes"
  region_code    = "saopaulo"
  vpc_name       = "main"
  env            = "dev"
  cluster_prefix = "service"
  instance_type  = "m5.2xlarge"

  providers = {
    aws = aws.saopaulo
  }
}
The module source is as below. I didn't attach all of the files, as there are too many, but only the part where I use the eks module (terraform-aws-modules/eks/aws) to build my module.
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "12.2.0" # version pinning

  cluster_name    = local.cluster_name
  cluster_version = local.cluster_version
  vpc_id          = local.vpc_id
  subnets         = local.private_subnets

  cluster_enabled_log_types            = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  worker_additional_security_group_ids = [aws_security_group.nodeport.id, data.aws_security_group.common_eks_sg.id]
  wait_for_cluster_cmd                 = "for i in `seq 1 60`; do curl -k -s $ENDPOINT/healthz >/dev/null && exit 0 || true; sleep 5; done; echo TIMEOUT && exit 1"

  worker_groups = concat([{
    instance_type = "t3.micro"
    asg_min_size  = "1"
    asg_max_size  = var.asg_max_size
    key_name      = "shared-backdoor"

    kubelet_extra_args = join(" ", [
      "--node-labels=app=nodeport",
      "--register-with-taints=dedicated=nodeport:NoSchedule"
    ])

    pre_userdata = file("${path.module}/pre_userdata.sh")

    tags = concat([for k, v in local.common_tags : {
      key                 = k
      value               = v
      propagate_at_launch = "true"
    }], [{
      key                 = "Role"
      value               = "nodeport"
      propagate_at_launch = "true"
    }])
  }], local.worker_group)

  map_users = local.allow_user
  # map_roles = local.allow_roles[var.env]
}
I have a security group named sg-something in the sa-east-1 region, and I have also checked that I'm running terraform apply against the correct region with:
data "aws_region" "current" {}

output "my_region" {
  value = data.aws_region.current.name
}
Any suggestions?
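One thing worth ruling out (a hypothetical diagnostic, since the rest of the configuration isn't shown): a plain data "aws_region" block is evaluated against the default provider, not against the aliased aws.saopaulo provider that the module receives. Pinning the lookup to the alias shows which region that provider actually targets:

```hcl
# Hypothetical check: read the region as seen by the aliased provider,
# not the default one. If this prints something other than sa-east-1,
# the module's resources are being created in a different region than
# the one the security group lives in.
data "aws_region" "saopaulo" {
  provider = aws.saopaulo
}

output "saopaulo_region" {
  value = data.aws_region.saopaulo.name
}
```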
See the structure of the Terraform configuration for context.
################### terraform.hcl
InstanceCFG = [
  {
    "Name"  = "instance_0001"
    "Alias" = "cookie"
  },
  {
    "Name"  = "instance_0002"
    "Alias" = "cake"
  },
  {
    "Name"  = "instance_0003"
    "Alias" = "cupcake"
  },
  {
    "Name"  = "instance_0004"
    "Alias" = "chocolate"
  },
  {
    "Name"  = "instance_0005"
    "Alias" = "icecream"
  }
]

NLBCFG = [
  {
    "Name"     = "8000-tg"
    "Port"     = "8000"
    "Protocol" = "TCP"
  },
  {
    "Name"     = "9000-tg"
    "Port"     = "9000"
    "Protocol" = "TCP"
  },
]
################### loadbalancer.tf
resource "aws_lb_target_group" "tg" {
  count    = length(var.NLBConfig)
  name     = var.NLBConfig[count.index]["Name"]
  port     = var.NLBConfig[count.index]["Port"]
  protocol = var.NLBConfig[count.index]["Protocol"]
  vpc_id   = var.VPCID
}

resource "aws_lb_listener" "lb_listener" {
  count             = length(var.NLBConfig)
  load_balancer_arn = aws_lb.lb.arn
  port              = var.NLBConfig[count.index]["Port"]
  protocol          = var.NLBConfig[count.index]["Protocol"]

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.tg[count.index].arn
  }
}
################### autoscalinggroup.tf
# AWS Auto Scaling group
resource "aws_autoscaling_group" "asg" {
  count                     = length(var.ClientConfig)
  name                      = "${var.ClientConfig[count.index]["Name"]}_${local.name_postfix}"
  min_size                  = 1
  max_size                  = 1
  desired_capacity          = 1
  force_delete              = true
  termination_policies      = ["OldestLaunchConfiguration"]
  health_check_type         = "EC2" # TODO: change to ELB for health checks
  health_check_grace_period = 300
  target_group_arns         = [aws_lb_target_group.tg.arn, aws_lb_target_group.tg_wss.arn] # < Issue HERE!
}
I am able to create the two groups of resources separately (5 ASGs and 3 target groups), and these numbers will change constantly, but I need to make sure X number of ASG instances can join X number of target groups independently of one another. Is this even possible?
I found the answer, as follows. Note that I am using two count statements in two different resources.
################### loadbalancer.tf
# Consolidate target group ARNs for the ASGs
data "aws_lb_target_group" "data_tg" {
  depends_on = [aws_lb_target_group.tg]
  count      = length(var.NLBCFG)
  arn        = aws_lb_target_group.tg[count.index].arn
}
################### autoscalinggroup.tf
# AWS Auto Scaling group
resource "aws_autoscaling_group" "asg" {
  count                     = length(var.ClientConfig)
  name                      = "${var.ClientConfig[count.index]["Name"]}_${local.name_postfix}"
  min_size                  = 1
  max_size                  = 1
  desired_capacity          = 1
  force_delete              = true
  termination_policies      = ["OldestLaunchConfiguration"]
  health_check_type         = "EC2" # TODO: change to ELB for health checks
  health_check_grace_period = 300
  target_group_arns         = data.aws_lb_target_group.data_tg[*].arn # the splat already yields a list
}
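As an aside, on Terraform 0.12+ the same result should also be reachable without the extra data source indirection, since a splat on the resource itself already produces the full list of ARNs. A sketch, untested against the original setup:

```hcl
resource "aws_autoscaling_group" "asg" {
  count = length(var.ClientConfig)
  # ... other arguments as in the resource above ...

  # Attach every ASG to every target group directly; the splat
  # expression expands to a list, so no extra brackets are needed.
  target_group_arns = aws_lb_target_group.tg[*].arn
}
```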
I receive 2 errors when I deploy the AWS EKS module via Terraform. How do I solve them?
Error: unexpected EKS Add-On (my-cluster:coredns) state returned during creation: timeout while waiting for state to become 'ACTIVE' (last state: 'DEGRADED', timeout: 20m0s)
Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp [::1]:80: connect: connection refused
What role should I put in the aws_auth_roles parameter: AWSServiceRoleForAmazonEKS, or a custom role with the policies AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly, and AmazonEKS_CNI_Policy?
What role should I add to the instance profile: AWSServiceRoleForAmazonEKS, or a custom role with the policies AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly, and AmazonEKS_CNI_Policy?
Terraform deploys the EC2 machines for the worker nodes, but I don't see a node group with worker nodes in EKS; the coredns issue is probably rooted there.
My config:
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.20.2"

  cluster_name    = var.cluster_name
  cluster_version = var.cluster_version

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false

  cluster_addons = {
    coredns = {
      resolve_conflicts = "OVERWRITE"
    }
    kube-proxy = {}
    vpc-cni = {
      resolve_conflicts = "OVERWRITE"
    }
  }

  subnet_ids = ["...", "..."]

  self_managed_node_group_defaults = {
    instance_type                          = "t2.micro"
    update_launch_template_default_version = true
  }

  self_managed_node_groups = {
    one = {
      name                       = "test-1"
      max_size                   = 2
      desired_size               = 1
      use_mixed_instances_policy = true
      mixed_instances_policy = {
        instances_distribution = {
          on_demand_base_capacity                  = 0
          on_demand_percentage_above_base_capacity = 10
          spot_allocation_strategy                 = "capacity-optimized"
        }
      }
    }
  }

  create_aws_auth_configmap = true
  manage_aws_auth_configmap = true

  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::...:user/..."
      username = "..."
      groups   = ["system:masters"]
    }
  ]

  aws_auth_roles = [
    {
      rolearn  = "arn:aws:iam::...:role/aws-service-role/eks.amazonaws.com/AWSServiceRoleForAmazonEKS"
      username = "AWSServiceRoleForAmazonEKS"
      groups   = ["system:masters"]
    }
  ]

  aws_auth_accounts = [
    "..."
  ]
}
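For what it's worth, the worker-node role referenced from the instance profile (and mapped into aws-auth) is normally a custom role that EC2 can assume, with the three worker policies attached; AWSServiceRoleForAmazonEKS is a service-linked role used by the EKS control plane itself and is not meant for nodes. A minimal sketch (the role name is a placeholder):

```hcl
# Hypothetical worker-node role: assumable by EC2, with the three
# managed policies the question lists attached to it.
resource "aws_iam_role" "eks_node" {
  name = "eks-node-role" # placeholder name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_node" {
  for_each = toset([
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
  ])

  role       = aws_iam_role.eks_node.name
  policy_arn = each.value
}
```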
I have a resource which gives the following output:
aws_eks_node_group.managed_workers["es"]:
resource "aws_eks_node_group" "managed_workers" {
ami_type = "AL2_x86_64"
arn = "arn:aws:eks:eu-west-1:xxxxx:nodegroup/EKS/EKS_-nodegroup-CI-es/b2be06b7-e5fe-b346-0e29-ec3f459f7b2c"
capacity_type = "ON_DEMAND"
cluster_name = "EKS_CLuster"
disk_size = 20
id = "EKS:EKS_-API-nodegroup-CI-es"
instance_types = [
"m5.xlarge",
]
labels = {
"autoscalergroup" = "pool"
"lifecycle" = "OnDemand"
}
node_group_name = "worker-node-nodegroup-1"
node_role_arn = "arn:aws:iam::xxxxx:role/EKS_workernode"
release_version = "1.18.9-20210722"
resources = [
{
autoscaling_groups = [
{
name = "eks-xxx-xxx-xx"
},
]
remote_access_security_group_id = "sg-xxxx"
},
]
I'm trying to use the autoscaling_groups name this way:
resource "aws_autoscaling_group_tag" "nodetags" {
for_each = aws_eks_node_group.managed_workers
autoscaling_group_name = each.value.resources.autoscaling_groups.name
But I'm not able to access resources.autoscaling_groups.name successfully. Does anyone know how to access this data?
Thanks
resources and autoscaling_groups are both lists.
Use each.value.resources[0].autoscaling_groups[0].name
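Put together, a sketch of the full resource using the indexed path (the tag key below is a placeholder):

```hcl
resource "aws_autoscaling_group_tag" "nodetags" {
  for_each = aws_eks_node_group.managed_workers

  # resources and autoscaling_groups are both lists, so index into each.
  autoscaling_group_name = each.value.resources[0].autoscaling_groups[0].name

  tag {
    key                 = "nodegroup" # placeholder tag key
    value               = each.key
    propagate_at_launch = true
  }
}
```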
I ran terraform import for one SQL server and one SQL database. When running terraform plan I see the message "2 to change", but I am not able to find the change in the plan below; it's not showing any null value.
I am not sure what change is going to take effect.
Here is the information about the terraform plan:
# azurerm_sql_database.sqldb will be updated in-place
~ resource "azurerm_sql_database" "sqldb" {
      collation                        = "SQL_Latin1_General_CP1_CI_AS"
    + create_mode                      = "Default"
      creation_date                    = "2020-07-06T15:20:16.947Z"
      default_secondary_location       = "East US"
      edition                          = "GeneralPurpose"
      extended_auditing_policy         = [
          {
              retention_in_days                       = 0
              storage_account_access_key              = ""
              storage_account_access_key_is_secondary = false
              storage_endpoint                        = ""
          },
      ]
      id                               = "/subscriptions/78bc4018-84c1-4906-94c9-c1d5b84cc907/resourceGroups/rg-us-wus-dev-1/providers/Microsoft.Sql/servers/sql-us-wus-dev/databases/sqldb-us-wus-dev"
      location                         = "westus"
      max_size_bytes                   = "34359738368"
      name                             = "sqldb-us-wus-dev"
      read_scale                       = false
      requested_service_objective_id   = "f21733ad-9b9b-4d4e-a4fa-94a133c41718"
      requested_service_objective_name = "GP_Gen5_2"
      resource_group_name              = "rg-us-wus-dev-1"
      server_name                      = "sql-us-wus-dev"
      tags                             = {}
      zone_redundant                   = false

      threat_detection_policy {
          disabled_alerts      = []
          email_account_admins = "Disabled"
          email_addresses      = []
          retention_days       = 0
          state                = "Disabled"
          use_server_default   = "Disabled"
      }

      timeouts {}
  }

# azurerm_sql_server.sqlserver will be updated in-place
~ resource "azurerm_sql_server" "sqlserver" {
      administrator_login          = "sqladmin"
    + administrator_login_password = (sensitive value)
      connection_policy            = "Default"
      extended_auditing_policy     = [
          {
              retention_in_days                       = 0
              storage_account_access_key              = ""
              storage_account_access_key_is_secondary = false
              storage_endpoint                        = "https://stuxxwusdev.blob.core.windows.net/"
          },
      ]
      fully_qualified_domain_name  = "sql-us-wus-dev.database.windows.net"
      id                           = "/subscriptions/78bc4018-84c1-4906-94c9-c1d5b84cc907/resourceGroups/rg-us-wus-dev-1/providers/Microsoft.Sql/servers/sql-us-wus-dev"
      location                     = "westus"
      name                         = "sql-us-wus-dev"
      resource_group_name          = "wus-dev"
      tags                         = {}
      version                      = "12.0"

      timeouts {}
  }
These are the meanings of the terraform plan symbols:
+ create
- destroy
-/+ replace (destroy and then create, or vice-versa if create-before-destroy is used)
~ update in-place i.e. change without destroying
<= read
You can check the lines marked with ~ to see which resources will be updated in place, and within them the attribute lines marked with + or - for the specific attributes that change.
In your plan, the two in-place changes are the attributes prefixed with +: create_mode = "Default" on the database and administrator_login_password on the server. These are most likely set in your configuration but could not be read back by terraform import, so the first apply will set them.
Please let me know if you still have any questions.
When running terraform apply against the following, it keeps asking me for variable input on the CLI instead of reading the values from the file. If I remove the variables from the .tf file and just leave the first one in for the AMI, it works with some massaging. Any ideas?
contents of dev.tf:
variable "aws_region" {}
variable "instance_type" {}
variable "key_name" {}
variable "vpc_security_group_ids" {}
variable "subnet_id" {}
variable "iam_instance_profile" {}
variable "tag_env" {}

provider "aws" {
  region = "${var.aws_region}"
}

data "aws_ami" "amazon_linux" {
  most_recent = true

  filter {
    name = "name"
    values = [
      "amzn-ami-hvm-*-x86_64-gp2",
    ]
  }

  filter {
    name = "owner-alias"
    values = [
      "amazon",
    ]
  }
}

resource "aws_instance" "kafka" {
  ami                    = "${data.aws_ami.amazon_linux.id}"
  instance_type          = "${var.instance_type}"
  subnet_id              = "${var.subnet_id}"
  key_name               = "${var.key_name}"
  vpc_security_group_ids = ["${var.vpc_security_group_ids}"]
  iam_instance_profile   = "${var.iam_instance_profile}"

  user_data = <<-EOF
    #!/bin/bash
    sudo yum -y install telnet
  EOF

  tags {
    ProductCode   = "id"
    InventoryCode = "id"
    Environment   = "${var.tag_env}"
  }
}
contents of dev.tfvars:
aws_region = "us-east-1"
tag_env = "dev"
instance_type = "t2.large"
subnet_id = "subnet-id"
vpc_security_group_ids = "sg-id , sg-id"
key_name = "id"
iam_instance_profile = "id"
Ah, good catch: I changed the filename to terraform.tfvars and it now works. (Terraform automatically loads only files named terraform.tfvars or *.auto.tfvars; a file like dev.tfvars has to be passed explicitly with -var-file=dev.tfvars.)