Terraform EKS data source: VPC, subnets, security group - amazon-rds

I have a Terraform script which creates an EKS cluster. I have another Terraform script which creates an RDS instance, and I want this RDS instance to be created in the same VPC as the EKS cluster.
data "aws_eks_cluster" "example" {
name = "example"
}
output "subnets" {
value = "${data.aws_eks_cluster.example.vpc_config.vpc_id}"
}
Here is my rds.tf:
resource "aws_db_instance" "rds" {
  allocated_storage      = "${var.rds_allocated_storage}"
  storage_type           = "${var.rds_storage_type}"
  engine                 = "${var.rds_engine}"
  engine_version         = "${var.rds_engine_version}"
  instance_class         = "${var.rds_instance_class}"
  name                   = "${var.project_name}_${var.env}_data_rds${var.rds_engine}"
  username               = "dbadmin"
  password               = "${var.rds_db_password}"
  multi_az               = false
  skip_final_snapshot    = true
  db_subnet_group_name   = "${aws_db_subnet_group.rds_subnet.name}"
  vpc_security_group_ids = "${var.rds_vpc_security_group_ids}"
  identifier             = "${var.project_name}-${var.env}-data-rds${var.rds_engine}"
}
I want to get db_subnet_group_name and vpc_security_group_ids from my EKS cluster and not from variables.tf.

I believe you need something like
vpc_security_group_ids = "${data.aws_eks_cluster.example.vpc_config.0.security_group_ids}"

Related

How to create an Aurora replica of a PostgreSQL RDS database using Terraform?

I've been tasked with updating our Postgres database to Aurora Postgres, and the first step is throwing me off. We use Terraform to manage our infrastructure and have a primary and a single replica of the database. So far, I've run into many problems using Terraform to do what is really easy in the console.
Desired outcome: run terraform apply and have a freshly created Aurora Postgres replica show up and be readable by our applications.
Code (I left in a bunch of potentially useless info, but wanted to make it as easy as possible to understand):
resource "aws_db_instance" "rds_postgres" {
identifier = "postgres-database"
allocated_storage = "250"
engine = "postgres"
engine_version = "11.6"
instance_class = "db.t3.large"
name = "prod_db"
username = "admin"
password = "p#ssw0rd"
db_subnet_group_name = "subnet-group-name"
vpc_security_group_ids = ["sg-xxxxxxx"]
storage_encrypted = true
}
resource "aws_db_instance" "rds_postgres_r1" {
identifier = "postgres-database-r1"
allocated_storage = "250"
engine = "postgres"
engine_version = "11.6"
instance_class = "db.t3.large"
name = "prod_db"
username = "admin"
vpc_security_group_ids = ["sg-xxxxxxx"]
replicate_source_db = "${aws_db_instance.rds_postgres.identifier}"
storage_encrypted = true
}
resource "aws_rds_cluster" "default" {
cluster_identifier = "postgres-aurora-cluster"
database_name = "prod_db"
master_username = "admin"
master_password = "p#ssw0rd"
storage_encrypted = true
kms_key_id = "alias/kms/rds"
vpc_security_group_ids = ["sg-xxxxxxx"]
db_subnet_group_name = "subnet-group-name"
engine = "aurora-postgresql"
engine_version = "11.6"
replication_source_identifier = "${aws_db_instance.rds_postgres.arn}"
lifecycle {
ignore_changes = [
"id",
"kms_key_id",
"cluster_identifier"
]
}
}
resource "aws_rds_cluster_instance" "default" {
identifier = "postgres-aurora-1"
cluster_identifier = "${aws_rds_cluster.default.id}"
instance_class = "db.t3.large"
db_subnet_group_name = "subnet-group-name"
publicly_accessible = false
engine = "aurora-postgresql"
engine_version = "11.6"
lifecycle {
ignore_changes = [
"identifier",
"cluster_identifier"
]
}
}
Terraform apply will faithfully create a new cluster and a single instance initially labeled as a Reader, and replicate the data. However, after the replication is complete, the new Aurora instance swaps to Writer.
What am I doing wrong? My next steps are promoting this new Aurora cluster to the primary DB, scaling up readers, and cutting the original Postgres primary out completely, but I am blocked. I'm not a Terraform wizard.

How to re-attach an EBS volume using Terraform

I'm trying to keep an AWS EBS volume as a persistent data store. Every week my AMI changes, so I have to spin up a new VM in AWS. At that time I expect my volume to detach from the old VM and attach to the new VM without destroying the EBS volume or its data.
resource "aws_instance" "my_instance" {
count = var.instance_count
ami = lookup(var.ami,var.aws_region)
instance_type = var.instance_type
key_name = aws_key_pair.terraform-demo.key_name
subnet_id = aws_subnet.main-public-1.id
// user_data = "${file("install_apache.sh")}"
tags = {
Name = "Terraform-${count.index + 1}"
Batch = "5AM"
}
}
variable "instances" {
type = map
default = {
"xx" = "sss-console"
"4xx" = "sss-upload-port"
"xxx" = "sss"
}
}
resource "aws_kms_key" "cmp_kms" {
description = "ssss-ebsencrypt"
tags = local.all_labels
}
resource "aws_ebs_volume" "volumes" {
count = var.instance_count
availability_zone = element(aws_instance.my_instance.*.availability_zone, count.index )
encrypted = true
kms_key_id = aws_kms_key.cmp_kms.arn
size = local.volume_size
type = local.volume_type
iops = local.volume_iops
// tags = merge(var.extra_labels, map("Name", "${var.cell}-${element(local.volume_name, count.index)}"))
lifecycle {
// prevent_destroy = true
ignore_changes = [kms_key_id, instance_id]
}
}
resource "aws_volume_attachment" "volumes-attachment" {
depends_on = [aws_instance.my_instance, aws_ebs_volume.volumes]
count = var.instance_count
device_name = "/dev/${element(local.volume_name, count.index)}"
volume_id = element(aws_ebs_volume.volumes.*.id, count.index)
instance_id = element(aws_instance.my_instance.*.id, count.index)
force_detach = true
}
ERROR on terraform apply
Error: Unsupported attribute
on instance.tf line 71, in resource "aws_ebs_volume" "volumes":
71: ignore_changes = [kms_key_id, instance_id]
This object has no argument, nested block, or exported attribute named
"instance_id".
Earlier the same code used to work with Terraform v0.11, but it's not working with v0.12. What is the replacement for this, or how can we re-attach the EBS volume to a different machine without destroying it?
As per the Terraform documentation, the aws_ebs_volume resource does not expose an attribute named instance_id.
For reference: https://www.terraform.io/docs/providers/aws/d/ebs_volume.html.
You can specify the instance_id at the time of volume attachment using the aws_volume_attachment resource.
You can refer to the answer given in https://gitter.im/hashicorp-terraform/Lobby?at=5ab900eb2b9dfdbc3a237e36 for more information.
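In practice that means dropping instance_id from ignore_changes. A minimal sketch of just the volume resource under that assumption (everything else unchanged):

resource "aws_ebs_volume" "volumes" {
  count             = var.instance_count
  availability_zone = element(aws_instance.my_instance.*.availability_zone, count.index)
  encrypted         = true
  kms_key_id        = aws_kms_key.cmp_kms.arn
  size              = local.volume_size
  type              = local.volume_type
  iops              = local.volume_iops

  lifecycle {
    // instance_id is not an attribute of aws_ebs_volume, so it cannot be ignored here in v0.12
    ignore_changes = [kms_key_id]
  }
}

The attachment itself is still managed by aws_volume_attachment, and its force_detach = true is what lets the volume move to the replacement instance without being destroyed.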

Terraform - A managed resource has not been declared in the root module

I'm trying to create an EC2 instance and set up a load balancer using Terraform, but I'm facing the following error. How do I create an instance and configure a load balancer in a single main.tf file?
Error: Reference to undeclared resource

"aws_lb_target_group" "front-end": 27: vpc_id = "${aws_vpc.terrafom-elb.id}"

A managed resource "aws_vpc" "terrafom-elb" has not been declared in the root
module.
Code:
provider "aws" {
  region     = "us-east-1"
  access_key = "*********************"
  secret_key = "**********************"
}
resource "aws_instance" "terraform" {
ami = "ami-07ebfd5b3428b6f4d"
instance_type = "t2.micro"
security_groups = ["nodejs","default"]
tags = {
Name = "terrafom-elb"
}
}
resource "aws_lb" "front-end"{
name = "front-end-lb"
internal = false
security_groups = ["nodejs"]
}
resource "aws_lb_target_group" "front-end" {
name = "front-end"
port = 8989
protocol = "HTTP"
vpc_id = "${aws_vpc.terrafom-elb.id}"
depends_on = [aws_instance.terraform]
}
There's a typo where you're assigning the vpc_id:
vpc_id = "${aws_vpc.terrafom-elb.id}"
should be:
vpc_id = "${aws_vpc.terraform-elb.id}"
Note the missing 'r' in the word 'terraform'.
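Note that the posted code does not actually declare any aws_vpc resource, so the reference will still fail after fixing the typo unless the VPC is declared (or looked up, as in the next answer). A minimal, hypothetical declaration with an illustrative CIDR block would be:

resource "aws_vpc" "terraform-elb" {
  # Illustrative addressing; adjust to your own network plan.
  cidr_block = "10.0.0.0/16"
}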
Alternatively, if the VPC already exists outside this configuration, you can add a data source at the top and pass the VPC ID in as a variable:
data "aws_vpc" "selected" {
  id = var.vpc_id
}
And reference it as vpc_id = data.aws_vpc.selected.id
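Putting that together, a minimal sketch (assuming a vpc_id variable is supplied by the caller):

variable "vpc_id" {
  description = "ID of the existing VPC to place the target group in"
}

data "aws_vpc" "selected" {
  id = var.vpc_id
}

resource "aws_lb_target_group" "front-end" {
  name       = "front-end"
  port       = 8989
  protocol   = "HTTP"
  vpc_id     = data.aws_vpc.selected.id
  depends_on = [aws_instance.terraform]
}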

Terraform resource recreation dynamic AWS RDS instance counts

I have a question relating to AWS RDS cluster and instance creation.
Environment
We recently experimented with:
Terraform v0.11.11
provider.aws v1.41.0
Background
Creating some AWS RDS databases. Our mission was that in some environments (e.g. staging) we may run fewer instances than in others (e.g. production). With this in mind, and not wanting to have totally different Terraform files per environment, we instead decided to specify the database resources just once and use a variable for the number of instances, which is set in our staging.tf and production.tf files respectively.
Potentially one more "quirk" of our setup is that the VPC in which the subnets exist is not defined in Terraform; the VPC already existed via manual creation in the AWS console, so it is provided as a data source, and the subnets for the RDS are specified in Terraform - but again this is dynamic in the sense that in some environments we might have 3 subnets (1 in each AZ), whereas in others perhaps we have only 2 subnets. Again, to achieve this we used iteration as shown below:
Structure
|- /environments
   |- /staging
      - staging.tf
   |- /production
      - production.tf
|- /resources
   - database.tf
Example Environment Variables File
dev.tf
terraform {
  backend "s3" {
    bucket         = "my-bucket-dev"
    key            = "terraform"
    region         = "eu-west-1"
    encrypt        = "true"
    acl            = "private"
    dynamodb_table = "terraform-state-locking"
  }

  required_version = "~> 0.11.8"
}

provider "aws" {
  access_key          = "${var.access_key}"
  secret_key          = "${var.secret_key}"
  region              = "${var.region}"
  version             = "~> 1.33"
  allowed_account_ids = ["XXX"]
}

module "main" {
  source = "../../resources"

  vpc_name                        = "test"
  test_db_name                    = "terraform-test-db-dev"
  test_db_instance_count          = 1
  test_db_backup_retention_period = 7
  test_db_backup_window           = "00:57-01:27"
  test_db_maintenance_window      = "tue:04:40-tue:05:10"
  test_db_subnet_count            = 2
  test_db_subnet_cidr_blocks      = ["10.2.4.0/24", "10.2.5.0/24"]
}
We came to this module-based structure for environment isolation mainly due to these discussions:
https://github.com/hashicorp/terraform/issues/18632#issuecomment-412247266
https://github.com/hashicorp/terraform/issues/13700
https://www.terraform.io/docs/state/workspaces.html#when-to-use-multiple-workspaces
Our Issue
Initial resource creation works fine, our subnets are created, the database cluster starts up.
Our issues start the next time we run a terraform plan or terraform apply (with no changes to the files), at which point we see interesting things like:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  -/+ destroy and then create replacement

Terraform will perform the following actions:

  module.main.aws_rds_cluster.test_db (new resource required)
      id:                            "terraform-test-db-dev" => <computed> (forces new resource)
      availability_zones.#:          "3" => "1" (forces new resource)
      availability_zones.1924028850: "eu-west-1b" => "" (forces new resource)
      availability_zones.3953592328: "eu-west-1a" => "eu-west-1a"
      availability_zones.94988580:   "eu-west-1c" => "" (forces new resource)

and

  module.main.aws_rds_cluster_instance.test_db (new resource required)
      id:                 "terraform-test-db-dev" => <computed> (forces new resource)
      cluster_identifier: "terraform-test-db-dev" => "${aws_rds_cluster.test_db.id}" (forces new resource)
Something about the way we are approaching this appears to be causing terraform to believe that the resource has changed to such an extent that it must destroy the existing resource and create a brand new one.
Config
variable "aws_availability_zones" {
description = "Run the EC2 Instances in these Availability Zones"
type = "list"
default = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
}
variable "test_db_name" {
description = "Name of the RDS instance, must be unique per region and is provided by the module config"
}
variable "test_db_subnet_count" {
description = "Number of subnets to create, is provided by the module config"
}
resource "aws_security_group" "test_db_service" {
name = "${var.test_db_service_user_name}"
vpc_id = "${data.aws_vpc.vpc.id}"
}
resource "aws_security_group" "test_db" {
name = "${var.test_db_name}"
vpc_id = "${data.aws_vpc.vpc.id}"
}
resource "aws_security_group_rule" "test_db_ingress_app_server" {
security_group_id = "${aws_security_group.test_db.id}"
...
source_security_group_id = "${aws_security_group.test_db_service.id}"
}
variable "test_db_subnet_cidr_blocks" {
description = "Cidr block allocated to the subnets"
type = "list"
}
resource "aws_subnet" "test_db" {
count = "${var.test_db_subnet_count}"
vpc_id = "${data.aws_vpc.vpc.id}"
cidr_block = "${element(var.test_db_subnet_cidr_blocks, count.index)}"
availability_zone = "${element(var.aws_availability_zones, count.index)}"
}
resource "aws_db_subnet_group" "test_db" {
name = "${var.test_db_name}"
subnet_ids = ["${aws_subnet.test_db.*.id}"]
}
variable "test_db_backup_retention_period" {
description = "Number of days to keep the backup, is provided by the module config"
}
variable "test_db_backup_window" {
description = "Window during which the backup is done, is provided by the module config"
}
variable "test_db_maintenance_window" {
description = "Window during which the maintenance is done, is provided by the module config"
}
data "aws_secretsmanager_secret" "test_db_master_password" {
name = "terraform/db/test-db/root-password"
}
data "aws_secretsmanager_secret_version" "test_db_master_password" {
secret_id = "${data.aws_secretsmanager_secret.test_db_master_password.id}"
}
data "aws_iam_role" "rds-monitoring-role" {
name = "rds-monitoring-role"
}
resource "aws_rds_cluster" "test_db" {
cluster_identifier = "${var.test_db_name}"
engine = "aurora-mysql"
engine_version = "5.7.12"
# can only request to deploy in AZ's where there is a subnet in the subnet group.
availability_zones = "${slice(var.aws_availability_zones, 0, var.test_db_instance_count)}"
database_name = "${var.test_db_schema_name}"
master_username = "root"
master_password = "${data.aws_secretsmanager_secret_version.test_db_master_password.secret_string}"
preferred_backup_window = "${var.test_db_backup_window}"
preferred_maintenance_window = "${var.test_db_maintenance_window}"
backup_retention_period = "${var.test_db_backup_retention_period}"
db_subnet_group_name = "${aws_db_subnet_group.test_db.name}"
storage_encrypted = true
kms_key_id = "${data.aws_kms_key.kms_rds_key.arn}"
deletion_protection = true
enabled_cloudwatch_logs_exports = ["audit", "error", "general", "slowquery"]
vpc_security_group_ids = ["${aws_security_group.test_db.id}"]
final_snapshot_identifier = "test-db-final-snapshot"
}
variable "test_db_instance_count" {
description = "Number of instances to create, is provided by the module config"
}
resource "aws_rds_cluster_instance" "test_db" {
count = "${var.test_db_instance_count}"
identifier = "${var.test_db_name}"
cluster_identifier = "${aws_rds_cluster.test_db.id}"
availability_zone = "${element(var.aws_availability_zones, count.index)}"
instance_class = "db.t2.small"
db_subnet_group_name = "${aws_db_subnet_group.test_db.name}"
monitoring_interval = 60
engine = "aurora-mysql"
engine_version = "5.7.12"
monitoring_role_arn = "${data.aws_iam_role.rds-monitoring-role.arn}"
tags {
Name = "test_db-${count.index}"
}
}
My question is: is there a way to achieve this so that Terraform would not try to recreate the resource (e.g. ensure that the availability zones of the cluster and the ID of the instance do not change each time we run Terraform)?
It turns out that simply removing the explicit availability zone definitions from the aws_rds_cluster and aws_rds_cluster_instance makes this issue go away, and everything so far appears to work as expected. See also https://github.com/terraform-providers/terraform-provider-aws/issues/7307#issuecomment-457441633
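Concretely, that means letting the subnet group drive AZ placement instead of passing the AZs explicitly. A minimal sketch of the trimmed resources (all other arguments unchanged from the config above):

resource "aws_rds_cluster" "test_db" {
  cluster_identifier   = "${var.test_db_name}"
  engine               = "aurora-mysql"
  engine_version       = "5.7.12"
  # availability_zones removed: the AZs are implied by the subnets in the subnet group
  db_subnet_group_name = "${aws_db_subnet_group.test_db.name}"
  # ... remaining arguments as above ...
}

resource "aws_rds_cluster_instance" "test_db" {
  count              = "${var.test_db_instance_count}"
  identifier         = "${var.test_db_name}"
  cluster_identifier = "${aws_rds_cluster.test_db.id}"
  # availability_zone removed for the same reason
  instance_class     = "db.t2.small"
  # ... remaining arguments as above ...
}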

what's better way to store master_username and master_password in terraform rds configuration?

I need help on the best way to store the master_password in https://www.terraform.io/docs/providers/aws/r/rds_cluster.html. Currently I mask it with XXX before committing to GitHub. Could you please advise a better way to store this? Thanks.
resource "aws_rds_cluster" "default" {
cluster_identifier = "aurora-cluster-demo"
engine = "aurora-mysql"
availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
database_name = "mydb"
master_username = "foo"
master_password = "bar"
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
}
Here is my solution.
(In any case, you can't hide the password in the *.tfstate file, so you should save the tfstate files in S3 with encryption.)
terraform plan -var 'master_username=xxxx' -var 'master_password=xxxx'
So your tf file can define them as variables.
variable "master_username" {}
variable "master_password" {}
resource "aws_rds_cluster" "default" {
cluster_identifier = "aurora-cluster-demo"
engine = "aurora-mysql"
availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
database_name = "mydb"
master_username = "${var.master_username}"
master_password = "${var.master_password}"
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
}
When running terraform plan/apply from a CI/CD pipeline, I used to put the passwords in S3 (encrypted) or SSM (now you can also put them in AWS Secrets Manager), then write a wrapper script to fetch these key/value pairs first and feed them to the terraform command.
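If you'd rather not wrap the terraform command at all, the same value can also be read directly with a data source. A minimal sketch, assuming the password is already stored in SSM Parameter Store under a hypothetical name /prod/rds/master_password:

# Reads an existing (Secure)String parameter; the parameter name is illustrative.
data "aws_ssm_parameter" "master_password" {
  name = "/prod/rds/master_password"
}

resource "aws_rds_cluster" "default" {
  cluster_identifier = "aurora-cluster-demo"
  engine             = "aurora-mysql"
  master_username    = "foo"
  master_password    = data.aws_ssm_parameter.master_password.value
  # ... other arguments as above ...
}

The value still ends up in the state file, so the earlier point about keeping the state encrypted in S3 still applies.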
Preferably, use AWS Secrets Manager and don't store any sensitive data in the state file at all (even though your state file might be encrypted, it's good practice to avoid such implementations).
Here I've created a secrets.tf that's responsible for generating a random password and storing it in AWS Secrets Manager.
resource "random_password" "master"{
length = 16
special = true
override_special = "_!%^"
}
resource "aws_secretsmanager_secret" "password" {
name = "${var.environment}-${var.service_name}-primary-cluster-password"
}
resource "aws_secretsmanager_secret_version" "password" {
secret_id = aws_secretsmanager_secret.password.id
secret_string = random_password.master.result
}
Now to get the data during the tf apply, I use a data source (this reads the value from AWS on the fly during the apply):
data "aws_secretsmanager_secret" "password" {
name = "${var.environment}-${var.service_name}-primary-cluster-password"
}
data "aws_secretsmanager_secret_version" "password" {
secret_id = data.aws_secretsmanager_secret.password.id
}
I then reference them in my Aurora cluster as follows:
resource "aws_rds_global_cluster" "main" {
global_cluster_identifier = "${var.environment}-${var.service_name}-global-cluster"
engine = "aurora"
engine_version = "5.6.mysql_aurora.1.22.2"
database_name = "${var.environment}-${var.service_name}-global-cluster"
}
resource "aws_rds_cluster" "primary" {
provider = aws
engine = aws_rds_global_cluster.main.engine
engine_version = aws_rds_global_cluster.main.engine_version
cluster_identifier = "${var.environment}-${var.service_name}-primary-cluster"
master_username = "dbadmin"
master_password = data.aws_secretsmanager_secret_version.password
database_name = "example_db"
global_cluster_identifier = aws_rds_global_cluster.main.id
db_subnet_group_name = "default"
}
Hopefully this helps.
