How to change project id of openstack glance images? [Terraform] - terraform

I would like to attach my Glance image to a specific project. How can I do this with Terraform?
I use openstack_images_image_access_v2, but it does not work:
resource "openstack_images_image_v2" "upload_amphora_images" {
for_each = var.regions
name = "amphora-x64-haproxy.qcow2"
image_source_url = "https://minio.services.osism.tech/openstack-octavia-amphora-image/octavia-amphora-haproxy-zed.qcow2"
container_format = "bare"
disk_format = "qcow2"
visibility = "shared"
tags = ["amphora",]
region = each.value.name
}
resource "openstack_images_image_access_v2" "amphora_images_member" {
for_each = var.regions
image_id = "${openstack_images_image_v2.upload_amphora_images[each.key].id}"
member_id = "${data.openstack_identity_project_v3.default[each.key].id}"
status = "accepted"
region = each.value.name
}
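For completeness, the member_id above comes from a project data source roughly like the following (a minimal sketch; the data source name default and the looked-up project name are assumptions from my setup):
data "openstack_identity_project_v3" "default" {
  for_each = var.regions
  # Replace with the name of the project that should receive the image.
  name     = "my-project"
  region   = each.value.name
}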
module.glance_images.openstack_images_image_v2.upload_amphora_images["region01"]: Creation complete after 1m24s [id=f0e972e3-5c8d-439f-95cc-ab1f82f13574]
module.glance_images.openstack_images_image_access_v2.amphora_images_member["region01"]: Creating...
module.glance_images.openstack_images_image_access_v2.amphora_images_member["region01"]: Creation complete after 0s [id=f0e972e3-5c8d-439f-95cc-ab1f82f13574/1118f006e7894ab8a2293506d9d870f6]
Could anyone help me?

Related

How to use for_each and for loops for EC2, security groups, and EBS volumes?

I'm new to Terraform and wanted to know if there's a way for me to do this. I want to create multiple EC2 instances, each with its own security group, and attach volumes of different sizes and types.
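For context, ec2_info is a map shaped roughly like this (a sketch only; the keys mirror the attributes referenced in the code below):
variable "ec2_info" {
  type = map(object({
    name                = string
    vpc                 = string
    ingress_cidr_blocks = list(string)
    ingress_rules       = list(string)
    egress_cidr_blocks  = list(string)
    egress_rules        = list(string)
    ec2_instance_type   = string
    # EBS volume attributes used further below
    type                = string
    iops                = number
    availability_zone   = string
    size                = number
    device_name         = string
  }))
}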
module "ec2_sg" {
source = "../../modules/sgs"
for_each = var.ec2_info
name = each.value.name
description = "Security group for ${each.value.name}"
vpc_id = each.value.vpc
ingress_cidr_blocks = each.value.ingress_cidr_blocks
ingress_rules = each.value.ingress_rules
egress_cidr_blocks = each.value.egress_cidr_blocks
egress_rules = each.value.egress_rules
}
module "ec2_instance" {
source = "../../modules/ec2"
for_each = var.ec2_info
name = each.value.name
ami = var.AMIS.linux_ami
instance_type = each.value.ec2_instance_type
vpc_security_group_ids = module.ec2_sg.security_group_id[each.key]
}
resource "aws_ebs_volume" "volume_disk" {
for_each = var.ec2_info
type = each.value.type
iops = each.value.iops
availability_zone = each.value.availability_zone
size = each.value.size
}
resource "aws_volume_attachment" "volume_disk" {
for_each = var.ec2_info
device_name = each.value.device_name
volume_id = aws_ebs_volume.data1[each.key].id
instance_id = module.ec2_instance.id[each.key]
}
This is what I've tried so far, but I can't get the volume part to work. ec2_info contains information about the two different EC2 instances I want to create. What kind of data manipulation should I do to achieve this? Do you think using a separate variable for the disks will achieve this? Something like:
resource "aws_ebs_volume" "volume_disk" {
for_each = var.disks_info
type = each.value.type
iops = each.value.iops
availability_zone = each.value.availability_zone
size = each.value.size
}
resource "aws_volume_attachment" "volume_disk" {
for_each = var.disks_info
device_name = each.value.device_name
volume_id = aws_ebs_volume.data1[each.key].id
instance_id = module.ec2_instance.id[each.key]
}
But if I do this, how do I ensure each disk connects to the intended EC2 instance? Thank you.
You can try to use the "count" meta-argument:
https://www.terraform.io/docs/language/meta-arguments/count.html
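For example, a count-based sketch might look like the following (a sketch only: the shape of disks_info, the instance_key field, and an id output on the ec2 module are assumptions; each disk entry names the instance it belongs to):
variable "disks_info" {
  # One entry per disk; instance_key is the key of the owning instance in var.ec2_info.
  type = list(object({
    instance_key      = string
    device_name       = string
    type              = string
    iops              = number
    availability_zone = string
    size              = number
  }))
}

resource "aws_ebs_volume" "volume_disk" {
  count             = length(var.disks_info)
  type              = var.disks_info[count.index].type
  iops              = var.disks_info[count.index].iops
  availability_zone = var.disks_info[count.index].availability_zone
  size              = var.disks_info[count.index].size
}

resource "aws_volume_attachment" "volume_disk" {
  count       = length(var.disks_info)
  device_name = var.disks_info[count.index].device_name
  volume_id   = aws_ebs_volume.volume_disk[count.index].id
  # instance_key ties each disk back to the module instance created by for_each.
  instance_id = module.ec2_instance[var.disks_info[count.index].instance_key].id
}
This keeps the disk-to-instance mapping explicit in the data rather than relying on ordering.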

Automate the start/stop VMs during off-hours in Azure using terraform

I'm trying to automate starting/stopping VMs during off-hours in Azure using Terraform. This is the way to automate it in the Azure portal: https://learn.microsoft.com/en-us/azure/automation/automation-solution-vm-management. I have done it once in the Azure portal, but I want to do the same using Terraform.
I've searched for days to find out how to do this. I found the same question asked by someone else before (Create Azure Automation Start/Stop solution through Terraform), but there was only one answer, which said it's not possible since the Microsoft solution requires parameters on the runbooks and there aren't any attributes in the provider to add parameters. I'm not quite convinced by that answer.
I'm newish to Terraform, and I know some resources like azurerm_automation_job_schedule and azurerm_automation_runbook must be used, but I couldn't figure out the whole module to do this. Has anyone done anything like this before?
I know the post is a bit old now, but I am responding in case it helps someone trying to figure out how to pass parameters to the runbook. You can pass the required parameters in the azurerm_automation_job_schedule resource. Please note the parameters attribute; this is how we pass the required parameters. You can refer to this link for more details:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/automation_job_schedule
resource "azurerm_automation_job_schedule" "startvm_sched" {
resource_group_name = "IndraTestRG"
automation_account_name = "testautomation"
schedule_name = azurerm_automation_schedule.scheduledstartvm.name
runbook_name = azurerm_automation_runbook.startstopvmrunbook.name
parameters = {
action = "Start"
}
depends_on = [azurerm_automation_schedule.scheduledstartvm]
}
Below is the complete code for the VM start/stop job schedules, using the azurerm_automation_schedule and azurerm_automation_job_schedule resources:
resource "azurerm_automation_schedule" "scheduledstartvm" {
name = "StartVM"
resource_group_name = "IndraTestRG"
automation_account_name = "testautomation"
frequency = "Day"
interval = 1
timezone = "America/Chicago"
start_time = "2021-09-20T13:00:00Z"
description = "Run every day"
}
resource "azurerm_automation_job_schedule" "startvm_sched" {
resource_group_name = "IndraTestRG"
automation_account_name = "testautomation"
schedule_name = azurerm_automation_schedule.scheduledstartvm.name
runbook_name = azurerm_automation_runbook.startstopvmrunbook.name
parameters = {
action = "Start"
}
depends_on = [azurerm_automation_schedule.scheduledstartvm]
}
resource "azurerm_automation_schedule" "scheduledstopvm" {
name = "StopVM"
resource_group_name = "IndraTestRG"
automation_account_name = "testautomation"
frequency = "Day"
interval = 1
timezone = "America/Chicago"
start_time = "2021-09-20T10:30:00Z"
description = "Run every day"
}
resource "azurerm_automation_job_schedule" "stopvm_sched" {
resource_group_name = "IndraTestRG"
automation_account_name = "testautomation"
schedule_name = azurerm_automation_schedule.scheduledstopvm.name
runbook_name = azurerm_automation_runbook.startstopvmrunbook.name
parameters = {
action = "Stop"
}
depends_on = [azurerm_automation_schedule.scheduledstopvm]
}
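Note that the schedules above reference azurerm_automation_runbook.startstopvmrunbook, which is not shown. A minimal sketch of that resource might look like the following (the runbook name, location, runbook type, and published script URI are assumptions; point it at your own start/stop script):
resource "azurerm_automation_runbook" "startstopvmrunbook" {
  name                    = "StartStopVM"
  location                = "Central US"
  resource_group_name     = "IndraTestRG"
  automation_account_name = "testautomation"
  log_verbose             = true
  log_progress            = true
  description             = "Starts or stops VMs depending on the action parameter"
  runbook_type            = "PowerShell"

  publish_content_link {
    # URI of the PowerShell script to publish; replace with your own script location.
    uri = "https://example.com/scripts/StartStopVM.ps1"
  }
}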

Using "count" in aws_route53_record terrafrom resource

I'm starting to use (and learn) Terraform. For now, I need to create multiple DO droplets and attach them to the AWS Route 53 zone. This is what I'm trying to do:
My DO Terraform file:
# Configure the DigitalOcean Provider
provider "digitalocean" {
  token = var.do_token
}

# Create a new tag
resource "digitalocean_tag" "victor" {
  name = "victor-fee1good22"
}

resource "digitalocean_droplet" "web" {
  count    = 2
  image    = var.do_config["image"]
  name     = "web-${count.index}"
  region   = var.do_config["region"]
  size     = var.do_config["size"]
  ssh_keys = [var.public_ssh_key, var.pv_ssh_key]
  tags     = [digitalocean_tag.victor.name]
}
My route53 file:
provider "aws" {
version = "~> 2.0"
region = "us-east-1"
access_key = var.aws_a_key
secret_key = var.aws_s_key
}
data "aws_route53_zone" "selected" {
name = "devops.rebrain.srwx.net"
}
resource "aws_route53_record" "www" {
сount = length(digitalocean_droplet.web)
zone_id = data.aws_route53_zone.selected.zone_id
name = "web_${count.index}"
type = "A"
ttl = "300"
records = [digitalocean_droplet.web[count.index].ipv4_address]
}
But I always get the error: The "count" object can be used only in "resource" and "data" blocks, and only when the "count" argument is set. What did I do wrong?
Thanks!
UPDATE:
I've resolved this by using сount = 2 instead of сount = length(digitalocean_droplet.web).
It works, but it would be better to have a dynamic value instead of a constant count. :)
You want to get the number of resources that have not yet been created; Terraform couldn't do that. I think the simplest way is to use a common variable for the number of droplets:
resource "digitalocean_droplet" "test" {
count = var.number_of_vps
image = "ubuntu-18-04-x64"
name = "test-1"
region = data.digitalocean_regions.available.regions[0].slug
size = "s-1vcpu-1gb"
}
resource "aws_route53_record" "test" {
count = var.number_of_vps
zone_id = data.aws_route53_zone.primary.zone_id
name = "${local.login}-${count.index}.${data.aws_route53_zone.primary.name}"
type = "A"
ttl = "300"
records = [digitalocean_droplet.test[count.index].ipv4_address]
}
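The shared variable might be declared roughly like this (a sketch; the name number_of_vps comes from the snippet above, and the default value is an assumption):
variable "number_of_vps" {
  description = "Number of droplets (and matching DNS records) to create"
  type        = number
  default     = 2
}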
This trick helped - https://github.com/hashicorp/terraform/issues/12570#issuecomment-291517239
resource "aws_route53_record" "dns" {
count = "${length(var.ips) > 0 ? length(var.domains) : 0}"
// ...
}

Terraform resource recreation dynamic AWS RDS instance counts

I have a question relating to AWS RDS cluster and instance creation.
Environment
We recently experimented with:
Terraform v0.11.11
provider.aws v1.41.0
Background
We are creating some AWS RDS databases. Our goal was that in some environments (e.g. staging) we may run fewer instances than in others (e.g. production). With this in mind, and not wanting to maintain totally different Terraform files per environment, we decided to specify the database resources just once and use a variable for the number of instances, which is set in our staging.tf and production.tf files respectively.
One more potential "quirk" of our setup is that the VPC in which the subnets exist is not defined in Terraform; the VPC already existed via manual creation in the AWS console, so it is provided as a data source, and the subnets for the RDS are specified in Terraform. Again this is dynamic in the sense that in some environments we might have 3 subnets (1 in each AZ), whereas in others perhaps we have only 2 subnets. To achieve this we used iteration as shown below:
Structure
|- /environments
   |- /staging
      |- staging.tf
   |- /production
      |- production.tf
|- /resources
   |- database.tf
Example Environment Variables File
dev.tf
terraform {
  backend "s3" {
    bucket         = "my-bucket-dev"
    key            = "terraform"
    region         = "eu-west-1"
    encrypt        = "true"
    acl            = "private"
    dynamodb_table = "terraform-state-locking"
  }

  version = "~> 0.11.8"
}
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
version = "~> 1.33"
allowed_account_ids = ["XXX"]
}
module "main" {
source = "../../resources"
vpc_name = "test"
test_db_name = "terraform-test-db-dev"
test_db_instance_count = 1
test_db_backup_retention_period = 7
test_db_backup_window = "00:57-01:27"
test_db_maintenance_window = "tue:04:40-tue:05:10"
test_db_subnet_count = 2
test_db_subnet_cidr_blocks = ["10.2.4.0/24", "10.2.5.0/24"]
}
We came to this module-based structure for environment isolation mainly due to these discussions:
https://github.com/hashicorp/terraform/issues/18632#issuecomment-412247266
https://github.com/hashicorp/terraform/issues/13700
https://www.terraform.io/docs/state/workspaces.html#when-to-use-multiple-workspaces
Our Issue
Initial resource creation works fine: our subnets are created and the database cluster starts up.
Our issues start when we subsequently run a terraform plan or terraform apply (with no changes to the files), at which point we see interesting things like:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  -/+ destroy and then create replacement

Terraform will perform the following actions:

  module.main.aws_rds_cluster.test_db (new resource required)
      id:                            "terraform-test-db-dev" => (forces new resource)
      availability_zones.#:          "3" => "1" (forces new resource)
      availability_zones.1924028850: "eu-west-1b" => "" (forces new resource)
      availability_zones.3953592328: "eu-west-1a" => "eu-west-1a"
      availability_zones.94988580:   "eu-west-1c" => "" (forces new resource)
and
  module.main.aws_rds_cluster_instance.test_db (new resource required)
      id:                 "terraform-test-db-dev" => (forces new resource)
      cluster_identifier: "terraform-test-db-dev" => "${aws_rds_cluster.test_db.id}" (forces new resource)
Something about the way we are approaching this appears to be causing Terraform to believe that the resource has changed to such an extent that it must destroy the existing resource and create a brand new one.
Config
variable "aws_availability_zones" {
description = "Run the EC2 Instances in these Availability Zones"
type = "list"
default = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
}
variable "test_db_name" {
description = "Name of the RDS instance, must be unique per region and is provided by the module config"
}
variable "test_db_subnet_count" {
description = "Number of subnets to create, is provided by the module config"
}
resource "aws_security_group" "test_db_service" {
name = "${var.test_db_service_user_name}"
vpc_id = "${data.aws_vpc.vpc.id}"
}
resource "aws_security_group" "test_db" {
name = "${var.test_db_name}"
vpc_id = "${data.aws_vpc.vpc.id}"
}
resource "aws_security_group_rule" "test_db_ingress_app_server" {
security_group_id = "${aws_security_group.test_db.id}"
...
source_security_group_id = "${aws_security_group.test_db_service.id}"
}
variable "test_db_subnet_cidr_blocks" {
description = "Cidr block allocated to the subnets"
type = "list"
}
resource "aws_subnet" "test_db" {
count = "${var.test_db_subnet_count}"
vpc_id = "${data.aws_vpc.vpc.id}"
cidr_block = "${element(var.test_db_subnet_cidr_blocks, count.index)}"
availability_zone = "${element(var.aws_availability_zones, count.index)}"
}
resource "aws_db_subnet_group" "test_db" {
name = "${var.test_db_name}"
subnet_ids = ["${aws_subnet.test_db.*.id}"]
}
variable "test_db_backup_retention_period" {
description = "Number of days to keep the backup, is provided by the module config"
}
variable "test_db_backup_window" {
description = "Window during which the backup is done, is provided by the module config"
}
variable "test_db_maintenance_window" {
description = "Window during which the maintenance is done, is provided by the module config"
}
data "aws_secretsmanager_secret" "test_db_master_password" {
name = "terraform/db/test-db/root-password"
}
data "aws_secretsmanager_secret_version" "test_db_master_password" {
secret_id = "${data.aws_secretsmanager_secret.test_db_master_password.id}"
}
data "aws_iam_role" "rds-monitoring-role" {
name = "rds-monitoring-role"
}
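# Note: the config references data.aws_vpc.vpc, which is defined elsewhere
# (the VPC was created manually in the console). A minimal sketch of that lookup,
# assuming the VPC is found by a Name tag matching var.vpc_name, might be:
data "aws_vpc" "vpc" {
  tags {
    Name = "${var.vpc_name}"
  }
}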
resource "aws_rds_cluster" "test_db" {
cluster_identifier = "${var.test_db_name}"
engine = "aurora-mysql"
engine_version = "5.7.12"
# can only request to deploy in AZ's where there is a subnet in the subnet group.
availability_zones = "${slice(var.aws_availability_zones, 0, var.test_db_instance_count)}"
database_name = "${var.test_db_schema_name}"
master_username = "root"
master_password = "${data.aws_secretsmanager_secret_version.test_db_master_password.secret_string}"
preferred_backup_window = "${var.test_db_backup_window}"
preferred_maintenance_window = "${var.test_db_maintenance_window}"
backup_retention_period = "${var.test_db_backup_retention_period}"
db_subnet_group_name = "${aws_db_subnet_group.test_db.name}"
storage_encrypted = true
kms_key_id = "${data.aws_kms_key.kms_rds_key.arn}"
deletion_protection = true
enabled_cloudwatch_logs_exports = ["audit", "error", "general", "slowquery"]
vpc_security_group_ids = ["${aws_security_group.test_db.id}"]
final_snapshot_identifier = "test-db-final-snapshot"
}
variable "test_db_instance_count" {
description = "Number of instances to create, is provided by the module config"
}
resource "aws_rds_cluster_instance" "test_db" {
count = "${var.test_db_instance_count}"
identifier = "${var.test_db_name}"
cluster_identifier = "${aws_rds_cluster.test_db.id}"
availability_zone = "${element(var.aws_availability_zones, count.index)}"
instance_class = "db.t2.small"
db_subnet_group_name = "${aws_db_subnet_group.test_db.name}"
monitoring_interval = 60
engine = "aurora-mysql"
engine_version = "5.7.12"
monitoring_role_arn = "${data.aws_iam_role.rds-monitoring-role.arn}"
tags {
Name = "test_db-${count.index}"
}
}
My question is: is there a way to achieve this so that Terraform does not try to recreate the resources (i.e. ensure that the availability zones of the cluster and the ID of the instance do not change each time we run Terraform)?
It turns out that simply removing the explicit availability zone definitions from the aws_rds_cluster and aws_rds_cluster_instance makes this issue go away, and everything so far appears to work as expected. See also https://github.com/terraform-providers/terraform-provider-aws/issues/7307#issuecomment-457441633
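For illustration, the relevant parts after the fix might look like the following (a sketch based on the config above, simply with availability_zones and availability_zone dropped; the omitted arguments are unchanged):
resource "aws_rds_cluster" "test_db" {
  cluster_identifier = "${var.test_db_name}"
  engine             = "aurora-mysql"
  engine_version     = "5.7.12"
  # availability_zones removed: the DB subnet group now determines placement.
  db_subnet_group_name = "${aws_db_subnet_group.test_db.name}"
  # ... remaining arguments as in the original config ...
}

resource "aws_rds_cluster_instance" "test_db" {
  count              = "${var.test_db_instance_count}"
  identifier         = "${var.test_db_name}"
  cluster_identifier = "${aws_rds_cluster.test_db.id}"
  # availability_zone removed: RDS picks an AZ from the subnet group.
  instance_class       = "db.t2.small"
  db_subnet_group_name = "${aws_db_subnet_group.test_db.name}"
  # ... remaining arguments as in the original config ...
}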

Create Azure Automation Start/Stop solution through Terraform

I'm trying to set up machines to be automatically started/stopped using the newish Azure Automation add-in (https://learn.microsoft.com/en-us/azure/automation/automation-solution-vm-management), with this being set up by Terraform.
I can create the automation account, but I don't know how to create the start/stop functionality. Can someone help fill in the blanks?
The AzureRM provider can manage aspects of runbooks; have a look at the documentation. Using azurerm_automation_runbook and azurerm_automation_schedule you can create and schedule runbooks. However, the Microsoft solution requires parameters on the runbooks, and I don't see any attributes in the provider to add parameters, so this may not be possible.
You can pass the required parameters in the azurerm_automation_job_schedule resource. Please note the parameters attribute in the code below; this is how we pass the required parameters. You can refer to this link for more details: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/automation_job_schedule
resource "azurerm_automation_job_schedule" "startvm_sched" {
resource_group_name = "IndraTestRG"
automation_account_name = "testautomation"
schedule_name = azurerm_automation_schedule.scheduledstartvm.name
runbook_name = azurerm_automation_runbook.startstopvmrunbook.name
parameters = {
action = "Start"
}
depends_on = [azurerm_automation_schedule.scheduledstartvm]
}
Below is the complete code for the VM start/stop job schedules, using the azurerm_automation_schedule and azurerm_automation_job_schedule resources:
resource "azurerm_automation_schedule" "scheduledstartvm" {
name = "StartVM"
resource_group_name = "IndraTestRG"
automation_account_name = "testautomation"
frequency = "Day"
interval = 1
timezone = "America/Chicago"
start_time = "2021-09-20T13:00:00Z"
description = "Run every day"
}
resource "azurerm_automation_job_schedule" "startvm_sched" {
resource_group_name = "IndraTestRG"
automation_account_name = "testautomation"
schedule_name = azurerm_automation_schedule.scheduledstartvm.name
runbook_name = azurerm_automation_runbook.startstopvmrunbook.name
parameters = {
action = "Start"
}
depends_on = [azurerm_automation_schedule.scheduledstartvm]
}
resource "azurerm_automation_schedule" "scheduledstopvm" {
name = "StopVM"
resource_group_name = "IndraTestRG"
automation_account_name = "testautomation"
frequency = "Day"
interval = 1
timezone = "America/Chicago"
start_time = "2021-09-20T10:30:00Z"
description = "Run every day"
}
resource "azurerm_automation_job_schedule" "stopvm_sched" {
resource_group_name = "IndraTestRG"
automation_account_name = "testautomation"
schedule_name = azurerm_automation_schedule.scheduledstopvm.name
runbook_name = azurerm_automation_runbook.startstopvmrunbook.name
parameters = {
action = "Stop"
}
depends_on = [azurerm_automation_schedule.scheduledstopvm]
}
