I just want to know how to create multiple VMs using the Terraform libvirt provider.
I managed to create one VM; is there a way to do it in one .tf file?
# libvirt.tf

# add the provider
provider "libvirt" {
  uri = "qemu:///system"
}

# create pool
resource "libvirt_pool" "ubuntu" {
  name = "ubuntu-pool"
  type = "dir"
  path = "/libvirt_images/ubuntu_pool/"
}

# create image
resource "libvirt_volume" "image-qcow2" {
  name   = "ubuntu-amd64.qcow2"
  pool   = libvirt_pool.ubuntu.name
  source = "${path.module}/downloads/bionic-server-cloudimg-amd64.img"
  format = "qcow2"
}
I used cloud-config to define the user and the SSH connection.
I want to extend this code so I can create multiple VMs.
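One pattern I was considering is to add count to the volume and domain resources. A rough sketch (the vm_count variable, the naming scheme, and the commented cloud-init reference are placeholders I made up, not part of my existing file):

variable "vm_count" {
  default = 2
}

# one copy-on-write disk per VM, backed by the base image above
resource "libvirt_volume" "vm_disk" {
  count          = var.vm_count
  name           = "ubuntu-vm-${count.index}.qcow2"
  pool           = libvirt_pool.ubuntu.name
  base_volume_id = libvirt_volume.image-qcow2.id
  format         = "qcow2"
}

# one domain per VM, each attached to its own disk
resource "libvirt_domain" "vm" {
  count  = var.vm_count
  name   = "ubuntu-vm-${count.index}"
  memory = 1024
  vcpu   = 1

  # cloudinit = libvirt_cloudinit_disk.commoninit.id  # placeholder: my existing cloud-config would attach here

  disk {
    volume_id = libvirt_volume.vm_disk[count.index].id
  }

  network_interface {
    network_name = "default"
  }
}

Is count the right approach here, or is for_each with a map of per-VM settings preferred?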
Related
I have created a storage account, and a container inside it, to store my AKS backup using Terraform. I created a child module for the storage account and the container, and I call them from the root module's main.tf as two modules, "module aks_backup_storage" and "module aks_backup_container". The modules are created successfully when I run terraform apply, but at the end it raises the following errors in the console.
A resource with the ID "/subscriptions/...../resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_storage_account" for more information.
failed creating container: failed creating container: containers.Client#Create: Failure sending request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="ContainerAlreadyExists" Message="The specified container already exists.\nRequestId:f.........\nTime:2022-12-28T12:52:08.2075701Z"
Root module
module "aks_backup_storage" {
  source                         = "../modules/aks_pv_storage_container"
  rg_aks_backup_storage          = var.rg_aks_backup_storage
  aks_backup_storage_account    = var.aks_backup_storage_account
  aks_backup_container          = var.aks_backup_container
  rg_aks_backup_storage_location = var.rg_aks_backup_storage_location
  aks_backup_retention_days     = var.aks_backup_retention_days
}
Child module
resource "azurerm_resource_group" "rg_aksbackup" {
name = var.rg_aks_backup_storage
location = var.rg_aks_backup_storage_location
}
resource "azurerm_storage_account" "aks_backup_storage" {
name = var.aks_backup_storage_account
resource_group_name = var.rg_aks_backup_storage
location = var.rg_aks_backup_storage_location
account_kind = "StorageV2"
account_tier = "Standard"
account_replication_type = "ZRS"
access_tier = "Hot"
enable_https_traffic_only = true
min_tls_version = "TLS1_2"
#allow_blob_public_access = false
allow_nested_items_to_be_public = false
is_hns_enabled = false
blob_properties {
container_delete_retention_policy {
days = var.aks_backup_retention_days
}
delete_retention_policy {
days = var.aks_backup_retention_days
}
}
}
# Different container can be created for the different backup level such as cluster, Namespace, PV
resource "azurerm_storage_container" "aks_backup_container" {
  #name                 = "aks-backup-container"
  name                  = var.aks_backup_container
  #storage_account_name = azurerm_storage_account.aks_backup_storage.name
  storage_account_name  = var.aks_backup_storage_account
}
I have also tried to import the resource using the command below:
terraform import ['azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage']
But zsh reports that no matches are found:
zsh: no matches found: [azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage/]
I had no issue when I was creating the resources using the same code without declaring any modules.
Now I have several modules called from main.tf in the root module.
Here is my project directory structure:
I really appreciate any suggestions. Thanks in advance.
variable.tf
variable "rg_aks_backup_storage" {
type = string
description = "storage account name for the backup"
default = "rg-aks-backup-storage"
}
variable "aks_backup_storage_account" {
type = string
description = "storage account name for the backup"
default = "aksbackupstorage"
}
variable "aks_backup_container" {
type = string
description = "storage container name "
#default = "aks-storage-container"
default = "aksbackupstoragecontaine"
}
variable "rg_aks_backup_storage_location" {
type = string
default = "westeurope"
}
variable "aks_backup_retention_days" {
type = number
default = 90
}
The storage account name that you use must be unique within Azure (see naming restrictions). I checked, and the default storage account name that you are using is already taken. Have you tried changing the name to something you know is unique?
A way to do this consistently would be to add a random suffix at the end of the name, e.g.:
resource "random_string" "random_suffix" {
length = 6
special = false
upper = false
}
resource "azurerm_storage_account" "aks_backup_storage" {
name = join("", tolist([var.aks_backup_storage_account, random_string.random_suffix.result]))
...
}
I also received the same error when I tried to run terraform apply while creating a container registry.
It usually occurs when the local terraform state file does not match the resources as they exist in the portal.
Even if a resource with the same name no longer exists in the portal or resource group, it can still appear in the terraform state file if it was deployed previously. If you run into this kind of issue, compare the tf state file with what is in the portal, and if there is a mismatch, use terraform import to bring the resource under management.
Note: Validate that the terraform state is consistent, and run terraform init and terraform apply once you are done with the changes.
Here I imported a container registry (as an example), and it imported successfully:
terraform import azurerm_container_registry.acr "/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>/providers/Microsoft.ContainerRegistry/registries/xxxxcontainerRegistry1"
After that I ran terraform apply again and the resource deployed successfully, without any errors, and it shows up in the portal.
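For the storage account from the question above, a comparable command would reference the resource address inside the child module (assuming the module label aks_backup_storage from the root module; note there are no square brackets, which is what made zsh complain):

terraform import module.aks_backup_storage.azurerm_storage_account.aks_backup_storage "/subscriptions/<subscriptionID>/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage"

The existing container would need its own terraform import as well; its import ID uses a different format, so check the azurerm_storage_container documentation for the exact string.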
I have defined a resource to provision Databricks workspaces on Azure using Terraform as follows; it consumes a list of inputs (one entry per workspace) from the tfvars file and provisions them.
resource "azurerm_databricks_workspace" "workspace" {
for_each = { for r in var.databricks_workspace_list : r.workspace_nm => r}
name = each.key
resource_group_name = each.value.resource_group_name
location = each.value.location
sku = "standard"
tags = {
Environment = "Dev"
}
}
I am trying to create an additional resource as below:
resource "databricks_instance_pool" "smallest_nodes" {
instance_pool_name = "Smallest Nodes"
min_idle_instances = 0
max_capacity = 300
node_type_id = data.databricks_node_type.smallest.id // data block is defined
idle_instance_autotermination_minutes = 10
}
To create the instance pool, I need to pass the workspace id in the databricks provider block as below:
provider "databricks" {
azure_client_id= *******
azure_client_secret= *******
azure_tenant_id= *******
azure_workspace_resource_id = azurerm_databricks_workspace.workspace.id
}
But when I run terraform plan, it fails with the error below:
Error: Missing resource instance key

  azure_workspace_resource_id = azurerm_databricks_workspace.workspace.id

Because azurerm_databricks_workspace.workspace has "for_each" set, its attributes must be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
    azurerm_databricks_workspace.workspace[each.key]
I couldn't use for_each in the provider block, and I haven't been able to find a way to index the workspace id in the provider block.
I would appreciate your inputs.
TF version: 0.13
Azure RM: 3.10.0
Databricks: 0.5.7
The problem is that you're creating multiple workspaces by using for_each in the azurerm_databricks_workspace resource, but your provider block refers to the resource generically (without an instance key), so Terraform complains.
The solution here would be one of the following:
Remove for_each if you're creating just one workspace.
Instead of azurerm_databricks_workspace.workspace.id, refer to azurerm_databricks_workspace.workspace[<name>].id, where <name> is the specific Databricks instance from your list of workspaces (see the sketch below).
P.S. Your databricks_instance_pool resource doesn't have an explicit depends_on, so the operation will fail with an authentication error as described here.
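A rough sketch of the second option (the "workspace1" key and the var.* credentials are placeholders for whatever is in your tfvars):

provider "databricks" {
  azure_client_id             = var.client_id
  azure_client_secret         = var.client_secret
  azure_tenant_id             = var.tenant_id
  azure_workspace_resource_id = azurerm_databricks_workspace.workspace["workspace1"].id
}

resource "databricks_instance_pool" "smallest_nodes" {
  instance_pool_name                    = "Smallest Nodes"
  min_idle_instances                    = 0
  max_capacity                          = 300
  node_type_id                          = data.databricks_node_type.smallest.id
  idle_instance_autotermination_minutes = 10

  # make sure the workspace exists before the Databricks API is called
  depends_on = [azurerm_databricks_workspace.workspace]
}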
I am new to Terraform and I am trying to deploy a machine (with a qcow2 image) on KVM automatically via Terraform.
I found this tf file:
provider "libvirt" {
uri = "qemu:///system"
}
#provider "libvirt" {
# alias = "server2"
# uri = "qemu+ssh://root#192.168.100.10/system"
#}
resource "libvirt_volume" "centos7-qcow2" {
name = "centos7.qcow2"
pool = "default"
source = "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"
#source = "./CentOS-7-x86_64-GenericCloud.qcow2"
format = "qcow2"
}
# Define KVM domain to create
resource "libvirt_domain" "db1" {
name = "db1"
memory = "1024"
vcpu = 1
network_interface {
network_name = "default"
}
disk {
volume_id = "${libvirt_volume.centos7-qcow2.id}"
}
console {
type = "pty"
target_type = "serial"
target_port = "0"
}
graphics {
type = "spice"
listen_type = "address"
autoport = true
}
}
My questions are:
1. (source) Does the path of my qcow file have to be local on my computer?
2. I have a KVM machine that I connect to remotely by its IP. Where should I put this IP in this tf file?
3. When I did this manually, I ran virt-manager. Do I need to reference it anywhere here?
Thanks a lot.
1. No, it does not have to be local; the source can also be an https URL.
2. Do you mean the KVM host on which the VMs will be created? Then you need to configure remote KVM/libvirt access on that host, and put its IP in the uri of the provider block:
uri = "qemu+ssh://username@IP_OF_HOST/system"
3. You don't need virt-manager when you use Terraform. You should use Terraform resources for managing the VMs.
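For example (a sketch based on the commented-out block in the question; the alias name, IP, and user are placeholders):

# default provider: local hypervisor
provider "libvirt" {
  uri = "qemu:///system"
}

# second provider: remote KVM host reached over SSH
provider "libvirt" {
  alias = "remote"
  uri   = "qemu+ssh://root@192.168.100.10/system"
}

# resources created on the remote host select that provider explicitly
resource "libvirt_volume" "centos7_remote" {
  provider = libvirt.remote
  name     = "centos7-remote.qcow2"
  pool     = "default"
  source   = "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"
  format   = "qcow2"
}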
https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs
https://github.com/dmacvicar/terraform-provider-libvirt/tree/main/examples/v0.13
I have created a GCP Kubernetes cluster using Terraform and configured a few Kubernetes resources such as namespaces and Helm releases. I would like Terraform to automatically destroy/recreate all of the Kubernetes cluster resources if the GCP cluster is destroyed/recreated, but I can't seem to figure out how to do it.
The behavior I am trying to recreate is similar to what you would get if you used triggers with null_resources. Is this possible with normal resources?
resource "google_container_cluster" "primary" {
name = "marcellus-wallace"
location = "us-central1-a"
initial_node_count = 3
resource "kubernetes_namespace" "example" {
metadata {
annotations = {
name = "example-annotation"
}
labels = {
mylabel = "label-value"
}
name = "terraform-example-namespace"
#Something like this, but this only works with null_resources
triggers {
cluster_id = "${google_container_cluster.primary.id}"
}
}
}
In your specific case, you don't need to specify any explicit dependencies. They will be set automatically because you have cluster_id = "${google_container_cluster.primary.id}" in your second resource.
In case you need to set a dependency manually, you can use the depends_on meta-argument, for example:
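A minimal sketch (the depends_on line is the only point here; the namespace details are trimmed for brevity):

resource "kubernetes_namespace" "example" {
  metadata {
    name = "terraform-example-namespace"
  }

  # explicit ordering: the namespace is created after the cluster and destroyed before it
  depends_on = [google_container_cluster.primary]
}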
I'm brand new to Terraform, and I'm utilizing Terragrunt to help me get things rolling. I have a decent amount of infrastructure to migrate and get set up with Terraform, but I'm getting my feet underneath me first. We have multiple VPCs in different regions with a lot of the same security group rules (web, db, etc.) that I want to replicate across each region.
I have a simple example of how I currently have an EC2 module set up to recreate the security group rules, and I was wondering if there's a better way to organize this code so I don't have to create a new module call for the same SG rule in each region, i.e. some smart way to utilize lists for my VPCs, providers, etc.
Since this is just one SG rule across two regions, I'm trying to avoid this growing ugly as we scale up to even more regions and add multiple SG rules.
My state is currently stored in S3, and in this setup I pull the remote state so I can access the VPC outputs from another module I used to create the VPCs.
terraform {
  backend "s3" {}
}

provider "aws" {
  version = "~> 1.31.0"
  region  = "${var.region}"
  profile = "${var.profile}"
}

provider "aws" {
  version = "~> 1.31.0"
  alias   = "us-west-1"
  region  = "us-west-1"
  profile = "${var.profile}"
}

#################################
# Data sources to get VPC details
#################################
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config {
    bucket  = "${var.vpc_remote_state_bucket}"
    key     = "${var.vpc_remote_state_key}"
    region  = "${var.region}"
    profile = "${var.profile}"
  }
}

#####################
# Security group rule
#####################
module "east1_vpc_web_server_sg" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "2.5.0"

  name        = "web-server"
  description = "Security group for web-servers with HTTP ports open within the VPC"
  vpc_id      = "${data.terraform_remote_state.vpc.us_east_vpc1_id}"

  # Allow VPC public subnets to talk to each other for APIs
  ingress_cidr_blocks = ["${data.terraform_remote_state.vpc.us_east_vpc1_public_subnets_cidr_blocks}"]
  ingress_rules       = ["https-443-tcp", "http-80-tcp"]

  # List of maps
  ingress_with_cidr_blocks = "${var.web_server_ingress_with_cidr_blocks}"

  # Allow egress of all protocols to outside
  egress_rules = ["all-all"]

  tags = {
    Terraform   = "true"
    Environment = "${var.environment}"
  }
}

module "west1_vpc_web_server_sg" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "2.5.0"

  providers {
    aws = "aws.us-west-1"
  }

  name        = "web-server"
  description = "Security group for web-servers with HTTP ports open within the VPC"
  vpc_id      = "${data.terraform_remote_state.vpc.us_west_vpc1_id}"

  # Allow VPC public subnets to talk to each other for APIs
  ingress_cidr_blocks = ["${data.terraform_remote_state.vpc.us_west_vpc1_public_subnets_cidr_blocks}"]
  ingress_rules       = ["https-443-tcp", "http-80-tcp"]

  ingress_with_cidr_blocks = "${var.web_server_ingress_with_cidr_blocks}"

  # Allow egress of all protocols to outside
  egress_rules = ["all-all"]

  tags = {
    Terraform   = "true"
    Environment = "${var.environment}"
  }
}
Your current setup uses the same module twice, differing only in the provider. You can pass multiple providers down to the module (see the documentation). Then, within the module, you can use the variables you specify once in your root configuration to create all the instances you need.
However, since each copy of the resource still needs its own provider, you will have at least some code duplication down the line.
Your code could then look something like this:
module "vpc_web_server_sg" {
source = "terraform-aws-modules/security-group/aws"
version = "2.5.0"
providers {
aws.main = "aws"
aws.secondary = "aws.us-west-1"
}
name = "web-server"
description = "Security group for web-servers with HTTP ports open within the VPC"
vpc_id = "${data.terraform_remote_state.vpc.us_west_vpc1_id}"
# Allow VPC public subnets to talk to each other for API's
ingress_cidr_blocks = ["${data.terraform_remote_state.vpc.us_west_vpc1_public_subnets_cidr_blocks}"]
ingress_rules = ["https-443-tcp", "http-80-tcp"]
ingress_with_cidr_blocks = "${var.web_server_ingress_with_cidr_blocks}"
# Allow engress all protocols to outside
egress_rules = ["all-all"]
tags = {
Terraform = "true"
Environment = "${var.environment}"
}
}
Inside your module you can then use the main and secondary providers to deploy all of your required resources, for example:
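A hypothetical layout of such a module (the file path, variable names, and resources below are illustrative; this is not the registry module used above):

# modules/web_server_sg/main.tf

# empty "proxy" provider blocks, filled in by the providers map of the caller
provider "aws" {
  alias = "main"
}

provider "aws" {
  alias = "secondary"
}

variable "east_vpc_id" {}
variable "west_vpc_id" {}

# the same rule is declared once per region, each pinned to one of the passed-in providers
resource "aws_security_group" "web_east" {
  provider = "aws.main"
  name     = "web-server"
  vpc_id   = "${var.east_vpc_id}"
}

resource "aws_security_group" "web_west" {
  provider = "aws.secondary"
  name     = "web-server"
  vpc_id   = "${var.west_vpc_id}"
}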