Delay of 10 minutes between resource creations with the Terraform vSphere provider

Is there a way to delay module resource creation with the Terraform vSphere provider? I want to introduce a 10-minute delay between each VM instance creation due to infrastructure constraints. Each VM is created by a module occurrence.
At the moment, Terraform is doing its best to deploy at maximum speed!
I tried depends_on on the module, with no luck.
Versions used:
vsphere 6.0
terraform 0.11.3
provider.vsphere v1.3.2

You could use a provisioner within the instance resource and run some kind of sleep command there before the next VM instance is created.
resource "vsphere_virtual_machine" "vpshere_build_machine" {
provisioner "local-exec" {
command = "ping 127.0.0.1 -n 10 > nul" #or sleep 10
}
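Since the question is about ordering module occurrences and Terraform 0.11 does not support depends_on on modules, a common workaround is to thread an output of one module instance into a dummy input of the next, so the second VM (and its sleep provisioner) only starts after the first has finished. A minimal sketch, assuming a hypothetical module layout with a vm_id output and a wait_for variable; note that wait_for must actually be referenced somewhere inside the module (for example in a tag or a null_resource trigger) for the dependency to take effect:
module "vm_a" {
  source = "./modules/vm"
  name   = "vm-a"
}

module "vm_b" {
  source   = "./modules/vm"
  name     = "vm-b"
  wait_for = "${module.vm_a.vm_id}" # dummy input, used only for ordering
}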

In my case I could solve it by doing this trick:
"sleep [s] && command"
For example:
provisioner "local-exec" {
command = "sleep 30 && ansible-playbook -i ..."
}

Related

I want to create 2 Azure VMs and install Jenkins and SonarQube using Terraform. Does anyone know how to do that?

I have to deploy a .NET Core and React application on one of those virtual machines.
You can create the infrastructure using Terraform.
Use Ansible to configure Jenkins and SonarQube for a cleaner approach.
Refer to the code snippet below:
provisioner "remote-exec" {
inline = ["sudo apt -y install python"]
connection {
type = "ssh"
user = "ubuntu"
private_key = "${file(var.ssh_key_private)}"
}
}
provisioner "local-exec" {
command = "ansible-playbook -u ubuntu -i '${self.public_ip},' --private-key ${var.ssh_key_private} provision.yml"
}
A second way would be to create a shell script and execute it using Terraform:
provisioner "local-exec" {
command = "/bin/bash provision.sh"
}

Terraform resource lifecycle destroy_after_create?

Is there an option in the Terraform configuration that would automatically destroy a resource after its dependents have been created? I am thinking of something like a destroy_after_create lifecycle argument, which doesn't exist.
I want to delete all Lambda archives (S3 objects) after the Lambda functions get created. Obviously I can create a script to run "terraform destroy -target" after "terraform apply" completes; however, I am looking for something within the Terraform configuration itself.
To hack your way around this in Terraform, you can use the following combination:
local-exec provisioner
null-resource
AWS CLI - aws s3api delete-object
Terraform depends_on
Like this:
resource "aws_s3_bucket" "my_bucket" {
bucket = "my-bucket"
}
resource "null_resource" "delete_lambda" {
depends_on = [
aws_s3_bucket.my_bucket,
# other dependencies ...
]
provisioner "local-exec" {
command = "aws s3api delete-object --bucket my-bucket --key lambda.zip"
}
}
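For the Lambda use case described in the question, the depends_on would more usefully point at the function that is created from the archive, so the object is only deleted once the function exists. A minimal sketch, assuming a hypothetical aws_lambda_function.my_function resource:
resource "null_resource" "delete_lambda_archive" {
  # Only run once the function built from the archive has been created.
  depends_on = [aws_lambda_function.my_function]

  provisioner "local-exec" {
    command = "aws s3api delete-object --bucket my-bucket --key lambda.zip"
  }
}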

Azure Recovery Services Vault with Terraform local provisioner

Terraform doesn't provide an option to change the Azure Recovery Services Vault to the LocallyRedundant storage replication type, so I decided to use the PowerShell module to set this after the resource is provisioned. The command seems to be correct and works when it's invoked manually, but it doesn't when it's put in the provisioner. Any thoughts?
Terraform Version : 0.15
Azurerm Version : 2.40.0
resource "azurerm_recovery_services_vault" "RSV"{
name = "RSV"
location = "eastus"
resource_group_name = "RGTEST"
sku = "Standard"
provisioner "local-exec" {
command = "Get-AzRecoveryServicesVault -Name ${azurerm_recovery_services_vault.RSV.name} | Set-AzRecoveryServicesBackupProperty -BackupStorageRedundancy LocallyRedundant"
interpreter = ["powershell", "-Command"]
}
}
The PowerShell script relies on the azurerm_recovery_services_vault resource being fully created. If you move the local-exec provisioner into a null_resource and run terraform init and terraform apply again, it works.
Note that even though the resource will be fully created when the provisioner is run, there is no guarantee that it will be in an operable state.
resource "null_resource" "script" {
provisioner "local-exec" {
command = "Get-AzRecoveryServicesVault -Name ${azurerm_recovery_services_vault.RSV.name} | Set-AzRecoveryServicesBackupProperty -BackupStorageRedundancy LocallyRedundant"
interpreter = ["powershell", "-Command"]
}
}
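Keep in mind that a null_resource without triggers runs its provisioner only once, when it is first created. If the redundancy setting should be re-applied whenever the vault is replaced, a triggers map can tie the null_resource to the vault. A minimal sketch of that variant (triggers is standard null_resource behaviour; the rest mirrors the block above):
resource "null_resource" "set_vault_redundancy" {
  # Re-create the null_resource (and re-run the provisioner) whenever the vault is replaced.
  triggers = {
    vault_id = azurerm_recovery_services_vault.RSV.id
  }

  provisioner "local-exec" {
    command     = "Get-AzRecoveryServicesVault -Name ${azurerm_recovery_services_vault.RSV.name} | Set-AzRecoveryServicesBackupProperty -BackupStorageRedundancy LocallyRedundant"
    interpreter = ["powershell", "-Command"]
  }
}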

How to run custom scripts after a Terraform VMware VM deployment?

I have been researching this topic for over a week now and couldn't find a good solution, either in the Terraform documentation or on the web.
The main issue I am trying to solve right now is how to run a custom PowerShell script at the end of a basic Terraform VMware Windows Server 2016 VM build.
I tried the following methods:
remote-exec - fail
provisioners inside vm resource definition - fail
null resource - Error: timeout - last error: http response error: 401 - invalid content type
Here's my null_resource definition, placed right below the VM resource build in the same main.tf file:
resource "null_resource" "vm" {
triggers = {
public_ip = <host ip address>
}
connection {
type = "winrm"
host = <host ip address>
user = <username>
password = <password>
agent = false
}
provisioner "file" {
source = "userdata.ps1"
destination = "C:/Windows"
}
provisioner "remote-exec" {
inline = [
"powershell.exe -ExecutionPolicy Bypass -File C:/Windows/userdata.ps1"
]
}
}
Please suggest the recommended practices and share your working solution.

Google Cloud Composer using Terraform

I am new to Terraform. Is there any straightforward way to manage and create a Google Cloud Composer environment using Terraform?
I checked the list of supported GCP components, and it seems Google Cloud Composer is not there as of now. As a workaround, I am thinking of creating a shell script with the required gcloud composer CLI commands and running it using Terraform. Is that the right approach? Please suggest alternatives.
Google Cloud Composer is now supported in Terraform: https://www.terraform.io/docs/providers/google/r/composer_environment
It can be used as below:
resource "google_composer_environment" "test" {
name = "my-composer-env"
region = "us-central1"
}
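If you need more than the defaults, the resource also accepts an optional config block. A slightly fuller sketch, assuming the node_count and node_config arguments are available in your google provider version:
resource "google_composer_environment" "test" {
  name   = "my-composer-env"
  region = "us-central1"

  config {
    node_count = 3

    node_config {
      zone         = "us-central1-a"
      machine_type = "n1-standard-1"
    }
  }
}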
That is an option. You can use a null_resource and local-exec to run commands:
resource "null_resource" "composer" {
provisioner "local-exec" {
inline = [
"gcloud beta composer <etc..>"
]
}
}
Just keep in mind when using local-exec:
Note that even though the resource will be fully created when the provisioner is run, there is no guarantee that it will be in an operable state.
It looks like Google Cloud Composer is really new and still in beta. Hopefully Terraform will support it in the future.
I found that I had to use a slightly different syntax with the provisioner that included a command parameter.
resource "null_resource" "composer" {
provisioner "local-exec" {
command = "gcloud composer environments create <name> --project <project> --location us-central1 --zone us-central1-a --machine-type n1-standard-8"
}
}
While this works, it is disconnected from the actual resource state in GCP. It'll rely on the state file to say whether it exists, and I found I had to taint it to get the command to run again.
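One way to make re-runs explicit, instead of relying on taint, is to add a triggers map to the null_resource so that changing any listed value forces the command to run again on the next apply. A minimal sketch of that variant, keeping the placeholders from the command above:
resource "null_resource" "composer" {
  # Changing any of these values re-creates the null_resource
  # and therefore re-runs the gcloud command on the next apply.
  triggers = {
    environment_name = "<name>"
    machine_type     = "n1-standard-8"
  }

  provisioner "local-exec" {
    command = "gcloud composer environments create <name> --project <project> --location us-central1 --zone us-central1-a --machine-type n1-standard-8"
  }
}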
