Terraform resource lifecycle destroy_after_create? - terraform

Is there an option in the Terraform configuration that would automatically destroy a resource after its dependents have been created? I am thinking of something like a destroy_after_create lifecycle argument, which doesn't exist.
I want to delete all Lambda archives (S3 objects) after the Lambda functions are created. Obviously I could write a script that runs "terraform destroy -target" after "apply" completes, but I am looking for something within the Terraform configuration itself.

To hack your way around this in Terraform, you can use the following combination:
- a local-exec provisioner
- a null_resource
- the AWS CLI (aws s3api delete-object)
- Terraform's depends_on
Like this:
resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-bucket"
}

resource "null_resource" "delete_lambda" {
  depends_on = [
    aws_s3_bucket.my_bucket,
    # other dependencies ...
  ]

  provisioner "local-exec" {
    command = "aws s3api delete-object --bucket my-bucket --key lambda.zip"
  }
}
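To avoid hard-coding the bucket and key in the command, they can be interpolated from the resources themselves. A sketch, assuming an aws_lambda_function.my_function and an aws_s3_object.lambda_zip resource exist elsewhere in the configuration (both names are illustrative):

```hcl
resource "null_resource" "delete_lambda" {
  # Run only after the function has been created from the archive.
  depends_on = [aws_lambda_function.my_function]

  provisioner "local-exec" {
    command = "aws s3api delete-object --bucket ${aws_s3_bucket.my_bucket.bucket} --key ${aws_s3_object.lambda_zip.key}"
  }
}
```

The interpolations also give Terraform implicit dependencies on the bucket and object, so the explicit depends_on only needs to list the Lambda function.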

Related

'terraform init' returns 404 'Resource Group not found' when it does exist

Since adding backend "azurerm" to my Terraform main.tf file, it now returns a 404 on the resource group created to hold the state file.
I'm at a bit of a loss to explain why; the session is logged in to the correct tenant and subscription using the Connect-AzAccount and Set-AzContext cmdlets from the Az PowerShell module.
Here's my setup:
main.tf
## Terraform Configuration
terraform {
  # Azure Remote State
  backend "azurerm" {
    resource_group_name  = "abc-uat-tfstate"
    storage_account_name = "abcuattfstate"
    container_name       = "tfstate"
    key                  = "myapp.uat.tfstate"
  }

  # Provider Dependencies
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0.0"
    }
  }
}

## Provider Configurations
# Azure
provider "azurerm" {
  subscription_id = var.subscriptionId
  features {}
}
...
When I run terraform init on this main.tf file I receive the following error:
Note, however, that I can immediately run Get-AzResourceGroup and it returns the group exactly as I see it in the Azure Portal.
Until I added the backend it was creating resources correctly, so I suspect this is a simple configuration issue, but after reviewing all the docs I don't see what I've got wrong.
Ok, operator error as I suspected.
Running az login --tenant '...' and then az account set --subscription '...' resolved the problem. terraform init now works correctly.
I should have thought about this earlier.
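If you'd rather not depend on the CLI's active context at all, the azurerm backend also accepts explicit tenant and subscription arguments. A sketch based on the backend block above (the IDs are placeholders):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "abc-uat-tfstate"
    storage_account_name = "abcuattfstate"
    container_name       = "tfstate"
    key                  = "myapp.uat.tfstate"

    # Pin the backend to a specific tenant/subscription instead of
    # relying on whatever `az login` last selected.
    tenant_id       = "00000000-0000-0000-0000-000000000000" # placeholder
    subscription_id = "00000000-0000-0000-0000-000000000000" # placeholder
  }
}
```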

Cannot override provider configuration error in installing DC/OS via Terraform on azure

Working on setting up DC/OS on Microsoft Azure using Terraform.
I'm using the main.tf provided in the official documentation. Every time I run terraform init
I get an error:
Error: Cannot override provider configuration
│
│ on .terraform/modules/dcos/main.tf line 138, in module "dcos-infrastructure":
│ 138: azurerm = azurerm
I have authenticated via the az CLI, specifically with the command
az login --use-device-code
My terraform version is:
Terraform v1.1.9
on linux_amd64
How can I resolve this?
My attempts to comment out the providers still produce this error.
If you are using a module in your main.tf file, you have to use provider aliases.
To declare multiple configuration names for a provider within a module, add the configuration_aliases argument.
You can take this AWS example as a reference and adapt it for Azure accordingly.
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = ">= 2.7.0"
      configuration_aliases = [aws.alternate]
    }
  }
}
Passing Providers Explicitly
main.tf
provider "aws" {
  alias  = "usw2"
  region = "us-west-2"
}

# An example child module is instantiated with the alternate configuration,
# so any AWS resources it defines will use the us-west-2 region.
module "example" {
  source = "./example"
  providers = {
    aws.alternate = aws.usw2
  }
}
You can refer to the Terraform documentation on passing providers to child modules for more information on using providers with modules in main.tf.
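Adapted for azurerm, the same pattern might look like the following sketch. The alias name, module path, and subscription ID are illustrative, not taken from the DC/OS module; the configuration_aliases declaration belongs in the child module, while the root module passes the aliased provider through:

```hcl
# --- child module (e.g. ./dcos/versions.tf) ---
# Declare the provider configuration the module expects to receive.
terraform {
  required_providers {
    azurerm = {
      source                = "hashicorp/azurerm"
      configuration_aliases = [azurerm.alternate]
    }
  }
}

# --- root module (main.tf) ---
provider "azurerm" {
  alias           = "alternate"
  subscription_id = "00000000-0000-0000-0000-000000000000" # placeholder
  features {}
}

module "dcos" {
  source = "./dcos" # illustrative path
  providers = {
    azurerm.alternate = azurerm.alternate
  }
}
```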

Azure Recovery Services Vault with Terraform local provisioner

Terraform doesn't provide an option to change the Azure Recovery Services Vault to use the LocallyRedundant storage replication type, so I decided to use the Az PowerShell module to set this after the resource is provisioned. The command seems correct and works when invoked manually, but not when it's put in the provisioner. Any thoughts?
Terraform Version : 0.15
Azurerm Version : 2.40.0
resource "azurerm_recovery_services_vault" "RSV" {
  name                = "RSV"
  location            = "eastus"
  resource_group_name = "RGTEST"
  sku                 = "Standard"

  provisioner "local-exec" {
    command     = "Get-AzRecoveryServicesVault -Name ${azurerm_recovery_services_vault.RSV.name} | Set-AzRecoveryServicesBackupProperty -BackupStorageRedundancy LocallyRedundant"
    interpreter = ["powershell", "-Command"]
  }
}
The PowerShell script relies on the azurerm_recovery_services_vault resource being fully created. If you move the local-exec provisioner into a null_resource and run terraform init and terraform apply again, it works.
Note that even though the resource will be fully created when the
provisioner is run, there is no guarantee that it will be in an
operable state
resource "null_resource" "script" {
  provisioner "local-exec" {
    command     = "Get-AzRecoveryServicesVault -Name ${azurerm_recovery_services_vault.RSV.name} | Set-AzRecoveryServicesBackupProperty -BackupStorageRedundancy LocallyRedundant"
    interpreter = ["powershell", "-Command"]
  }
}
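The interpolation of the vault's name already gives the null_resource an implicit dependency on the vault, but the provisioner only runs when the null_resource is first created. If the vault can be replaced later, a triggers map makes the script re-run; a sketch (the trigger key name is illustrative):

```hcl
resource "null_resource" "script" {
  # Re-run the provisioner whenever the vault is replaced
  # (its id changes on replacement).
  triggers = {
    vault_id = azurerm_recovery_services_vault.RSV.id
  }

  provisioner "local-exec" {
    command     = "Get-AzRecoveryServicesVault -Name ${azurerm_recovery_services_vault.RSV.name} | Set-AzRecoveryServicesBackupProperty -BackupStorageRedundancy LocallyRedundant"
    interpreter = ["powershell", "-Command"]
  }
}
```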

Google Cloud Composer using Terraform

I am new to Terraform. Is there any straightforward way to create and manage a Google Cloud Composer environment using Terraform?
I checked the list of supported GCP components, and Google Cloud Composer doesn't seem to be there as of now. As a workaround I am thinking of writing a shell script with the required gcloud composer CLI commands and running it via Terraform. Is that the right approach? Please suggest alternatives.
Google Cloud Composer is now supported in Terraform: https://www.terraform.io/docs/providers/google/r/composer_environment
It can be used as below
resource "google_composer_environment" "test" {
  name   = "my-composer-env"
  region = "us-central1"
}
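The resource also accepts a config block for node settings; a sketch (the zone and machine type values are illustrative):

```hcl
resource "google_composer_environment" "test" {
  name   = "my-composer-env"
  region = "us-central1"

  config {
    node_config {
      zone         = "us-central1-a"    # illustrative
      machine_type = "n1-standard-1"    # illustrative
    }
  }
}
```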
That is an option. You can use a null_resource and local-exec to run commands:
resource "null_resource" "composer" {
  provisioner "local-exec" {
    inline = [
      "gcloud beta composer <etc..>"
    ]
  }
}
Just keep in mind when using local-exec:
Note that even though the resource will be fully created when the
provisioner is run, there is no guarantee that it will be in an
operable state
It looks like Google Cloud Composer is really new and still in beta. Hopefully Terraform will support it in the future.
I found that I had to use a slightly different syntax with the provisioner that included a command parameter.
resource "null_resource" "composer" {
  provisioner "local-exec" {
    command = "gcloud composer environments create <name> --project <project> --location us-central1 --zone us-central1-a --machine-type n1-standard-8"
  }
}
While this works, it is disconnected from the actual resource state in GCP. It'll rely on the state file to say whether it exists, and I found I had to taint it to get the command to run again.
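One way around the manual taint step is a triggers map on the null_resource, which forces the provisioner to re-run whenever a tracked value changes; a sketch (the trigger key and the use of a variable are illustrative):

```hcl
resource "null_resource" "composer" {
  # Changing any value in this map forces the resource to be
  # replaced, which runs the provisioner again without `taint`.
  triggers = {
    environment_name = var.composer_env_name # illustrative variable
  }

  provisioner "local-exec" {
    command = "gcloud composer environments create ${var.composer_env_name} --project <project> --location us-central1"
  }
}
```

This still doesn't track the environment's actual state in GCP, only what Terraform last ran, so the provider's native google_composer_environment resource remains the better option where it is available.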

Delay of 10 minutes between resource creations with the Terraform vSphere provider

Is there a way to delay module resource creation with the Terraform vSphere provider? Due to infrastructure impediments, I want to introduce a 10-minute delay between each VM instance creation. Each VM is created by an occurrence of a module.
At the moment, Terraform is doing its best to deploy at maximum speed!
I tried depends_on with the module: no luck.
Versions used:
vsphere 6.0
terraform 0.11.3
provider.vsphere v 1.3.2
You could use a provisioner within the instance resource and run some kind of sleep command there before the next VM instance is created.
resource "vsphere_virtual_machine" "vpshere_build_machine" {
  # ...

  provisioner "local-exec" {
    command = "ping 127.0.0.1 -n 10 > nul" # or: sleep 10
  }
}
In my case I could solve it by doing this trick:
"sleep [s] && command"
For example:
provisioner "local-exec" {
  command = "sleep 30 && ansible-playbook -i ..."
}
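A cleaner alternative on newer Terraform versions (this assumes the hashicorp/time provider, which postdates the versions in the question) is a time_sleep resource, which models the delay as an explicit dependency instead of a provisioner; the VM resource names here are illustrative:

```hcl
resource "time_sleep" "wait_10_minutes" {
  # Start counting only after the first VM exists.
  depends_on      = [vsphere_virtual_machine.first_vm] # illustrative name
  create_duration = "10m"
}

resource "vsphere_virtual_machine" "second_vm" { # illustrative name
  # Created no earlier than 10 minutes after first_vm.
  depends_on = [time_sleep.wait_10_minutes]
  # ...
}
```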
