I am new to Terraform. Is there any straightforward way to create and manage a Google Cloud Composer environment using Terraform?
I checked the list of supported GCP components, and it seems Google Cloud Composer is not there as of now. As a workaround I am thinking of writing a shell script containing the required gcloud composer CLI commands and running it from Terraform. Is this the right approach? Please suggest alternatives.
Google Cloud Composer is now supported in Terraform: https://www.terraform.io/docs/providers/google/r/composer_environment
It can be used like this:
resource "google_composer_environment" "test" {
name = "my-composer-env"
region = "us-central1"
}
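If you need more control over the environment, the resource also accepts a config block. A minimal sketch, with argument names as documented for the provider and purely illustrative values:

resource "google_composer_environment" "test" {
  name   = "my-composer-env"
  region = "us-central1"

  config {
    node_count = 3

    node_config {
      zone         = "us-central1-a"
      machine_type = "n1-standard-1"
    }
  }
}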
That is an option. You can use a null_resource and local-exec to run commands:
resource "null_resource" "composer" {
provisioner "local-exec" {
inline = [
"gcloud beta composer <etc..>"
]
}
}
Just keep in mind when using local-exec:
Note that even though the resource will be fully created when the provisioner is run, there is no guarantee that it will be in an operable state
It looks like Google Cloud Composer is really new and still in beta. Hopefully Terraform will support it in the future.
I found that I had to use a slightly different syntax with the provisioner, one that takes a command argument:
resource "null_resource" "composer" {
provisioner "local-exec" {
command = "gcloud composer environments create <name> --project <project> --location us-central1 --zone us-central1-a --machine-type n1-standard-8"
}
}
While this works, it is disconnected from the actual resource state in GCP: Terraform relies on the state file to decide whether the environment exists, and I found I had to taint the resource to get the command to run again.
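For example, to force the command to run on the next apply (resource address taken from the block above):

terraform taint null_resource.composer
terraform apply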
I have a repo with Terraform infrastructure declared in it. I am restructuring it by moving repeatable parts into modules, and I have created a folder for each environment. A GitHub workflow runs init, plan, and apply. Since I created new directories, I changed the "working-directory" for the init step, but now I receive the error Failed to get existing workspaces containers.Client#ListBlobs: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure:
I have the ARM access keys declared as env variables in the workflow. I tried moving them around, but no luck. I don't know why Terraform can initialize from the main directory but can't initialize from a child directory.
I tried to reproduce the same in my environment and got the same error with the storage account itself.
This error occurs when you don't have access to the backend storage account or the container where the Terraform state is stored.
Make sure that you are logged in to the subscription or tenant where you have access to the resources. In my case I was logged in to a different subscription, which caused the error.
Set the subscription correctly:
az account set --subscription "xxx"
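You can confirm which subscription is now active with:
az account show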
and then run terraform init
To reconfigure for the new working directory, run:
terraform init -reconfigure
Or run the command below to migrate the state:
terraform init -migrate-state
My backend configuration:
terraform {
  backend "azurerm" {
    resource_group_name  = "rg"
    storage_account_name = "remotestatekavstr"
    container_name       = "terraform"
    key                  = "terraform.tfstate"
  }
}
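Since there is now a folder per environment, one option is Terraform's partial backend configuration: leave key out of the backend block and pass a per-environment value at init time (the key name here is illustrative):
terraform init -backend-config="key=dev.terraform.tfstate"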
Then Terraform initializes successfully.
Note:
1. Check the spelling of the storage account and container names.
2. When changing to a new directory, reconfigure the Terraform backend or migrate the state.
Also check this related question: creating-azure-storage-containers-in-a-storage-account-with-network-rules-with
Is there an option in the Terraform configuration that would automatically destroy a resource after its dependents have been created? I am thinking of something like a destroy_after_create lifecycle argument, which doesn't exist.
I want to delete all Lambda archives (S3 objects) after the Lambda functions get created. Obviously I could script terraform destroy -target after apply completes; however, I am looking for something within the Terraform configuration itself.
To hack around this in Terraform, you can use the following combination:
the local-exec provisioner
a null_resource
the AWS CLI - aws s3api delete-object
Terraform's depends_on
Like this:
resource "aws_s3_bucket" "my_bucket" {
bucket = "my-bucket"
}
resource "null_resource" "delete_lambda" {
depends_on = [
aws_s3_bucket.my_bucket,
# other dependencies ...
]
provisioner "local-exec" {
command = "aws s3api delete-object --bucket my-bucket --key lambda.zip"
}
}
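One caveat: the provisioner only runs when the null_resource is created, so the object is deleted once, not after every redeploy. If the cleanup should run again whenever the archive changes, a triggers map can force the resource to be replaced. A sketch, assuming the archive sits next to the configuration as lambda.zip:

resource "null_resource" "delete_lambda" {
  # Re-create the resource (and re-run the provisioner) whenever the archive changes.
  triggers = {
    lambda_hash = filemd5("lambda.zip")
  }

  depends_on = [
    aws_s3_bucket.my_bucket,
  ]

  provisioner "local-exec" {
    command = "aws s3api delete-object --bucket my-bucket --key lambda.zip"
  }
}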
I'm trying to upload my resource variables to a Terraform workspace using a PowerShell script. The terraform plan task throws the error below. When I checked, the Azure CLI is running version 2.11. Any thoughts on this error? Thanks in advance.
[screenshot of the terraform plan error]
Please run:
az login
and then try again
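If this runs inside a pipeline rather than interactively, an interactive az login won't work; logging in with a service principal is the usual alternative (all values below are placeholders):
az login --service-principal -u <app-id> -p <client-secret> --tenant <tenant-id>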
What specific changes need to be made to the Terraform configuration below for the local-exec provisioner to successfully run the az CLI command?
Here is the terraform code that is causing the problem:
resource "azuredevops_git_repository" "repository" {
project_id = data.azuredevops_project.p.id
name = var.repoName
initialization {
init_type = "Uninitialized"
}
provisioner "local-exec" {
working_dir = "C:\\projects\\acm\\Apr2021\\config-outside-acm-path\\vars\\deleteThis\\"
command = "az repos import create --git-source-url \"${var.sourceRepo}\" --repository \"${azuredevops_git_repository.repository.name}\" --organization \"${var.azdoOrgServiceURL}\" --project \"${var.projectName}\""
}
}
Here is the error we are getting:
Error: Error running command
'az repos import create --git-source-url "https://github.com/PublicGitHubAccount/public-github-repo.git"
--repository "private-azure-repo" --organization "https://dev.azure.com/OurValidOrganizationName"
--project "SampleProject"'
: exit status 1.
Output: --organization must be specified.
The value should be the URI of your Azure DevOps organization, for example: https://dev.azure.com/MyOrganization/ or your Azure DevOps Server organization.
You can set a default value by running: az devops configure --defaults organization=https://dev.azure.com/MyOrganization/.
For auto detection to work (--detect true), you must be in a local Git directory that has a "remote" referencing a Azure DevOps or Azure DevOps Server repository.
When we copy the command from the error message and run that exact command as a shell command through a Python program, it runs properly without error. Here is the command that runs properly when executed from a Python shell:
'az repos import create --git-source-url "https://github.com/PublicGitHubAccount/public-github-repo.git"
--repository "private-azure-repo" --organization "https://dev.azure.com/OurValidOrganizationName"
--project "SampleProject"'
Therefore, the problem is that the command run by Terraform fails to pick up the --organization value, even though you can see from the above that Terraform is properly interpolating the string.
Try omitting the double quotes (") surrounding the organization URL to see if that works:
command = "az repos import create --git-source-url \"${var.sourceRepo}\" --repository \"${azuredevops_git_repository.repository.name}\" --organization ${var.azdoOrgServiceURL} --project \"${var.projectName}\""
I am trying to provision a storage account, but applying the configuration results in this error:
Error: Error reading static website for AzureRM Storage Account "sa12345461234512name":
accounts.Client#GetServiceProperties:
Failure responding to request: StatusCode=403 -- Original Error: autorest/azure:
Service returned an error. Status=403 Code="AuthorizationPermissionMismatch"
Message="This request is not authorized to perform this operation using this permission.\n
RequestId:05930d46-301e-00ac-6d72-f021f0000000\n
Time:2020-03-02T09:09:44.9417598Z"
Running OS Windows 10 Pro.
Steps to replicate (in Powershell with Azure CLI installed)
az login
mkdir dummyFolder
cd dummyFolder
create config.tf
terraform init
terraform plan
terraform apply -auto-approve
config.tf contents:
# Configure the Azure Provider
provider "azurerm" {
  version = "=2.0.0"
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "Australia East"
}

resource "azurerm_storage_account" "example" {
  name                     = "sa12345461234512name"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    environment = "staging"
  }
}
Not sure what I am missing; all other resources work fine, just the storage account fails.
This is a bug in the azurerm provider, see: https://github.com/terraform-providers/terraform-provider-azurerm/issues/5869
Update your provider; it doesn't seem to be related to the Terraform version.
From:

# Configure the Azure Provider
provider "azurerm" {
  # whilst the `version` attribute is optional, we recommend pinning to a given version of the Provider
  version = "=2.0.0"
  features {}
}

To:

provider "azurerm" {
  version = "~> 2.1.0"
  features {}
}
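For what it's worth, on Terraform 0.13 and later the version constraint normally moves out of the provider block into a required_providers block. A sketch:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.1"
    }
  }
}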
Just to add to this, since none of the above worked for me. In my case it first didn't work, then the next day it worked, only to fail again in the evening... no version changes or anything; it was the same computer.
It turned out that the time settings on my Ubuntu running inside Windows were skewed. Simply running sudo ntpdate time.nist.gov to update the time solved the problem.
Found the issue; it's to do with Terraform itself.
I just checked for updates and noticed 0.12.21 is out (I was running 0.12.20).
It seems that if you are running azurerm 2.0.0, you really need at least Terraform 0.12.21 to make it work.
Same problem as @tesharp experienced.
On my Ubuntu WSL2 the following command fixed the problem:
sudo hwclock -s