Create Cloud Composer environment with a private repository using Terraform

I'm trying to create a Cloud Composer environment with a PyPI package from a private repository using Terraform. Cloud Composer supports private PyPI repositories. However, configuring a private repository requires an existing composer bucket.
When using Terraform to create an environment, the bucket and the environment are created in one go. As far as I can see, the environment creation will fail before there is a chance to write the configuration file to the bucket. Is there any way to create a Cloud Composer environment with a private repository using Terraform?
This is roughly what I'm trying to do:
resource "google_composer_environment" "test" {
provider = google-beta
project = var.project_id
region = var.region
config {
software_config {
image_version = "composer-2.0.0-airflow-2.1.4"
pypi_packages = {
mypackage = "*" # from a private PyPI repo
}
...

I'm leaving this as a community wiki response for visibility for this kind of question. For feature requests, my suggestion is to go directly to the project site and request the feature there:
Terraform Provider Issues
This will alert the developers to the missing/requested feature so it can be taken into consideration. You can track the progress in the issue you created. You can also see the list of upcoming features/bugs in the project goals dashboard, but as mentioned in the description, the ETA is up to the team to provide.

Related

How to enable billing for a GCP project using terraform

I am trying to enable billing for GCP projects using Terraform, but the project was created using the GCP console.
I am getting an error that the project already exists. Is there any way to enable billing with Terraform for an existing project?
resource "google_project" "my_project" {
name = "ML Cluster"
project_id = "ml-cluster"
org_id = "XXXXXXXXXXXX"
billing_account = "XXXXXXXXXXXXXX"
}
You have to import your existing project into the Terraform state first, so that the project is associated with the resource address (google_project.my_project). Once that's done, you can apply your Terraform configuration to enable billing for the project.
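With the configuration above, the import would look roughly like this (the import ID for a google_project is its project_id, ml-cluster in this example):
terraform import google_project.my_project ml-cluster
# plan should now show an in-place update adding billing, not a create
terraform plan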

Terraform wants to recreate imported resources

Locally I:
Created main.tf
Initialized with ‘terraform init’
Imported the GCP project and Cloud Run service
Updated main.tf so ‘terraform plan’ was not trying to change anything
Checked main.tf into GitHub
Then I set up a GitHub Actions workflow that does the following:
Checkout
Setup Gcloud
Initialize with ‘terraform init’
Plan with ‘terraform plan’
Terraform plan is trying to recreate everything.
How do I make it detect existing resources?
By default, Terraform initialises a local state. The problem with this is that the state is available only to you on your machine: if you execute a plan somewhere else, that state is not there. To solve this, you need to set up a remote backend for Terraform so the state file is stored in a centralised location.
If you are using Google Cloud, you can use a Cloud Storage bucket to store the state file. Terraform offers the gcs backend for this. You have to create a bucket and provide the bucket name in the gcs backend configuration:
terraform {
  backend "gcs" {
    bucket = "tf-state-prod"
    prefix = "terraform/state"
  }
}
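If the bucket doesn't exist yet, a rough sequence is to create it, enable versioning so earlier state files stay recoverable, and re-initialise so Terraform migrates the local state; the project and bucket names below are just the placeholders from the snippet above:
gsutil mb -p your-project gs://tf-state-prod
gsutil versioning set on gs://tf-state-prod
terraform init -migrate-state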

Terraform cloud config dynamic workspace name

I'm building a CI/CD pipeline using GitHub Actions and Terraform. I have a main.tf file like the one below, which I'm calling from a GitHub action for multiple environments. I'm using https://github.com/hashicorp/setup-terraform to interact with Terraform in GitHub Actions. I have a MyService component and I'm deploying it to DEV, UAT and PROD environments. I would like to reuse main.tf for all of the environments and dynamically set the workspace name like so: MyService-DEV, MyService-UAT, MyService-PROD. Usage of variables is not allowed in the terraform/cloud block. I'm using Terraform Cloud to store state.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0"
    }
  }
  cloud {
    organization = "tf-organization"
    workspaces {
      name = "MyService-${env.envname}" # <== not allowed to use variables
    }
  }
}
Update
I finally managed to get this up and running thanks to the helpful comments. Here are my findings:
TF_WORKSPACE needs to be defined upfront, like: service-dev
I didn't get tags to work the way I wanted when running in automation. If I define a tag in cloud.workspaces.tags as 'service', there is no way to set a second tag like 'dev' dynamically. Both tags ['service', 'dev'] are needed during init for Terraform to select the workspace service-dev automatically.
I ended up using the tfe provider to set up workspaces (with tags) automatically; a minimal sketch follows below. In the end I still needed to set TF_WORKSPACE=service-dev.
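For reference, this is roughly what that tfe provider setup can look like; the organization and workspace names are placeholders matching the example above:
resource "tfe_workspace" "service_dev" {
  name         = "service-dev"      # placeholder workspace name
  organization = "tf-organization"  # placeholder organization
  tag_names    = ["service", "dev"] # both tags, so a tag-based init can match the workspace
}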
It doesn't make sense to refer to terraform.workspace as part of the workspaces block inside a cloud block, because that block defines which remote workspaces Terraform will use and therefore dictates what final value terraform.workspace will have in the rest of your configuration.
To declare that your Terraform configuration belongs to more than one workspace in Terraform Cloud, you can assign each of those workspaces the tag "MyService" and then use the tags argument instead of the name argument:
cloud {
  organization = "tf-organization"
  workspaces {
    tags = ["MyService"]
  }
}
If you assign that tag to hypothetical MyService-dev and MyService-prod workspaces in Terraform Cloud and then initialize with the configuration above, Terraform will present those two workspaces for selection using the terraform workspace commands when working in this directory.
terraform.workspace will then appear as either MyService-dev or MyService-prod, depending on which one you have selected.
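Concretely, after running terraform init with the tags configuration above, the workspace can be picked either interactively or via the environment variable, e.g.:
terraform workspace select MyService-dev
# or non-interactively, e.g. in CI:
TF_WORKSPACE=MyService-prod terraform plan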

terraform maven coordinates azure artifacts

I am trying to get Azure Artifacts working (installing a Maven library on a Databricks cluster). I am following the Terraform documentation but I'm struggling to get the coordinates right. Do you know the correct URL for the Azure Artifact?
library {
  maven {
    coordinates = "com.amazon.deequ:deequ:1.0.4"
  }
}
I just learned by reading the documentation that Databricks Labs currently does not provide network authentication, so this will not work with private Azure Artifacts feeds.
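For what it's worth, the maven block does accept a repo argument for a custom repository, but since there is no way to pass credentials it only helps with feeds that allow anonymous access. The URL below is a hypothetical Azure Artifacts feed, not a verified one:
library {
  maven {
    coordinates = "com.amazon.deequ:deequ:1.0.4"
    # hypothetical feed URL; works only if the feed allows anonymous access
    repo = "https://pkgs.dev.azure.com/my-org/_packaging/my-feed/maven/v1"
  }
}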

Switching Terraform cloud workspaces in GitHub Actions/Terraform CLI

We're in the middle of working on a small proof-of-concept project which will deploy infrastructure to Azure using Terraform. Our Terraform source is held in GitHub and we're using Terraform Cloud as the backend to store our state, secrets, etc.
Within Terraform cloud we've created two workspaces, one for the staging environment and one for the production environment.
So far we've used the guide on the Terraform docs to develop a GitHub action which triggers on a push to the main branch and deploys our infrastructure to the staging environment. This all works great and we can see our state held in Terraform cloud.
The next hurdle is to promote our changes into the production environment.
Unfortunately we've hit a brick wall trying to figure out how to dynamically change the Terraform cloud workspace within the GitHub action so it's operating on production and not staging. I've spent most of the day looking into this with little joy.
For reference the Terraform backend is currently configured as follows:
terraform {
  backend "remote" {
    organization = "terraform-organisation-name"
    workspaces {
      name = "staging-workspace-name"
    }
  }
}
The action itself does an init and then an apply.
Obviously, with the workspace name hardcoded, this will only work on staging. Ultimately the question comes down to: how do I parameterise or dynamically change the Terraform Cloud workspace from the command line?
I feel I'm missing something fundamental and any help or suggestions would be greatly appreciated.
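One option, assuming both workspaces can share a common naming prefix (the prefix below is hypothetical), is to replace the hardcoded name with a prefix so the concrete workspace is chosen at runtime:
terraform {
  backend "remote" {
    organization = "terraform-organisation-name"
    workspaces {
      # with a prefix, the workspace is selected at runtime via
      # `terraform workspace select` or the TF_WORKSPACE environment variable
      prefix = "myservice-"
    }
  }
}
The GitHub action can then export TF_WORKSPACE=staging or TF_WORKSPACE=production (the suffix after the prefix) before running terraform init and terraform apply.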
