Creating a GCP Cloud Composer V2 instance via Terraform

I am trying to provision a Cloud Composer V2 instance via Terraform.
Terraform version: 1.1.3
Provider versions:
hashicorp/google: ~> 3.87.0
My tf code is as below:
resource "google_composer_environment" "cc_foo_uat_airflow" {
name = "cc-foo-uat-airflow"
region = var.region
project = var.project_id
provider = google-beta
config {
node_config {
zone = var.primary_zone
network = google_compute_network.foo_uat_composer.id
subnetwork = google_compute_subnetwork.foo_uat_composer.id
service_account = module.sa_foo_uat_airflow_runner.id
}
software_config {
image_version = var.image_version
python_version = var.python_version
airflow_config_overrides = {
secrets-backend = "airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend"
webserver-expose_config = "True"
}
}
}
}
Relevant variables are below:
variable "image_version" {
default = "composer-2.0.1-airflow-2.1.4"
}
variable "python_version" {
default = "3"
}
Running terraform plan via the CLI produces a valid plan, but my build on Terraform Cloud fails with the following error:
Error: googleapi: Error 400: Found 1 problem: 1) Configuring node location is not supported for Cloud Composer environments in versions 2.0.0 and newer., badRequest
with google_composer_environment.cc_foo_uat_airflow
on main.tf line 100, in resource "google_composer_environment" "cc_foo_uat_airflow":
resource "google_composer_environment" "cc_foo_uat_airflow" {
I cannot discern from this error message which portion of my Terraform code is invalid. I cannot simply remove the zone argument from the node_config block, since as far as I can tell it is required, and I cannot figure out what is causing this error.
Edit: anonymized a reference to a proper noun that I had missed.

We're using the "terraform-google-composer2.0" module and our .yaml file looks like this:
module: "terraform-google-composer2.0"
version: "1.0.0"
name: XXXXX
image_version: composer-2.0.0-airflow-2.1.4
network: XXXXX
subnetwork: composer-XXXXX
region: us-east1
service_account: XXXXXXX
environment_size: ENVIRONMENT_SIZE_LARGE
scheduler_cpu: 2
scheduler_memory_gb: 4
scheduler_storage_gb: 4
scheduler_count: 4
web_server_cpu: 2
web_server_memory_gb: 4
web_server_storage_gb: 4
worker_cpu: 2
worker_max_count: 100
worker_min_count: 3
worker_memory_gb: 4
airflow_config_overrides:
  scheduler-catchup_by_default: false
  scheduler-dag_dir_list_interval: 180
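Note that the working module configuration above never sets a zone. The API error is the actual hint: Composer 2 environments run on GKE Autopilot, so a node location can no longer be configured, and the zone argument has to be dropped rather than kept. A sketch of the resource from the question with that argument removed (everything else left as posted; an illustration, not a verified configuration):

resource "google_composer_environment" "cc_foo_uat_airflow" {
  name     = "cc-foo-uat-airflow"
  region   = var.region
  project  = var.project_id
  provider = google-beta
  config {
    node_config {
      # zone omitted: node locations cannot be set on Composer 2
      network         = google_compute_network.foo_uat_composer.id
      subnetwork      = google_compute_subnetwork.foo_uat_composer.id
      service_account = module.sa_foo_uat_airflow_runner.id
    }
    software_config {
      # python_version omitted as well; Composer 2 images are Python 3 only
      image_version = var.image_version
      airflow_config_overrides = {
        secrets-backend         = "airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend"
        webserver-expose_config = "True"
      }
    }
  }
}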

Related

Terraform: Set an AWS Resource's provider value via a module variable

I have created a module I want to use across multiple providers (just two AWS providers for two regions). How can I set a resource's provider value via a variable from a calling module? I am calling a module, codebuild.tf (which I want to be region-agnostic), from an MGMT module named cicd.tf. Folder structure:
main.tf
/MGMT/
-> cicd.tf
/modules/codebuild/
-> codebuild.tf
main.tf:
terraform {
  required_version = ">= 1.0.10"
  backend "s3" {
  }
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# default AWS provider for MGMT resources in us-east-1 and global
provider "aws" {
  region = "us-east-1"
}

# DEV Account resources in us-east-1 and global
provider "aws" {
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::accountid:role/dev-rolename"
  }
  alias = "dev_us-east-1"
}

# DEV Account resources in us-west-2 and global
provider "aws" {
  region = "us-west-2"
  assume_role {
    role_arn = "arn:aws:iam::accountid:role/dev-rolename"
  }
  alias = "dev_us-west-2"
}

module "MGMT" {
  source  = "./MGMT"
  count   = var.aws_env == "MGMT" ? 1 : 0
  aws_env = var.aws_env
}
When I build my Terraform, it's under the MGMT AWS account, which uses the default aws provider that doesn't have an alias. I am then trying to set a provider with an AWS IAM role (that's cross-account) when I am calling the module (I made the resource a module because I want to run it in multiple regions):
/MGMT/cicd.tf:
# DEV in cicd.tf
# create the codebuild resource in the assumed role's us-east-1 region
module "dev_cicd_iac_us_east_1" {
  source             = "../modules/codebuild/"
  input_aws_provider = "aws.dev_us-east-1"
  input_aws_env      = var.dev_aws_env
}
# create the codebuild resource in the assumed role's us-west-2 region
module "dev_cicd_iac_us_west_2" {
  source             = "../modules/codebuild/"
  input_aws_provider = "aws.dev_us-west_2"
  input_aws_env      = var.dev_aws_env
}
/modules/codebuild/codebuild.tf:
# Code Build resource here
variable "input_aws_provider" {}
variable "input_aws_env" {}

resource "aws_codebuild_project" "codebuild-iac" {
  provider = tostring(var.input_aws_provider) # trying to make it a string, with just the var there it looks for a var provider
  name     = "${var.input_aws_env}-CodeBuild-IaC"
  # etc...
}
I get the following error when I plan the above:
│ Error: Invalid provider reference
│ On modules/codebuild/codebuild.tf line 25: Provider argument requires
│ a provider name followed by an optional alias, like "aws.foo".
How can I make the provider value a proper reference to the aws provider defined in main.tf while still using an MGMT folder/module file named cicd.tf?
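For reference, Terraform does not allow a provider to be selected through an ordinary string variable; provider references are resolved statically. The usual pattern is the providers meta-argument on each module block, with configuration_aliases declared in any intermediate module. A rough sketch reusing the names from the question (an assumption-laden outline, not the original poster's code):

# main.tf: hand the aliased providers down to the MGMT module
module "MGMT" {
  source  = "./MGMT"
  count   = var.aws_env == "MGMT" ? 1 : 0
  aws_env = var.aws_env
  providers = {
    aws.dev_us-east-1 = aws.dev_us-east-1
    aws.dev_us-west-2 = aws.dev_us-west-2
  }
}

# /MGMT/cicd.tf: declare the aliases this module expects to receive...
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.dev_us-east-1, aws.dev_us-west-2]
    }
  }
}

# ...and pass one of them to each codebuild module instance
module "dev_cicd_iac_us_east_1" {
  source        = "../modules/codebuild/"
  input_aws_env = var.dev_aws_env
  providers = {
    aws = aws.dev_us-east-1
  }
}

# /modules/codebuild/codebuild.tf: the resource then uses its default aws provider,
# which is whatever the caller mapped in, so the provider argument and the
# input_aws_provider variable can be dropped
resource "aws_codebuild_project" "codebuild-iac" {
  name = "${var.input_aws_env}-CodeBuild-IaC"
  # etc...
}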

Terraform reports error "Failed to query available provider packages"

I have created a main.tf file as below for the MongoDB Terraform module.
resource "mongodbatlas_teams" "test" {
org_id = null
name = "MVPAdmin_Team"
usernames = ["user1#email.com", "user2#email.com", "user3#email.com"]
}
resource "mongodbatlas_project" "test" {
name = "MVP_Project"
org_id = null
teams {
team_id = null
role_names = ["GROUP_CLUSTER_MANAGER"]
}
}
resource "mongodbatlas_project_ip_access_list" "test" {
project_id = null
ip_address = null
comment = "IP address for MVP Dev cluster testing"
}
resource "mongodbatlas_cluster" "test" {
name = "MVP_DevCluster"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
cluster_type = REPLICASET
state_name = var.state_name
replication specs {
num_shards= var.num_shards
region_config {
region_name = "AU-EA"
electable_nodes = var.electable_nodes
priority = var.priority
read_only_nodes = var.read_only_nodes
}
}
provider_backup_enabled = var.provider_backup_enabled
auto_scaling_disk_gb_enabled = var.auto_scaling_disk_gb_enabled
mongo_db_major_version = var.mongo_db_major_version
provider_name = "Azure"
provider_disk_type_name = var.provider_disk_type_name
provider_instance_size_name = var.provider_instance_size_name
mongodbatlas_database_user {
username = var.username
password = var.password
auth_database_name = var.auth_database_name
role_name = var.role_name
database_name = var.database_name
}
mongodbatlas_database_snapshot_backup_policy {
policy_item = var.policy_item
frequency_type = var.frequency_type
retention_value = var.retention_value
}
advanced_configuration {
minimum_enabled_tls_protocol = var.minimum_enabled_tls_protocol
no_table_scan = var.no_table_scan
connection_string = var.connection_string
}
}
However, terraform init reports as below:
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/mongodbatlas...
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider
hashicorp/mongodbatlas: provider registry registry.terraform.io does not have
a provider named registry.terraform.io/hashicorp/mongodbatlas
If you have just upgraded directly from Terraform v0.12 to Terraform v0.14
then please upgrade to Terraform v0.13 first and follow the upgrade guide for
that release, which might help you address this problem.
Did you intend to use mongodb/mongodbatlas? If so, you must specify that
source address in each module which requires that provider. To see which
modules are currently depending on hashicorp/mongodbatlas, run the following
command:
terraform providers
Any idea as to what is going wrong?
The error message explains the most likely reason you're seeing it: you've upgraded directly from Terraform v0.12 to Terraform v0.14 without running through the Terraform v0.13 upgrade steps.
If you upgrade to Terraform v0.13 first and follow those instructions then the upgrade tool should be able to give more specific instructions on what to change here, and may even be able to automatically upgrade your configuration for you.
However, if you wish then you can alternatively manually add the configuration block that the v0.13 upgrade tool would've inserted, to specify that you intend to use the mongodb/mongodbatlas provider as "mongodbatlas" in this module:
terraform {
  required_providers {
    mongodbatlas = {
      source = "mongodb/mongodbatlas"
    }
  }
}
There are some other considerations in the v0.13 upgrade guide that the above doesn't address, so you may still need to perform the steps described in that upgrade guide if you see different error messages after trying what I showed above.

terraform - error on creating AWS Elastic Beanstalk

I am trying to provision an AWS Elastic Beanstalk environment using Terraform. Below is the .tf file I have written:
resource "aws_s3_bucket" "default" {
bucket = "textX"
}
resource "aws_s3_bucket_object" "default" {
bucket = "${aws_s3_bucket.default.id}"
key = "test-app-version-tf--dev"
source = "somezipFile.zip"
}
resource "aws_elastic_beanstalk_application_version" "default" {
name = "tf-test-version-label"
application = "tf-test-name"
description = "application version created by terraform"
bucket = "${aws_s3_bucket.default.id}"
key = "${aws_s3_bucket_object.default.id}"
}
resource "aws_elastic_beanstalk_application" "tftest" {
name = "tf-test-name"
description = "tf-test-name"
}
resource "aws_elastic_beanstalk_environment" "tfenvtest" {
description = "test"
application = "${aws_elastic_beanstalk_application.tftest.name}"
name = "synchronicity-dev"
cname_prefix = "ops-api-opstest"
solution_stack_name = "64bit Amazon Linux 2 v5.0.1 running Node.js 12"
tier = "WebServer"
wait_for_ready_timeout = "20m"
}
According to the official documentation, I am supplying all the required arguments to the aws_elastic_beanstalk_environment resource.
However, upon executing the script, I am getting the following error:
Error waiting for Elastic Beanstalk Environment (e-39m6ygzdxh) to become ready: 2 errors occurred:
* 2020-05-13 12:59:02.206 +0000 UTC (e-3xff9mzdxh) : You must specify an Instance Profile for your EC2 instance in this region. See Managing Elastic Beanstalk Instance Profiles for more information.
* 2020-05-13 12:59:02.319 +0000 UTC (e-3xff9mzdxh) : Failed to launch environment.
This worked for me: add the setting below to your aws_elastic_beanstalk_environment resource:
resource "aws_elastic_beanstalk_environment" "tfenvtest" {
....
....
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "aws-elasticbeanstalk-ec2-role"
}
}
More information on general settings here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html
Info on the aws_elastic_beanstalk_environment: https://www.terraform.io/docs/providers/aws/r/elastic_beanstalk_environment.html
resource "aws_elastic_beanstalk_application" "tftest" {
name = "name****"
description = "create elastic beanstalk applications"
tags = {
"Name" = "name*****"
}
}
resource "aws_elastic_beanstalk_environment" "env" {
application = aws_elastic_beanstalk_application.tftest.name
solution_stack_name = "64bit Amazon Linux 2 v3.3.15 running PHP 8.0"
name = "env-application"
tier = "WebServer"
setting {
namespace = "aws:ec2:vpc"
name = "Vpcid"
value = "mention vpc id"
}
setting {
namespace="aws:ec2:vpc"
name="Subnets"
value="mention subnet id"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "aws-elasticbeanstalk-ec2-role"
}
}
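If the aws-elasticbeanstalk-ec2-role instance profile referenced above does not already exist in the account (the console normally creates it the first time an environment is set up there), it can be managed from Terraform as well. A rough sketch with illustrative resource names that are not taken from the answers above:

resource "aws_iam_role" "eb_ec2_role" {
  name = "aws-elasticbeanstalk-ec2-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# one of the standard Elastic Beanstalk managed policies for web-tier instances
resource "aws_iam_role_policy_attachment" "eb_web_tier" {
  role       = aws_iam_role.eb_ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier"
}

resource "aws_iam_instance_profile" "eb_ec2_profile" {
  name = "aws-elasticbeanstalk-ec2-role"
  role = aws_iam_role.eb_ec2_role.name
}

The IamInstanceProfile setting can then reference aws_iam_instance_profile.eb_ec2_profile.name instead of a hard-coded string.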

AWS Backup cross-region copy support in Terraform

Does Terraform support the AWS Backup cross-region copy feature (https://www.terraform.io/docs/providers/aws/r/backup_plan.html)?
Reading the documentation, it looks like it is supported.
But I get the following error:
Error: Unsupported argument
on backup_plan.tf line 11, in resource "aws_backup_plan" "example":
11: copy_action = {
An argument named "copy_action" is not expected here.
My Terraform file, for reference:
resource "aws_backup_plan" "example" {
name = "example-plan"
rule {
rule_name = "MainRule"
target_vault_name = "primary"
schedule = "cron(5 8 * * ? *)"
start_window = 480
completion_window = 10080
lifecycle {
delete_after = 30
}
copy_action {
destination_vault_arn = "arn:aws:backup:us-west-2:123456789:backup-vault:secondary"
}
}
}
But when I remove the block
copy_action {
  destination_vault_arn = "arn:aws:backup:us-west-2:123456789:backup-vault:secondary"
}
It works just fine
Thanks
I assume you are running a version of the Terraform AWS Provider of 2.57.0 or older.
Version 2.58.0 (released 3 days ago) brought support for the copy_action:
resource/aws_backup_plan: Add rule configuration block copy_action configuration block (support cross region copy)
You can specify in your code to require at least this version as follows:
provider "aws" {
version = "~> 2.58.0"
}
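On Terraform 0.13 and later, the same constraint is more commonly declared in a required_providers block instead of the provider block; a minimal sketch:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.58.0"
    }
  }
}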

Error while installing Helm chart using Terraform helm provider

I am trying to install a Helm chart with the Terraform Helm provider using the following Terraform script.
I have already succeeded in using the Kubernetes provider to deploy some k8s resources, but it doesn't work with Helm.
terraform v0.11.13
provider.helm v0.10
provider.kubernetes v1.9
provider "helm" {
alias = "prdops"
service_account = "${kubernetes_service_account.tiller.metadata.0.name}"
namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
kubernetes {
host = "${google_container_cluster.prdops.endpoint}"
alias = "prdops"
load_config_file = false
username = "${google_container_cluster.prdops.master_auth.0.username}"
password = "${google_container_cluster.prdops.master_auth.0.password}"
client_certificate = "${base64decode(google_container_cluster.prdops.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.prdops.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.prdops.master_auth.0.cluster_ca_certificate)}"
}
}
resource "kubernetes_service_account" "tiller" {
provider = "kubernetes.prdops"
metadata {
name = "tiller"
namespace = "kube-system"
}
}
resource "kubernetes_cluster_role_binding" "tiller" {
provider = "kubernetes.prdops"
metadata {
name = "tiller"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "tiller"
}
subject {
kind = "ServiceAccount"
name = "${kubernetes_service_account.tiller.metadata.0.name}"
namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
api_group = ""
}
}
resource "helm_release" "jenkins" {
provider = "helm.prdops"
name = "jenkins"
chart = "stable/jenkins"
}
but I'm getting the following error:
1 error(s) occurred:
* helm_release.jenkins: 1 error(s) occurred:
* helm_release.jenkins: rpc error: code = Unknown desc = configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
Helm v2 uses a server-side component called Tiller (it is removed in Helm v3). For Helm to function, Tiller is assigned a service account to interact with the Kubernetes API, and in this case that service account appears to have insufficient permissions to perform the operation.
Check whether the tiller pod is running in the kube-system namespace. If it isn't, reinstall Helm and run helm init so that the tiller pod comes up; that should resolve the issue.
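The error itself shows the request being made as system:serviceaccount:kube-system:default, which suggests Tiller is not actually running under the tiller service account, or its role binding grants too little. A common Helm v2 setup is to bind the tiller service account to the built-in cluster-admin ClusterRole and deploy Tiller with that account (helm init --service-account tiller). A sketch of the adjusted binding, assuming the resources from the question:

resource "kubernetes_cluster_role_binding" "tiller" {
  provider = "kubernetes.prdops"
  metadata {
    name = "tiller"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    # bind to the built-in role rather than a "tiller" ClusterRole that may not exist
    name      = "cluster-admin"
  }
  subject {
    kind      = "ServiceAccount"
    name      = "${kubernetes_service_account.tiller.metadata.0.name}"
    namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
    api_group = ""
  }
}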
