I want to create a GitLab project from a template via Terraform code.
resource "gitlab_project" "services_projects" {
  for_each                        = local.service_projects
  name                            = each.key
  default_branch                  = "main"
  description                     = ""
  issues_enabled                  = false
  merge_requests_enabled          = false
  namespace_id                    = gitlab_group.services_group.id
  snippets_enabled                = false
  visibility_level                = "private"
  wiki_enabled                    = false
  use_custom_template             = each.value.use_custom_template
  template_project_id             = each.value.template_project_id
  group_with_project_templates_id = var.group_with_project_templates_id
}
This works: all my projects were created, and the project that should come from a template is indeed created from the template. But then Terraform errors with:
module.services["test_tf_1"].gitlab_project.services_projects["values"]: Destruction complete after 5s
module.services["test_tf_1"].gitlab_project.services_projects["values"]: Creating...
╷
│ Error: error while waiting for project "values" import to finish: unexpected state 'failed', wanted target 'finished'. last error: %!s(<nil>)
│
│ with module.services["test_tf_bb_1"].gitlab_project.services_projects["values"],
│ on modules/gitlab/main.tf line 132, in resource "gitlab_project" "services_projects":
│ 132: resource "gitlab_project" "services_projects" {
│
╵
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
Does anyone know where this error comes from or how I can solve it?
I think it has something to do with a missing template_link, but I don't really understand the concept, and creating one did not work:
https://registry.terraform.io/providers/gitlabhq/gitlab/latest/docs/resources/group_project_file_template
I think it's an issue with GitLab itself, not your code or the Terraform provider.
Please see https://gitlab.com/gitlab-org/gitlab/-/issues/208452
It might be a flaky bug; in my case, simply re-running the same code helped.
I just tried to create two sets of resources using the same registry module, which contains Oracle Cloud compartments (multi-level).
see Module link
I needed two levels of sub-compartments because set #2 is a child of set #1.
Example (Terraform v1.0.3):
module "main_compartment" {
  source                  = "oracle-terraform-modules/iam/oci//modules/iam-compartment"
  tenancy_ocid            = var.tenancy_ocid
  compartment_id          = var.tenancy_ocid # the parent compartment; created at tenancy root if omitted
  compartment_name        = "mycomp"
  compartment_description = "main compartment at root level"
  compartment_create      = true
  enable_delete           = true
}
module "level_1_sub_compartments" {
  source                  = "oracle-terraform-modules/iam/oci//modules/iam-compartment"
  for_each                = local.compartments.l1_subcomp
  compartment_id          = module.main_compartment.compartment_id # the parent compartment; references the previous module
  compartment_name        = each.value.compartment_name
  compartment_description = each.value.description
  compartment_create      = true # if false, a data source with a matching name is created instead
  enable_delete           = true # if false, on `terraform destroy` the compartment is removed from state but not from OCI
}
module "level_2_sub_compartments" {
  source                  = "oracle-terraform-modules/iam/oci//modules/iam-compartment"
  for_each                = local.compartments.l2_subcomp
  compartment_id          = data.oci_identity_compartments.compx.id # the parent compartment; references one of the L1 sub-compartments created above
  compartment_name        = each.value.compartment_name
  compartment_description = each.value.description
  compartment_create      = true # if false, a data source with a matching name is created instead
  enable_delete           = true # if false, on `terraform destroy` the compartment is removed from state but not from OCI
  depends_on              = [module.level_1_sub_compartments]
}
When I run terraform init, I get as many downloaded module folders as there are module blocks. Why would I call the module this way?
Why not download the module once manually and then reference it three times as a local module?
Or would it be better to write dynamic blocks in main.tf using the regular compartment resource?
Initializing modules...
Downloading oracle-terraform-modules/iam/oci 2.0.2 for main_compartment...
- main_compartment in .terraform/modules/main_compartment/modules/iam-compartment
Downloading oracle-terraform-modules/iam/oci 2.0.2 for level_1_sub_compartments...
- level_1_sub_compartments in .terraform/modules/level_1_sub_compartments/modules/iam-compartment
Downloading oracle-terraform-modules/iam/oci 2.0.2 for level_2_sub_compartments...
- level_2_sub_compartments in .terraform/modules/level_2_sub_compartments/modules/iam-compartment
There are some problems with the configuration, described below.
(repeated for each module) => Error: Duplicate required providers configuration
A module may have only one required providers configuration. The required providers were previously configured at .terraform/modules/level_1_sub_compartments/modules/iam-compartment/main.tf:5,3-21.
What I wanted was to reuse one registry module through a URL source but keep only one physical folder in my working directory.
I just expected it to work, but it seems local modules are the only option that achieves this. If I'm doing anything wrong, please let me know, since the provider error also comes from the fact that I have multiple directories containing the same module configuration. Thank you.
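For what it's worth, the local-module approach can be sketched like this (assuming the registry module's files have been vendored once into a local `modules/iam-compartment` directory; that path is illustrative). Terraform references local paths in place rather than copying them into `.terraform/modules`, so only one physical copy exists:

```hcl
# Hypothetical layout: the module was downloaded once into
# ./modules/iam-compartment, and every block points at that same path.
module "main_compartment" {
  source           = "./modules/iam-compartment"
  tenancy_ocid     = var.tenancy_ocid
  compartment_id   = var.tenancy_ocid
  compartment_name = "mycomp"
}

module "level_1_sub_compartments" {
  source           = "./modules/iam-compartment"
  for_each         = local.compartments.l1_subcomp
  compartment_id   = module.main_compartment.compartment_id
  compartment_name = each.value.compartment_name
}
```

Each module block is still tracked separately in state, but `terraform init` no longer downloads (or duplicates) anything for these blocks.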
I was trying to deploy the official Helm chart for Airflow using a Terraform helm_release. But it says the chart repository was not found. I put in the repository URL; maybe the URL is wrong:
resource "helm_release" "airflow" {
  repository = "https://airflow.apache.org"
  chart      = "apache-airflow"
  name       = "airflow"
  version    = "1.7.0"
  namespace  = "airflow"
  values     = [file("${path.module}/values.yaml")]
}
Getting this error message:
Error: could not download chart: chart "apache-airflow" version "1.7.0" not found in https://airflow.apache.org repository
I figured it out. Per the instructions on artifacthub.io [1], the chart name is actually just airflow (not apache-airflow), so the code needs to look like:
resource "helm_release" "airflow" {
  repository = "https://airflow.apache.org"
  chart      = "airflow"
  name       = "airflow"
  version    = "1.7.0"
  namespace  = "airflow"
  values     = [file("${path.root}/airflow-values.yaml")]
}
where the file airflow-values.yaml is the one from their documentation used when installing with Terraform [2].
[1] https://artifacthub.io/packages/helm/apache-airflow/airflow?modal=install
[2] https://airflow.apache.org/docs/helm-chart/stable/index.html#installing-the-chart-with-argo-cd-flux-or-terraform
I'm setting up an Azure CDN Front Door Profile using Terraform.
I'm having a problem where Terraform thinks my routes have changed every time I run a plan, even though they haven't been modified:
# azurerm_cdn_frontdoor_route.main-fe-resources will be updated in-place
~ resource "azurerm_cdn_frontdoor_route" "main-fe-resources" {
~ cdn_frontdoor_origin_group_id = "/subscriptions/e68adbb2-af8e-4b01-a7e8-2bf599d6d818/resourcegroups/ci-redacted-frontdoor/providers/Microsoft.Cdn/profiles/ci-redacted-frontdoor/origingroups/main-fe" -> "/subscriptions/e68adbb2-af8e-4b01-a7e8-2bf599d6d818/resourceGroups/ci-redacted-frontdoor/providers/Microsoft.Cdn/profiles/ci-redacted-frontdoor/originGroups/main-fe"
id = "/subscriptions/e68adbb2-af8e-4b01-a7e8-2bf599d6d818/resourceGroups/ci-redacted-frontdoor/providers/Microsoft.Cdn/profiles/ci-redacted-frontdoor/afdEndpoints/ci-main/routes/main-fe-resources"
name = "main-fe-resources"
# (8 unchanged attributes hidden)
# (2 unchanged blocks hidden)
}
The problem seems to be related to casing discrepancies between "resourceGroups" / "resourcegroups" and "originGroups" / "origingroups".
I've tried lowercasing the origin group ID in the Terraform configuration, but Terraform then complains that the ID doesn't contain the required string "originGroups".
I'm creating the routes like so:
resource "azurerm_cdn_frontdoor_route" "main-fe-resources" {
  name                          = "main-fe-resources"
  cdn_frontdoor_endpoint_id     = azurerm_cdn_frontdoor_endpoint.main.id
  cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.main-fe.id
  cdn_frontdoor_origin_ids      = []
  cdn_frontdoor_rule_set_ids    = []
  enabled                       = true
  forwarding_protocol           = "MatchRequest"
  https_redirect_enabled        = true
  patterns_to_match             = ["/assets-2022/*", "/_next/*"]
  supported_protocols           = ["Http", "Https"]
}
Any ideas?
So it does appear to be a bug in the provider. I originally created the routes manually, then imported them into the Terraform state. I found that if I delete the routes and let Terraform recreate them, the problem goes away.
It's not an ideal solution, but at least the Terraform plans no longer detect changes when there aren't any.
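If deleting and recreating the routes isn't an option, another possible workaround (a sketch, not a documented provider fix) is to suppress the casing-only drift with an ignore_changes lifecycle rule until the provider bug is resolved:

```hcl
resource "azurerm_cdn_frontdoor_route" "main-fe-resources" {
  name                          = "main-fe-resources"
  cdn_frontdoor_endpoint_id     = azurerm_cdn_frontdoor_endpoint.main.id
  cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.main-fe.id
  # ... remaining arguments as above ...

  lifecycle {
    # Ignore the casing-only drift in the origin group ID.
    ignore_changes = [cdn_frontdoor_origin_group_id]
  }
}
```

The trade-off is that ignore_changes also masks real changes to that argument, so it should be removed once the provider handles the casing correctly.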
I'm trying to use output values from a second workspace in my current one.
For example:
data "tfe_outputs" "EKS" {
  organization = "EKS_Deploy"
  workspace    = "EKS_Deploy"
}
Then I need the EKS cluster ID from that second workspace in one of my modules (I've already set up the outputs):
2nd workspace
output "eks_cluster_id" {
  description = "EKS Cluster ID"
  value       = module.eks-ssp.eks_cluster_id
}
1st workspace
eks_cluster_id = data.tfe_outputs.EKS.eks_cluster_id
But running terraform apply in the second workspace throws this:
Error: Unsupported attribute
on main.tf line 22, in data "aws_eks_cluster" "cluster":
name = data.tfe_outputs.EKS.eks_cluster_id
This object has no argument, nested block, or exported attribute named "eks_cluster_id".
This is strange to me, since I can see the correct output value in my second workspace, i.e. it shows a proper output. So I'm guessing I'm referencing it wrong somehow. What could it be?
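In case it helps anyone hitting the same error: the tfe_outputs data source exposes the remote workspace's outputs as a map under its values attribute, so the reference most likely needs to go through values (a sketch based on that data source's schema):

```hcl
data "tfe_outputs" "EKS" {
  organization = "EKS_Deploy"
  workspace    = "EKS_Deploy"
}

locals {
  # The outputs map is nested under `values`, not exposed at the top level.
  eks_cluster_id = data.tfe_outputs.EKS.values.eks_cluster_id
}
```

Note that `values` is marked sensitive as a whole, so an individual output may need to be wrapped in `nonsensitive(...)` before being passed to an argument that rejects sensitive values.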
I am declaring the following outputs in a Terraform module's output.tf file:
output "jenkins_username" {
  value       = local.jenkins_username
  description = "Jenkins admin username"
  #sensitive  = true
}

output "jenkins_password" {
  value       = local.jenkins_password
  description = "Jenkins admin password"
  #sensitive  = true
}
The corresponding locals have been declared in main.tf as follows:
locals {
  jenkins_username = var.jenkins_username == "" ? random_string.j_username.result : var.jenkins_username
  jenkins_password = var.jenkins_password == "" ? random_string.j_password.result : var.jenkins_password
}
However, after the apply has finished, I see no relevant output, and what's more, it isn't displayed even when I call the output command explicitly:
$ terraform output jenkins_password
The output variable requested could not be found in the state
file. If you recently added this to your configuration, be
sure to run `terraform apply`, since the state won't be updated
with new output variables until that command is run.
I was having the exact same problem. What worked for me was to comment out the output variables for the first deployment, then uncomment them once that succeeded.
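Another likely cause worth checking: terraform output only reads outputs of the root module, so outputs declared inside a child module must be re-exported at the root level before they appear in state. A sketch, assuming the module is instantiated at the root as module "jenkins" (the name and path are illustrative):

```hcl
# Root module: re-export the child module's outputs so that
# `terraform output jenkins_password` can find them.
module "jenkins" {
  source = "./modules/jenkins" # hypothetical path to the module above
}

output "jenkins_username" {
  value = module.jenkins.jenkins_username
}

output "jenkins_password" {
  value     = module.jenkins.jenkins_password
  sensitive = true # recent Terraform requires this when the source output is sensitive
}
```

After adding the root-level outputs, run terraform apply once so the state records them before calling terraform output.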