I'd like to reference a local YAML file when creating a Helm release with CDKTF.
I have the following cdktf.json config:
{
  "language": "typescript",
  "app": "npx ts-node main.ts",
  "projectId": "...",
  "terraformProviders": [
    "hashicorp/aws@~> 3.42",
    "hashicorp/kubernetes@ ~> 2.7.0",
    "hashicorp/http@ ~> 2.1.0",
    "hashicorp/tls@ ~> 3.1.0",
    "hashicorp/helm@ ~> 2.4.1",
    "hashicorp/random@ ~> 3.1.0",
    "gavinbunney/kubectl@ ~> 1.14.0"
  ],
  "terraformModules": [
    {
      "name": "secrets-store-csi",
      "source": "app.terraform.io/goldsky/secrets-store-csi/aws",
      "version": "0.1.5"
    }
  ],
  "context": {
    "excludeStackIdFromLogicalIds": "true",
    "allowSepCharsInLogicalIds": "true"
  }
}
Note npx ts-node main.ts as the app. In main.ts I have the following Helm release:
new helm.Release(this, "datadog-agent", {
  chart: "datadog",
  name: "datadog",
  repository: "https://helm.datadoghq.com",
  version: "3.1.3",
  set: [
    {
      name: "datadog.clusterChecks.enabled",
      value: "true",
    },
    {
      name: "clusterAgent.enabled",
      value: "true",
    },
  ],
  values: ["${file(\"datadog-values.yaml\")}"],
});
Note that I'm referencing a YAML file called datadog-values.yaml, similar to this example from the Helm provider. datadog-values.yaml is a sister file to main.ts.
However, when I try to deploy this with cdktf deploy I get the following error:
goldsky-infra-dev ╷
│ Error: Invalid function argument
│
│ on cdk.tf.json line 1017, in resource.helm_release.datadog-agent (datadog-agent).values:
│ 1017: "${file(\"datadog-values.yaml\")}"
│
│ Invalid value for "path" parameter: no file exists at
│ "datadog-values.yaml"; this function works only with files that are
│ distributed as part of the configuration source code, so if this file will
│ be created by a resource in this configuration you must instead obtain this
│ result from an attribute of that resource.
To run a deployment I execute npm run deploy:dev, which is a custom script in my package.json:
"build": "tsc",
"deploy:dev": "npm run build && npx cdktf deploy",
How can I reference my Datadog YAML file in a Helm release, as in the example shown by the Helm provider?
To reference local files in CDKTF, you need to use assets. Assuming there's a values folder at the root level of your project where you store your values YAML file:
import { TerraformAsset, AssetType, Fn } from "cdktf";
// HelmRelease comes from your generated helm provider bindings (e.g. ./.gen/providers/helm)

// Copy the file into the synthesized stack so Terraform can find it at apply time
const valuesAsset = new TerraformAsset(this, "values-asset", {
  // chartValues is your values file name, e.g. "datadog-values.yaml"
  path: `${process.cwd()}/values/${this.chartValues}`,
  type: AssetType.FILE,
});

new HelmRelease(this, "helm-release", {
  name: this.releaseName,
  chart: this.chartName,
  repository: this.chartRepository,
  values: [Fn.file(valuesAsset.path)],
});
Note that I've used the file Terraform function (via Fn.file) to read the contents of the file.
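This works because cdktf synthesizes each stack into cdktf.out/stacks/<stack-name> and Terraform runs from that directory, so a bare relative path like datadog-values.yaml never resolves at apply time. TerraformAsset copies the file into the synthesized stack directory, and valuesAsset.path points at that copy, which is exactly the kind of "file distributed as part of the configuration source code" the error message asks for.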
I have an issue in our environment where I cannot add a label to a VM instance in GCP via Terraform/Terragrunt after creation. We have a Google Cloud source repository that is set up via Terraform; we use git to clone and update from a local repository, which activates a Cloud Build trigger that pushes the changes to the repo. We do not use terraform/terragrunt commands at all; it is all controlled via git. The labels are referenced in our compute module as shown:
variable "labels" {
  description = "Labels to add."
  type        = map(string)
  default     = {}
}
OK, on to the issue. We have a mix of lift-and-shift and cloud-native VM instances in our environment. We recently decided we wanted to add an additional label in the code to identify whether the instance is under Terraform control, i.e. terraform = "true/false":
labels = {
  application      = "demo-test"
  businessunit     = "homes"
  costcentre       = "90imt"
  createdby        = "ab"
  department       = "it"
  disasterrecovery = "no"
  environment      = "rnd"
  contact          = "abriers"
  terraform        = "false"
}
So I add the label and use the usual git commands to add/commit/push etc., which triggers the Cloud Build as usual. The problem is, the label does not appear in the console when viewing the instance.
It's as if Cloud Build or Terraform/Terragrunt isn't recognising it as a change. I can change the value of an existing label no problem, but I cannot seem to add or remove a label after the VM has been created.
It has been suggested to run terraform/terragrunt plan in VS Code, but as mentioned, this has all been set up to run via git, so those commands do not work.
For example, when I run terragrunt init in the directory I get this error:
PS C:\Cloudrepos\placesforpeople> terragrunt init
time=2022-07-27T09:56:27+01:00 level=error msg=Error reading file at path C:/Cloudrepos/placesforpeople/terragrunt.hcl: open C:/Cloudrepos/placesforpeople/terragrunt.hcl: The system cannot find the
file specified.
time=2022-07-27T09:56:27+01:00 level=error msg=Unable to determine underlying exit code, so Terragrunt will exit with error code 1
PS C:\Cloudrepos\placesforpeople> cd org
PS C:\Cloudrepos\placesforpeople\org> cd rnd
PS C:\Cloudrepos\placesforpeople\org\rnd> cd adam_play_area
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area> ls
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 20/07/2022 14:18 modules
d----- 20/07/2022 14:18 test_project_001
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area> cd test_project_001
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001> cd compute
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute> ls
Directory: C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 07/07/2022 15:51 start_stop_schedule
d----- 20/07/2022 14:18 umig
-a---- 07/07/2022 16:09 1308 .terraform.lock.hcl
-a---- 27/07/2022 09:56 2267 terragrunt.hcl
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute> terragrunt init
Initializing modules...
- data_disk in ..\compute_data_disk
Initializing the backend...
Successfully configured the backend "gcs"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/google from the dependency lock file
- Reusing previous version of hashicorp/google-beta from the dependency lock file
╷
│ Warning: Backend configuration ignored
│
│ on ..\compute_data_disk\backend.tf line 3, in terraform:
│ 3: backend "gcs" {}
│
│ Any selected backend applies to the entire configuration, so Terraform
│ expects provider configurations only in the root module.
│
│ This is a warning rather than an error because it's sometimes convenient to
│ temporarily call a root module as a child module for testing purposes, but
│ this backend configuration block will have no effect.
╵
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ hashicorp/google: could not connect to registry.terraform.io: Failed to
│ request discovery document: Get
│ "https://registry.terraform.io/.well-known/terraform.json": Proxy
│ Authorization Required
╵
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ hashicorp/google-beta: could not connect to registry.terraform.io: Failed
│ to request discovery document: Get
│ "https://registry.terraform.io/.well-known/terraform.json": Proxy
│ Authorization Required
╵
time=2022-07-27T09:57:40+01:00 level=error msg=Hit multiple errors:
Hit multiple errors:
exit status 1
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute>
But as mentioned, we don't use and have never used these commands to push the changes.
I cannot work out why these labels won't add/remove after the VM has already been created.
I have tried making a change to an instance to trigger a diff, such as increasing the disk size.
I have also tried creating a block in the module for all the labels needed, but this doesn't work because you cannot have labels as a block in this module:
labels {
  application      = var.labels.application
  businessunit     = var.labels.businessunit
  costcentre       = var.labels.costcentre
  createdby        = var.labels.createdby
  department       = var.labels.department
  disasterrecovery = var.labels.disasterrecovery
  environment      = var.labels.environment
  contact          = var.labels.contact
  terraform        = var.labels.terraform
}
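For reference, labels on google_compute_instance is a plain map argument rather than a block, so assuming the module passes the variable straight through to the resource, setting it directly would just be:

labels = var.labels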
Any ideas? I know you cannot add a label to a project post-creation; does the same apply to VM instances? Is there an alternative method I can test?
As requested, this is the code for the VM instance:
terraform {
  source = "../../modules//compute_instance_static_ip/"
}

# Include all settings from the root terragrunt.hcl file
include {
  path = find_in_parent_folders("org.hcl")
}
dependency "project" {
  config_path = "../project"

  # Configure mock outputs that are returned when there are no outputs available
  # (e.g. the module hasn't been applied yet).
  mock_outputs_allowed_terraform_commands = ["plan", "validate"]
  mock_outputs = {
    project_id = "project-not-created-yet"
  }
}
prevent_destroy = false

inputs = {
  gcp_instance_sa_email = "testprj-compute@gc-r-prj-testprj-0001-9627.iam.gserviceaccount.com" # This will tell GCP to use the default GCE service account
  instance_name         = "rnd-demo-test1"
  network               = "projects/gc-a-prj-vpchost-0001-3312/global/networks/gc-r-vpc-0001"
  subnetwork            = "projects/gc-a-prj-vpchost-0001-3312/regions/europe-west2/subnetworks/gc-r-snet-middleware-0001"
  zone                  = "europe-west2-c"
  region                = "europe-west2"
  project               = dependency.project.outputs.project_id
  os_image              = "debian-10-buster-v20220118"
  machine_type          = "n1-standard-4"
  boot_disk_size        = 100
  instance_scope        = ["cloud-platform"]
  instance_tags         = ["demo-test"]
  deletion_protection   = "false"
  metadata = {
    windows-startup-script-ps1 = "Set-TimeZone -Id 'GMT Standard Time' -PassThru"
  }
  ip_address_region = "europe-west2"
  ip_address_type   = "INTERNAL"
  attached_disks = {
    data = {
      size = 60
      type = "pd-standard"
    }
  }
  /*
  instance_schedule_policy = {
    name              = "start-stop"
    #region           = "europe-west2"
    vm_start_schedule = "30 07 * * *"
    vm_stop_schedule  = "00 18 * * *"
    time_zone         = "GMT"
  }
  */
  labels = {
    application      = "demo-test"
    businessunit     = "homes"
    costcentre       = "90imt"
    createdby        = "ab"
    department       = "it"
    disasterrecovery = "no"
    environment      = "rnd"
    contact          = "abriers"
    terraform        = "false"
  }
}
The terragrunt validate-inputs result is below:
PS C:\Cloudrepos\placesforpeople\org\rnd> terragrunt validate-inputs
time=2022-07-27T14:25:19+01:00 level=warning msg=The following inputs passed in by terragrunt are unused:
prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=warning msg= - billing_account prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=warning msg= - host_project_id prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=warning prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=info msg=All required inputs are passed in by terragrunt. prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=error msg=Terragrunt configuration has misaligned inputs
time=2022-07-27T14:25:19+01:00 level=error msg=Unable to determine underlying exit code, so Terragrunt will exit with error code 1
PS C:\Cloudrepos\placesforpeople\org\rnd>
I have found the culprit!
In the compute instance module I discovered this block of code. I removed labels from the ignore_changes list and voilà, the extra labels now appear. Thanks for the assistance and advice on post formatting.
lifecycle {
  ignore_changes = [
    boot_disk.0.initialize_params.0.image,
    attached_disk, labels
  ]
}
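With labels dropped from ignore_changes, the block would end up like this (keeping the other entries so image and attached-disk drift is still ignored):

lifecycle {
  ignore_changes = [
    boot_disk.0.initialize_params.0.image,
    attached_disk,
  ]
}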
I have the following CI configuration:
...
cache:
  key: ${CI_PROJECT_NAME}
  paths:
    - ${TF_ROOT}/.terraform

before_script:
  - echo -e "credentials \"$CI_SERVER_HOST\" {\n token = \"$CI_JOB_TOKEN\"\n}" > $TF_CLI_CONFIG_FILE
  - cd ${TF_ROOT}
  - export TF_LOG_CORE=TRACE
  - export TF_LOG_PATH=terraform_logs.txt

stages:
  - initialize
  - validate

init:
  stage: initialize
  script:
    - terraform -v
    - terraform init
    # - terraform validate

validate:
  stage: validate
  script:
    - terraform validate
My init stage runs totally fine; however, I get the following in the next stage, i.e. validate:
$ terraform validate
╷
│ Error: Missing required provider
│
│ This configuration requires provider registry.terraform.io/datadog/datadog,
│ but that provider isn't available. You may be able to install it
│ automatically by running:
│ terraform init
In provider.tf:
terraform {
required_version = ">= 0.14"
required_providers {
datadog = {
source = "DataDog/datadog"
version = "2.24.0"
}
}
}
In config.toml:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "some runner"
  url = "****
  token = "***"
  executor = "shell"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
If I run validate as a subsequent command in the init stage itself, it works fine, just not in a different stage.
If I do ls -al in the next stage before validate, I can even see the .terraform folder present, which should have the providers inside.
My second guess was a caching issue; however, I believe I have specified the cache correctly: ${TF_ROOT}/.terraform?
I am running the gitlab-runner as a shell executor.
Any idea what is wrong here?
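One way to narrow this down: terraform validate only needs what terraform init left behind in .terraform, so running ls -R ${TF_ROOT}/.terraform/providers in the validate job before validating would confirm whether the provider binaries themselves, and not just the folder skeleton, survived the cache round-trip between the two jobs.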
I have a simple aws_organizations_organization data source
data "aws_organizations_organization" "my_org" {
name = "my_org"
}
I am trying to import the data source into my state.
Expected
Run terraform import data.aws_organizations_organization my_org; the data source is then imported properly.
Actual
Run terraform import data.aws_organizations_organization my_org; I get the error:
Error: Invalid address
│
│ on <import-address> line 1:
│ 1: data.aws_organizations_organization
│
│ Resource specification must include a resource type and name.
Can someone explain to me what is wrong with this command? Thank you.
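For what it's worth, the error is about the address format: a resource address must take the form <type>.<name>, so the target here would be data.aws_organizations_organization.my_org rather than stopping at the type. More fundamentally, though, terraform import only works with managed resources; a data source is read fresh on every plan/apply, so there is nothing to import into state for it.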
When I am running "terraform plan" I am getting this error:
Error: error setting up new vSphere SOAP client: Post dial tcp: i/o timeout
on modules/control_plane_resources/main.tf line 2, in provider "vsphere":
2: provider "vsphere" {
The issue is most likely that the URL you've provided to your vSphere client is incorrect. I had the exact same issue and that was the cause.
For example, my provider.tf file looked something like this:
provider "vsphere" {
  user           = var.vsphere_user
  password       = var.vsphere_password
  vsphere_server = var.vsphere_server_uri

  # If you have a self-signed cert
  allow_unverified_ssl = true
}
and my tfvars file had this value:
vsphere_server_uri = "vra@domain.local"
and it should have been this instead:
vsphere_server_uri = "vsphere@domain.local"
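In other words, the vsphere_server value needs to resolve and reach the actual vCenter (or ESXi) endpoint; pointing it at a different box in the same domain just hangs until the SOAP client gives up with the i/o timeout above.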