Kubernetes supports dots in metadata label keys (for example app.role), and indeed this seems to be a common convention.
The Terraform configuration language (0.12) doesn't support dots in argument names, so labels of this form apparently cannot be specified. For example, in a google_container_node_pool configuration I want to specify this:
resource "google_container_node_pool" "my-node-pool" {
...
labels = {
app.role = web
}
}
Is there a workaround?
Note: slashes (/) are quite common in k8s labels as well.
UPDATE: in case anyone stumbles on this same issue down the road, I figured out the root of my issue. I had incorrectly specified the labels argument as a block by omitting the =. So it looked like this:
labels {
"app.role" = "web"
}
This yielded the following error, which pointed me in the wrong direction:
Error: Invalid argument name
on main.tf line 45, in resource "google_container_node_pool" "primary_preemptible_nodes":
45: "app.role" = "web"
Argument names must not be quoted.
I noticed and fixed the missing =, but I didn't realize that map keys have different syntax from argument names.
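To illustrate the distinction, here is a minimal sketch contrasting the two syntaxes: in block syntax the names are argument names (bare identifiers), while keys of a map argument are string expressions (the slash-style key is just a hypothetical example).
# Block syntax: the names inside are argument names, which must be
# bare identifiers and therefore cannot be quoted.
metadata {
  disable-legacy-endpoints = "true"
}
# Map argument syntax (note the "="): the keys are string expressions,
# so quoted keys containing dots or slashes are fine.
labels = {
  "app.role"         = "web"
  "example.com/team" = "platform"   # hypothetical slash-style key
}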
I verified the suggestion from #ydaetskcoR that wrapping the label in quotes works. Here is the snippet defining the node pool that I created (using Terraform v0.11.13):
resource "google_container_node_pool" "node_pool" {
cluster = "${google_container_cluster.cluster.name}"
zone = "${var.cluster_location}"
initial_node_count = "${var.node_count}"
autoscaling {
min_node_count = 1
max_node_count = 5
}
management {
auto_repair = true
auto_upgrade = true
}
node_config {
machine_type = "${var.machine_type}"
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only",
]
metadata {
disable-legacy-endpoints = "true"
}
labels = {
"app.role" = "web"
}
}
}
Edit: I also verified that the same works with Terraform 0.12.3.
Related
I have a main.tf file with the following code block:
module "login_service" {
source = "/path/to/module"
name = var.name
image = "python:${var.image_version}"
port = var.port
command = var.command
}
# Other stuff below
I've defined a variables.tf file as follows:
variable "name" {
type = string
default = "login-service"
description = "Name of the login service module"
}
variable "command" {
type = list(string)
default = ["python", "-m", "LoginService"]
description = "Command to run the LoginService module"
}
variable "port" {
type = number
default = 8000
description = "Port number used by the LoginService module"
}
variable "image" {
type = string
default = "python:3.10-alpine"
description = "Image used to run the LoginService module"
}
Unfortunately, I keep getting this error when running terraform plan.
Error: Unsupported argument
│
│ on main.tf line 4, in module "login_service":
│ 4: name = var.name
│
│ An argument named "name" is not expected here.
This error repeats for the other variables. I've done a bit of research, read the Terraform documentation on variables, and read other Stack Overflow answers, but I haven't really found a good answer to the problem.
Any help appreciated.
A Terraform module block is only for referring to a Terraform module; it doesn't support any other kind of module. Terraform modules are a means for reusing Terraform declarations across many configurations, but they can only contain Terraform configuration themselves, not application code such as a Python module.
Therefore, in order for this to be valid, you must have at least one .tf file in /path/to/module that declares the variables that you are trying to pass into the module.
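For example, a minimal sketch of what the module directory might need to contain before those module arguments become valid (the file name and descriptions here are only illustrative):
# /path/to/module/variables.tf (illustrative)
variable "name" {
  type        = string
  description = "Name of the service"
}
variable "image" {
  type        = string
  description = "Container image to run"
}
variable "port" {
  type        = number
  description = "Port the service listens on"
}
variable "command" {
  type        = list(string)
  description = "Command to run in the container"
}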
From what you've said it seems like there's a missing step in your design: you are trying to declare something in Kubernetes using Terraform, but the configuration you've shown here doesn't include anything which would tell Terraform to interact with Kubernetes.
A typical way to manage Kubernetes objects with Terraform is using the hashicorp/kubernetes provider. A Terraform configuration using that provider would include a declaration of the dependency on that provider, the configuration for that provider, and at least one resource block declaring something that should exist in your Kubernetes cluster:
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
}
}
}
provider "kubernetes" {
host = "https://example.com/" # URL of your Kubernetes API
# ...
}
# For example only, a kubernetes_deployment resource
# that declares one Kubernetes deployment.
# In practice you can use any resource type from this
# provider, depending on what you want to declare.
resource "kubernetes_deployment" "example" {
metadata {
name = "terraform-example"
labels = {
test = "MyExampleApp"
}
}
spec {
replicas = 3
selector {
match_labels = {
test = "MyExampleApp"
}
}
template {
metadata {
labels = {
test = "MyExampleApp"
}
}
spec {
container {
image = "nginx:1.21.6"
name = "example"
resources {
limits = {
cpu = "0.5"
memory = "512Mi"
}
requests = {
cpu = "250m"
memory = "50Mi"
}
}
liveness_probe {
http_get {
path = "/"
port = 80
http_header {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
}
}
}
}
Although you can arrange resources into separate modules in Terraform if you wish, I would suggest focusing on learning to directly describe resources in Terraform first and then once you are confident with that you can learn about techniques for code reuse using Terraform modules.
I'm trying to replicate the nested use of for_each at https://blog.boltops.com/2020/10/06/terraform-hcl-nested-loops/, and am getting the error An argument named "name" is not expected here right after the first for_each. So I've narrowed it down to the following code:
# Datadog Performance Dashboard
locals {
dashboard_title = "Title here"
hosts = toset( [ "host1", "host2"] )
params = {
"CPU" = [
{
title = "System Load - 1 min avg"
dd_param = "avg:system.load.norm.1"
}
]
"RAM" = [
{
title = "Memory Commit limit"
dd_param = "system.mem.commit_limit"
}
]
}
}
resource "datadog_dashboard" "ordered_dashboard" {
# for_each = local.params
#name = each.key
name = "asdadas"
title = local.dashboard_title
description = "Created using the Datadog provider in Terraform"
layout_type = "ordered"
is_read_only = true
}
Since name = each.key doesn't work (i.e. it gives the same error, An argument named "name" is not expected here), you can see I tried commenting out the for_each, the each.key assignment, and the rest of the nested loop, and it's still complaining about the argument.
It would make sense that maybe I need to declare it before using it, but I find the code in the link I mentioned earlier doesn't do it. Neither does https://www.terraform.io/language/resources/syntax nor https://www.terraform.io/language/expressions/dynamic-blocks, both from HashiCorp's Terraform documentation site.
This is the version of terraform I'm using:
$ terraform -v
Terraform v1.1.3
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v3.73.0
Anyone have thoughts on what I'm missing here? Why is Terraform complaining about the argument assignment?
It seems to me that the datadog_dashboard resource doesn't have an argument named name, which is why it gives that error; the dashboard's display name is set with the title argument instead.
For more information, take a careful look at:
https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/dashboard
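As a hedged sketch of what the loop from the question might look like once name is dropped in favour of the resource's actual title argument (the required widget blocks depend on the provider version, so this is only an outline):
resource "datadog_dashboard" "ordered_dashboard" {
  for_each = local.params

  # datadog_dashboard has no "name" argument; its display name is "title"
  title        = "${local.dashboard_title} - ${each.key}"
  description  = "Created using the Datadog provider in Terraform"
  layout_type  = "ordered"
  is_read_only = true

  # widget blocks (or a dynamic "widget" block over each.value) go here
}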
I have a scale-down issue on my GKE cluster and found out that I can solve it with the right configuration.
According to the Terraform documentation I can use the argument autoscaling_profile and set it to OPTIMIZE_UTILIZATION.
Like so:
resource "google_container_cluster" "k8s_cluster" {
[...]
cluster_autoscaling {
enabled = true
autoscaling_profile = "OPTIMIZE_UTILIZATION"
resource_limits {
resource_type = "cpu"
minimum = 1
maximum = 4
}
resource_limits {
resource_type = "memory"
minimum = 4
maximum = 16
}
}
}
But I got this error :
Error: Unsupported argument on modules/gke/main.tf line 70, in resource "google_container_cluster" "k8s_cluster":
70: autoscaling_profile = "OPTIMIZE_UTILIZATION"
An argument named "autoscaling_profile" is not expected here.
I don't get it. What am I missing?
TL;DR
Add the parameter below to the definition of your resource (at the top):
provider = google-beta
More explanation:
autoscaling_profile, as shown in the documentation, is a beta feature. This means it needs to use a different provider: google-beta.
You can read more about it in the official documentation:
Terraform.io: Using the google-beta provider
Focusing on the most important parts of those docs:
How to use it:
To use the google-beta provider, simply set the provider field on each resource where you want to use google-beta.
resource "google_compute_instance" "beta-instance" {
provider = google-beta
# ...
}
Disclaimer about usage of google and google-beta:
If the provider field is omitted, Terraform will implicitly use the google provider by default even if you have only defined a google-beta provider block.
Putting it all together, your GKE cluster definition should look like this:
resource "google_container_cluster" "k8s_cluster" {
[...]
provider = google-beta # <- HERE IT IS
cluster_autoscaling {
enabled = true
autoscaling_profile = "OPTIMIZE_UTILIZATION"
resource_limits {
resource_type = "cpu"
minimum = 1
maximum = 4
}
resource_limits {
resource_type = "memory"
minimum = 4
maximum = 16
}
}
}
You will also need to run the following so that Terraform downloads the google-beta provider:
$ terraform init
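If you are on Terraform 0.13 or later and pin providers explicitly, a minimal sketch of declaring both providers could look like this (the provider configuration body is just a placeholder, mirror whatever your google provider already uses):
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
    google-beta = {
      source = "hashicorp/google-beta"
    }
  }
}

provider "google-beta" {
  # project, region, credentials, etc. -- same settings as your google provider
}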
Terraform Version
Terraform v0.12.1
Terraform Configuration Files
main.tf in my root module:
provider "google" {}
module "organisation_info" {
source = "../../modules/organisation-info"
top_level_domain = "smoothteam.fi"
region = "us-central1"
}
module "stack_info" {
source = "../../modules/stack-info"
organisation_info = "${module.organisation_info}"
}
Here's module 'organisation-info':
variable "top_level_domain" {}
variable "region" {}
data "google_organization" "organization" {
domain = "${var.top_level_domain}"
}
locals {
organization_id = "${data.google_organization.organization.id}"
ns = "${replace("${var.top_level_domain}", ".", "-")}-"
}
output "organization_id" {
value = "${local.organization_id}"
}
output "ns" {
value = "${local.ns}"
}
Then module 'stack-info':
variable "organisation_info" {
type = any
description = "The organisation-scope this environment exists in."
}
module "project_info" {
source = "../project-info"
organisation_info = "${var.organisation_info}"
name = "${local.project}"
}
locals {
# Use the 'default' workspace for the 'staging' stack.
name = "${terraform.workspace == "default" ? "staging" : terraform.workspace}"
# In the 'production' stack, target the production project. Otherwise, target the staging project.
project = "${local.name == "production" ? "production" : "staging"}"
}
output "project" {
value = "${module.project_info}" # COMMENTING THIS OUTPUT REMOVES THE CYCLE.
}
And finally, the 'project-info' module:
variable "organisation_info" {
type = any
}
variable "name" {}
data "google_project" "project" {
project_id = "${local.project_id}"
}
locals {
project_id = "${var.organisation_info.ns}${var.name}"
}
output "org" {
value = "${var.organisation_info}"
}
Debug Output
After doing terraform destroy -auto-approve, I get:
Error: Cycle: module.stack_info.module.project_info.local.project_id, module.stack_info.output.project, module.stack_info.module.project_info.data.google_project.project (destroy), module.organisation_info.data.google_organization.organization (destroy), module.stack_info.var.organisation_info, module.stack_info.module.project_info.var.organisation_info, module.stack_info.module.project_info.output.org
And terraform graph -verbose -draw-cycles -type=plan-destroy gives me this graph:
Source:
digraph {
compound = "true"
newrank = "true"
subgraph "root" {
"[root] module.organisation_info.data.google_organization.organization" [label = "module.organisation_info.data.google_organization.organization", shape = "box"]
"[root] module.stack_info.module.project_info.data.google_project.project" [label = "module.stack_info.module.project_info.data.google_project.project", shape = "box"]
"[root] provider.google" [label = "provider.google", shape = "diamond"]
"[root] module.organisation_info.data.google_organization.organization" -> "[root] module.stack_info.module.project_info.data.google_project.project"
"[root] module.organisation_info.data.google_organization.organization" -> "[root] provider.google"
"[root] module.stack_info.module.project_info.data.google_project.project" -> "[root] provider.google"
}
}
Expected Behavior
The idea is to use modules at the org, project and stack levels to set up naming conventions that can be re-used across all resources. organisation-info loads organisation info, project-info loads project info, and stack-info determines which project to target based on the current workspace.
I have omitted a bunch of other logic in the modules in order to keep them clean for this issue.
According to terraform there are no cycles, and destroy should work fine.
Actual Behavior
We get the cycle I posted above, even though terraform shows no cycles.
Steps to Reproduce
Set up the three modules, organisation-info, project-info, and stack-info as shown above.
Set up a root provider as shown above.
terraform init
terraform destroy (it doesn't seem to matter if you've applied first)
Additional Context
The weird thing is that if I comment out this output in stack-info, the cycle stops:
output "project" {
value = "${module.project_info}" # IT'S THIS... COMMENTING THIS OUT REMOVES THE CYCLE.
}
This seems really weird... I neither understand why outputting a variable should make a difference, nor why I'm getting a cycle error when there's no cycle.
Oddly, terraform plan -destroy does not reveal the cycle, only terraform destroy.
My spidey sense tells me evil is afoot.
I'd appreciate anyone who can tell me what's going on, whether this is a bug, and perhaps how to work around it.
In my case, the same cycle error occurred because one of the three key/value pairs expected by a map-type variable in a module's variables.tf was not declared in main.tf or anywhere else.
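As an illustration of that situation (all names here are hypothetical), the module reads three keys from a map variable, but the caller only supplies two:
# modules/example/variables.tf (hypothetical)
variable "settings" {
  type = map(string)
}

locals {
  # The module reads three keys...
  a = var.settings["first"]
  b = var.settings["second"]
  c = var.settings["third"]   # <- this key was never supplied by the caller
}

# root main.tf (hypothetical)
module "example" {
  source = "./modules/example"
  settings = {
    first  = "one"
    second = "two"
    # "third" missing here
  }
}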
I have an ecs_cluster module which defines an ECS cluster. I want the module to be re-usable so I can create various clusters with different configurations. Hence I want to be able to optionally specify whether to create and attach an EBS volume in the launch configuration of the ECS hosts.
I initially tried using count in the ebs_block_device inside the launch configuration e.g.
variable "ebs_volume_device_name" { type = "string", default = "" }
variable "ebs_volume_type" { type = "string", default = "" }
variable "ebs_volume_size" { type = "string", default = "" }
resource "aws_launch_configuration" "launch_configuration" {
name_prefix = "foo"
image_id = "bar"
# Irrelevant stuff removed for brevity...
ebs_block_device {
count = "${length(var.ebs_volume_device_name) > 0 ? 1 : 0}"
device_name = "${var.ebs_volume_device_name }"
volume_type = "${var.ebs_volume_type}"
volume_size = "${var.ebs_volume_size}"
}
}
However this results in the following error:
module.ecs_cluster.aws_launch_configuration.launch_configuration: ebs_block_device.0: invalid or unknown key: count
I then tried specifying the launch_configuration resource twice, once with and once without the ebs block device e.g.
variable "ebs_volume_device_name" { type = "string", default = "" }
variable "ebs_volume_type" { type = "string", default = "" }
variable "ebs_volume_size" { type = "string", default = "" }
resource "aws_launch_configuration" "launch_configuration" {
count = "${length(var.ebs_volume_device_name) == 0 ? 1 : 0}"
name_prefix = "foo"
image_id = "bar"
# Irrelevant stuff removed for brevity...
# No specification of ebs_block_device
}
resource "aws_launch_configuration" "launch_configuration" {
count = "${length(var.ebs_volume_device_name) > 0 ? 1 : 0}"
name_prefix = "foo"
image_id = "bar"
# Irrelevant stuff removed for brevity...
ebs_block_device {
device_name = "${var.ebs_volume_device_name }"
volume_type = "${var.ebs_volume_type}"
volume_size = "${var.ebs_volume_size}"
}
}
However Terraform then complains because the resource is defined twice.
I can't change the id of either of the resources as I have an auto scaling group which depends upon the name of the launch configuration e.g.
resource "aws_autoscaling_group" "autoscaling_group" {
name = "foo"
launch_configuration = "${aws_launch_configuration.launch_configuration.name}"
}
I guess I could conditionally define 2 autoscaling groups and map one to each launch configuration but this feels really messy. Also these resources themselves have dependent resources such as cloudwatch metric alarms etc. It feels very unDRY to repeat all of this code twice with 2 separate conditions. Am I missing a trick here?
Grateful for any relevant Terraform wisdom!
The count meta-argument works only at the resource level, unfortunately. Having a conditional block within a resource (such as your ebs_block_device, or for example logging, etc.) is a problem commonly mentioned in Terraform's GitHub issues, and as far as I can tell there isn't a solution yet.
In your case a 'trick' could be to have your autoscaling_group's launch_configuration property also use a ternary operator, i.e.
resource "aws_autoscaling_group" "autoscaling_group" {
name = "foo"
launch_configuration = "${length(var.ebs_volume_device_name) == 0 ? aws_launch_configuration.launch_configuration.name : aws_launch_configuration.launch_configuration2.name}"
}
Or better yet, extract that logic into a launch_configuration module with an output called name, and then the above can look like this:
resource "aws_autoscaling_group" "autoscaling_group" {
name = "foo"
launch_configuration = "${module.launch_config.name}"
}
I'm not saying it isn't ugly, but that's Terraform's conditionals for you.
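A hedged sketch of what that launch_configuration module's output might look like in 0.11-era syntax, assuming the two conditional resources inside the module are named with_ebs and without_ebs:
# modules/launch_config/outputs.tf (hypothetical resource names)
output "name" {
  # concat() merges the two resources' name lists (one list is always
  # empty because of the counts), and element(..., 0) picks whichever
  # launch configuration actually exists.
  value = "${element(concat(aws_launch_configuration.with_ebs.*.name, aws_launch_configuration.without_ebs.*.name), 0)}"
}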
It seems you don't actually need a condition in the aws_launch_configuration resource here.
If you are using the AWS ECS-optimized AMI, which is based on Amazon Linux, it will automatically attach a volume at /dev/xvdcz with a default volume_size of 22 GB.
You can pass a different value (say 50 GB) for the volume size variable if you want to increase or decrease the size of that particular volume, depending on your needs.
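If the size really is all that varies, a minimal sketch (the device name, volume type and default size are assumptions) could be to always declare the block and only parameterise the size:
variable "ebs_volume_size" {
  default = "22"   # assumed default; override with e.g. "50"
}

resource "aws_launch_configuration" "launch_configuration" {
  name_prefix = "foo"
  image_id    = "bar"

  ebs_block_device {
    device_name = "/dev/xvdcz"
    volume_type = "gp2"
    volume_size = "${var.ebs_volume_size}"
  }
}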