Terraform Cloud Run Service URL

I create a Cloud Run service like so:
terraform {
  required_version = ">= 1.1.2"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.1.0"
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "~> 4.2.0"
    }
  }
}

provider "google" {
  project     = "main_project"
  region      = "us-central1"
  credentials = "<my-key-path>"
}

resource "google_cloud_run_service" "default" {
  name     = "cloudrun-srv"
  location = "us-central1"

  template {
    spec {
      containers {
        image = "us-docker.pkg.dev/cloudrun/container/hello"
      }
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }
}
I want to save the value of the service URL that is created (https://default-hml2qtrgfq-uw.a.run.app) in an output variable, something like:
output "cloud_run_instance_url" {
value = google_cloud_run_service.default.url
}
This gives me an error:
terraform plan
╷
│ Error: Unsupported attribute
│
│ on main.tf line 40, in output "cloud_run_instance_url":
│ 40: value = google_cloud_run_service.default.url
│
│ This object has no argument, nested block, or exported attribute named "url".
╵
How do I get this output value and assign it to a variable so that other services, like Cloud Scheduler, can point to it?

If you declare an output for the url resource attribute like:
output "cloud_run_instance_url" {
value = google_cloud_run_service.default.status.0.url
}
then it will be available for resolution (such as for inputs to other modules) at the scope where the module is declared, under the namespace module.<declared module name>.cloud_run_instance_url. For example, if this module is declared in the root module configuration, then it can be resolved at that namespace elsewhere in the root module configuration.
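Other resources in the same configuration can reference that attribute (or, across modules, the output) directly. As a rough sketch of the Cloud Scheduler case, with a made-up job name and schedule for illustration (a private Cloud Run service would additionally need an oidc_token block in the target):

resource "google_cloud_scheduler_job" "ping" {
  name     = "ping-cloudrun-srv" # hypothetical job name
  schedule = "*/5 * * * *"       # hypothetical schedule

  http_target {
    http_method = "GET"
    # the same attribute the output above exposes
    uri = google_cloud_run_service.default.status.0.url
  }
}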

Related

Why am I getting 'Unsupported argument errors' in my main.tf file?

I have a main.tf file with the following code block:
module "login_service" {
source = "/path/to/module"
name = var.name
image = "python:${var.image_version}"
port = var.port
command = var.command
}
# Other stuff below
I've defined a variables.tf file as follows:
variable "name" {
type = string
default = "login-service"
description = "Name of the login service module"
}
variable "command" {
type = list(string)
default = ["python", "-m", "LoginService"]
description = "Command to run the LoginService module"
}
variable "port" {
type = number
default = 8000
description = "Port number used by the LoginService module"
}
variable "image" {
type = string
default = "python:3.10-alpine"
description = "Image used to run the LoginService module"
}
Unfortunately, I keep getting this error when running terraform plan.
Error: Unsupported argument
│
│ on main.tf line 4, in module "login_service":
│ 4: name = var.name
│
│ An argument named "name" is not expected here.
This error repeats for the other variables. I've done a bit of research, read the Terraform documentation on variables, and read other Stack Overflow answers, but I haven't really found a good answer to the problem.
Any help is appreciated.
A Terraform module block is only for referring to a Terraform module; it doesn't support any other kind of module. Terraform modules are a means for reusing Terraform declarations across many configurations, but a module can only accept the input variables it declares itself.
Therefore, in order for this to be valid, you must have at least one .tf file in /path/to/module that declares the variables that you are trying to pass into the module.
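For instance, a minimal sketch of a /path/to/module/variables.tf that would make the module block above valid (types inferred from the values being passed in):

variable "name" {
  type = string
}

variable "image" {
  type = string
}

variable "port" {
  type = number
}

variable "command" {
  type = list(string)
}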
From what you've said it seems like there's a missing step in your design: you are trying to declare something in Kubernetes using Terraform, but the configuration you've shown here doesn't include anything which would tell Terraform to interact with Kubernetes.
A typical way to manage Kubernetes objects with Terraform is using the hashicorp/kubernetes provider. A Terraform configuration using that provider would include a declaration of the dependency on that provider, the configuration for that provider, and at least one resource block declaring something that should exist in your Kubernetes cluster:
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}

provider "kubernetes" {
  host = "https://example.com/" # URL of your Kubernetes API
  # ...
}

# For example only, a kubernetes_deployment resource
# that declares one Kubernetes deployment.
# In practice you can use any resource type from this
# provider, depending on what you want to declare.
resource "kubernetes_deployment" "example" {
  metadata {
    name = "terraform-example"
    labels = {
      test = "MyExampleApp"
    }
  }

  spec {
    replicas = 3

    selector {
      match_labels = {
        test = "MyExampleApp"
      }
    }

    template {
      metadata {
        labels = {
          test = "MyExampleApp"
        }
      }

      spec {
        container {
          image = "nginx:1.21.6"
          name  = "example"

          resources {
            limits = {
              cpu    = "0.5"
              memory = "512Mi"
            }
            requests = {
              cpu    = "250m"
              memory = "50Mi"
            }
          }

          liveness_probe {
            http_get {
              path = "/"
              port = 80

              http_header {
                name  = "X-Custom-Header"
                value = "Awesome"
              }
            }

            initial_delay_seconds = 3
            period_seconds        = 3
          }
        }
      }
    }
  }
}
Although you can arrange resources into separate modules in Terraform if you wish, I would suggest focusing on learning to directly describe resources in Terraform first; then, once you are confident with that, you can learn about techniques for code reuse using Terraform modules.

Error: Inconsistent dependency lock file when extract resource into a module

I am new to Terraform, and as I extracted one of the resources into a module I got this:
Error: Inconsistent dependency lock file
│
│ The following dependency selections recorded in the lock file are inconsistent with the current
│ configuration:
│ - provider registry.terraform.io/hashicorp/heroku: required by this configuration but no version is selected
│
│ To update the locked dependency selections to match a changed configuration, run:
│ terraform init -upgrade
Here is what I did.
First I had this:
provider "heroku" {}
resource "heroku_app" "example" {
name = "learn-terraform-heroku-ob"
region = "us"
}
resource "heroku_addon" "redis" {
app = heroku_app.example.id
plan = "rediscloud:30"
}
After that, terraform init ran without error and terraform plan was successful.
Then I extracted the redis resource declaration into a module:
provider "heroku" {}
resource "heroku_app" "example" {
name = "learn-terraform-heroku-ob"
region = "us"
}
module "key-value-store" {
source = "./modules/key-value-store"
app = heroku_app.example.id
plan = "30"
}
And the content of modules/key-value-store/main.tf is this:
terraform {
  required_providers {
    mycloud = {
      source  = "heroku/heroku"
      version = "~> 4.6"
    }
  }
}

resource "heroku_addon" "redis" {
  app  = var.app
  plan = "rediscloud:${var.plan}"
}
terraform get went well, but terraform plan showed me the above error!
For this code to work, you need the required_providers blocks in both the root and child modules. So, the following needs to happen:
1. Add the required_providers block to the root module (this is what you have already).
2. Add the required_providers block to the child module and name it properly (currently you have set it to mycloud, and the provider "heroku" {} block is missing).
The code that needs to be added in the root module is:
terraform {
  required_providers {
    heroku = {
      source  = "heroku/heroku"
      version = "~> 4.6"
    }
  }
}

provider "heroku" {}

resource "heroku_app" "example" {
  name   = "learn-terraform-heroku-ob"
  region = "us"
}

module "key-value-store" {
  source = "./modules/key-value-store"
  app    = heroku_app.example.id
  plan   = "30"
}
In the child module (i.e., ./modules/key-value-store) the following needs to be present:
terraform {
  required_providers {
    heroku = { ### not mycloud
      source  = "heroku/heroku"
      version = "~> 4.6"
    }
  }
}

provider "heroku" {} ### this was missing as well

resource "heroku_addon" "redis" {
  app  = var.app
  plan = "rediscloud:${var.plan}"
}
This stopped working when the second resource was moved into the module because Heroku is not a HashiCorp-maintained provider: a module that does not declare the provider in its own required_providers block is assumed to depend on hashicorp/<name>, which does not exist for heroku. For providers outside the hashicorp namespace (e.g., those marked as verified), the corresponding required_providers and provider "<name>" {} blocks have to be defined in each module that uses them. Also, make sure to remove the .terraform directory and re-run terraform init.

Terraform Snowflake module creation impossible - Error: Failed to query available provider packages

Hi I have this working code below.
terraform {
  required_providers {
    snowflake = {
      source  = "chanzuckerberg/snowflake"
      version = "0.25.36"
    }
  }
}

provider "snowflake" {
  alias       = "sys_admin"
  role        = "SYSADMIN"
  username    = "tf-snow"
  private_key = var.SNOWFLAKE_PRIVATE_KEY
  region      = "ap-southeast-2"
  account     = "KY88548"
}

resource "snowflake_warehouse" "star_warehouse" {
  provider       = snowflake.sys_admin
  name           = "STAR_WAREHOUSE"
  warehouse_size = "XSmall"
  auto_suspend   = 60
}
Note that I have to provide an argument, provider = snowflake.sys_admin, or it throws an error.
Now, when I am making a module in a subfolder, I have this code in the subfolder:
variable "sf_provider" {
type = string
}
resource "snowflake_warehouse" "star_warehouse" {
provider = var.sf_provider
name = "STAR_WAREHOUSE"
warehouse_size = "XSmall"
auto_suspend = 60
}
The code in my root directory looks like this
terraform {
  required_providers {
    snowflake = {
      source  = "chanzuckerberg/snowflake"
      version = "0.25.36"
    }
  }
}

provider "snowflake" {
  username    = "tf-snow"
  account     = "KY88548"
  region      = "ap-southeast-2"
  alias       = "sys_admin"
  role        = "SYSADMIN"
  private_key = var.SNOWFLAKE_PRIVATE_KEY
}

module "snowflake_resources" {
  source      = "./snowflake_resources"
  sf_provider = snowflake.sys_admin
}
This now gives me the following error.
Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ hashicorp/snowflake: provider registry registry.terraform.io does not have
│ a provider named registry.terraform.io/hashicorp/snowflake
│
│ Did you intend to use chanzuckerberg/snowflake? If so, you must specify
│ that source address in each module which requires that provider. To see
│ which modules are currently depending on hashicorp/snowflake, run the
│ following command:
│ terraform providers
Is there a way I can create these resources without specifying the provider argument, or at least have the option to pass it as an argument to my modules?
From the documentation:
Each Terraform module must declare which providers it requires, so that Terraform can install and use them. Provider requirements are declared in a required_providers block.
So, you need to ensure that required_providers is declared in each module.
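Also, provider configurations cannot be passed around as string variables like sf_provider; the supported mechanism is the providers meta-argument on the module block. A minimal sketch, assuming the child module in ./snowflake_resources drops the sf_provider variable and the explicit provider argument:

# ./snowflake_resources/main.tf
terraform {
  required_providers {
    snowflake = {
      source  = "chanzuckerberg/snowflake"
      version = "0.25.36"
    }
  }
}

resource "snowflake_warehouse" "star_warehouse" {
  # no provider argument needed; the root module passes one in
  name           = "STAR_WAREHOUSE"
  warehouse_size = "XSmall"
  auto_suspend   = 60
}

# root module
module "snowflake_resources" {
  source = "./snowflake_resources"

  providers = {
    snowflake = snowflake.sys_admin
  }
}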

Terraform aks module - get cluster name and resource group name via remote state

Hi, I am trying to follow this official guide to manage AKS resources. There, terraform_remote_state is used to get the resource_group_name and kubernetes_cluster_name.
data "terraform_remote_state" "aks" {
backend = "local"
config = {
path = "/path/to/base/project/terraform.tfstate"
}
}
# Retrieve AKS cluster information
provider "azurerm" {
features {}
}
data "azurerm_kubernetes_cluster" "cluster" {
name = data.terraform_remote_state.aks.outputs.kubernetes_cluster_name
resource_group_name = data.terraform_remote_state.aks.outputs.resource_group_name
}
I have created the initial AKS cluster with the aks module. Looking at its outputs in the documentation, it doesn't export the resource group name or cluster name.
Now I wonder how I can get the information. I have tried the below in the base project.
module "aks" {
...
}
output "resource_group_name" {
value = module.aks.resource_group_name
}
output "kubernetes_cluster_name" {
value = module.aks.cluster_name
}
But I get errors when running terraform plan:
Error: Unsupported attribute
│
│ on main.tf line 59, in output "resource_group_name":
│ 59: value = module.aks.resource_group_name
│ ├────────────────
│ │ module.aks is a object, known only after apply
│
│ This object does not have an attribute named "resource_group_name".
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 63, in output "kubernetes_cluster_name":
│ 63: value = module.aks.cluster_name
│ ├────────────────
│ │ module.aks is a object, known only after apply
│
│ This object does not have an attribute named "cluster_name".
Those are listed under inputs for that module, though. Now I don't know how to get those values from terraform_remote_state.
As the module itself doesn't output the cluster name and resource group, we have to declare outputs for them first, and then reference those outputs when deploying or via remote state.
So we have to add two outputs to output.tf of the aks module after running terraform init:
output "kubernetes_cluster_name" {
value = azurerm_kubernetes_cluster.main.name
}
output "resource_group_name" {
value = azurerm_kubernetes_cluster.main.resource_group_name
}
Then reference those outputs in main.tf after defining the modules (i.e., network and aks); you can see your Kubernetes cluster name in the plan as well as after applying it:
output "kuberneteclustername" {
value = module.aks.kubernetes_cluster_name
}
output "resourcegroupname" {
value = module.aks.resource_group_name
}
Now let's test it from the remote state:
data "terraform_remote_state" "aks" {
backend = "local"
config = {
path = "path/to/terraform/aksmodule/terraform.tfstate"
}
}
# Retrieve AKS cluster information
provider "azurerm" {
features {}
}
data "azurerm_kubernetes_cluster" "cluster" {
name = data.terraform_remote_state.aks.outputs.kuberneteclustername
resource_group_name = data.terraform_remote_state.aks.outputs.resourcegroupname
}
output "aks" {
value = data.azurerm_kubernetes_cluster.cluster.name
}
output "rg" {
value = data.azurerm_kubernetes_cluster.cluster.resource_group_name
}

Terraform tries to pull an undefined provider

Every time I perform terraform init, Terraform tries to pull from the registry a rather strange provider which does not exist.
Error:
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/databricks: provider registry registry.terraform.io does not have a provider named
│ registry.terraform.io/hashicorp/databricks
│
│ Did you intend to use databrickslabs/databricks? If so, you must specify that source address in each module which requires that provider. To see which modules are currently depending on
│ hashicorp/databricks, run the following command:
│ terraform providers
╵
This provider name is a quite strange combination of two providers.
My tf file:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.65"
    }
    databrick = {
      source  = "databrickslabs/databricks"
      version = "0.3.7"
    }
  }

  required_version = ">= 0.14.9"
}

provider "azurerm" {
  features {}
}

provider "databrick" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "TerraformResourceGroup"
  location = "westeurope"
}

resource "azurerm_databricks_workspace" "databrick" {
  name                = "terraform-databrick"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  sku                 = "trial"

  tags = {
    "env"         = "rnd"
    "provisoning" = "tf"
  }
}

data "databricks_node_type" "smallest" {
  local_disk = true
}

data "databricks_spark_version" "latest_lts" {
  long_term_support = true
}

resource "databricks_cluster" "cluster" {
  cluster_name            = "terraform-cluster"
  spark_version           = data.databricks_spark_version.latest_lts.id
  node_type_id            = data.databricks_node_type.smallest.id
  autotermination_minutes = 20

  spark_conf = {
    "spark.databricks.cluster.profile" : "singleNode"
    "spark.master" : "local[*]"
  }

  custom_tags = {
    "type"        = "SingleNode"
    "env"         = "rnd"
    "provisoning" = "tf"
  }
}
I was looking for some kind of 'verbose' flag so I could find out why it is trying to pull this kind of provider and where it is coming from.
Sadly, I was only able to find out that this issue is coming from the 'data' blocks and the part of my file below them.
All my knowledge is based on these docs (Databricks cluster) and this learning material (Terraform Azure).
Thank you in advance for all of your help.
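The error text itself points at the likely cause: the databricks_* resources and data sources belong to a provider whose local name is databricks, while required_providers only declares the local name databrick, so Terraform falls back to the default source hashicorp/databricks for them. A rough sketch of the fix, under that reading:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.65"
    }
    # local name must match the databricks_* resource prefix
    databricks = {
      source  = "databrickslabs/databricks"
      version = "0.3.7"
    }
  }

  required_version = ">= 0.14.9"
}

provider "databricks" {
  # note: no features {} block here; that block is azurerm-specific
}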
