Module does not support count - terraform

I am trying to create dynamic resources through a module using count.
However, the usage of count gives me the following error:
Error: Module does not support count
on server_gx_custom_dynamic.tf line 8, in module "server_gx_custom_dynamic":
8: count = length(var.units_wp3)
Module "server_gx_custom_dynamic" cannot be used with count because
it contains a nested provider configuration for "grafana", at
modules/server_gx/custom/custom.tf:11,10-19.
This module can be made compatible with count by changing it to receive all of
its provider configurations from the calling module, by using the "providers"
argument in the calling module block.
I did try adding the provider configuration in the child module as well, but to no avail: same error (of course, I may not have done it right). Where can I start with this issue?
Without the introduction of count, my module works perfectly.
`server_gx_custom_dynamic.tf`
module "server_gx_custom_dynamic" {
count = len(var.units_wp3)
source = "./modules/server_gx/custom"
...
}
`./modules/server_gx/custom/custom.tf`
resource "grafana_monitor" "custom_monitor" {
name = ...
...
}

This error message appears because the module you're calling as "server_gx_custom_dynamic" includes a local provider configuration of its own, rather than just inheriting provider configurations from the calling module.
The error message includes a reference to the location of that provider configuration, which appears as modules/server_gx/custom/custom.tf:11,10-19 here. At that location I expect you'll find a provider "grafana" block.
To make this configuration count-compatible you'll need to remove that provider block. If your root module already contains a provider "grafana" block then just removing that nested provider configuration might be sufficient to make it work, because by default Terraform will make a default (unaliased) provider configuration from the root module available to all child modules.
In more complicated situations where you have multiple configurations for the same provider, you'll need to use "additional" (aliased) provider configurations. Those are represented as provider blocks containing the special alias argument. Additional provider configurations aren't automatically inherited the way default provider configurations are, so you'd need to use an explicit providers argument inside the module block in that case.
There's more information on this mechanism in the documentation page Providers Within Modules. In particular, your module is currently in the situation described in the section Legacy Shared Modules with Provider Configurations, which includes some additional context about the problem along with some examples on how to address it.
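As a rough sketch of the default-inheritance case (the grafana/grafana source address and the exact provider arguments are assumptions here, since they depend on how you actually configure the provider):

# Root module (e.g. main.tf): the only provider "grafana" block lives here.
terraform {
  required_providers {
    grafana = {
      source = "grafana/grafana"
    }
  }
}

provider "grafana" {
  # url, auth, etc.
}

# server_gx_custom_dynamic.tf: with no provider block inside
# ./modules/server_gx/custom, the default configuration above is inherited.
module "server_gx_custom_dynamic" {
  count  = length(var.units_wp3)
  source = "./modules/server_gx/custom"
}

# If you instead need an additional (aliased) configuration, pass it explicitly:
#
#   provider "grafana" {
#     alias = "secondary"
#     # ...
#   }
#
#   module "server_gx_custom_dynamic" {
#     count  = length(var.units_wp3)
#     source = "./modules/server_gx/custom"
#     providers = {
#       grafana = grafana.secondary
#     }
#   }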

In the module that contains resource "grafana_monitor" "custom_monitor" there should be a provider.tf file, or at least a provider block of code. Try removing that completely from the module that contains the resource block and moving it up into the configuration that calls module "server_gx_custom_dynamic". See how that works.
Alternatively, you could try introducing count inside the child module that contains resource "grafana_monitor" "custom_monitor". I am not sure this will work as intended if the provider block is still in that child module, but it could be something to try out.
When you say you added the provider configuration to the child module, are you referring to the directory that contains ./modules/server_gx/custom/custom.tf?

Related

What's the common pattern for sensitive attributes of Terraform resource that are only required on creation?

Context: I'm working on a new TF Provider using SDKv2.
I'm adding a new data-plane resource which has a very weird API. Namely, there are some sensitive attributes (specific to this resource, so they can't be set under the provider block -- think of DataDog / Slack API secrets that this resource needs to interact with under the hood) that I need to pass on creation but that are not necessary later on (not even for update operations, for example). My minimal code sample:
resource "foo" "bar" {
name = "abc"
sensitive_creds = {
"datadog_api_secret" = "abc..."
// might pass "slack_api_secret" instead
}
...
}
How can I implement this in Terraform to avoid state drift, etc.?
So far I can see 3 options:
1) Make the user pass it first and don't save "sensitive_creds" to the TF state. Make the user set it to sensitive_creds = {} to avoid a state drift on the next terraform plan run.
2) Make the user pass it first and don't save "sensitive_creds" to the TF state. Make the user add ignore_changes = [sensitive_creds] to their Terraform configuration.
3) Save "sensitive_creds" to the TF state and live with it, since users are likely to encrypt the TF state anyway.
The most typical compromise is for the provider to still save the user's specified value to the state during create and then to leave it unchanged in the "read" operation that would normally update the state to match the remote system.
The result of this compromise is that Terraform can still detect when the user has intentionally changed the secret value in the configuration, but Terraform will not be able to detect changes made to the value outside of Terraform.
This is essentially your option 3. The Terraform provider protocol requires that the values saved to the state after create exactly match anything the user has specified in the configuration, so your first two options would violate the expected protocol and thus be declared invalid by Terraform Core.
Since you are using SDKv2, you can potentially "get away with it": Terraform Core permits that older SDK to violate some of the rules, as a pragmatic way to deal with the fact that SDKv2 was designed for older versions of Terraform and therefore doesn't implement the type system correctly. However, Terraform Core will still emit warnings into its own logs noting that your provider produced an invalid result, and there may be error messages raised against downstream resources if their configuration is derived from the value of your sensitive_creds argument.

Terraform Problem to define cyrilgdn/postgresql provider properly

I have the exact same problem as described in "Terraform tries to load old defunct provider", and the solution posted there does not work for me.
The problem is that I define the following in the Terraform config:
required_providers {
  postgresql = {
    source  = "cyrilgdn/postgresql"
    version = ">=1.13.0"
  }
}
But the terraform init process always tries to download hashicorp/postgresql and cannot find it in the end.
My current Terraform version is:
Terraform v1.0.6 on windows_amd64
I did try a lot and played around with the resource parameter "provider" to explicitly set the provider for all resources, but even with that I did not find a way.
Can anybody help here again, or post a working example for this provider?
I got the solution! The problem I had was my folder structure. I had a specific folder structure like:
environments, like dev/int/prod, where I had a config.tf with the required providers.
resources, where I use the resources I want to add; what I missed there was a copy of the config.tf file.
So this means I need a config.tf file in every subfolder that constitutes a module.
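For illustration, a minimal config.tf of the kind described (reusing the provider requirement shown in the question) that would need to exist in each such subfolder could look like this:

# config.tf -- present in every folder that Terraform treats as a module
terraform {
  required_providers {
    postgresql = {
      source  = "cyrilgdn/postgresql"
      version = ">=1.13.0"
    }
  }
}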

Terraform using both required_providers and provider blocks

I am going through a Terraform guide, where the author spins up a Docker setup using the docker_image and docker_container resources.
In the sample code the main.tf file includes both the required_providers and the provider blocks, as follows:
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}
provider "docker" {}
Why are they both needed?
Shouldn't Terraform be able to understand the need for a Docker provider from this line alone?
provider "docker" {}
When considering Terraform providers there are two related notions to think about: the provider itself, and a configuration for the provider.
As an analogy, the provider kreuzwerker/docker here is a bit like a class you're importing from another library, giving it the local name docker. I'll use a pseudo-JavaScript syntax just to make this a bit more concrete:
var docker = require("kreuzwerker/docker");
However, all we have here so far is the class itself. In order to use it we need to create an instance of it, which in Terraform's vernacular is called a "configuration". Again, using pseudo-JavaScript syntax:
var dockerInstance = new docker({});
Terraform's syntax here is decidedly less explicit than this pseudo-JavaScript form, but we can make the distinction more visible by adding a second instance of the provider to the configuration, which in Terraform we do by assigning it a configuration "alias":
provider "docker" {
alias = "example"
host = "ssh://user#remote-host:22"
}
This is like creating a second instance of the provider "class" in our pseudo-JavaScript example:
var dockerInstance2 = new docker({
  host: 'ssh://user@remote-host:22'
});
Another variant that shows the distinction is when a module inherits a provider configuration from its calling module. In that case, it's as if the calling module were implicitly passing the provider configuration (instance) into the module, but the child module still needs to import the provider "class" so Terraform can see that we're talking about kreuzwerker/docker as opposed to any other provider that might have the name "docker".
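As a hedged sketch of that situation (the module path and alias name below are made up for illustration): the child module declares which provider it needs via required_providers, while the calling module creates the configuration and passes it in with the providers argument:

# ./modules/app/versions.tf -- the child module "imports" the provider (the class)
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

# Root module -- creates a configuration (the instance) and passes it down
provider "docker" {
  alias = "remote"
  host  = "ssh://user@remote-host:22"
}

module "app" {
  source = "./modules/app"

  providers = {
    docker = docker.remote
  }
}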
Terraform has some automatic "magic" behaviors that try to make simpler cases implicit, but unfortunately that comes at the cost of making it harder to understand what's going on when things get more complicated. Providers and provider configurations are a particularly hard example of this, because providers have been in the Terraform language for a long time and the current incarnation of the language is trying to stay broadly backward-compatible with the simple uses while still allowing for the newer features like having third-party providers installable from multiple namespaces.
The particularly confusing assumption here is that if you don't declare a particular provider, Terraform will create an implicit required_providers declaration assuming that you mean a provider in the hashicorp/ namespace, which makes it seem as though required_providers is only for third-party providers. In fact, that is largely a backward-compatibility mechanism, so I'd suggest always writing out the required_providers entries, even for providers in the hashicorp/ namespace, so that less-experienced readers don't need to know about this special backward-compatibility behavior. In your case, though, the provider you're using is in a third-party namespace anyway, so the required_providers entry is mandatory.
The source needs to be provided since this isn't one of the "official" HashiCorp providers. There could be multiple providers with the name "docker" in the provider registry, so providing the source is needed in order to tell Terraform exactly which provider to download.

How to re-use a module several times in the same `main.tf`?

I created a module for a re-usable piece of infrastructure. The module represents a project, so each time we want to create a new project and its related infrastructure items, we can use this module:
module "project1" {
source = ".modules/project_module"
project_id = "project1"
...
}
module "project2" {
source = ".modules/project_module"
project_id = "project2"
...
}
The module uses the google provider to create resources on GCP.
Unfortunately, this didn't work as hoped. First, each new project requires invoking terraform init, and second, it is impossible to remove a project: when removing a module from the main.tf file, Terraform complains that without the Google provider it cannot destroy the resources. For example:
module.project1.google_storage_bucket_iam_member.some-bucket:
configuration for module.project1.provider.google is not present; a provider configuration block is required for all operations
Is there a way to use the same module several times in the same main.tf? I realize that ideally I should write a provider, but I would like to avoid that for now.
It turned out there was something inconsistent in the state. In the end, after re-creating the project from scratch while keeping the Google provider outside of the module, it worked.
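A minimal sketch of that working layout (the provider arguments here are illustrative) keeps the single google provider configuration in the root module and leaves the project module with resources only:

# main.tf (root module) -- the only provider "google" configuration
provider "google" {
  project = "admin-project"   # illustrative
  region  = "europe-west1"    # illustrative
}

module "project1" {
  source     = "./modules/project_module"
  project_id = "project1"
}

module "project2" {
  source     = "./modules/project_module"
  project_id = "project2"
}

# ./modules/project_module contains only resources and variables, no
# provider "google" block, so a module block can later be removed and
# Terraform still has a provider configuration to destroy its resources with.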

How to use multiple Terraform Providers sequentially

How can I get Terraform 0.10.1 to support two different providers without having to run 'terraform init' every time for each provider?
I am trying to use Terraform to
1) Provision an API server with the 'DigitalOcean' provider
2) Subsequently use the 'Docker' provider to spin up my containers
Any suggestions? Do I need to write an orchestrating script that wraps Terraform?
Terraform's current design struggles with creating "multi-layer" architectures in a single configuration, due to the need to pass dynamic settings from one provider to another:
resource "digitalocean_droplet" "example" {
# (settings for a machine running docker)
}
provider "docker" {
host = "tcp://${digitalocean_droplet.example.ipv4_address_private}:2376/"
}
As you saw in the documentation, passing dynamic values into provider configuration doesn't fully work. It does actually partially work if you use it with care, so one way to get this done is to use a config like the above and then solve the "chicken-and-egg" problem by forcing Terraform to create the droplet first:
$ terraform plan -out=tfplan -target=digitalocean_droplet.example
The above will create a plan that only deals with the droplet and any of its dependencies, ignoring the docker resources. Once the Docker droplet is up and running, you can then re-run Terraform as normal to complete the setup, which should then work as expected because the Droplet's ipv4_address_private attribute will then be known. As long as the droplet is never replaced, Terraform can be used as normal after this.
Using -target is fiddly, and so the current recommendation is to split such systems up into multiple configurations, with one for each conceptual "layer". This does, however, require initializing two separate working directories, which you indicated in your question that you didn't want to do. This -target trick allows you to get it done within a single configuration, at the expense of an unconventional workflow to get it initially bootstrapped.
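If you do go the multi-configuration route, one common pattern (sketched here in modern Terraform syntax, with illustrative names and paths) is to export the droplet's address from the first layer and read it in the second layer, for example through the terraform_remote_state data source:

# Layer 1 (DigitalOcean configuration): expose what the Docker layer needs
output "docker_host_ip" {
  value = digitalocean_droplet.example.ipv4_address_private
}

# Layer 2 (Docker configuration): read the first layer's state
data "terraform_remote_state" "infra" {
  backend = "local"                     # illustrative; any backend works
  config = {
    path = "../infra/terraform.tfstate" # illustrative path
  }
}

provider "docker" {
  host = "tcp://${data.terraform_remote_state.infra.outputs.docker_host_ip}:2376/"
}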
Maybe you can use a provider instance within your resources/modules to set up various resources with various providers.
https://www.terraform.io/docs/configuration/providers.html#multiple-provider-instances
The doc talks about multiple instances of the same provider, but I believe the same should be doable with distinct providers as well.
A little bit late...
Well, I got the same problem. My workaround is to create modules.
First you need a module for your Docker provider with an ip variable:
# File: ./docker/main.tf
variable "ip" {}

provider "docker" {
  host = "tcp://${var.ip}:2375/"
}

resource "docker_container" "www" {
  provider = "docker"
  name     = "www"
  image    = "nginx:latest" # docker_container requires an image; value here is illustrative
}
Next, load that module in your root configuration:
# File: ./main.tf
module "docker01" {
  source = "./docker"
  ip     = "192.169.10.12"
}

module "docker02" {
  source = "./docker"
  ip     = "192.169.10.12"
}
True, you will create the same container on every node, but in my case that's what I wanted. I haven't yet found a way to give the hosts individual configurations. Maybe nested modules would work, but that didn't work in my first tries.
