Using multiple providers with one resource in Terraform

I'm new to Terraform and I have an issue I can't seem to find a solution to.
I am using the Oneview provider to connect to two Oneview instances. On each one, I am configuring an NTP server (which is the Oneview IP; this is for testing). My (currently functional) provider code looks like this:
terraform {
  required_providers {
    oneview = {
      source  = "HewlettPackard/oneview"
      version = "6.5.0-13"
    }
  }
}

provider "oneview" { # These can be replaced with the variables in the variables.tf file
  ov_username   = "administrator"
  ov_password   = "mypassword"
  ov_endpoint   = "https://10.50.0.10/"
  ov_sslverify  = false
  ov_apiversion = 2400
  ov_domain     = "local"
  ov_ifmatch    = "*"
}

provider "oneview" {
  ov_username   = "administrator"
  ov_password   = "mypassword"
  ov_endpoint   = "https://10.50.0.50/"
  ov_sslverify  = false
  ov_apiversion = 3200
  ov_domain     = "local"
  ov_ifmatch    = "*"
  alias         = "houston2"
}
and I have the resources in another file:
data "oneview_appliance_time_and_locale" "timelocale" {
}
output "locale_value" {
value = data.oneview_appliance_time_and_locale.timelocale.locale
}
resource "oneview_appliance_time_and_locale" "timelocale" {
locale = "en_US.UTF-8"
timezone = "UTC"
ntp_servers = ["10.50.0.10"]
}
data "oneview_appliance_time_and_locale" "timelocale2" {
}
output "locale_value2" {
value = data.oneview_appliance_time_and_locale.timelocale.locale
}
resource "oneview_appliance_time_and_locale" "timelocale2" {
locale = "en_US.UTF-8"
timezone = "UTC"
ntp_servers = ["10.50.0.50"]
provider = oneview.houston2
}
What I'd like to do is set it up in a way that I can do some sort of "for each provider, run the resource with the correct ntp_server variable", instead of writing a resource for every provider. So for each loop of the resource, it would use the right provider and also grab the right variable for the ntp server.
From what I've read, Terraform doesn't really use traditional for_each statements in a way that I'm used to, and I'm kind of stumped as to how to accomplish this. Does anyone have any suggestions?
Thank you very much for all your help!

resource "oneview_appliance_time_and_locale" "timelocale2" {
for_each = var.provider_list // List contain provider and its alias
locale = "en_US.UTF-8"
timezone = "UTC"
ntp_servers = ["10.50.0.50"]
provider = each.alias
}
Can we try this way, loop through the provider list.. Terraform is supporting the same.
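A minimal sketch of that module-based workaround (the module path, variable name, and the houston1/houston2 labels are illustrative assumptions, not part of the original question):

# modules/ntp/main.tf
terraform {
  required_providers {
    oneview = {
      source = "HewlettPackard/oneview"
    }
  }
}

variable "ntp_servers" {
  type = list(string)
}

resource "oneview_appliance_time_and_locale" "timelocale" {
  locale      = "en_US.UTF-8"
  timezone    = "UTC"
  ntp_servers = var.ntp_servers
}

# root module: one module instance per appliance/provider alias
module "houston1" {
  source      = "./modules/ntp"
  ntp_servers = ["10.50.0.10"]
  providers = {
    oneview = oneview # default provider configuration
  }
}

module "houston2" {
  source      = "./modules/ntp"
  ntp_servers = ["10.50.0.50"]
  providers = {
    oneview = oneview.houston2 # aliased provider configuration
  }
}

Each module instance then talks to whichever appliance its provider configuration points at, which gives the "one block of code per appliance" effect without repeating the resource body.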

Related

3.13.0 New Relic Provider Crashing on Terraform

I am running into an issue with a Terraform provider: the New Relic plugin keeps crashing for some reason and I don't know why. I'm trying to build a simple alerting script in Terraform to create an alerting policy + conditions in the New Relic UI. Here is the code I'm trying to run:
terraform {
  required_version = "~> 1.3.7"
  required_providers {
    newrelic = {
      source  = "newrelic/newrelic"
      version = "~> 3.13"
    }
  }
}

locals {
  splitList    = [for url in var.urlList : split(".", url)[1]]
  finishedList = [for split in local.splitList : join("-", [split, "Cert Check"])]
}

resource "newrelic_alert_policy" "certChecks" {
  name                = "SSL Cert Check Expirations"
  incident_preference = "PER_POLICY"
}

resource "newrelic_alert_channel" "SSL_Alert" {
  name = "SSL Expiration Alert"
  type = "email"
  config {
    recipients              = "foo.com"
    include_json_attachment = "true"
  }
}

resource "newrelic_synthetics_alert_condition" "foo" {
  policy_id  = newrelic_alert_policy.certChecks.id
  count      = length(var.urlList)
  name       = "SSL Expiration"
  monitor_id = local.finishedList[count.index]
}

resource "newrelic_synthetics_cert_check_monitor" "monitor" {
  count                  = length(var.urlList)
  name                   = local.finishedList[count.index]
  domain                 = var.urlList[count.index]
  locations_public       = ["US_EAST_1"]
  certificate_expiration = "350"
  period                 = "EVERY_DAY"
  status                 = "ENABLED"
}
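For reference, with a hypothetical urlList (not the asker's actual values), the two locals above evaluate like this:

# var.urlList        = ["www.foo.com", "status.bar.org"]   (hypothetical input)
# local.splitList    = ["foo", "bar"]
# local.finishedList = ["foo-Cert Check", "bar-Cert Check"]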
It plans but won't apply; it errors out right before. Here is my error message:
Any help would be useful, thank you!
Honestly, not much has been tried; I looked for more information on the Terraform community forum, but that search pulled up no results. The only thing I found was changing the location the test runs from, but I was already using the location needed.

How do I pass a data source value to a .tfvars file value?

I'm trying to create a secret on GCP's Secret Manager.
The secret value is coming from Vault (HCP Cloud).
How can I pass a value of the secret if I'm using a .tfvars file for the values?
Creating the secret without .tfvars works. Suggestions other than using a data source are welcome as well. I also saw that referencing locals inside tfvars isn't possible.
vault.tf:
provider "vault" {
address = "https://testing-vault-public-vault-numbers.numbers.z1.hashicorp.cloud:8200"
token = "someToken"
}
data "vault_generic_secret" "secrets" {
path = "secrets/terraform/cloudcomposer/kafka/"
}
main.tf:
resource "google_secret_manager_secret" "connections" {
provider = google-beta
count = length(var.connections)
secret_id = "${var.secret_manager_prefix}-${var.connections[count.index].name}"
replication {
automatic = true
}
}
resource "google_secret_manager_secret_version" "connections-version" {
count = length(var.connections)
secret = google_secret_manager_secret.connections[count.index].id
secret_data = var.connections[count.index].uri
}
dev.tfvars:
image_version         = "composer-2-airflow-2.1.4"
env_size              = "LARGE"
env_name              = "development"
region                = "us-central1"
network               = "development-main"
subnetwork            = "development-subnet1"
secret_manager_prefix = "test"
connections = [
  { name = "postgres", uri = "postgresql://postgres_user:XXXXXXXXXXXX#1.1.1.1:5432/" }, ## This one works
  { name = "kafka", uri = "${data.vault_generic_secret.secrets.data["kafka_dev_password"]}" }
]
Getting:
Error: Invalid expression
on ./tfvars/dev.tfvars line 39:
Expected the start of an expression, but found an invalid expression token.
Thanks in advance.
Values in tfvars files have to be static, i.e., they cannot use any kind of dynamic assignment such as referencing a data source. In that case, using local variables [1] is a viable solution:
locals {
  connections = [
    {
      name = "kafka"
      uri  = data.vault_generic_secret.secrets.data["kafka_dev_password"]
    }
  ]
}
Then, use it in the resources where you need it:
resource "google_secret_manager_secret" "connections" {
provider = google-beta
count = length(local.connections)
secret_id = "${var.secret_manager_prefix}-${local.connections[count.index].name}"
replication {
automatic = true
}
}
resource "google_secret_manager_secret_version" "connections-version" {
count = length(local.connections)
secret = google_secret_manager_secret.connections[count.index].id
secret_data = local.connections[count.index].uri
}
[1] https://developer.hashicorp.com/terraform/language/values/locals
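If the static postgres entry should stay in dev.tfvars, one possible variant (my assumption, not part of the original answer) is to merge the static list from tfvars with the dynamic entry:

locals {
  connections = concat(var.connections, [
    {
      name = "kafka"
      uri  = data.vault_generic_secret.secrets.data["kafka_dev_password"]
    }
  ])
}

The resources would then keep iterating over local.connections exactly as shown above.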

Create GCP alerting policy for uptime check using terraform

Is there any way to create a GCP alerting policy for an uptime check using Terraform and filter on the metric.label.check_id value of an already deployed resource?
The examples provided in the Terraform docs only show alerting policies for metrics, not for an uptime check on an already deployed resource, so I'm not sure whether that is even possible with Terraform.
I have figured out a solution which works in my case.
I created the uptime check and the uptime check alert in two separate Terraform modules.
The Terraform uptime check module looks like this:
resource "google_monitoring_uptime_check_config" "uptime-check" {
project = var.project_id
display_name = var.display_name
timeout = "10s"
period = "60s"
http_check {
path = var.path
port = var.port
use_ssl = true
validate_ssl = true
}
monitored_resource {
type = "uptime_url"
labels = {
host = var.hostname,
project_id = var.project_id
}
}
content_matchers {
content = "\"status\":\"UP\""
}
}
Then, in outputs.tf for that module, I have:
output "uptime_check_id" {
value = google_monitoring_uptime_check_config.uptime-check.uptime_check_id
}
Then, in the alerts module, I followed the Terraform docs but modified the example so the code looks like this:
module "medallies-common-alerts" {
source = "./modules/alerts"
project_id = var.project_id
uptime_check_depends_on = [module.uptime-check]
check_id = module.uptime-check.uptime_check_id
}
...
resource "google_monitoring_alert_policy" "alert_policy_uptime_check" {
project = var.project_id
enabled = true
depends_on = [var.uptime_check_depends_on]
....
condition_threshold {
filter = format("metric.type=\"monitoring.googleapis.com/uptime_check/check_passed\" AND metric.label.\"check_id\"=\"%s\" AND resource.type=\"uptime_url\"",var.check_id)
duration = "300s"
comparison = "COMPARISON_GT"
threshold_value = "1"
trigger {
count = 1
}
...
}
Hope it will help someone too.
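For completeness, the alerts module called above would also need to declare the variables it receives; a minimal sketch (the types are my assumptions, they are not shown in the original answer):

# modules/alerts/variables.tf
variable "project_id" {
  type = string
}

variable "check_id" {
  type        = string
  description = "uptime_check_id output of the uptime check module"
}

variable "uptime_check_depends_on" {
  type    = any
  default = []
}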

Default DNS records in every zone managed via terraform (eg. MX records)

I'm looking for a way to manage Cloudflare zones and records with Terraform and create some default records (e.g. MX) in every zone that is managed via Terraform, something like this:
resource "cloudflare_zone" "example_net" {
type = "full"
zone = "example.net"
}
resource "cloudflare_zone" "example_com" {
type = "full"
zone = "example.com"
}
resource "cloudflare_record" "mxrecord"{
for_each=cloudflare_zone.*
name = "${each.value.zone}"
priority = "1"
proxied = "false"
ttl = "1"
type = "MX"
value = "mail.foo.bar"
zone_id = each.value.id
}
Does anyone have a clue for me how to achieve this (and if this is even possible...)?
Thanks a lot!
You could create a module responsible for the zone resource, e.g.:
# modules/cf_zone/main.tf
resource "cloudflare_zone" "cf_zone" {
  type = "full"
  zone = var.zone_name
}

resource "cloudflare_record" "mxrecord" {
  name     = "${cloudflare_zone.cf_zone.name}"
  priority = "1"
  proxied  = "false"
  ttl      = "1"
  type     = "MX"
  value    = "mail.foo.bar"
  zone_id  = "${cloudflare_zone.cf_zone.id}"
}

# main.tf
module "example_net" {
  source    = "./modules/cf_zone"
  zone_name = "example.net"
}

module "example_com" {
  source    = "./modules/cf_zone"
  zone_name = "example.com"
}
This gives you an advantage when creating default resources and settings per zone (DNS entries, security settings, page rules, etc.). It is also a good way to keep all the default values in a single place for review.
You can read more about Terraform modules here.
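As a side note, since Terraform 0.13 modules also accept for_each, so the two module blocks above could be collapsed into one (a sketch reusing the same hypothetical module):

locals {
  zones = ["example.net", "example.com"]
}

module "cf_zones" {
  source    = "./modules/cf_zone"
  for_each  = toset(local.zones)
  zone_name = each.value
}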
This is easy to do with a module, as correctly noted in the other answer, but you don't have to create one yourself: you can use this module.
Then your configuration will look like this:
terraform {
  required_providers {
    cloudflare = {
      source = "cloudflare/cloudflare"
    }
  }
}

variable "cloudflare_api_token" {
  type        = string
  sensitive   = true
  description = "The Cloudflare API token."
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

locals {
  domains = [
    "example.com",
    "example.net"
  ]
  mx = "mail.foo.bar"
}

module "domains" {
  source   = "registry.terraform.io/alex-feel/zone/cloudflare"
  version  = "1.8.0"
  for_each = toset(local.domains)
  zone     = each.value
  records = [
    {
      record_name = "mx_1"
      type        = "MX"
      value       = local.mx
      priority    = 1
    }
  ]
}
You can find an example of using this module that matches your question here.

Terraform Resource does not have attribute for variable

Running Terraform 0.11.7 and getting the following error:
module.frontend_cfg.var.web_acl: Resource 'data.terraform_remote_state.waf' does not have attribute 'waf_nonprod_id' for variable 'data.terraform_remote_state.waf.waf_nonprod_id'
Below is the terraform file:
module "frontend_cfg"
{
source = "../../../../modules/s3_fe/developers"
region = "us-east-1"
dev_shortname = "cfg"
web_acl = "${data.terraform_remote_state.waf.waf_nonprod_id}"
}
data "terraform_remote_state" "waf" {
backend = "local"
config = {
name = "../../../global/waf/terraform.tfstate"
}
}
The file which creates the tfstate file referenced above is below. This file has had no issues building.
resource "aws_waf_web_acl" "waf_fe_nonprod"
{
name = "fe_nonprod_waf"
metric_name = "fenonprodwaf"
default_action
{
type = "ALLOW"
}
}
output waf_nonprod_id
{
value = "${aws_waf_web_acl.waf_fe_nonprod.id}"
}
I will spare you the full CloudFront file; however, the following covers the relevant part:
resource "aws_cloudfront_distribution" "fe_distribution"
{
web_acl_id = "${var.web_acl}"
}
If I put the WAF ID directly into the web_acl variable, it works just fine, so I suspect the issue is something to do with the way I am calling the data source. It appears to match the documentation, though.
Use path instead of name in the terraform_remote_state config:
https://www.terraform.io/docs/backends/types/local.html
data "terraform_remote_state" "waf" {
backend = "local"
config = {
path = "../../../global/waf/terraform.tfstate"
}
}
or
data "terraform_remote_state" "waf" {
backend = "local"
config = {
path = "${path.module}/../../../global/waf/terraform.tfstate"
}
}
I tested it with Terraform versions 0.11.7 and 0.11.14.
If you upgrade Terraform to version 0.12.x, the syntax for referencing remote state outputs has changed.
So change
web_acl = "${data.terraform_remote_state.waf.waf_nonprod_id}"
to
web_acl = data.terraform_remote_state.waf.outputs.waf_nonprod_id
or
web_acl = "${data.terraform_remote_state.waf.outputs.waf_nonprod_id}"
