I am running into an issue with a Terraform provider: the New Relic plugin keeps crashing and I don't know why. I'm trying to build a simple alerting configuration in Terraform that creates an alert policy plus conditions in the New Relic UI. Here is the code I'm trying to run:
terraform {
  required_version = "~> 1.3.7"
  required_providers {
    newrelic = {
      source  = "newrelic/newrelic"
      version = "~> 3.13"
    }
  }
}

locals {
  splitList    = [for url in var.urlList : split(".", url)[1]]
  finishedList = [for split in local.splitList : join("-", [split, "Cert Check"])]
}

resource "newrelic_alert_policy" "certChecks" {
  name                = "SSL Cert Check Expirations"
  incident_preference = "PER_POLICY"
}

resource "newrelic_alert_channel" "SSL_Alert" {
  name = "SSL Expiration Alert"
  type = "email"

  config {
    recipients              = "foo.com"
    include_json_attachment = "true"
  }
}

resource "newrelic_synthetics_alert_condition" "foo" {
  policy_id  = newrelic_alert_policy.certChecks.id
  count      = length(var.urlList)
  name       = "SSL Expiration"
  monitor_id = local.finishedList[count.index]
}

resource "newrelic_synthetics_cert_check_monitor" "monitor" {
  count                  = length(var.urlList)
  name                   = local.finishedList[count.index]
  domain                 = var.urlList[count.index]
  locations_public       = ["US_EAST_1"]
  certificate_expiration = "350"
  period                 = "EVERY_DAY"
  status                 = "ENABLED"
}
It plans but won't apply; it errors out right before. Here is my error message:
Any help would be useful, thank you!
Honestly, not much has been tried yet. I looked for more information on the Terraform community forum, but that search pulled up no results. The only thing I found was changing the location the test would run from, but I was already in the location needed.
I am using Terraform to deploy backend code to AWS. While configuring the Terraform environments, I ran terraform init and it works fine. However, the next command, terraform plan, hangs: it prints nothing, and no matter how long I wait there is no output from the CLI.
Here is my main.tf code.
provider "aws" {
alias = "us_east_1"
region = "us-east-1"
default_tags {
tags = {
Owner = "Example Owner"
Project = "Example"
}
}
}
module "template_files" {
source = "hashicorp/dir/template"
base_dir = "react-app/build"
template_vars = {
vpc_id = "vpc-abc123123123"
}
}
resource "aws_s3_bucket" "test_tf_bucket" {
bucket = local.test_tf_creds.bucket
website {
index_document = "index.html"
}
tags = {
Bucket = "Example Terraform Bucket"
}
}
resource "aws_s3_bucket_object" "build_test_tf" {
for_each = module.template_files.files
bucket = local.test_tf_creds.bucket
key = each.key
content_type = each.value.content_type
source = each.value.source_path
content = each.value.content
etag = each.value.digests.md5
tags = {
Bucket-Object = "Example Bucket Object"
}
}
I would really appreciate your help in solving this problem.
I have deployed a Cloud Run application for two domains behind a load balancer, and it is already running. Now this setup needs to be rolled out to other domains. Because the resource setup is always the same, I face some issues:
1. I want to avoid repeating code (which I manage through a for_each)
2. There are still some domain-specific values to cover, which I tried to handle through a mapping table
3. Referencing resources that are created with for_each in another resource
The first issue I solved like this, which seems to work:
Old:
resource "google_cloud_run_service" "cr_domain1" {
name = "cr-domain1"
location = "europe-west6"
project = "my_project"
template {
...
}
}
resource "google_cloud_run_service" "cr_domain2" {
name = "cr-domain2"
location = "europe-southwest1"
project = "my_project"
template {
...
}
}
New:
resource "google_cloud_run_service" "cr" {
for_each = toset( ["domain1", "domain2"] )
name = "cr-${each_key}"
location = "tdb" # This is my second issue
project = "my_project"
template {
...
}
}
Regarding the second issue, I still need a domain-specific location setup, which I tried to solve like this, but I am getting errors:
variable "cr_location" {
type = list(object({
domain1 = string
domain2 = string
}))
default = [{
domain1 = "europe-west6"
domain2 = "europe-southwest1"
}]
}
resource "google_cloud_run_service" "cr" {
for_each = toset( ["domain1", "domain2"] )
name = "cr-${each_key}"
location = "${var.cr_location[0]}.${each.key}"
project = "my_project"
template {
...
}
}
Error is "Cannot include the given value in a string template: string required". But I have already declared it as a string in my variable "cr_location". Any idea what's the issue here? The expected output should be:
location = "europe-west6" # For domain1
location = "europe-southwest1" # For domain2
Also, regarding issue 3, I do not understand how to reference resources that are created with for_each in another resource. Before the for_each in the Cloud Run resource block (see issue 1) I had these two resources:
resource "google_cloud_run_service" "cr_domain1"
resource "google_cloud_run_service" "cr_domain2"
Now I only have resource "google_cloud_run_service" "cr", but in my loadbalancer.tf I still have references to the old names (the "service" line inside cloud_run):
resource "google_compute_region_network_endpoint_group" "backendneg" {
for_each = toset( ["domain1", "domain2"] )
name = "backendneg-${each.key}"
project = "my_project"
network_endpoint_type = "SERVERLESS"
region = "${var.cr_location[0]}.${each.key}" # Here same issues as issue 2
cloud_run {
service = google_cloud_run_service.cr_domain1.name # Old reference
}
}
So if there is no "cr_domain1" anymore, how do I reference that resource? I have to create over 20 resources like this, and I couldn't figure out how to do it. I appreciate any guidance here.
What I would suggest here is to refactor the variable, because it is making a lot of things harder than they should be. I would go for this kind of variable definition:
variable "cr_location" {
type = map(string)
default = {
domain1 = "europe-west6"
domain2 = "europe-southwest1"
}
}
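A map(string) keyed by domain keeps each domain name next to its region, and the map keys double as stable instance keys for for_each.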
Then, the rest should be easy to create:
resource "google_cloud_run_service" "cr" {
for_each = var.cr_location
name = "cr-${each.key}"
location = each.value
project = "my_project"
template {
...
}
}
And for the network endpoint resource:
resource "google_compute_region_network_endpoint_group" "backendneg" {
for_each = var.cr_location
name = "backendneg-${each.key}"
project = "my_project"
network_endpoint_type = "SERVERLESS"
region = each.value
cloud_run {
service = google_cloud_run_service.cr[each.key].name
}
}
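This works because the instance keys of google_cloud_run_service.cr are the same domain keys as in var.cr_location, so google_cloud_run_service.cr[each.key] picks the matching Cloud Run service for each endpoint group.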
You could even try resource chaining with for_each [1] to make sure you are doing this for all the Cloud Run resources created:
resource "google_compute_region_network_endpoint_group" "backendneg" {
for_each = google_cloud_run_service.cr
name = "backendneg-${each.key}"
project = "my_project"
network_endpoint_type = "SERVERLESS"
region = each.value.location
cloud_run {
service = each.value.name
}
}
[1] https://www.terraform.io/language/meta-arguments/for_each#chaining-for_each-between-resources
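If other parts of the configuration still need per-domain details from the loop (for example the service URLs), a for expression over the whole for_each resource also works. A minimal sketch, assuming the service URL is exposed through the resource's status attribute:
output "cloud_run_urls" {
  # Map each domain key to the URL of its Cloud Run service instance
  value = { for domain, svc in google_cloud_run_service.cr : domain => svc.status[0].url }
}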
I'm new to Terraform and I have an issue I can't seem to find a solution to.
I am using the Oneview provider to connect to two Oneview instances. On each one, I am configuring an NTP server (which is the Oneview IP; this is for testing). My (currently functional) provider code looks like this:
terraform {
  required_providers {
    oneview = {
      source  = "HewlettPackard/oneview"
      version = "6.5.0-13"
    }
  }
}

provider "oneview" { # These can be replaced with the variables in the variables.tf file
  ov_username   = "administrator"
  ov_password   = "mypassword"
  ov_endpoint   = "https://10.50.0.10/"
  ov_sslverify  = false
  ov_apiversion = 2400
  ov_domain     = "local"
  ov_ifmatch    = "*"
}

provider "oneview" {
  ov_username   = "administrator"
  ov_password   = "mypassword"
  ov_endpoint   = "https://10.50.0.50/"
  ov_sslverify  = false
  ov_apiversion = 3200
  ov_domain     = "local"
  ov_ifmatch    = "*"
  alias         = "houston2"
}
and I have the resources in another file:
data "oneview_appliance_time_and_locale" "timelocale" {
}
output "locale_value" {
value = data.oneview_appliance_time_and_locale.timelocale.locale
}
resource "oneview_appliance_time_and_locale" "timelocale" {
locale = "en_US.UTF-8"
timezone = "UTC"
ntp_servers = ["10.50.0.10"]
}
data "oneview_appliance_time_and_locale" "timelocale2" {
}
output "locale_value2" {
value = data.oneview_appliance_time_and_locale.timelocale.locale
}
resource "oneview_appliance_time_and_locale" "timelocale2" {
locale = "en_US.UTF-8"
timezone = "UTC"
ntp_servers = ["10.50.0.50"]
provider = oneview.houston2
}
What I'd like to do is set it up in a way that I can do some sort of "for each provider, run the resource with the correct ntp_server variable", instead of writing a resource for every provider. So for each loop of the resource, it would use the right provider and also grab the right variable for the ntp server.
From what I've read, Terraform doesn't really use traditional for_each statements in a way that I'm used to, and I'm kind of stumped as to how to accomplish this. Does anyone have any suggestions?
Thank you very much for all your help!
resource "oneview_appliance_time_and_locale" "timelocale2" {
for_each = var.provider_list // List contain provider and its alias
locale = "en_US.UTF-8"
timezone = "UTC"
ntp_servers = ["10.50.0.50"]
provider = each.alias
}
Can we try it this way and loop through the provider list? Terraform supports this.
Is there any way to create a GCP alerting policy for an uptime check using Terraform, filtering on the metric.label.check_id value of an already deployed resource?
The examples provided in the Terraform docs only show alerting policies for metrics, not for an uptime check on an already deployed resource, so I'm not sure whether that is even possible with Terraform.
I have figured out a solution that works in my case.
I created the uptime check and the uptime check alert in two separate Terraform modules.
The Terraform uptime check module looks like this:
resource "google_monitoring_uptime_check_config" "uptime-check" {
project = var.project_id
display_name = var.display_name
timeout = "10s"
period = "60s"
http_check {
path = var.path
port = var.port
use_ssl = true
validate_ssl = true
}
monitored_resource {
type = "uptime_url"
labels = {
host = var.hostname,
project_id = var.project_id
}
}
content_matchers {
content = "\"status\":\"UP\""
}
}
Then in outputs.tf for that module I have:
output "uptime_check_id" {
value = google_monitoring_uptime_check_config.uptime-check.uptime_check_id
}
Then in the alerts module I followed the Terraform docs but modified the code so it looks like this:
module "medallies-common-alerts" {
source = "./modules/alerts"
project_id = var.project_id
uptime_check_depends_on = [module.uptime-check]
check_id = module.uptime-check.uptime_check_id
}
...
resource "google_monitoring_alert_policy" "alert_policy_uptime_check" {
project = var.project_id
enabled = true
depends_on = [var.uptime_check_depends_on]
....
condition_threshold {
filter = format("metric.type=\"monitoring.googleapis.com/uptime_check/check_passed\" AND metric.label.\"check_id\"=\"%s\" AND resource.type=\"uptime_url\"",var.check_id)
duration = "300s"
comparison = "COMPARISON_GT"
threshold_value = "1"
trigger {
count = 1
}
...
}
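For completeness, the alerts module also has to declare the variables referenced above. A minimal sketch of its variables.tf, with names taken from the module call and types assumed:
variable "project_id" {
  type = string
}

variable "uptime_check_depends_on" {
  # Receives [module.uptime-check] from the module call
  type    = any
  default = []
}

variable "check_id" {
  # The uptime_check_id output of the uptime check module
  type = string
}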
I hope it helps someone else too.
I have created a main.tf file as below for a MongoDB Terraform module.
resource "mongodbatlas_teams" "test" {
org_id = null
name = "MVPAdmin_Team"
usernames = ["user1#email.com", "user2#email.com", "user3#email.com"]
}
resource "mongodbatlas_project" "test" {
name = "MVP_Project"
org_id = null
teams {
team_id = null
role_names = ["GROUP_CLUSTER_MANAGER"]
}
}
resource "mongodbatlas_project_ip_access_list" "test" {
project_id = null
ip_address = null
comment = "IP address for MVP Dev cluster testing"
}
resource "mongodbatlas_cluster" "test" {
name = "MVP_DevCluster"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
cluster_type = REPLICASET
state_name = var.state_name
replication specs {
num_shards= var.num_shards
region_config {
region_name = "AU-EA"
electable_nodes = var.electable_nodes
priority = var.priority
read_only_nodes = var.read_only_nodes
}
}
provider_backup_enabled = var.provider_backup_enabled
auto_scaling_disk_gb_enabled = var.auto_scaling_disk_gb_enabled
mongo_db_major_version = var.mongo_db_major_version
provider_name = "Azure"
provider_disk_type_name = var.provider_disk_type_name
provider_instance_size_name = var.provider_instance_size_name
mongodbatlas_database_user {
username = var.username
password = var.password
auth_database_name = var.auth_database_name
role_name = var.role_name
database_name = var.database_name
}
mongodbatlas_database_snapshot_backup_policy {
policy_item = var.policy_item
frequency_type = var.frequency_type
retention_value = var.retention_value
}
advanced_configuration {
minimum_enabled_tls_protocol = var.minimum_enabled_tls_protocol
no_table_scan = var.no_table_scan
connection_string = var.connection_string
}
}
However, terraform init reports the error below:
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/mongodbatlas...
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider
hashicorp/mongodbatlas: provider registry registry.terraform.io does not have
a provider named registry.terraform.io/hashicorp/mongodbatlas
If you have just upgraded directly from Terraform v0.12 to Terraform v0.14
then please upgrade to Terraform v0.13 first and follow the upgrade guide for
that release, which might help you address this problem.
Did you intend to use mongodb/mongodbatlas? If so, you must specify that
source address in each module which requires that provider. To see which
modules are currently depending on hashicorp/mongodbatlas, run the following
command:
terraform providers
Any idea as to what is going wrong?
The error message explains the most likely reason for seeing it: you've upgraded directly from Terraform v0.12 to Terraform v0.14 without running through the Terraform v0.13 upgrade steps.
If you upgrade to Terraform v0.13 first and follow those instructions then the upgrade tool should be able to give more specific instructions on what to change here, and may even be able to automatically upgrade your configuration for you.
However, if you wish, you can instead manually add the configuration block that the v0.13 upgrade tool would've inserted, to specify that you intend to use the mongodb/mongodbatlas provider as "mongodbatlas" in this module:
terraform {
  required_providers {
    mongodbatlas = {
      source = "mongodb/mongodbatlas"
    }
  }
}
There are some other considerations in the v0.13 upgrade guide that the above doesn't address, so you may still need to perform the steps described in that upgrade guide if you see different error messages after trying what I showed above.
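After adding that block, re-run the init step so Terraform installs the provider from the new source. If you already have state that was created against the hashicorp/mongodbatlas address, it can be repointed as well; a sketch, only needed when such state exists:
$ terraform init
$ terraform state replace-provider registry.terraform.io/hashicorp/mongodbatlas registry.terraform.io/mongodb/mongodbatlas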