Terraform AWS managed rules

Terraform version: 0.11.11
I am working on creating a custom Config rule resource with the code below; however, compliance_resource_types ends up set to
["AWS::EC2::Instance"] instead of ["AWS::EC2::SecurityGroup"].
I would appreciate any guidance on how to proceed.
resource "aws_config_config_rule" "remove_sg_open_to_world" {
  name        = "security_group_not_open_to_world"
  description = "Rule to remove SG ports if open to public"

  source {
    owner             = "CUSTOM_LAMBDA"
    source_identifier = "arn:aws:lambda:${var.current_region}:xxxxxxxxx:function:remove_sg_open_to_world"

    source_detail {
      message_type = "ConfigurationItemChangeNotification"
    }
  }

  scope {
    compliance_resource_types = ["AWS::EC2::SecurityGroup"]
  }

  depends_on = ["aws_config_configuration_recorder.config"]
}
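While debugging, one way to confirm what the provider actually recorded (rather than what the console shows) is to inspect the rule's entry in state; this uses the resource address from the snippet above:
terraform state show aws_config_config_rule.remove_sg_open_to_world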

Related

3.13.0 New Relic Provider Crashing on Terraform

I am running into an issue with a Terraform provider: the New Relic plugin keeps crashing and I don't know why. I'm trying to build a simple alerting script in Terraform to create an alert policy plus conditions in the New Relic UI. Here is the code I'm trying to run:
terraform {
  required_version = "~> 1.3.7"

  required_providers {
    newrelic = {
      source  = "newrelic/newrelic"
      version = "~> 3.13"
    }
  }
}

locals {
  splitList    = [for url in var.urlList : split(".", url)[1]]
  finishedList = [for split in local.splitList : join("-", [split, "Cert Check"])]
}

resource "newrelic_alert_policy" "certChecks" {
  name                = "SSL Cert Check Expirations"
  incident_preference = "PER_POLICY"
}

resource "newrelic_alert_channel" "SSL_Alert" {
  name = "SSL Expiration Alert"
  type = "email"

  config {
    recipients              = "foo.com"
    include_json_attachment = "true"
  }
}

resource "newrelic_synthetics_alert_condition" "foo" {
  policy_id  = newrelic_alert_policy.certChecks.id
  count      = length(var.urlList)
  name       = "SSL Expiration"
  monitor_id = local.finishedList[count.index]
}

resource "newrelic_synthetics_cert_check_monitor" "monitor" {
  count                  = length(var.urlList)
  name                   = local.finishedList[count.index]
  domain                 = var.urlList[count.index]
  locations_public       = ["US_EAST_1"]
  certificate_expiration = "350"
  period                 = "EVERY_DAY"
  status                 = "ENABLED"
}
It plans but won't apply; it errors out right before applying. Here is my error message:
Any help would be useful, thank you!
Honestly, not much has been tried yet. I looked for more information on the Terraform community forum, but that search pulled up no results. The only thing I found suggested changing the location the test runs from, but I was already in the required location.
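Not from the thread, but one thing that stands out in the config above: monitor_id is given a name built in local.finishedList rather than anything the monitor resources export. A minimal sketch of wiring the condition to the created monitors instead, assuming monitor_id expects the monitor's ID:
resource "newrelic_synthetics_alert_condition" "foo" {
  policy_id = newrelic_alert_policy.certChecks.id
  count     = length(var.urlList)
  name      = "SSL Expiration"

  # Reference the monitor resource's exported id rather than a
  # locally constructed name string (assumption: an ID is expected here).
  monitor_id = newrelic_synthetics_cert_check_monitor.monitor[count.index].id
}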

Terraform 1.2.0: Referencing resources and object mapping

I have deployed a Cloud Run application for currently two domains with a load balancer, which is already running. Now this setup needs to be rolled out to other domains. Because the resource setup is always the same, I face some issues:
I want to prevent repeating code (which is managed through a for_each)
There are still some domain-specific values to cover, which I tried to handle through a mapping table
Referencing resources that are created with for_each in another resource
The first issue I solved like this, which seems to work:
Old:
resource "google_cloud_run_service" "cr_domain1" {
  name     = "cr-domain1"
  location = "europe-west6"
  project  = "my_project"

  template {
    ...
  }
}

resource "google_cloud_run_service" "cr_domain2" {
  name     = "cr-domain2"
  location = "europe-southwest1"
  project  = "my_project"

  template {
    ...
  }
}
New:
resource "google_cloud_run_service" "cr" {
  for_each = toset(["domain1", "domain2"])
  name     = "cr-${each.key}"
  location = "tdb" # This is my second issue
  project  = "my_project"

  template {
    ...
  }
}
Regarding the second issue, I still need a domain-specific location setup, which I tried to solve like this, but I am getting errors:
variable "cr_location" {
  type = list(object({
    domain1 = string
    domain2 = string
  }))
  default = [{
    domain1 = "europe-west6"
    domain2 = "europe-southwest1"
  }]
}

resource "google_cloud_run_service" "cr" {
  for_each = toset(["domain1", "domain2"])
  name     = "cr-${each.key}"
  location = "${var.cr_location[0]}.${each.key}"
  project  = "my_project"

  template {
    ...
  }
}
Error is "Cannot include the given value in a string template: string required". But I have already declared it as a string in my variable "cr_location". Any idea what's the issue here? The expected output should be:
location = "europe-west6" # For domain1
location = "europe-southwest1" # For domain2
Also, regarding issue 3, I do not understand how to reference resources that are created with for_each in another resource. Before the for_each in the Cloud Run resource block (see issue 1) I had these 2 resources:
resource "google_cloud_run_service" "cr_domain1"
resource "google_cloud_run_service" "cr_domain2"
Now I only have resource "google_cloud_run_service" "cr", but my loadbalancer.tf still references the old names (see the last line inside cloud_run):
resource "google_compute_region_network_endpoint_group" "backendneg" {
for_each = toset( ["domain1", "domain2"] )
name = "backendneg-${each.key}"
project = "my_project"
network_endpoint_type = "SERVERLESS"
region = "${var.cr_location[0]}.${each.key}" # Here same issues as issue 2
cloud_run {
service = google_cloud_run_service.cr_domain1.name # Old reference
}
}
So if there is no "cr_domain1" anymore, how do I reference this resource? I have to create over 20 resources like this and I couldn't figure out how to do it. I appreciate any guidance here.
What I would suggest here is to refactor the variable, because it is making a lot of things harder than they should be. I would go for this kind of variable definition:
variable "cr_location" {
type = map(string)
default = {
domain1 = "europe-west6"
domain2 = "europe-southwest1"
}
}
Then, the rest should be easy to create:
resource "google_cloud_run_service" "cr" {
for_each = var.cr_location
name = "cr-${each.key}"
location = each.value
project = "my_project"
template {
...
}
}
And for the network endpoint resource:
resource "google_compute_region_network_endpoint_group" "backendneg" {
for_each = var.cr_location
name = "backendneg-${each.key}"
project = "my_project"
network_endpoint_type = "SERVERLESS"
region = each.value
cloud_run {
service = google_cloud_run_service.cr[each.key].name
}
}
You could even use resource chaining with for_each [1] to make sure this is done for every Cloud Run resource created:
resource "google_compute_region_network_endpoint_group" "backendneg" {
for_each = google_cloud_run_service.cr
name = "backendneg-${each.key}"
project = "my_project"
network_endpoint_type = "SERVERLESS"
region = each.value.location
cloud_run {
service = each.value.name
}
}
[1] https://www.terraform.io/language/meta-arguments/for_each#chaining-for_each-between-resources

Using multiple providers with one resource in Terraform

I'm new to Terraform and I have an issue I can't seem to find a solution for.
I am using the Oneview provider to connect to two Oneview instances. On each one, I am configuring an NTP server (which is the Oneview IP; this is for testing). My (currently functional) provider code looks like this:
terraform {
  required_providers {
    oneview = {
      source  = "HewlettPackard/oneview"
      version = "6.5.0-13"
    }
  }
}

provider "oneview" { # These can be replaced with the variables in the variables.tf file
  ov_username   = "administrator"
  ov_password   = "mypassword"
  ov_endpoint   = "https://10.50.0.10/"
  ov_sslverify  = false
  ov_apiversion = 2400
  ov_domain     = "local"
  ov_ifmatch    = "*"
}

provider "oneview" {
  ov_username   = "administrator"
  ov_password   = "mypassword"
  ov_endpoint   = "https://10.50.0.50/"
  ov_sslverify  = false
  ov_apiversion = 3200
  ov_domain     = "local"
  ov_ifmatch    = "*"
  alias         = "houston2"
}
and I have the resources in another file:
data "oneview_appliance_time_and_locale" "timelocale" {
}
output "locale_value" {
value = data.oneview_appliance_time_and_locale.timelocale.locale
}
resource "oneview_appliance_time_and_locale" "timelocale" {
locale = "en_US.UTF-8"
timezone = "UTC"
ntp_servers = ["10.50.0.10"]
}
data "oneview_appliance_time_and_locale" "timelocale2" {
}
output "locale_value2" {
value = data.oneview_appliance_time_and_locale.timelocale.locale
}
resource "oneview_appliance_time_and_locale" "timelocale2" {
locale = "en_US.UTF-8"
timezone = "UTC"
ntp_servers = ["10.50.0.50"]
provider = oneview.houston2
}
What I'd like to do is set it up in a way that I can do some sort of "for each provider, run the resource with the correct ntp_server variable", instead of writing a resource for every provider. So for each loop of the resource, it would use the right provider and also grab the right variable for the ntp server.
From what I've read, Terraform doesn't really use traditional for_each statements in a way that I'm used to, and I'm kind of stumped as to how to accomplish this. Does anyone have any suggestions?
Thank you very much for all your help!
resource "oneview_appliance_time_and_locale" "timelocale2" {
for_each = var.provider_list // List contain provider and its alias
locale = "en_US.UTF-8"
timezone = "UTC"
ntp_servers = ["10.50.0.50"]
provider = each.alias
}
Can we try it this way and loop through the provider list? Terraform may support something along these lines.
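As a caveat, the provider meta-argument must be a static provider reference, so it cannot be taken from each the way the snippet above tries. A pattern that is often used instead is to move the resource into a small module and instantiate the module once per provider alias. A sketch follows; the module path and the ntp_servers variable name are assumptions:
# modules/time_locale/main.tf
terraform {
  required_providers {
    oneview = {
      source = "HewlettPackard/oneview"
    }
  }
}

variable "ntp_servers" {
  type = list(string)
}

resource "oneview_appliance_time_and_locale" "this" {
  locale      = "en_US.UTF-8"
  timezone    = "UTC"
  ntp_servers = var.ntp_servers
}

# Root module: one module call per appliance, each pinned to a provider alias.
module "timelocale_houston1" {
  source      = "./modules/time_locale"
  ntp_servers = ["10.50.0.10"]

  providers = {
    oneview = oneview
  }
}

module "timelocale_houston2" {
  source      = "./modules/time_locale"
  ntp_servers = ["10.50.0.50"]

  providers = {
    oneview = oneview.houston2
  }
}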

How to edit AWS CloudFront origin and origin group settings using Terraform

I am particularly looking for a way to get the settings shown in the image below. I want to make the S3 bucket restricted and choose to create a new origin access identity, as shown below.
It should also update the S3 bucket policy (the settings might look different in the image, though).
In a nutshell, I could not find, or maybe I did not understand, the official Terraform documentation for achieving this.
You can use the example below for reference:
resource "aws_cloudfront_distribution" "www" {
origin {
domain_name = "${var.bucket_name}.s3.amazonaws.com"
origin_id = "wwwS3Origin"
s3_origin_config {
origin_access_identity = "${aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path}"
}
}
enabled = true
is_ipv6_enabled = true
comment = "Some comment"
default_root_object = "index.html"
......
resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
comment = "S3 bucket OAI"
}
Update bucket policy
data "aws_iam_policy_document" "s3_policy" {
statement {
actions = ["s3:GetObject"]
resources = ["${aws_s3_bucket.example.arn}/*"]
principals {
type = "AWS"
identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
}
}
statement {
actions = ["s3:ListBucket"]
resources = ["${aws_s3_bucket.example.arn}"]
principals {
type = "AWS"
identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
}
}
}
resource "aws_s3_bucket_policy" "example" {
bucket = "${aws_s3_bucket.example.id}"
policy = "${data.aws_iam_policy_document.s3_policy.json}"
}
Refer to the link below for more details:
https://www.terraform.io/docs/providers/aws/r/cloudfront_origin_access_identity.html#updating-your-bucket-policy
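On the "restrict the bucket" part, you may also want to block public access on the bucket itself; a minimal sketch, reusing the aws_s3_bucket.example name from the policy above:
resource "aws_s3_bucket_public_access_block" "example" {
  bucket = "${aws_s3_bucket.example.id}"

  # Block all forms of public access; CloudFront reaches the
  # bucket through the OAI, not through public permissions.
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}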

Setting s3 bucket with replication using Terraform

I'm trying to configure an S3 bucket with replication using Terraform, and I'm getting the following error:
Error: insufficient items for attribute "destination"; must have at least 1
on main.tf line 114, in resource "aws_s3_bucket" "ps-db-backups":
114: lifecycle_rule {
I don't understand this error message. First, the replication section does have destination defined. Second, the error message points at lifecycle_rule, which has no destination attribute. The bucket definition is below:
resource "aws_s3_bucket" "ps-db-backups" {
bucket = "ps-db-backups-b3bd1643-8cbf-4927-a64a-f0cf9b58dfab"
acl = "private"
region = "eu-west-1"
versioning {
enabled = true
}
lifecycle_rule {
id = "transition"
enabled = true
transition {
days = 30
storage_class = "STANDARD_IA"
}
expiration {
days = 180
}
}
replication_configuration {
role = "${aws_iam_role.ps-db-backups-replication.arn}"
rules {
id = "ps-db-backups-replication"
status = "Enabled"
destination {
bucket = "${aws_s3_bucket.ps-db-backups-replica.arn}"
storage_class = "STANDARD_IA"
}
}
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
Go through the Terraform docs carefully.
You need to create a separate Terraform resource for the destination, like this one:
resource "aws_s3_bucket" "destination" {
bucket = "tf-test-bucket-destination-12345"
region = "eu-west-1"
versioning {
enabled = true
}
}
And then refer to it in your replication_configuration as:
destination {
  bucket        = "${aws_s3_bucket.destination.arn}"
  storage_class = "STANDARD"
}
I hope this helps. Try and let me know.
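For completeness, the aws_iam_role.ps-db-backups-replication referenced in the question must be assumable by the S3 service; a minimal sketch of the trust policy (the permissions policy for reading the source and replicating into the destination is omitted):
resource "aws_iam_role" "ps-db-backups-replication" {
  name = "ps-db-backups-replication"

  # Trust policy letting S3 assume this role to perform replication.
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}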
This appears to be a bug in Terraform 0.12. See this issue: https://github.com/terraform-providers/terraform-provider-aws/issues/9048
As a side note, if you also need to enable monitoring for S3 replication, you won't be able to: Terraform does not have this implemented.
There is a PR open for it; please vote with a thumbs up: https://github.com/terraform-providers/terraform-provider-aws/pull/11337
