Terraform - Don't create log_config on firewall rules unless enabled

I'm trying to create some firewall rules in GCP which optionally have the logging turned on or not.
resource "google_compute_firewall" "this" {
name = var.name
project = var.project
network = var.network
source_ranges = var.source_ranges
source_tags = var.source_tags
target_tags = var.target_tags
priority = var.priority
direction = var.direction
allow {
protocol = lower(var.protocol)
ports = var.ports
}
## If log_config is defined, this enables logging. By not defining it, we are disabling logging.
log_config {
metadata = var.log_metadata
}
}
What I need to achieve, and can't figure out, is how to have the log_config block defined when var.firewall_logging is set to true (or some other string, if a boolean can't be used), and to omit the log_config block entirely when it is set to false. I tried using a dynamic block, as in my other question, Optional fields in resource; however, even when the variable is set to false, the log_config block is still being created.
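For reference, the usual pattern is a dynamic block whose for_each collection is empty when logging is disabled, so the nested block is not rendered at all. Here is a minimal sketch, assuming var.firewall_logging is declared as a bool; if a dynamic block is still being created, its for_each expression is probably not evaluating to an empty collection:

variable "firewall_logging" {
  type    = bool
  default = false
}

resource "google_compute_firewall" "this" {
  # ... same arguments as above ...

  # One log_config block is rendered per element of for_each;
  # an empty list renders no block at all.
  dynamic "log_config" {
    for_each = var.firewall_logging ? [1] : []
    content {
      metadata = var.log_metadata
    }
  }
}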


Error: Error creating AWSConfig rule: Failed to create AWSConfig rule: InvalidParameterValueException

I'm trying to add an aws_config_config_rule resource with a set of input_parameters, but I keep getting:
Error: Error creating AWSConfig rule: Failed to create AWSConfig rule: InvalidParameterValueException: Unknown parameters provided in the inputParameters: {"targetBucket":"mybucket"}.
# Enables access logging for the CloudTrail S3 bucket
resource "aws_config_config_rule" "cloudtrail-s3-bucket-logging-enabled" {
  name        = "cloudtrail-s3-bucket-logging-enabled"
  description = "Checks whether logging is enabled for your S3 buckets."

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_LOGGING_ENABLED"
  }

  scope {
    compliance_resource_id    = aws_s3_bucket.mybucket.arn
    compliance_resource_types = ["AWS::S3::Bucket"]
  }

  input_parameters = jsonencode({ targetBucket = aws_s3_bucket.mybucket.id })
}
I figured I could use the jsonencode function (https://www.terraform.io/docs/configuration/functions/jsonencode.html). I also came across a GitHub issue (https://github.com/hashicorp/terraform/issues/14074), but it is different from what I'm experiencing. Any help would be greatly appreciated.
It turned out I was using the wrong input parameters for this rule. This works:
# Ensures that the S3 bucket used by CloudTrail is not publicly accessible
resource "aws_config_config_rule" "cloudtrail-s3-bucket-not-publicy-accessible" {
  name        = "cloudtrail-s3-bucket-not-publicy-accessible"
  description = "Checks whether the required public access block settings are configured from account level. The rule is only NON_COMPLIANT when the fields set below do not match the corresponding fields in the configuration item."

  source {
    owner             = "AWS"
    source_identifier = "S3_ACCOUNT_LEVEL_PUBLIC_ACCESS_BLOCKS"
  }

  scope {
    compliance_resource_id    = aws_s3_bucket.mybucket.id
    compliance_resource_types = ["AWS::S3::Bucket"]
  }

  input_parameters = "{\"IgnorePublicAcls\":\"True\",\"BlockPublicPolicy\":\"True\",\"BlockPublicAcls\":\"True\",\"RestrictPublicBuckets\":\"True\"}"
}
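As an aside, the same input_parameters string can be produced with jsonencode instead of hand-escaped JSON. This is a stylistic alternative, not part of the original answer:

input_parameters = jsonencode({
  IgnorePublicAcls      = "True"
  BlockPublicPolicy     = "True"
  BlockPublicAcls       = "True"
  RestrictPublicBuckets = "True"
})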

Cloudflare page rules using the terraform-cloudflare provider do not update page rules

I am using Terraform + the Cloudflare provider.
I created a page rule the first time I ran terraform plan + terraform apply.
Running the same commands a second time returns the error:
Error: Failed to create page rule: error from makeRequest: HTTP status 400: content "{"success":false,"errors":[{"code":1004,"message":"Page Rule validation failed: See messages for details."}],"messages":[{"code":1,"message":".distinctTargetUrl: Your zone already has an existing page rule with that URL. If you are modifying only page rule settings use the Edit Page Rule option instead","type":null}],"result":null}"
TL;DR: How can I make Terraform update an existing page rule just by changing its definition in this file? Isn't that how this is supposed to work?
This is the terraform.tf file:
provider "cloudflare" {
email = "__EMAIL__"
api_key = "__GLOBAL_API_KEY__"
}
resource "cloudflare_zone_settings_override" "default_cloudflare_config" {
zone_id = "__ZONE_ID__"
settings {
always_online = "on"
always_use_https = "off"
min_tls_version = "1.0"
opportunistic_encryption = "on"
tls_1_3 = "zrt"
automatic_https_rewrites = "on"
ssl = "strict"
# 8 days
browser_cache_ttl = "691200"
}
}
resource "cloudflare_page_rule" "rule_bypass_wp_admin" {
target = "*.__DOMAIN__/*wp-admin*"
zone_id = "__ZONE_ID__"
priority = 2
status = "active"
actions {
always_use_https = true
always_online = "off"
cache_level = "bypass"
disable_apps = "true"
disable_performance = true
disable_security = true
}
}
Add the following block to your page rule definition:
lifecycle {
  ignore_changes = [priority]
}
This instructs Terraform to ignore any changes to that field. That way, when you run terraform apply, Terraform picks up your changes as an update to the existing resource rather than creating a new one.
Without it, Terraform tries to create a new page rule, which conflicts with Cloudflare's limitation that you cannot have multiple page rules acting on the same resource path.
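Applied to the configuration above, the page rule would look roughly like this (a sketch; only the lifecycle block is new, the other arguments are unchanged):

resource "cloudflare_page_rule" "rule_bypass_wp_admin" {
  # ... target, zone_id, priority, status and actions as above ...

  lifecycle {
    ignore_changes = [priority]
  }
}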
TIP: Run terraform plan -out=tfplan. This prints the plan to the screen and saves it to a file, giving you insight into the changes Terraform will make and a chance to spot unintended ones; you can then apply exactly that reviewed plan with terraform apply tfplan.
I still can't update via Terraform, so I used Python to delete the page rules before recreating them.
import CloudFlare

# Delete existing page rules using the API before re-adding them with Terraform.
# For some reason, I could not update them with Terraform without deleting them first.
# https://stackoverflow.com/questions/63942345/cloudflare-page-rules-using-terraform-cloudflare-provider-does-not-update-page-r

# Credentials and zone, using the same placeholders as in the Terraform config above.
cf = CloudFlare.CloudFlare(email='__EMAIL__', token='__GLOBAL_API_KEY__')
zone_id = '__ZONE_ID__'

page_rules = cf.zones.pagerules.get(zone_id)
print(page_rules)
for pr in page_rules:
    cf.zones.pagerules.delete(zone_id, pr.get('id'))
page_rules = cf.zones.pagerules.get(zone_id)
if page_rules:
    exit('Failed to delete existing page rules for site')
Try removing the always_use_https argument so your actions block looks like this:
actions {
  always_online       = "off"
  cache_level         = "bypass"
  disable_apps        = "true"
  disable_performance = true
  disable_security    = true
}
Today I discovered that there is some issue with this argument; it looks like a bug.

How can I overcome "Error: Cycle" on digitalocean droplet

I am sure this is a simple problem, but I don't know how to interpret it at the moment.
I use 3 droplets (called rs) and have a template file which configures each one.
[..]
data "template_file" "rsdata" {
template = file("files/rsdata.tmpl")
count = var.count_rs_nodes
vars = {
docker_version = var.docker_version
private_ip_rs = digitalocean_droplet.rs[count.index].ipv4_address_private
private_ip_mysql = digitalocean_droplet.mysql.ipv4_address_private
}
}
resource "digitalocean_droplet" "rs" {
count = var.count_rs_nodes
image = var.image
name = "${var.prefix}-rs-${count.index}"
region = var.region
size = var.rs_size
private_networking = true
user_data = data.template_file.rsdata.rendered
ssh_keys = var.ssh_keys
depends_on = ["digitalocean_droplet.mysql"]
}
[..]
When I do a terraform apply I get:
Error: Cycle: digitalocean_droplet.rs, data.template_file.rsdata
Note: this is Terraform 0.12.
What am I doing wrong, and how can I overcome it?
This error is returned because the data.template_file.rsdata resource refers to the digitalocean_droplet.rs resource and vice-versa. That creates an impossible situation for Terraform: there is no ordering Terraform could use to process these that would ensure that all of the necessary data is available at each step.
The key problem is that the private IPv4 address for a droplet is allocated as part of creating it, but the user_data is used as part of that creation and so it cannot include the IPv4 address that is yet to be allocated.
The most straightforward way to deal with this would be to not include the droplet's IP address as part of its user_data and instead arrange for whatever software is processing that user_data to fetch the IP address of the host directly from the network interface at runtime. The kernel running in that droplet will know what the IP address is, so you can retrieve it from there in principle.
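As an illustration of that approach (a sketch, not from the original answer; it assumes DigitalOcean's droplet metadata service, which serves instance metadata at 169.254.169.254 from inside the droplet), the user_data can look the address up at boot instead of receiving it from Terraform:

resource "digitalocean_droplet" "rs" {
  # ... arguments as above, with no private_ip_rs passed in from Terraform ...

  user_data = <<-EOT
    #!/bin/bash
    # Ask the metadata service for this droplet's own private IPv4 address.
    PRIVATE_IP="$(curl -s http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address)"
    echo "private_ip_rs=$PRIVATE_IP" >> /etc/environment
  EOT
}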
If for some reason including the IP addresses in the user_data is unavoidable (this can occur, for example, if there are a set of virtual machines that all need to be aware of each other) then a more complex alternative is to separate the allocation of the IP addresses from the creation of the instances. DigitalOcean doesn't have a mechanism to create private network interfaces separately from the droplets they belong to, so in this case it would be necessary to use public IP addresses via digitalocean_floating_ip, which may not be appropriate for all situations:
resource "digitalocean_floating_ip" "rs" {
count = var.count_rs_nodes
region = var.region
}
resource "digitalocean_droplet" "rs" {
count = var.count_rs_nodes
image = var.image
name = "${var.prefix}-rs-${count.index}"
region = var.region
size = var.rs_size
private_networking = true
ssh_keys = var.ssh_keys
user_data = templatefile("${path.module}/files/rsdata.tmpl", {
docker_version = var.docker_version
private_ip_rs = digitalocean_floating_ip.rs[count.index].ip_address
private_ip_mysql = digitalocean_droplet.mysql.ipv4_address_private
})
}
resource "digitalocean_floating_ip_assignment" "rs" {
count = var.count_rs_nodes
ip_address = digitalocean_floating_ip.rs[count.index].ip_address
droplet_id = digitalocean_droplet.rs[count.index].id
}
Because the "floating IP assignment" is created as a separate step after the droplet is launched, there will be some delay before the floating IP is associated with the instance and so whatever software is relying on that IP address must be resilient to running before the association is created.
Note that I also switched from using data "template_file" to the templatefile function because the data source is offered only for backward-compatibility with Terraform 0.11 configurations; the built-in function is the recommended way to render external template files in Terraform 0.12, and avoids the need for a separate configuration block.

Cannot contain self-reference in terraform cloudflare page rule

I want to create a page rule to ensure all incoming HTTP traffic is converted to HTTPS.
Here is my rule:
resource "cloudflare_page_rule" "https-only" {
zone = "${var.domain}"
domain = "${var.domain}"
target = "http://*${self.domain}/*"
priority = 1
actions = {
always_use_https = true
}
}
The target line is based on the example provided in the Terraform documentation.
However, when I run the Terraform file, I get this error:
Error: resource 'cloudflare_page_rule.https-only' config: cannot contain self-reference self.domain
Is the example no longer valid? If so, what is the proper syntax?
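For what it's worth, this question is not answered in the thread, but self references are only valid inside provisioner and connection blocks, so a likely fix is to interpolate the variable directly instead of going through self (a sketch; only the target line changes, the other arguments are as in the question):

resource "cloudflare_page_rule" "https-only" {
  zone     = "${var.domain}"
  domain   = "${var.domain}"
  target   = "http://*${var.domain}/*"
  priority = 1

  actions = {
    always_use_https = true
  }
}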

Terraform aws_lb_ssl_negotiation_policy using AWS Predefined SSL Security Policies

According to: https://www.terraform.io/docs/providers/aws/r/lb_ssl_negotiation_policy.html
You can create a new resource in order to have an ELB SSL policy, so you can customize whatever protocols and ciphers you want. However, I am looking to use the predefined security policies set by Amazon, such as TLS-1-1-2017-01 or TLS-1-2-2017-01:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html
Is there a way to use the predefined policies instead of setting a new custom policy?
Looking to solve the same problem, I came across this snippet here: https://github.com/terraform-providers/terraform-provider-aws/issues/822#issuecomment-311448488
Basically, you need to create two resources: the aws_load_balancer_policy and the aws_load_balancer_listener_policy. In the aws_load_balancer_policy you set the policy_attribute to reference the predefined security policy, and then you set your listener policy to reference that aws_load_balancer_policy.
I've added a Pull Request to the terraform AWS docs to make this more explicit here, but here's an example snippet:
resource "aws_load_balancer_policy" "listener_policy-tls-1-1" {
load_balancer_name = "${aws_elb.elb.name}"
policy_name = "elb-tls-1-1"
policy_type_name = "SSLNegotiationPolicyType"
policy_attribute {
name = "Reference-Security-Policy"
value = "ELBSecurityPolicy-TLS-1-1-2017-01"
}
}
resource "aws_load_balancer_listener_policy" "ssl_policy" {
load_balancer_name = "${aws_elb.elb.name}"
load_balancer_port = 443
policy_names = [
"${aws_load_balancer_policy.listener_policy-tls-1-1.policy_name}",
]
}
At first glance it appears that this is creating a custom policy that is based off of the predefined security policy, but when you look at what's created in the AWS console you'll see that it's actually just selected the appropriate Predefined Security Policy.
To piggyback on Kirkland's answer, for posterity: you can do the same thing with aws_lb_ssl_negotiation_policy if you don't need any other policy types:
resource "aws_lb_ssl_negotiation_policy" "my-elb-ssl-policy" {
name = "my-elb-ssl-policy"
load_balancer = "${aws_elb.my-elb.id}"
lb_port = 443
attribute {
name = "Reference-Security-Policy"
value = "ELBSecurityPolicy-TLS-1-2-2017-01"
}
}
Yes, you can define it, and the default security policy ELBSecurityPolicy-2016-08 already covers all the SSL protocols you asked for.
Secondly, Protocol-TLSv1.2 covers both of the policies you asked for (TLS-1-1-2017-01 and TLS-1-2-2017-01) as well
(http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html).
So make sure you enable it with the code below:
resource "aws_lb_ssl_negotiation_policy" "foo" {
...
attribute {
name = "Protocol-TLSv1.2"
value = "true"
}
}
