Having trouble applying a bucket policy via Terraform - terraform

I had this working at one point, but I may have screwed something up, or this is a bug. I thought maybe it was a race condition and tried a few depends_on entries, but still no luck. I can't seem to figure this out, but I do know S3 bucket policies can be challenging with Terraform. Does anyone see anything obvious I am doing wrong?
resource "aws_s3_bucket_policy" "ct-s3-bucket-policy" {
  bucket = aws_s3_bucket.mylab-s3-bucket-ct.id
  policy = data.aws_iam_policy_document.default.json
}
resource "aws_cloudtrail" "mylab-cloudtrail" {
  name                          = "mylab-cloudtrail"
  s3_bucket_name                = aws_s3_bucket.mylab-s3-bucket-ct.id
  s3_key_prefix                 = "CT"
  include_global_service_events = true
  event_selector {
    read_write_type           = "All"
    include_management_events = true
    data_resource {
      type   = "AWS::S3::Object"
      values = ["arn:aws:s3:::"]
    }
  }
}
resource "aws_s3_bucket" "mylab-s3-bucket-ct" {
  bucket        = "mylab-s3-bucket-ct-1231764516123"
  force_destroy = true
}
resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.mylab-s3-bucket-ct.id
  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.s3-kms.arn
      sse_algorithm     = "aws:kms"
    }
  }
}
data "aws_iam_policy_document" "default" {
  statement {
    sid    = "AWSCloudTrailAclCheck"
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
    actions = [
      "s3:GetBucketAcl",
    ]
    resources = [
      "arn:aws:s3:::${var.cloudtrailbucketname}",
    ]
  }
  statement {
    sid    = "AWSCloudTrailWrite"
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
    actions = [
      "s3:PutObject",
    ]
    resources = [
      "arn:aws:s3:::${var.cloudtrailbucketname}/*",
    ]
    condition {
      test     = "StringEquals"
      variable = "s3:x-amz-acl"
      values = [
        "bucket-owner-full-control",
      ]
    }
  }
}
This is the error I see at the end. The bucket creates, but the policy won't attach.
╷
│ Error: Error putting S3 policy: MalformedPolicy: Policy has invalid resource
│ status code: 400, request id: HAK8J85M98TGTHQ4, host id: Qn2mqAJ+oKcFiCD52KfLG+10/binhRn2YUQX6MARTbW4MbV4n+P5neAXg8ikB7itINHOL07DV+I=
│
│ with aws_s3_bucket_policy.ct-s3-bucket-policy,
│ on main.tf line 126, in resource "aws_s3_bucket_policy" "ct-s3-bucket-policy":
│ 126: resource "aws_s3_bucket_policy" "ct-s3-bucket-policy" {
│
╵
╷
│ Error: Error creating CloudTrail: InsufficientS3BucketPolicyException: Incorrect S3 bucket policy is detected for bucket: mylab-s3-bucket-ct-1231764516123
│
│ with aws_cloudtrail.mylab-cloudtrail,
│ on main.tf line 131, in resource "aws_cloudtrail" "mylab-cloudtrail":
│ 131: resource "aws_cloudtrail" "mylab-cloudtrail" {
│
EDIT: For clarity, this ONLY happens on apply; planning works fine.

I believe you have to have a dependency between the bucket policy and the CloudTrail trail, like this:
resource "aws_cloudtrail" "mylab-cloudtrail" {
  name                          = "mylab-cloudtrail"
  s3_bucket_name                = aws_s3_bucket.mylab-s3-bucket-ct.id
  s3_key_prefix                 = "CT"
  include_global_service_events = true
  event_selector {
    read_write_type           = "All"
    include_management_events = true
    data_resource {
      type   = "AWS::S3::Object"
      values = ["arn:aws:s3:::"]
    }
  }
  depends_on = [
    aws_s3_bucket_policy.ct-s3-bucket-policy
  ]
}
If you don't have this dependency, Terraform will try to create the trail before having the necessary policy attached to the bucket.
Also, you would probably want to reference the bucket name in the policy and avoid using var.cloudtrailbucketname:
data "aws_iam_policy_document" "default" {
  statement {
    sid    = "AWSCloudTrailAclCheck"
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
    actions = [
      "s3:GetBucketAcl",
    ]
    resources = [
      "arn:aws:s3:::${aws_s3_bucket.mylab-s3-bucket-ct.id}" # Get the bucket name
    ]
  }
  statement {
    sid    = "AWSCloudTrailWrite"
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
    actions = [
      "s3:PutObject",
    ]
    resources = [
      "arn:aws:s3:::${aws_s3_bucket.mylab-s3-bucket-ct.id}/*", # Get the bucket name
    ]
    condition {
      test     = "StringEquals"
      variable = "s3:x-amz-acl"
      values = [
        "bucket-owner-full-control",
      ]
    }
  }
}

Original resource call:
"arn:aws:s3:::${var.cloudtrailbucketname}/*",
Changed to this and it worked. I reference the bucket's ARN attribute instead of building the string myself; for whatever reason, the string I built produced a malformed policy.
resources = ["${aws_s3_bucket.mylab-s3-bucket-ct.arn}/*"]
Thanks to Erin for pointing me in the right direction.
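Putting the pieces together, the policy document can derive both resource ARNs from the bucket's own arn attribute, so nothing has to be assembled by hand. This is a sketch of that final state, assuming the same resource names as above; only the resources lines change:

```terraform
data "aws_iam_policy_document" "default" {
  statement {
    sid    = "AWSCloudTrailAclCheck"
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
    actions = ["s3:GetBucketAcl"]
    # Reference the bucket ARN directly instead of building the string
    resources = [aws_s3_bucket.mylab-s3-bucket-ct.arn]
  }
  statement {
    sid    = "AWSCloudTrailWrite"
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
    actions = ["s3:PutObject"]
    # Objects under the bucket: ARN plus "/*"
    resources = ["${aws_s3_bucket.mylab-s3-bucket-ct.arn}/*"]
    condition {
      test     = "StringEquals"
      variable = "s3:x-amz-acl"
      values   = ["bucket-owner-full-control"]
    }
  }
}
```

This also gives Terraform an implicit dependency from the policy document to the bucket, so the bucket is guaranteed to exist before the policy is rendered.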

Related

Using Terraform 1.1.x, nested block inputs are not being iterated, only applied once

I am working on automating some PagerDuty resources and started with service dependencies. What I thought was a fairly straightforward problem has proven not to be. When we create service dependencies, we want to be able to attach many dependencies, either ones that the service USES or ones that the service is USED BY. With this in mind, I created the following resource block in Terraform.
resource "pagerduty_service_dependency" "this" {
  count = var.create_dependencies ? 1 : 0
  dynamic "dependency" {
    for_each = var.dependency
    content {
      type = dependency.value.type
      dynamic "dependent_service" {
        for_each = dependency.value.dependent_service
        content {
          id   = dependent_service.value.id
          type = dependent_service.value.type
        }
      }
      dynamic "supporting_service" {
        for_each = dependency.value.supporting_service
        content {
          id   = supporting_service.value.id
          type = supporting_service.value.type
        }
      }
    }
  }
}
With this block as the variable
variable "dependency" {
  description = "value"
  type        = any
  default     = []
}
And using this block for inputs (using Terragrunt 0.36.x):
// Service Dependencies
create_dependencies = true
dependency = [{
  type = "service"
  dependent_service = [
    {
      id   = "DD4V04U" // The service we are applying the dependency to.
      type = "service"
    }
  ]
  supporting_service = [
    {
      id   = "BBF2LHB" // The service that is being used by "DD4V04U"
      type = "service"
    },
    {
      id   = "AAILTY1" // Another service being used by "DD4V04U"
      type = "service"
    }
  ]
}]
Terraform will apply the change but will only create the last dependency in the list of objects. Subsequent additions to this list result in all dependencies before the last object being destroyed. This applies to both the dependent and supporting service lists of objects.
The original path I went down in terms of providing inputs looked like this:
// Service Dependencies
create_dependencies = true
dependency = [
  {
    type = "service"
    dependent_service = [{
      id   = "DD4V04U"
      type = "service"
    }],
    supporting_service = [{
      id   = "BBF2LHB"
      type = "service"
    }]
  },
  {
    type = "service"
    dependent_service = [{
      id   = "DD4V04U"
      type = "service"
    }],
    supporting_service = [{
      id   = "BBF2LHB"
      type = "service"
    }]
  }
]
...and Terraform responds:
╷
│ Error: Too many dependency blocks
│
│ on main.tf line 160, in resource "pagerduty_service_dependency" "this":
│ 160: content {
│
│ No more than 1 "dependency" blocks are allowed
╵
Which seems to go against what the API documentation states can be done.
Any insight here as to what I am missing would be highly appreciated. Thanks for looking.
Cheers
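Given the "No more than 1 "dependency" blocks are allowed" error, the provider schema appears to accept exactly one dependency pair per resource, so one workaround is to create one pagerduty_service_dependency resource per pair with for_each rather than packing many pairs into dynamic blocks. This is only a sketch under that assumption; the local.supporting_services map and the hard-coded IDs are illustrative:

```terraform
# Hypothetical map: supporting service ID => type
locals {
  supporting_services = {
    "BBF2LHB" = "service"
    "AAILTY1" = "service"
  }
}

# One resource instance per supporting service, since each resource
# may carry only a single "dependency" block.
resource "pagerduty_service_dependency" "this" {
  for_each = local.supporting_services

  dependency {
    type = "service"
    dependent_service {
      id   = "DD4V04U" # the service the dependency applies to
      type = "service"
    }
    supporting_service {
      id   = each.key
      type = each.value
    }
  }
}
```

Adding or removing entries in the map then adds or removes individual dependency resources without destroying the others.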

cycle error when creating terraform code for sqs and its access policy

I am trying to create an SQS queue and attach an access policy to it. The policy is of type "data": no actual resource is created; it's just attached to the newly created SQS queue.
╷
│ Error: Cycle: data.aws_iam_policy_document.sqs_vote_policy, aws_sqs_queue.sqs_vote
│
the tf code:
resource "aws_sqs_queue" "sqs_vote" {
  name                      = "sqs-erjan"
  delay_seconds             = 0
  message_retention_seconds = 86400
  receive_wait_time_seconds = 0
  policy                    = data.aws_iam_policy_document.sqs_vote_policy.json
}
data "aws_iam_policy_document" "sqs_vote_policy" {
  policy_id = "__default_policy_ID"
  statement {
    sid       = "__console_sub_0"
    actions   = ["SQS:SendMessage"]
    resources = [aws_sqs_queue.sqs_vote.arn]
    principals {
      type        = "AWS"
      identifiers = ["*"]
    }
    effect = "Allow"
    condition {
      test     = "ArnLike"
      variable = "AWS:SourceArn"
      values = [
        aws_sns_topic.vote_sns.arn
      ]
    }
  }
  statement {
    sid       = "__owner_statement"
    actions   = ["SQS:*"]
    resources = [aws_sqs_queue.sqs_vote.arn]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::025416187662:root"]
    }
    effect = "Allow"
  }
  # I put depends_on to make sure it runs first - but it still gives a cycle error
  depends_on = [
    aws_sqs_queue.sqs_vote, aws_sns_topic.vote_sns
  ]
}
How do I fix it?
Change aws_sqs_queue to:
resource "aws_sqs_queue" "sqs_vote" {
  name                      = "sqs-erjan"
  delay_seconds             = 0
  message_retention_seconds = 86400
  receive_wait_time_seconds = 0
}
and use aws_sqs_queue_policy to attach the policy to the queue. This breaks the cycle: the queue no longer references the policy document, so the document is free to reference the queue's ARN.
resource "aws_sqs_queue_policy" "test" {
  queue_url = aws_sqs_queue.sqs_vote.id
  policy    = data.aws_iam_policy_document.sqs_vote_policy.json
}

Produce repeating blocks inside a terraform resource

I am fairly new to Terraform and am trying to create a google_compute_backend_service. There are multiple backend blocks inside the resource, as shown below:
resource "google_compute_backend_service" "app-backend" {
  log_config {
    enable      = "true"
    sample_rate = "1"
  }
  name             = "app-backend"
  port_name        = "http-34070"
  project          = "my-project"
  protocol         = "HTTP"
  session_affinity = "NONE"
  timeout_sec      = "30"
  backend {
    group = "instance-group1"
  }
  backend {
    group = "instance-group2"
  }
  backend {
    group = "instance-group3"
  }
  health_checks = [google_compute_http_health_check.app-http-l7.name]
}
As seen in the code block above the backend block repeats multiple times. I want to make it dynamic so I do not have to write multiple blocks manually.
I tried the following:
Created a variable in the variables.tf file that contains all the instance groups:
variable "groups" {
  type = list(object({
    name = string
  }))
  default = [
    { name = "instance-group1" },
    { name = "instance-group2" },
    { name = "instance-group3" }
  ]
}
And modified my resource block to this:
resource "google_compute_backend_service" "app-backend" {
  log_config {
    enable      = "true"
    sample_rate = "1"
  }
  name             = "app-backend"
  port_name        = "http-34070"
  project          = "my-project"
  protocol         = "HTTP"
  session_affinity = "NONE"
  timeout_sec      = "30"
  dynamic "backend" {
    for_each = var.groups
    iterator = item
    group    = item.value.name
  }
  health_checks = [google_compute_http_health_check.app-http-l7.name]
}
However, when I execute terraform plan I get the following error:
Error: Unsupported argument
│
│ on backend_service.tf line 15, in resource "google_compute_backend_service" "app-backend":
│ 15: group = item.value.name
│
│ An argument named "group" is not expected here.
Where am I going wrong? Is there a better way to achieve this?
You can check the dynamic blocks documentation for the syntax. Otherwise, you had the right idea.
dynamic "backend" {
  for_each = var.groups
  content {
    group = backend.value.name
  }
}
You can also simplify the variable structure to make this even easier.
variable "groups" {
  type    = set(string)
  default = ["instance-group1", "instance-group2", "instance-group3"]
}
dynamic "backend" {
  for_each = var.groups
  content {
    group = backend.value
  }
}

Because data.zabbix_template.template has "for_each" set, its attributes must be accessed on specific instances

I'm trying to create a Zabbix template with applications and a trigger defined.
I can create the template, import my hosts and associate to it.
Now, when I try to add the trigger to the template, I receive the error below.
This is my data.tf:
data "zabbix_hostgroup" "group" {
  name = "Templates"
}
data "zabbix_template" "template" {
  for_each = {
    common_simple  = { name = "Common Simple" }
    common_snmp    = { name = "Common SNMP" }
    class_template = { name = var.class_names[var.class_id] }
  }
  name = each.value.name
}
data "zabbix_proxy" "proxy" {
  for_each = {
    for inst in var.instances :
    "${inst.instance}.${inst.site}" => inst.site
  }
  #host = "zabpxy01.${each.value}.mysite.local"
  host = "mon-proxy1.${each.value}.mtsite.local"
}
And this is my hosts.tf:
# create host group specific to the service
resource "zabbix_hostgroup" "hostgroup" {
  name = var.class_names[var.class_id]
}
# create template
resource "zabbix_template" "template" {
  host        = var.class_id
  name        = var.class_names[var.class_id]
  description = var.class_names[var.class_id]
  groups = [
    data.zabbix_hostgroup.group.id
  ]
}
# create application
resource "zabbix_application" "application" {
  hostid = data.zabbix_template.template.id
  name   = var.class_names[var.class_id]
}
# create snmp disk_total item
resource "zabbix_item_snmp" "disk_total_item" {
  hostid    = data.zabbix_template.template.id
  key       = "snmp_disk_root_total"
  name      = "Disk / total"
  valuetype = "unsigned"
  delay     = "1m"
  snmp_oid  = "HOST-RESOURCES-MIB::hrStorageSize[\"index\", \"HOST-RESOURCES-MIB::hrStorageDescr\", \"/\"]"
  depends_on = [
    data.zabbix_template.template
  ]
}
# create snmp disk_used item
resource "zabbix_item_snmp" "disk_used_item" {
  hostid    = data.zabbix_template.template.id
  key       = "snmp_disk_root_used"
  name      = "Disk / used"
  valuetype = "unsigned"
  delay     = "1m"
  snmp_oid  = "HOST-RESOURCES-MIB::hrStorageUsed[\"index\", \"HOST-RESOURCES-MIB::hrStorageDescr\", \"/\"]"
  depends_on = [
    data.zabbix_template.template
  ]
}
# create trigger > 75%
resource "zabbix_trigger" "trigger" {
  name          = "Disk Usage 75%"
  expression    = "({${data.zabbix_template.template.host}:${zabbix_item_snmp.disk_used_item.key}.last()} / {${data.zabbix_template.template.host}:${zabbix_item_snmp.disk_total_item.key}.last()}) * 100 >= 75"
  priority      = "warn"
  enabled       = true
  multiple      = false
  recovery_none = false
  manual_close  = false
}
# create hosts
resource "zabbix_host" "host" {
  for_each = {
    for inst in var.instances : "${var.class_id}${format("%02d", inst.instance)}.${inst.site}" => inst
  }
  host    = var.ip_addresses[var.class_id][each.value.site][each.value.instance]["hostname"]
  name    = var.ip_addresses[var.class_id][each.value.site][each.value.instance]["hostname"]
  enabled = false
  proxyid = data.zabbix_proxy.proxy["${each.value.instance}.${each.value.site}"].id
  groups = [
    zabbix_hostgroup.hostgroup.id
  ]
  templates = concat([
    data.zabbix_template.template["common_simple"].id,
    data.zabbix_template.template["common_snmp"].id,
    zabbix_template.template.id
  ])
  # add SNMP interface
  interface {
    type = "snmp"
    ip   = var.ip_addresses[var.class_id][each.value.site][each.value.instance]["mgmt0"]
    main = true
    port = 161
  }
  # add Zabbix Agent interface
  interface {
    type = "agent"
    ip   = var.ip_addresses[var.class_id][each.value.site][each.value.instance]["mgmt0"]
    main = true
    port = 10050
  }
  macro {
    name  = "{$INTERFACE_MONITOR}"
    value = var.ip_addresses[var.class_id][each.value.site][each.value.instance]["mgmt0"]
  }
  macro {
    name  = "{$SNMP_COMMUNITY}"
    value = var.ip_addresses[var.class_id][each.value.site][each.value.instance]["snmp"]
  }
  depends_on = [
    zabbix_hostgroup.hostgroup,
    data.zabbix_template.template,
    data.zabbix_proxy.proxy,
  ]
}
output "class_template_id" {
  value       = zabbix_template.template.id
  description = "Template ID of created class template for items"
}
When I run terraform plan, I receive the error:
Error: Missing resource instance key

  on hosts/hosts.tf line 26, in resource "zabbix_application" "application":
  26: hostid = data.zabbix_template.template.id

Because data.zabbix_template.template has "for_each" set, its attributes must
be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
  data.zabbix_template.template[each.key]
Where is my error?
Thanks for the support
UPDATE
I tried to use
output "data_zabbix_template" {
  value = data.zabbix_template.template
}
but I don't see any output when I run terraform plan
I tried modifying it to:
hostid = data.zabbix_template.template.class_template.id
but I continue to receive the same error:
Error: Missing resource instance key

  on hosts/hosts.tf line 27, in resource "zabbix_application" "application":
  27: hostid = data.zabbix_template.template.class_template.id

Because data.zabbix_template.template has "for_each" set, its attributes must
be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
  data.zabbix_template.template[each.key]

Error: Unsupported attribute

  on hosts/hosts.tf line 27, in resource "zabbix_application" "application":
  27: hostid = data.zabbix_template.template.class_template.id

This object has no argument, nested block, or exported attribute named "class_template".
UPDATE:
For each host that I add, my script assigns two existing templates ("Common Simple" and "Common SNMP") and creates a new template, as below:
# module.mytemplate-servers_host.zabbix_template.template will be created
+ resource "zabbix_template" "template" {
    + description = "mytemplate-servers"
    + groups      = [
        + "1",
      ]
    + host        = "mytemplate-servers"
    + id          = (known after apply)
    + name        = "mytemplate-servers"
  }
Now my goal is to add an application to this template and set two items and one trigger.
When you use for_each in a data source or resource, its output is a map: the keys are the same as the keys of the for_each, and the values are the regular output of that data source/resource for the input value with that key.
Try using:
output "data_zabbix_template" {
  value = data.zabbix_template.template
}
And you'll see what I mean. The output will look something like:
data_zabbix_template = {
  common_simple  = {...}
  common_snmp    = {...}
  class_template = {...}
}
So in order to use this data source (on the line where the error is being thrown), you need to index a specific instance with bracket syntax:
hostid = data.zabbix_template.template["common_simple"].id
And replace common_simple in that line with whichever key in the for_each you want to use. You'll need to do this everywhere that you use data.zabbix_template.template.
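Applied to the failing resource from the question, that looks like the sketch below. The "class_template" key is an assumption about which template is intended here; note that since hosts.tf also creates its own zabbix_template.template resource, referencing zabbix_template.template.id instead may be what's actually wanted:

```terraform
# create application, indexing one instance of the for_each data source
resource "zabbix_application" "application" {
  hostid = data.zabbix_template.template["class_template"].id
  name   = var.class_names[var.class_id]
}
```

The same bracket indexing applies to the hostid in both zabbix_item_snmp resources and the template references in the trigger expression.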

Terraform AWS IAM Iterate Over Rendered JSON Policies

How can I iterate over the JSON rendered data.aws_iam_policy_document documents within an aws_iam_policy?
data "aws_iam_policy_document" "role_1" {
  statement {
    sid = "CloudFront1"
    actions = [
      "cloudfront:ListDistributions",
      "cloudfront:ListStreamingDistributions"
    ]
    resources = ["*"]
  }
}
data "aws_iam_policy_document" "role_2" {
  statement {
    sid = "CloudFront2"
    actions = [
      "cloudfront:CreateInvalidation",
      "cloudfront:GetDistribution",
      "cloudfront:GetInvalidation",
      "cloudfront:ListInvalidations"
    ]
    resources = ["*"]
  }
}
variable "role_policy_docs" {
  type        = list(string)
  description = "Policies associated with Role"
  default = [
    "data.aws_iam_policy_document.role_1.json",
    "data.aws_iam_policy_document.role_2.json",
  ]
}
locals {
  role_policy_docs = { for s in var.role_policy_docs : index(var.role_policy_docs, s) => s }
}
resource "aws_iam_policy" "role" {
  for_each    = local.role_policy_docs
  name        = format("RolePolicy-%02d", each.key)
  description = "Custom Policies for Role"
  policy      = each.value
}
resource "aws_iam_role_policy_attachment" "role" {
  for_each   = { for p in aws_iam_policy.role : p.name => p.arn }
  role       = aws_iam_role.role.name
  policy_arn = each.value
}
This example has been reduced down to the very basics. The policy documents are dynamically generated with the source_json and override_json conventions. I cannot simply combine the statements into a single policy document.
Terraform Error:
Error: "policy" contains an invalid JSON policy
on role.tf line 35, in resource "aws_iam_policy" "role":
35: policy = each.value
This:
variable "role_policy_docs" {
  type        = list(string)
  description = "Policies associated with Role"
  default = [
    "data.aws_iam_policy_document.role_1.json",
    "data.aws_iam_policy_document.role_2.json",
  ]
}
Is literally defining those default values as strings, so what you're getting is this:
+ role_policy_docs = {
    + 0 = "data.aws_iam_policy_document.role_1.json"
    + 1 = "data.aws_iam_policy_document.role_2.json"
  }
If you tried removing the quotation marks around the data references, it would still not be valid, because you cannot use variables in default definitions. Instead, assign your policy documents to a new local, and use that local in your for expression:
locals {
  role_policies = [
    data.aws_iam_policy_document.role_1.json,
    data.aws_iam_policy_document.role_2.json,
  ]
  role_policy_docs = {
    for s in local.role_policies :
    index(local.role_policies, s) => s
  }
}
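With that local in place, the rest of the configuration from the question can stay as it was, now iterating over local.role_policy_docs instead of var.role_policy_docs. A sketch, using the same resource names as the question:

```terraform
resource "aws_iam_policy" "role" {
  for_each    = local.role_policy_docs
  name        = format("RolePolicy-%02d", each.key)
  description = "Custom Policies for Role"
  policy      = each.value # now the rendered JSON, not a literal string
}
```

Each value in the map is now the rendered JSON of a policy document, so the "contains an invalid JSON policy" error goes away.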
