I need your help. I'm trying to create a bucket policy that denies uploads of any objects that aren't encrypted with either SSE-KMS or SSE-S3. I also use an S3 state bucket with a DynamoDB lock table, and the S3 backend is encrypted with a KMS key. When I apply this policy to the bucket that stores my Terraform state file, the policy is applied and Terraform then throws the following error:
│ Error: Failed to save state
│
│ Error saving state: failed to upload state: AccessDenied: Access Denied
│ status code: 403, request id: ************************, host id: **********************************
╵
╷
│ Error: Failed to persist state to backend
│
│ The error shown above has prevented Terraform from writing the updated state to the configured backend. To allow for recovery, the state has been written to the file "errored.tfstate" in the current working
│ directory.
│
│ Running "terraform apply" again at this point will create a forked state, making it harder to recover.
│
│ To retry writing this state, use the following command:
│ terraform state push errored.tfstate
│
My backend:
backend "s3" {
profile = "default"
bucket = "bucket_name"
key = "my_state.tfstate"
region = "region"
kms_key_id = "arn_to_key"
dynamodb_table = "state_table"
}
Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Sid_1",
      "Effect": "Deny",
      "Principal": {
        "AWS": [
          "arn_to_account_role",
          "arn_to_account_role"
        ]
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucket_name/*",
      "Condition": {
        "Null": {
          "s3:x-amz-server-side-encryption": "true"
        }
      }
    },
    {
      "Sid": "Sid_2",
      "Effect": "Deny",
      "Principal": {
        "AWS": [
          "arn_to_account_role",
          "arn_to_account_role"
        ]
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucket_name/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": [
            "aws:kms",
            "AES256"
          ]
        }
      }
    }
  ]
}
The Terraform version I'm using is v0.15.4, and the AWS provider version is ~> 3.20.0.
Try adding encrypt = true to your backend config. Without it, Terraform doesn't set the x-amz-server-side-encryption header when it writes the state object, so the Null condition in your Sid_1 Deny statement matches and the upload is rejected with AccessDenied:
backend "s3" {
profile = "default"
bucket = "bucket_name"
key = "my_state.tfstate"
region = "region"
kms_key_id = "arn_to_key"
dynamodb_table = "state_table"
encrypt = true
}
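Once the state upload succeeds, you can verify the object really is KMS-encrypted, e.g. (placeholder names taken from your backend config):
aws s3api head-object --bucket bucket_name --key my_state.tfstate
# the response should include "ServerSideEncryption": "aws:kms" and an SSEKMSKeyId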
Hi Stack Overflow community,
I have some Terraform code that needs access to an object in a bucket that is located in a different AWS account than the one I'm deploying the Terraform to.
The AWS S3 bucket is in us-west-2 and I'm deploying the Terraform in us-east-1 (I don't think this should matter).
I set up the following bucket level policy in the S3 bucket:
{
  "Version": "2012-10-17",
  "Id": "Policy1",
  "Statement": [
    {
      "Sid": "Stmt1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<aws-account-number-where-terraform-will-be-deployed>:user/<user-deploying-terraform>"
      },
      "Action": [
        "s3:GetObject*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>/*",
        "arn:aws:s3:::<bucket-name>"
      ]
    }
  ]
}
When I run the following AWS CLI command I'm able to get the bucket object using the user that will be deploying the Terraform:
aws s3api get-object --bucket "<bucket-name>" --key "<path-to-file>" "test.txt"
But when I run the following Terraform code:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "= 4.6.0"
    }
  }
}

data "aws_s3_object" "this" {
  bucket = "<bucket-name>"
  key    = "<path-to-file>"
}

output "test" {
  value = data.aws_s3_object.this.body
}
I get the following error:
Error: failed getting S3 Bucket (<bucket-name>) Object (<path-to-file>): BadRequest: Bad Request
status code: 400, request id: <id>, host id: <host-id>
with data.aws_s3_object.challenge_file,
on main.tf line 10, in data "aws_s3_object" "this":
10: data "aws_s3_object" "this" {
The provider configuration, as specified by AWS and HashiCorp, uses a single set of credentials, region, etc. Your provider is making the S3 call in us-east-1 while the bucket lives in us-west-2, which is why S3 answers with 400 Bad Request. You need a second provider configuration with an alias for the other region:
provider "aws" {
alias = "us-west-2"
region = "us-west-2"
}
data "aws_s3_object" "this" {
provider = aws.us-west-2
bucket = "<bucket-name>"
key = "<path-to-file>"
}
If your supplied credentials are not sufficient to retrieve information about the bucket in the other account, then the aliased provider configuration block will also need its own credentials.
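For example, a minimal sketch; the profile name here is hypothetical:
provider "aws" {
  alias   = "us-west-2"
  region  = "us-west-2"
  profile = "bucket-account" # hypothetical AWS CLI profile with credentials for the bucket's account
}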
I am trying to build a custom seccomp template for Azure Policy using Terraform and keep running into errors when adding multiple parameters, similar to how the built-in templates are structured. If I build these in Azure manually, I have no problems.
My Terraform is below. The error I keep getting in this example is:
╷
│ Error: creating/updating Policy Definition "k8s_seccomp_governance": policy.DefinitionsClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="InvalidPolicyRuleEffectDetails" Message="The policy definition 'k8s_seccomp_governance' rule is invalid. The policy effect 'details' property could not be parsed."
│
│ with azurerm_policy_definition.k8s_seccomp_governance,
│ on policy_definitions.tf line 1, in resource "azurerm_policy_definition" "k8s_seccomp_governance":
│ 1: resource "azurerm_policy_definition" "k8s_seccomp_governance" {
│
╵
Code:
resource "azurerm_policy_definition" "k8s_seccomp_governance" {
name = "k8s_seccomp_governance"
description = "Kubernetes cluster containers should only use allowed seccomp profiles"
policy_type = "Custom"
mode = "All"
display_name = "AMPS K8s Seccomp Governance"
metadata = <<METADATA
{
"category": "Kubernetes",
"version": "1.0.0"
}
METADATA
policy_rule = <<POLICY_RULE
{
"if": {
"field": "type",
"in": [
"AKS Engine",
"Microsoft.Kubernetes/connectedClusters",
"Microsoft.ContainerService/managedClusters"
]
},
"then": {
"effect": "[parameters('effect')]",
"details": {
"constraintTemplate": "https://store.policy.core.windows.net/kubernetes/allowed-seccomp-profiles/v2/template.yaml",
"constraint": "https://store.policy.core.windows.net/kubernetes/allowed-seccomp-profiles/v2/constraint.yaml",
"excludedNamespaces": "[parameters('excludedNamespaces')]"
}
}
}
POLICY_RULE
parameters = <<PARAMETERS
{
"effect": {
"type": "String",
"metadata": {
"displayName": "Effect",
"description": "'audit' allows a non-compliant resource to be created or updated, but flags it as non-compliant. 'deny' blocks the non-compliant resource creation or update. 'disabled' turns off the policy."
},
"allowedValues": ["audit", "deny","disabled"],
"defaultValue": "audit"
},
"excludedNamespaces": {
"type": "Array",
"metadata": {
"displayName": "Namespace exclusions",
"description": "List of Kubernetes namespaces to exclude from policy evaluation."
},
"defaultValue": ["kube-system", "gatekeeper-system", "azure-arc"]
}
}
PARAMETERS
}
To add: if I don't include the details property (which is what references the excludedNamespaces parameter), then I get this error instead:
╷
│ Error: creating/updating Policy Definition "k8s_seccomp_governance": policy.DefinitionsClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="UnusedPolicyParameters" Message="The policy 'k8s_seccomp_governance' has defined parameters 'excludedNamespaces' which are not used in the policy rule. Please either remove these parameters from the definition or ensure that they are used in the policy rule."
│
│ with azurerm_policy_definition.k8s_seccomp_governance,
│ on policy_definitions.tf line 1, in resource "azurerm_policy_definition" "k8s_seccomp_governance":
│ 1: resource "azurerm_policy_definition" "k8s_seccomp_governance" {
│
╵
I was able to resolve this. The problem was that I was using mode = "All" and needed to change it to mode = "Microsoft.Kubernetes.Data" for these to work.
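For reference, the only change to the resource above is the mode argument:
resource "azurerm_policy_definition" "k8s_seccomp_governance" {
  name         = "k8s_seccomp_governance"
  policy_type  = "Custom"
  mode         = "Microsoft.Kubernetes.Data" # was "All"
  display_name = "AMPS K8s Seccomp Governance"
  # ... metadata, policy_rule, and parameters unchanged ...
}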
When I run Terraform apply I get this cryptic error message and I can't figure out how to resolve it.
$ terraform apply "cms-container.plan"
aws_ecs_task_definition.dev-cms_task: Creating...
Error: ClientException: Role is not valid
on ecs.tf line 19, in resource "aws_ecs_task_definition" "dev-cms_task":
19: resource "aws_ecs_task_definition" "dev-cms_task" {
Below is the code definition that I am using.
resource "aws_ecs_task_definition" "dev-cms_task" {
family = var.ecs_task_family
container_definitions = data.template_file.container_definition.rendered
cpu = var.ecs_task_cpu
memory = var.ecs_task_memory
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
task_role_arn = "arn:aws:iam::<account #>:role/FargateTaskRole"
execution_role_arn = "arn:aws:iam::<account #>:role/Fargate-ECSTaskExecutionRole"
tags = var.resource_tags
}
System Details:
Terraform version: 1.0.5
OS: Windows 10
I have tried different versions of terraform and I have also tried using roles from a remote state file.
Role trust policy definition:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
For both the task role and the task execution role, the service that needs to be trusted is ecs-tasks.amazonaws.com rather than ecs.amazonaws.com.
So your trust relationship (or assume_role_policy on Terraform's aws_iam_role) needs to look like this:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The ecs.amazonaws.com service principal is reserved for when ECS itself needs to act as a service, such as with the service-linked role.
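In Terraform, that trust policy can be attached to the role like this (a minimal sketch; the role name is a placeholder):
resource "aws_iam_role" "fargate_task" {
  name = "FargateTaskRole" # placeholder name

  # ECS tasks, not the ECS service itself, are the trusted principal here.
  assume_role_policy = jsonencode({
    Version = "2008-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Principal = { Service = "ecs-tasks.amazonaws.com" }
        Action    = "sts:AssumeRole"
      }
    ]
  })
}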
I am deploying a Lambda function in each AWS Region of our account and am encountering a weird issue where the apply fails with the following error message in some of the AWS Regions.
Error while Terraform Apply
Error: Error creating Lambda function: ResourceConflictException: Function already exist: log-forwarder
{
RespMetadata: {
StatusCode: 409,
RequestID: "8cfd7260-7c4a-42d2-98c6-6619c7b2804f"
},
Message_: "Function already exist: log-forwarder",
Type: "User"
}
The Lambda function in question was just created by the same terraform apply that is failing.
terraform init and plan don't throw any errors about TF config issues; both run successfully.
Below is my directory structure
.
├── log_forwarder.tf
├── log_forwarder_lambdas
│ └── main.tf
└── providers.tf
Below is my providers.tf file
provider "aws" {
region = "us-east-1"
version = "3.9.0"
}
provider "aws" {
alias = "us-east-2"
region = "us-east-2"
version = "3.9.0"
}
provider "aws" {
alias = "us-west-2"
region = "us-west-2"
version = "3.9.0"
}
provider "aws" {
alias = "us-west-1"
region = "us-west-1"
version = "3.9.0"
}
provider "aws" {
alias = "ca-central-1"
region = "ca-central-1"
version = "3.9.0"
}
... with all the AWS Regions.
Below is the tf config of log_forwarder.tf
terraform {
  required_version = "0.12.25"

  backend "s3" {
    # ... all the backend config ...
  }
}

resource "aws_iam_role" "log_forwarder" {
  name               = "LogForwarder"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": ["lambda.amazonaws.com"]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}
resource "aws_iam_role_policy" "log_forwarder" {
name = "LogForwarder"
role = aws_iam_role.log_forwarder.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"lambda:ListTags",
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:*",
"arn:aws:lambda:*"
]
},
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction"
],
"Resource": "*"
},
{
"Sid": "AWSDatadogPermissionsForCloudtrail",
"Effect": "Allow",
"Action": ["s3:ListBucket", "s3:GetBucketLocation", "s3:GetObject", "s3:ListObjects"],
"Resource": [
"arn:aws:s3:::BucketName",
"arn:aws:s3:::BucketName/*"
]
}
]
}
EOF
}
module "DDLogForwarderUSEast1" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
region = "us-east-1"
}
module "DDLogForwarderUSEast2" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
providers = { aws = aws.us-east-2 }
region = "us-east-2"
}
module "DDLogForwarderUSWest1" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
providers = { aws = aws.us-west-1 }
region = "us-west-1"
}
module "DDLogForwarderUSWest2" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
region = "us-west-2"
providers = { aws = aws.us-west-2 }
}
module "DDLogForwarderAPEast1" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
providers = { aws = aws.ap-east-1 }
region = "ap-east-1"
}
module "DDLogForwarderAPSouth1" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
region = "ap-south-1"
providers = { aws = aws.ap-south-1 }
}
... All AWS Regions with different providers
TF Config of log_forwarder_lambdas/main.tf
variable "region" {}
variable "account_id" {
default = "AWS Account Id"
}
variable "dd_log_forwarder_role" {}
variable "exclude_at_match" {
default = "([A-Z]* RequestId: .*)"
}
data "aws_s3_bucket" "cloudtrail_bucket" {
count = var.region == "us-west-2" ? 1 : 0
bucket = "BucketName"
}
resource "aws_lambda_function" "log_forwarder" {
filename = "${path.cwd}/log_forwarder_lambdas/aws-dd-forwarder-3.16.3.zip"
function_name = "log-forwarder"
role = var.dd_log_forwarder_role
description = "Gathers logs from targetted Cloudwatch Log Groups and sends them to DataDog"
handler = "lambda_function.lambda_handler"
runtime = "python3.7"
timeout = 600
memory_size = 1024
layers = ["arn:aws:lambda:${var.region}:464622532012:layer:Datadog-Python37:11"]
environment {
variables = {
DD_ENHANCED_METRICS = false
EXCLUDE_AT_MATCH = var.exclude_at_match
}
}
}
resource "aws_cloudwatch_log_group" "log_forwarder" {
name = "/aws/lambda/${aws_lambda_function.log_forwarder.function_name}"
retention_in_days = 90
}
resource "aws_lambda_permission" "cloudtrail_bucket" {
count = var.region == "us-west-2" ? 1 : 0
statement_id = "AllowExecutionFromS3Bucket"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.log_forwarder.arn
principal = "s3.amazonaws.com"
source_arn = element(data.aws_s3_bucket.cloudtrail_bucket.*.arn, count.index)
}
resource "aws_s3_bucket_notification" "cloudtrail_bucket_notification" {
count = var.region == "us-west-2" ? 1 : 0
bucket = element(data.aws_s3_bucket.cloudtrail_bucket.*.id, count.index)
lambda_function {
lambda_function_arn = aws_lambda_function.log_forwarder.arn
events = ["s3:ObjectCreated:*"]
}
depends_on = [aws_lambda_permission.cloudtrail_bucket, aws_cloudwatch_log_group.log_forwarder]
}
I am using TF 0.12.25 in this case.
The things I have tried so far:
Removing the .terraform folder from the root module every time I run the terraform init/plan/apply cycle.
Refactoring the code as much as possible.
Running the TF plan/apply cycle locally, without any CI.
At first glance it looks as though the Lambda function may not be in your Terraform state (for whatever reason). Have you changed backends or deleted data off your backend?
Run terraform show and/or terraform state list and see if the conflicting Lambda function is in your state.
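For example, using the module addresses from the config above:
terraform state list
# if the function shows up under a module, inspect it:
terraform state show module.DDLogForwarderUSEast1.aws_lambda_function.log_forwarder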
If it is not, but it already exists in AWS, you can import it.
See here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function#import
Update:
As per your comment, since the resource exists in AWS but not in the state, this is an expected error. (Terraform doesn't know the resource exists, so it tries to create it; AWS knows it already exists, so it returns an error.)
You have two choices:
Delete the resource in AWS and run Terraform again; or
Import the existing resource into Terraform (recommended).
Try something like:
terraform import module.DDLogForwarderUSEast1.aws_lambda_function.log_forwarder log-forwarder
(Make sure you have the correct provider/region set up if trying this for other regions!)
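The same pattern should work for the other regional modules, since Terraform resolves the aliased provider from each module's providers map, e.g.:
terraform import module.DDLogForwarderUSEast2.aws_lambda_function.log_forwarder log-forwarder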
I want to create a list of S3 buckets and limit access to each one to a single user. That user should have access only to that bucket and no permissions to do other things in AWS.
I created my list like so (bucket names are not real in this example):
// List bucket names as a variable
variable "s3_bucket_name" {
  type = "list"
  default = [
    "myfirstbucket",
    "mysecondbucket",
    ...
  ]
}
Then I create a user for each bucket.
// Create a user
resource "aws_iam_user" "aws_aim_users" {
  count = "${length(var.s3_bucket_name)}"
  name  = "${var.s3_bucket_name[count.index]}"
  path  = "/"
}
I then create an access key.
// Create an access key
resource "aws_iam_access_key" "aws_iam_access_keys" {
  count = "${length(var.s3_bucket_name)}"
  user  = "${var.s3_bucket_name[count.index]}"
  // user = "${aws_iam_user.aws_aim_user.name}"
}
Now I create a user policy
// Add user policy
resource "aws_iam_user_policy" "aws_iam_user_policies" {
  // user = "${aws_iam_user.aws_aim_user.name}"
  count  = "${length(var.s3_bucket_name)}"
  name   = "${var.s3_bucket_name[count.index]}"
  user   = "${var.s3_bucket_name[count.index]}"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetLifecycleConfiguration",
        ...
      ],
      "Resource": "${var.s3_bucket_name[count.index].arn}}"
    }
  ]
}
EOF
}
Now I create my buckets with the user attached.
resource "aws_s3_bucket" "aws_s3_buckets" {
count = "${length(var.s3_bucket_name)}"
bucket = "${var.s3_bucket_name[count.index]}"
acl = "private"
policy = <<POLICY
{
"Id": "Policy1574607242703",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1574607238413",
"Action": [
"s3:PutObject"
],
"Effect": "Allow",
"Resource": {
"${var.s3_bucket_name[count.index].arn}}"
"${var.s3_bucket_name[count.index].arn}/*}"
},
"Principal": {
"AWS": "${var.s3_bucket_name[count.index]}"
}
}
]
}
POLICY
tags = {
Name = "${var.s3_bucket_name[count.index]}"
Environment = "live"
}
}
The problem I have is that Terraform doesn't like how I've set the ARN in the policy using my variable.
I also believe I need to use the user's ARN, not the bucket's, although they should have the same name. What am I doing wrong here?
I think I see a few things that might help you out.
The Resource entries in the bucket policy can't use var.s3_bucket_name[count.index].arn, because your variable is just a list of bucket names and has no .arn attribute; build the full ARN from the name instead, so it looks like "arn:aws:s3:::my-bucket".
I also see a few extra }'s in your setup there, which could also be causing problems.
Also, Terraform is now on version 0.12, which removes the need for the "${resource.thing}" interpolation syntax and replaces it with plain resource.thing. There's a helpful terraform 0.12upgrade command to run that upgrades the files, which is nice. With Terraform 0.12 they also adjusted how the kind of resource creation you're doing works: https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each/
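Putting those points together, a sketch of the bucket resource in 0.12 syntax might look like this (using the user's ARN as the Principal, per your own note):
resource "aws_s3_bucket" "aws_s3_buckets" {
  count  = length(var.s3_bucket_name)
  bucket = var.s3_bucket_name[count.index]
  acl    = "private"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1574607238413",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::${var.s3_bucket_name[count.index]}",
        "arn:aws:s3:::${var.s3_bucket_name[count.index]}/*"
      ],
      "Principal": {
        "AWS": "${aws_iam_user.aws_aim_users[count.index].arn}"
      }
    }
  ]
}
POLICY

  tags = {
    Name        = var.s3_bucket_name[count.index]
    Environment = "live"
  }
}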