How can you ignore data resources in a terraform plan that does not run against the real AWS infrastructure?

I would like to run terraform plan as a validation step that doesn't touch the real AWS infrastructure.
My provider configuration for this test is:
provider "aws" {
region = "eu-central-1"
access_key = "mock_access_key"
secret_key = "mock_secret_key"
skip_credentials_validation = true
skip_region_validation = true
skip_requesting_account_id = true
skip_metadata_api_check = true
s3_force_path_style = true
endpoints {
apigateway = "http://localhost:4566"
cloudformation = "http://localhost:4566"
cloudwatch = "http://localhost:4566"
dynamodb = "http://localhost:4566"
es = "http://localhost:4566"
firehose = "http://localhost:4566"
iam = "http://localhost:4566"
kinesis = "http://localhost:4566"
lambda = "http://localhost:4566"
route53 = "http://localhost:4566"
redshift = "http://localhost:4566"
s3 = "http://localhost:4566"
secretsmanager = "http://localhost:4566"
ses = "http://localhost:4566"
sns = "http://localhost:4566"
sqs = "http://localhost:4566"
ssm = "http://localhost:4566"
stepfunctions = "http://localhost:4566"
sts = "http://localhost:4566"
}
}
Here is a small terraform code example.
# terraform code
data "aws_caller_identity" "current" {}

resource "aws_iam_role" "iam_for_lambda" {
  name = "lambda-role-name"

  inline_policy {
    name = "sts"
    policy = jsonencode({
      Version = "2012-10-17"
      Statement = [
        {
          Action   = ["sts:AssumeRole"]
          Effect   = "Allow"
          Resource = "arn:aws:iam::${data.aws_caller_identity.current.id}:role/myrole"
        },
      ]
    })
  }
}
Everything works as expected except for the data sources (from the Terraform AWS provider). At the moment, Terraform tries to resolve the data sources during the plan and aborts with the following error message.
module.moduletest.data.aws_caller_identity.current: Still reading... [20s elapsed]
╷
│ Error: reading STS Caller Identity
│
│ with module.moduletest.data.aws_caller_identity.current,
│ on ../data.tf line 2, in data "aws_caller_identity" "current":
│ 2: data "aws_caller_identity" "current" {}
│
│ RequestError: send request failed
│ caused by: Post "http://localhost:4566/": dial tcp 127.0.0.1:4566: connect: connection refused
How can you mock the data sources - lookups on AWS resources? Are any additional settings necessary in my provider config?
EDIT
Currently, the AWS configuration is set via environment variables.
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalxxxxEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=eu-central-1
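
One workaround (a sketch, not an official provider feature; the variable mock_account_id and the local account_id are names introduced here purely for illustration) is to make the account ID injectable so that a validation plan never has to resolve the data source. Since Terraform 0.13, data sources accept count, so the lookup can be gated behind the variable:
variable "mock_account_id" {
  description = "Set to a fake account ID (e.g. 000000000000) to skip the STS lookup"
  type        = string
  default     = ""
}

# Only instantiate the data source when no mock value is supplied
data "aws_caller_identity" "current" {
  count = var.mock_account_id == "" ? 1 : 0
}

locals {
  account_id = var.mock_account_id != "" ? var.mock_account_id : data.aws_caller_identity.current[0].account_id
}

# References then use local.account_id instead of data.aws_caller_identity.current.id:
# Resource = "arn:aws:iam::${local.account_id}:role/myrole"
A validation-only plan can then run as terraform plan -var mock_account_id=000000000000, while real runs simply omit the variable.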

Related

Terraform: Set an AWS Resource's provider value via a module variable

I have created a module I want to use across multiple providers (just two AWS providers for two regions). How can I set a resource's provider value via a variable from a calling module? I am calling a module, codebuild.tf (which I want to be region-agnostic), from a MGMT module named cicd.tf - Folder structure:
main.tf
/MGMT/
-> cicd.tf
/modules/codebuild/
-> codebuild.tf
main.tf:
terraform {
  required_version = ">= 1.0.10"

  backend "s3" {
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# default AWS provider for MGMT resources in us-east-1 and global
provider "aws" {
  region = "us-east-1"
}

# DEV Account resources in us-east-1 and global
provider "aws" {
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::accountid:role/dev-rolename"
  }
  alias = "dev_us-east-1"
}

# DEV Account resources in us-west-2 and global
provider "aws" {
  region = "us-west-2"
  assume_role {
    role_arn = "arn:aws:iam::accountid:role/dev-rolename"
  }
  alias = "dev_us-west-2"
}

module "MGMT" {
  source  = "./MGMT"
  count   = var.aws_env == "MGMT" ? 1 : 0
  aws_env = var.aws_env
}
When I build my TF, it's under the MGMT AWS account, which uses the default aws provider that doesn't have an alias. I am then trying to set a provider with an AWS IAM role (that's cross-account) when I am calling the module (I made the resource a module because I want to run it in multiple regions):
/MGMT/cicd.tf:
# DEV in cicd.tf
# create the codebuild resource in the assumed role's us-east-1 region
module "dev_cicd_iac_us_east_1" {
  source             = "../modules/codebuild/"
  input_aws_provider = "aws.dev_us-east-1"
  input_aws_env      = var.dev_aws_env
}

# create the codebuild resource in the assumed role's us-west-2 region
module "dev_cicd_iac_us_west_2" {
  source             = "../modules/codebuild/"
  input_aws_provider = "aws.dev_us-west_2"
  input_aws_env      = var.dev_aws_env
}
/modules/codebuild/codebuild.tf:
# Code Build resource here
variable "input_aws_provider" {}
variable "input_aws_env" {}

resource "aws_codebuild_project" "codebuild-iac" {
  provider = tostring(var.input_aws_provider) # trying to make it a string; with just the var there it looks for a var provider
  name     = "${var.input_aws_env}-CodeBuild-IaC"
  # etc...
}
I get the following error when I plan the above:
│ Error: Invalid provider reference
│ On modules/codebuild/codebuild.tf line 25: Provider argument requires
│ a provider name followed by an optional alias, like "aws.foo".
How can I make the provider value a proper reference to the aws provider defined in main.tf while still using a MGMT folder/module file named cicd.tf?
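The provider meta-argument cannot be an expression, so no amount of string conversion will make this work. A sketch of the documented alternative (not shown in the original question) is to pass the provider configuration itself through the providers map on the module block and drop the variable entirely:
# In /MGMT/cicd.tf - pass the provider, not a string:
module "dev_cicd_iac_us_east_1" {
  source = "../modules/codebuild/"
  providers = {
    aws = aws.dev_us-east-1
  }
  input_aws_env = var.dev_aws_env
}

# In /modules/codebuild/codebuild.tf - remove the provider argument;
# every aws_* resource in the module then uses what was passed in as "aws":
resource "aws_codebuild_project" "codebuild-iac" {
  name = "${var.input_aws_env}-CodeBuild-IaC"
  # etc...
}
Note that because cicd.tf itself lives inside the MGMT module, the aliased providers defined in the root main.tf must in turn be passed down into MGMT through its own providers map (declared via configuration_aliases on Terraform 0.15+) before they can be referenced there.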

Terraform and AWS Assume Role

Given the existence of first_profile in ~/.aws/credentials:
[first_profile]
aws_access_key_id=ACOYHFVDLCHVNOISYGV
aws_secret_access_key=RApidgudsphAFdIK+097dslvxchnv
and a backend_role whose role_arn is arn:aws:iam::123456789101:role/roleA in ~/.aws/config
[profile roleA]
role_arn=arn:aws:iam::123456789101:role/roleA
source_profile=first_profile
Using the AWS CLI, I confirm that first_profile can assume backend_role and has permissions on an S3 bucket and DynamoDB table by running:
aws s3 ls s3://random-tf-state-bucket --profile backend_role
aws dynamodb describe-table --table-name random-tf-state-lock-table --profile backend_role --region us-east-2
The above commands do not return AccessDenied, thus confirming access.
Expectation:
According to the Terraform documentation/blog, and given a main.tf file set up like the one below:
terraform {
  required_version = "1.0.4"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.53.0"
    }
  }
}

terraform {
  backend "s3" {
  }
}

provider "aws" {
  region                  = "us-east-1"
  profile                 = "first_profile"
  shared_credentials_file = "~/.aws/credentials"

  assume_role {
    role_arn = "arn:aws:iam::123456789101:role/roleA"
  }
}
and s3.backend.tfvars file:
bucket = "random-tf-state-bucket"
key = "terraform.tfstate"
region = "us-east-2"
dynamodb_table = "random-tf-state-lock-table"
encrypt = true
running terraform init -backend-config=s3.backend.tfvars should work.
Result:
Initializing the backend...
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Question:
What step in this process am I missing?
A similar issue reported here was helpful in getting to a solution.
Solution:
The key to this was realizing that the profile used to configure the S3 backend is its own thing - it is not tied to the provider block.
Thus s3.backend.tfvars ends up like this:
bucket = "random-tf-state-bucket"
key = "terraform.tfstate"
region = "us-east-2"
dynamodb_table = "random-tf-state-lock-table"
encrypt = true
profile = "roleA"
and the provider block ended up looking like:
provider "aws" {
region = var.aws_region
profile = var.some_other_profile
assume_role {
role_arn = "some_other_role_to_assume"
}
}
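As a quick sanity check (an illustrative command, not part of the original answer), one can confirm that the backend profile resolves credentials on its own before re-running the init:
aws sts get-caller-identity --profile roleA
terraform init -backend-config=s3.backend.tfvars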

Terraform scripts throw "Invalid AWS Region: {var.AWS_REGION}"

when I run "terraform apply" I am getting the following error. I made sure my AMI is in us-west-1 region.
not sure what else could be the problem
PS C:\terraform> terraform apply
Error: Invalid AWS Region: {var.AWS_REGION}
terraform.tfvars file
AWS_ACCESS_KEY="zzz"
AWS_SECRET_KEY="zzz"
provider.tf file
provider "aws"{
access_key = "{var.AWS_ACCESS_KEY}"
secret_key = "{var.AWS_SECRECT_KEY}"
region = "{var.AWS_REGION}"
}
vars.tf file
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}
variable "AWS_REGION" {
default = "us-west-1"
}
variable "AMIS"{
type = map(string)
default ={
us-west-1 = "ami-0948be9af4ee55d19"
}
}
instance.tf
resource "aws_instance" "example"{
ami = "lookup(var.AMIS,var.AWS_REGION)"
instance_type = "t2.micro"
}
You are literally passing the strings "{var.AWS_ACCESS_KEY}", "{var.AWS_SECRET_KEY}" and "{var.AWS_REGION}" to the provider; the ami = "lookup(...)" line has the same problem, since it is a string literal rather than a function call.
Try this if you are using Terraform 0.12+:
provider "aws"{
access_key = var.AWS_ACCESS_KEY
secret_key = var.AWS_SECRET_KEY
region = var.AWS_REGION
}
If you are using a Terraform version older than 0.12, it should be set like this, using "${...}" interpolation inside quotes:
provider "aws"{
access_key = ${var.AWS_ACCESS_KEY}
secret_key = ${var.AWS_SECRET_KEY}
region = ${var.AWS_REGION}
}

Problems creating cloudwatch subscription filter using terraform and localstack

I'm trying to get CloudWatch to invoke a Lambda function using Terraform and localstack.
I'm using the aws_cloudwatch_log_subscription_filter resource, but every time I run terraform apply I get the following error:
Error: Error creating Cloudwatch log subscription filter: InvalidParameterException:
Could not execute the lambda function.
Make sure you have given CloudWatch Logs permission to execute your function.
I understand that I need to use the aws_lambda_permission resource, but something is failing and I can't figure out what it is. My best guess is that I'm missing some role or permissions. I'm a bit new to most of this so I'm probably just missing something obvious. Here are my configuration files:
docker-compose.yml - For running localstack
version: "2.1"
services:
localstack:
image: localstack/localstack:0.11.2
ports:
- "4566:4566"
- "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
environment:
- SERVICES=iam,lambda,logs
- DEBUG=${DEBUG- }
- DATA_DIR=${DATA_DIR- }
- PORT_WEB_UI=${PORT_WEB_UI- }
- LAMBDA_EXECUTOR=docker-reuse
- KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
main.tf
# Localstack provider
provider "aws" {
  profile    = "local"
  region     = "us-east-1"
  access_key = "fake"
  secret_key = "fake"

  s3_force_path_style         = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    lambda         = "http://localhost:4566"
    iam            = "http://localhost:4566"
    cloudwatch     = "http://localhost:4566"
    cloudwatchlogs = "http://localhost:4566"
  }
}
# IAM Role for the Lambda Function
resource "aws_iam_role" "iam_for_lambda" {
  name        = "iam_for_lambda"
  description = "Just a test role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

variable "lambda_zip" {
  default = "dist/lambda.zip"
}
# This is the lambda function that is generating the cloudwatch logs
resource "aws_lambda_function" "simple_log" {
  function_name    = "simple_log"
  filename         = var.lambda_zip
  handler          = "myapp.lambdas.do_simple_log"
  runtime          = "python3.7"
  source_code_hash = filebase64sha256(var.lambda_zip)
  role             = aws_iam_role.iam_for_lambda.arn
}

# This is the lambda I want to trigger when cloudwatch logs trigger
resource "aws_lambda_function" "log_metrics" {
  function_name    = "log_metrics"
  filename         = var.lambda_zip
  handler          = "myapp.lambdas.collect_log_metrics"
  runtime          = "python3.7"
  source_code_hash = filebase64sha256(var.lambda_zip)
  role             = aws_iam_role.iam_for_lambda.arn
}
# Pretty sure this is broken, but I don't know why.
resource "aws_cloudwatch_log_subscription_filter" "logs_to_initial_lambda" {
  name            = "logs_to_initial_lambda"
  log_group_name  = "/aws/lambda/simple_log"
  destination_arn = aws_lambda_function.log_metrics.arn
  filter_pattern  = ""
}
# These are all the various permissions I've tried with no success.
# They all have the same error.
resource "aws_lambda_permission" "allow_cloudwatch1" {
  statement_id = "my_id_1"
  action       = "lambda:InvokeFunction"
  # Tried "000000000000", also didn't work
  principal = "logs.amazonaws.com"
  # Tried aws_lambda_function.log_metrics.function_name
  function_name = aws_lambda_function.log_metrics.arn
  # Tried various source_arns
  # source_arn = "arn:aws:logs:us-east-1:000000000000:log-group:*:*"
}
Any help would be greatly appreciated.
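There is no accepted fix in the thread, but for reference, here is a sketch of the pattern that typically clears this error against real AWS (the resource names reuse those from the question; whether localstack 0.11.2 fully enforces the permission check is an assumption): manage the log group explicitly, grant CloudWatch Logs an invoke permission scoped to it, and make the subscription filter depend on that permission.
# Manage the log group in Terraform instead of relying on Lambda to create it
resource "aws_cloudwatch_log_group" "simple_log" {
  name = "/aws/lambda/simple_log"
}

# Allow CloudWatch Logs to invoke the destination lambda
resource "aws_lambda_permission" "allow_cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatchLogs"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.log_metrics.function_name
  principal     = "logs.us-east-1.amazonaws.com"
  # Scope the permission to the source log group
  source_arn    = "${aws_cloudwatch_log_group.simple_log.arn}:*"
}

resource "aws_cloudwatch_log_subscription_filter" "logs_to_initial_lambda" {
  name            = "logs_to_initial_lambda"
  log_group_name  = aws_cloudwatch_log_group.simple_log.name
  destination_arn = aws_lambda_function.log_metrics.arn
  filter_pattern  = ""
  # The permission must be in place before the filter is created
  depends_on      = [aws_lambda_permission.allow_cloudwatch]
}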

Unable to create 5 buckets in terraform

I have the following code:
resource "aws_s3_bucket" "create_5_buckets" {
count = "${length(var.name)}"
bucket = "${var.name[count.index]}"
acl = "private"
region = "us-east-2"
force_destroy = "true"
versioning {
enabled = "true"
mfa_delete = "false"
}
}
I am using Terraform version 0.12. It keeps on running and gives me the following error:
Error creating S3 bucket: Error creating S3 bucket name-a, retrying: OperationAborted: A conflicting conditional operation is currently in progress against this resource. Please try again.
Nothing is wrong with the code.
provider "aws" {
region = "us-east-2"
shared_credentials_file = "/root/.aws/credentials"
profile = "default"
}
variable name {
default=["demo-123.com","demo-124.com","demo-125.com"]
}
resource "aws_s3_bucket" "create_5_buckets" {
count = "${length(var.name)}"
bucket = "${var.name[count.index]}"
acl = "private"
region = "us-east-2"
force_destroy = "true"
versioning {
enabled = "true"
mfa_delete = "false"
}
}
The code seems perfectly fine to me and runs well; this error is not caused by Terraform. It is related to an AWS-side error: there can be a synchronization delay after deleting an S3 bucket, so you need to try again after some time. It could be a duplicate of AWS Error Message: A conflicting conditional operation is currently in progress against this resource.
