Terraform invalid ARN for AWS provider

I'm using AWS Chalice to configure my app and packaging it as Terraform config so that I can combine it with the Terraform config responsible for creating the backing services (S3 buckets, ElastiCache instances, etc.).
Because Chalice is not responsible for creating the S3 bucket itself, only the Lambda function and the event source mapping, it generates the ARN arn:*:s3:::lambda-function-name, which fails the Terraform AWS provider's validation:
Error: "source_arn" (arn:*:s3:::fetchbb--warehouse-sync--dropbox-quickbase) is an invalid ARN:
invalid partition value (expecting to match regular expression: ^aws(-[a-z]+)*$)
This is the config that Chalice is producing:
"aws_lambda_permission": {
"lambda-function-name-s3event": {
"statement_id": "lambda-function-name-s3event",
"action": "lambda:InvokeFunction",
"function_name": "lambda-function-name",
"principal": "s3.amazonaws.com",
"source_arn": "arn:*:s3:::lambda-function-name"
},
...
}
I'm trying to work out whether this is a legitimate ARN. Is the issue with the Terraform AWS provider's validation, or with the config that Chalice is packaging?
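For reference, S3 bucket ARNs use the aws partition (or a variant such as aws-us-gov), which is what the provider's regex expects. A hand-written equivalent of the generated permission, with a placeholder bucket name, might look like this:

# Hypothetical hand-written equivalent of the generated permission; the
# partition segment is "aws" rather than "*", which satisfies the
# ^aws(-[a-z]+)*$ validation, and my-bucket is a placeholder bucket name.
resource "aws_lambda_permission" "s3event" {
  statement_id  = "lambda-function-name-s3event"
  action        = "lambda:InvokeFunction"
  function_name = "lambda-function-name"
  principal     = "s3.amazonaws.com"
  source_arn    = "arn:aws:s3:::my-bucket"
}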

Related

CloudFormation - Terraform integration via SSM

Some parts of my AWS infrastructure, like S3 buckets and CloudFront distributions, are deployed with Terraform, and other parts, like the serverless stuff, are done with the Serverless framework, which produces CloudFormation templates under the hood.
Changes in the Serverless/CloudFormation stacks produce changes in the API Gateway endpoint URLs, and running terraform plan against S3/CloudFront then shows a diff in the CloudFront origin block:
origin {
  - domain_name = "qwerty.execute-api.eu-west-1.amazonaws.com"
  + domain_name = "asdfgh.execute-api.eu-west-1.amazonaws.com"
    origin_id   = "my-origin-id"
    origin_path = "/path"
My idea was to write to SSM on CloudFormation/Serverless deploy and read it in Terraform to keep the two in sync.
Reading from SSM in serverless.yml is pretty straightforward, but I was unable to find a way to update SSM when deploying the CloudFormation stack. Any ideas?
I found the serverless-ssm-publish plugin, which does the job of writing/updating SSM.
You just need to add this to serverless.yml:
plugins:
  - serverless-ssm-publish

custom:
  ssmPublish:
    enabled: true
    params:
      - path: /qa/service_name/apigateway_endpoint_url
        source: ServiceEndpoint
        description: API Gateway endpoint url
        secure: false
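On the Terraform side, a minimal sketch of reading that parameter back (assuming the /qa/service_name/apigateway_endpoint_url path above) could look like this:

# Hypothetical read of the parameter published by serverless-ssm-publish.
data "aws_ssm_parameter" "apigateway_endpoint_url" {
  name = "/qa/service_name/apigateway_endpoint_url"
}

# The host part can then be referenced in the CloudFront origin block, e.g.
#   domain_name = trimprefix(data.aws_ssm_parameter.apigateway_endpoint_url.value, "https://")
# (if the stored ServiceEndpoint value also includes a stage path, strip it
# here and keep it in origin_path instead).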

Terraform Cloud: Import existing resource

I am using terraform cloud to manage the state of the infrastructure provisioned in AWS.
I am trying to use terraform import to import an existing resource that is currently not managed by terraform.
I understand terraform import is a local-only command. I have set up a workspace reference as follows:
terraform {
  required_version = "~> 0.12.0"

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "foo"

    workspaces {
      name = "bar"
    }
  }
}
The AWS credentials are configured in the remote cloud workspace, but terraform does not appear to reference them from the workspace; instead it falls back to the local credentials, which point to a different AWS account. I would like Terraform to use the credentials from the workspace variables when I run terraform import.
When I comment out the locally configured credentials, I get the error:
Error: No valid credential sources found for AWS Provider.
I would have expected terraform to use the credentials configured in the workspace.
Note that terraform is able to use the credentials correctly when I run the plan/apply commands directly from the cloud console.
Per the backends section of the import docs, plan and apply run in Terraform Cloud whereas import runs locally. Therefore, the import command will not have access to workspace credentials set in Terraform Cloud. From the docs:
In order to use Terraform import with a remote state backend, you may need to set local variables equivalent to the remote workspace variables.
So instead of running the following locally (assuming you've provided access keys to Terraform Cloud):
terraform import aws_instance.myserver i-12345
we should run, for example:
export AWS_ACCESS_KEY_ID=abc
export AWS_SECRET_ACCESS_KEY=1234
terraform import aws_instance.myserver i-12345
where the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY have the same permissions as those configured in Terraform Cloud.
Note for AWS SSO users
If you are using AWS SSO and AWS CLI v2, support for Terraform using the CLI's SSO credential cache was added per this AWS provider issue. The steps for importing with an SSO profile are:
1) Ensure you've performed a login and have an active session, e.g. aws sso login --profile my-profile
2) Make the profile name available to terraform as an environment variable, e.g. AWS_PROFILE=my-profile terraform import aws_instance.myserver i-12345
If the following error is displayed, ensure you are using a version of the AWS CLI newer than 2.1.23:
Error: SSOProviderInvalidToken: the SSO session has expired or is invalid
│ caused by: expected RFC3339 timestamp: parsing time "2021-07-18T23:10:46UTC" as "2006-01-02T15:04:05Z07:00": cannot parse "UTC" as "Z07:00"
Use the terraform_remote_state data source, for example:
data "terraform_remote_state" "test" {
backend = "s3"
config = {
bucket = "BUCKET_NAME"
key = "BUCKET_KEY WHERE YOUR TERRAFORM.TFSTATE FILE IS PRESENT"
region = "CLOUD REGION"
}
}
Now you can reference the provisioned resources.
For example, to get the VPC ID:
data.terraform_remote_state.test.outputs.vpc_id
Just make sure the resource attribute you want to refer to is exported as an output and therefore stored in the terraform.tfstate file.
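For instance, a minimal output in the workspace that owns the VPC (assuming a hypothetical aws_vpc.main resource) would be:

# Hypothetical output; terraform_remote_state can only read attributes
# that the owning configuration exposes as outputs like this one.
output "vpc_id" {
  value = aws_vpc.main.id
}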

How can I upload terraform state to s3 bucket?

I have a project whose infra is managed by Terraform. I'd like to push the state to an S3 bucket so other teams can use it. Below is the backend configuration:
terraform {
  backend "s3" {
    bucket = "MY_BUCKET"
    key    = "tfstate"
    region = "ap-southeast-2"
  }
}
When I run terraform init I get the error below:
AccessDenied: Access Denied
status code: 403, request id: 107E6007C9C64805, host id: kWASxeq1msxvGPZIKdi+7htg3yncMFxW9PQuXdC8ouwsEHMhx8ZPu6dKGUGWzDtblC6WRg1P1ew=
Terraform failed to load the default state from the "s3" backend.
State migration cannot occur unless the state can be loaded. Backend
modification and state migration has been aborted. The state in both the
source and the destination remain unmodified. Please resolve the
above error and try again.
It seems that terraform tries to load state from the S3 bucket rather than push to it. How can I configure terraform to push state to S3?
I have configured an AWS profile in a .tf file:
provider "aws" {
region = "ap-southeast-2"
profile = "me"
}
The credentials for the current user have admin permissions on the bucket.
I was facing the same issue and found that the bucket named in the backend.tf file had not been created in my AWS account. I created a bucket with the same name mentioned in backend.tf and it worked for me.
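A minimal sketch of such a bucket, assuming it lives in a separate bootstrap configuration (or is created by hand), since the backend bucket must exist before terraform init of the config that uses it:

# Hypothetical bootstrap config; the bucket name must match the "bucket"
# value in the backend block. Enabling versioning on it is a common
# safeguard so earlier state files can be recovered.
resource "aws_s3_bucket" "tf_state" {
  bucket = "MY_BUCKET"
}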
For further readers:
AWS credentials can be provided as Thiago Arrais mentioned.
Another way to provide credentials in the backend block is to define a profile:
terraform {
  backend "s3" {
    profile = "me"   # AWS profile
    bucket  = "MY_BUCKET"
    key     = "tfstate"
    region  = "ap-southeast-2"
  }
}
And your ~/.aws/credentials file has the profile me with aws_access_key_id and aws_secret_access_key defined in it as follows:
[me]
aws_access_key_id = access_key_value
aws_secret_access_key = secret_key_value
I had exactly the same problem. When terraform { backend "s3" {} } is defined, that block is evaluated before the provider "aws" {} block. That's why the backend cannot find the credentials defined in the provider block.
You're not providing the S3 credentials in the backend block. You'll need to set them there (access_key and secret_key parameters) or via environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).
You'll also need to make sure that the bucket exists and that these credentials do have access to it.
By the way, you don't need an AWS provider block. The S3 backend is usable even if you don't manage AWS resources in your Terraform config.
In my case, the region in backend.tf was different from the region the bucket was actually in.

Terraform profile field usage in AWS provider

I have a $HOME/.aws/credentials file like this:
[config1]
aws_access_key_id=accessKeyId1
aws_secret_access_key=secretAccesskey1
[config2]
aws_access_key_id=accessKeyId2
aws_secret_access_key=secretAccesskey2
So I was expecting that, with this configuration, terraform would choose the second set of credentials:
terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
  }
}

provider "aws" {
  profile = "config2"
  region  = "eu-central-1"
}
But when I try terraform init it says it hasn't found any valid credentials:
Initializing the backend...
Error: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
As a workaround, I made config2 the default profile in my credentials file and removed the profile field from the provider block, which works, but I really need something like the first approach. What am I missing here?
Unfortunately you also need to provide the IAM credential configuration to the backend configuration as well as your AWS provider configuration.
The S3 backend configuration takes the same parameters here as the AWS provider so you can specify the backend configuration like this:
terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
    profile = "config2"
  }
}

provider "aws" {
  profile = "config2"
  region  = "eu-central-1"
}
There are a few reasons this needs to be done separately. One is that you can independently use different IAM credentials, accounts and regions for the S3 bucket and for the resources you manage with the AWS provider. You might also want to use S3 as a backend even if you are creating resources in another cloud provider, or not using a cloud provider at all; Terraform can manage resources in a lot of places that don't have a way to store Terraform state. The main reason, though, is that backends are managed by the core Terraform binary rather than by the provider binaries, and backend initialisation happens before pretty much anything else.

Can't access S3 bucket from within Fargate container (Bad Request and unable to locate credentials)

I created a private S3 bucket and a Fargate cluster with a simple task that attempts to read from that bucket using Python 3 and boto3. I've tried this with two different Docker images: with one I get a ClientError from boto saying HeadObject Bad Request (400), and with the other I get NoCredentialsError: Unable to locate credentials.
The only real difference between the images is that the one saying Bad Request is being run normally and the other is being run manually by me via ssh into the task container. So I'm not sure why one image says "bad request" and the other "unable to locate credentials".
I have tried a couple of different IAM policies, including (in Terraform) the following:
data "aws_iam_policy_document" "access_s3" {
statement {
effect = "Allow"
actions = ["s3:ListBucket"]
resources = ["arn:aws:s3:::bucket_name"]
}
statement {
effect = "Allow"
actions = [
"s3:GetObject",
"s3:GetObjectVersion",
"s3:GetObjectTagging",
"s3:GetObjectVersionTagging",
]
resources = ["arn:aws:s3:::bucket_name/*"]
}
}
Second try:
data "aws_iam_policy_document" "access_s3" {
statement {
effect = "Allow"
actions = ["s3:*"]
resources = ["arn:aws:s3:::*"]
}
}
And the final one I tried was a built-in policy:
resource "aws_iam_role_policy_attachment" "access_s3" {
role = "${aws_iam_role.ecstasks.name}"
policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}
The bucket definition is very simple:
resource "aws_s3_bucket" "bucket" {
bucket = "${var.bucket_name}"
acl = "private"
region = "${var.region}"
}
Code used to access the S3 bucket:
import traceback
import boto3
from botocore.exceptions import ClientError

try:
    s3 = boto3.client('s3')
    tags = s3.head_object(Bucket='bucket_name', Key='filename')
    print(tags['ResponseMetadata']['HTTPHeaders']['etag'])
except ClientError:
    traceback.print_exc()
No matter what I do, I'm unable to use boto3 to access AWS resources from within a Fargate container task. I'm able to access the same S3 bucket with boto3 on an EC2 instance without providing any kind of credentials, using only the IAM roles/policies. What am I doing wrong? Is it not possible to access AWS resources in the same way from a Fargate container?
I forgot to mention that I am assigning the IAM roles to the task definition's execution role and task role.
UPDATE: It turns out that the "unable to locate credentials" error I was having is a red herring. The reason I could not get the credentials was that my direct ssh session did not have the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable set.
AWS Fargate injects an environment variable named AWS_CONTAINER_CREDENTIALS_RELATIVE_URI on your behalf, which contains a URL that boto uses to fetch API access credentials. So the Bad Request error is the one I'm actually getting and need help resolving. I checked the environment variables inside the container and the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI value is being set by Fargate.
I struggled quite a bit with this issue, constantly having AWS_CONTAINER_CREDENTIALS_RELATIVE_URI wrongly set to None, until I added a custom task role in addition to my existing task execution role.
1) The task execution role grants access to pull the container image from ECR and to run the task itself, while 2) the task role is what your Docker container uses to make API requests to other authorized AWS services.
1) For my task execution role I'm using AmazonECSTaskExecutionRolePolicy with the following JSON:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
2) I finally got rid of NoCredentialsError: Unable to locate credentials when I added a task role in addition to the task execution role, responsible, for instance, for reading from a certain bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::bucket_name/*"
    }
  ]
}
In summary: make sure your task definition sets both 1) executionRoleArn, for access to run the task, and 2) taskRoleArn, for access to make API requests to authorized AWS services.
To allow Amazon S3 read-only access for your container instance role
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Roles.
Choose the IAM role to use for your container instances (this role is likely titled ecsInstanceRole). For more information, see Amazon ECS Container Instance IAM Role.
Under Managed Policies, choose Attach Policy.
On the Attach Policy page, for Filter, type S3 to narrow the policy results.
Select the box to the left of the AmazonS3ReadOnlyAccess policy and choose Attach Policy.
You need an IAM role to access your S3 bucket from your ECS task.
resource "aws_iam_role" "AmazonS3ServiceForECSTask" {
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": [
"ecs-tasks.amazonaws.com"
]
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
data "aws_iam_policy_document" "bucket_policy" {
statement {
principals {
type = "AWS"
identifiers = [aws_iam_role.AmazonS3ServiceForECSTask.arn]
}
actions = [
"s3:ListBucket",
]
resources = [
"arn:aws:s3:::${var.your_bucket_name}",
]
}
statement {
principals {
type = "AWS"
identifiers = [aws_iam_role.AmazonS3ServiceForECSTask.arn]
}
actions = [
"s3:GetObject",
]
resources = [
"arn:aws:s3:::${var.your_bucket_name}/*",
]
}
}
You then need to add your IAM role as the task_role_arn of your task definition.
resource "aws_ecs_task_definition" "_ecs_task_definition" {
task_role_arn = aws_iam_role.AmazonS3ServiceForECSTask.arn
execution_role_arn = aws_iam_role.ECS-TaskExecution.arn
family = "${var.family}"
network_mode = var.network_mode[var.launch_type]
requires_compatibilities = var.requires_compatibilities
cpu = var.task_cpu[terraform.workspace]
memory = var.task_memory[terraform.workspace]
container_definitions = module.ecs-container-definition.json
}
ECS Fargate task not applying role
After countless hours of digging, this parameter finally solved the issue for me:
assign_public_ip = true inside the network_configuration block of the ECS service.
It turns out the tasks run by this service didn't have a public IP assigned and thus had no connection to the outside world.
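A minimal sketch of where that setting lives, assuming a hypothetical aws_ecs_service named app (the cluster, security group and subnet variables are placeholders):

# Hypothetical Fargate service; without a public IP (or a NAT gateway /
# VPC endpoints) the task cannot reach S3 or other AWS endpoints at all.
resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition._ecs_task_definition.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = var.public_subnet_ids
    security_groups  = [aws_security_group.app.id]
    assign_public_ip = true
  }
}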
Boto3 has a credential lookup chain: https://boto3.readthedocs.io/en/latest/guide/configuration.html. When you use AWS-provided images to create an EC2 instance, the instance has the aws command pre-installed and picks up credentials automatically from its instance profile. Fargate, however, is only a container, so you need to make AWS credentials available to the container yourself. One quick solution is to add AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to the Fargate container.
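A minimal sketch of that quick solution, assuming a hypothetical task definition and placeholder variables for the image and keys (the task-role approach from the earlier answers avoids baking long-lived keys into the container and is generally preferable):

# Hypothetical container definition injecting static credentials as
# environment variables; prefer a task role where possible.
resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512

  container_definitions = jsonencode([
    {
      name  = "app"
      image = var.image
      environment = [
        { name = "AWS_ACCESS_KEY_ID", value = var.aws_access_key_id },
        { name = "AWS_SECRET_ACCESS_KEY", value = var.aws_secret_access_key },
      ]
    }
  ])
}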
