CloudFormation - Terraform integration via SSM

Some parts of my AWS infrastructure, like S3 buckets and CloudFront distributions, are deployed with Terraform, and other parts, like the serverless stuff, are done with the Serverless Framework, which produces CloudFormation templates under the hood.
Changes in the Serverless/CloudFormation stacks produce changes in the API Gateway endpoint URLs, and running terraform plan against the S3/CloudFront configuration shows the difference in the CloudFront origin block:
origin {
-   domain_name = "qwerty.execute-api.eu-west-1.amazonaws.com"
+   domain_name = "asdfgh.execute-api.eu-west-1.amazonaws.com"
    origin_id   = "my-origin-id"
    origin_path = "/path"
}
My idea was to write an SSM parameter on CloudFormation/Serverless deploy and read it in Terraform, so the two stay in sync.
Reading from SSM in serverless.yml is pretty straightforward, but I was unable to find a way to update SSM when deploying the CloudFormation stack. Any ideas?

I found the serverless-ssm-publish plugin, which does the job of writing/updating SSM. You just need to add this to serverless.yml:
plugins:
  - serverless-ssm-publish

custom:
  ssmPublish:
    enabled: true
    params:
      - path: /qa/service_name/apigateway_endpoint_url
        source: ServiceEndpoint
        description: API Gateway endpoint url
        secure: false
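On the Terraform side, the published parameter can then be read with the aws_ssm_parameter data source and fed into the CloudFront origin. A minimal sketch, assuming the parameter path from above and that ServiceEndpoint is a full URL (so only the host part is used for domain_name); resource and local names are illustrative:

# Read the API Gateway endpoint URL published by the Serverless deploy
data "aws_ssm_parameter" "apigateway_endpoint_url" {
  name = "/qa/service_name/apigateway_endpoint_url"
}

locals {
  # ServiceEndpoint looks like https://xxxx.execute-api.eu-west-1.amazonaws.com/qa,
  # so keep only the host for the CloudFront origin's domain_name
  apigateway_domain = regex("^https?://([^/]+)", data.aws_ssm_parameter.apigateway_endpoint_url.value)[0]
}

resource "aws_cloudfront_distribution" "this" {
  # ... other distribution settings unchanged ...

  origin {
    domain_name = local.apigateway_domain
    origin_id   = "my-origin-id"
    origin_path = "/path"
  }
}

With this in place, a Serverless deploy that changes the endpoint updates the parameter, and the next terraform plan picks up the new domain automatically.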


Reuse the same s3 bucket between stages serverless

Our clients are already registered in our development environment, and management is asking us to create the production environment without losing any of the already registered user data.
We are trying to deploy the production environment to ap-southeast-2, and our development environment is already in eu-west-1.
I have made the necessary changes for the deployment to happen in these two regions, but the problem is that we are creating the Cognito and S3 resources using a CloudFormation template.
We want to use the same S3 buckets and Cognito between these two regions, but when I deploy to ap-southeast-2 (production) the stack creation fails because the S3 bucket already exists.
Is it possible to reuse the same S3 bucket and Cognito between regions and stages? I want the Serverless Framework to check whether these resources exist in the region I choose (in this case eu-west-1). We can't create new buckets because we are at the 100-bucket limit!
Here is the code showing how we create the S3 buckets. We are using the Serverless Framework with Node.js.
Resources:
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          - AllowedOrigins:
              - '*'
            AllowedHeaders:
              - '*'
            AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
            MaxAge: 3000

# Print out the name of the bucket that is created
Outputs:
  AttachmentsBucketName:
    Value:
      Ref: AttachmentsBucket
I want the serverless framework to check if these resources exist in the region I choose
This is not how Infrastructure as Code (IaC) works. Neither CloudFormation nor Terraform, for that matter, has any built-in tool to "check" whether a resource exists. The IaC perspective is: if a resource is in a template, then only that template/stack manages it. There is no in-between state where it may or may not exist.
Having said that, there are ways to re-architect and work around this. The most common are:
Since the bucket is a common resource, it should be deployed separately from the rest of your stacks, and its name should be passed as an input to the dependent stacks (see the sketch after this list).
Develop a custom resource in the form of a Lambda function. The function would use the AWS SDK to check for the existence of your buckets and return that information to your stack for further use.
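For the first option, and in the spirit of the SSM approach from the first question in this thread, a minimal Terraform sketch of managing the shared bucket once and publishing its name for dependent stacks to consume could look like the following; the bucket name and parameter path are purely illustrative:

# Shared bucket, managed once in its own stack/configuration
resource "aws_s3_bucket" "attachments" {
  bucket = "my-org-shared-attachments"
}

# Publish the bucket name so dependent stacks (Serverless or Terraform)
# can take it as an input instead of trying to create their own bucket
resource "aws_ssm_parameter" "attachments_bucket_name" {
  name  = "/shared/attachments_bucket_name"
  type  = "String"
  value = aws_s3_bucket.attachments.bucket
}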

Terraform invalid arn for aws provider

I'm using AWS Chalice to configure my app and packaging it to Terraform config so that I can combine it with the Terraform config responsible for creating the backing services (S3 buckets, ElastiCache instances, etc.).
Because Chalice is not responsible for creating the S3 bucket itself, only the Lambda and the event source mapping, it's creating this ARN, arn:*:s3:::lambda-function-name, which is failing the Terraform AWS provider validation:
Error: "source_arn" (arn:*:s3:::fetchbb--warehouse-sync--dropbox-quickbase) is an invalid ARN:
invalid partition value (expecting to match regular expression: ^aws(-[a-z]+)*$)
This is the config that Chalice is producing:
"aws_lambda_permission": {
"lambda-function-name-s3event": {
"statement_id": "lambda-function-name-s3event",
"action": "lambda:InvokeFunction",
"function_name": "lambda-function-name",
"principal": "s3.amazonaws.com",
"source_arn": "arn:*:s3:::lambda-function-name"
},
...
}
I'm trying to work out if this is a legitimate ARN. Is the issue with the Terraform AWS provider validation, or with the config that Chalice is packaging?
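For reference, here is a minimal sketch of the same permission written directly in HCL with a concrete partition (aws) in source_arn, which is the form the provider's regex from the error message accepts. This only illustrates the validation rule, not which side is at fault:

resource "aws_lambda_permission" "s3event" {
  statement_id  = "lambda-function-name-s3event"
  action        = "lambda:InvokeFunction"
  function_name = "lambda-function-name"
  principal     = "s3.amazonaws.com"

  # "aws" matches ^aws(-[a-z]+)*$ ; the wildcard partition "*" does not
  source_arn = "arn:aws:s3:::lambda-function-name"
}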

How to make aws_cloudwatch_event_rule with terraform and localstack?

I am using Terraform and localstack, and I am trying to create an aws_cloudwatch_event_rule. I get an error:
Error: Updating CloudWatch Event Rule failed: UnrecognizedClientException: The security token included in the request is invalid.
status code: 400, request id: 2d0671b9-cb55-4872-8e8c-82e26f4336cb
I'm not sure why I'm getting this error, because this works to create the resource in AWS but not on localstack 🤷‍♂️. Does anybody have any suggestions as to how to fix this? Thanks.
It's a large Terraform project, so I can't share all the code. This is the relevant section.
resource "aws_cloudwatch_event_rule" "trigger" {
name = "trigger-event"
description = "STUFF"
schedule_expression = "cron(0 */1 * * ? *)"
}
resource "aws_cloudwatch_event_target" "trigger_target" {
rule = "${aws_cloudwatch_event_rule.trigger.name}"
arn = "${trigger.arn}"
}
I realize this is an old question, but I just ran into this problem. I wanted to share what resolved it for me, in case it helps others who end up here. This works for me with terraform 0.12 (should work for 0.13 as well) and AWS provider 3.x.
When you get the "The security token included in the request is invalid" error, it usually means Terraform attempted to perform the operation against real AWS rather than against localstack.
The following should resolve the issue with creating CloudWatch Event rules.
Make sure you're running the events service in localstack. It's this service, and not cloudwatch, that provides the CloudWatch Events interface. E.g. if you're running localstack from the command line:
SERVICES=cloudwatch,events localstack start
Make sure the AWS provider in the Terraform config points to localstack. As in step (1), we need a setting specifically for CloudWatch Events; in the AWS provider's endpoints config, that's cloudwatchevents.
provider "aws" {
version = "~> 3.0"
profile = "<profile used for localstack>"
region = "<region configured for localstack>"
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
# Update the urls below if you've e.g. customized localstack's port
cloudwatch = "http://localhost:4566"
cloudwatchevents = "http://localhost:4566"
iam = "http://localhost:4566"
sts = "http://localhost:4566"
}
}
Now, the terraform apply should successfully run against localstack.
One more gotcha to be aware of is that localstack currently doesn't persist CloudWatch or CloudWatch Events data, even if you enable persistence. So when you kill or restart localstack, any CloudWatch Events rules will be lost.

Serverless Deploy to Different AWS Accounts

I have a Serverless project with a bunch of Terraform resource creation.
I would like to deploy the Lambda functions in one AWS account, and as part of serverless deploy it can create the API Gateway endpoints (pointing to the Lambda functions).
I would like the API Gateway to be created in another AWS account. Is it possible to do this purely in Serverless? If not, what are the options?
Try an EventBridge approach. You should be able to access the functions securely, without creating complex dual-account ARNs, policy scripts, and permissions:
const AWS = require('aws-sdk');

function notifyMarketingTeam(email) {
  // Placeholder credentials; in practice these come from your credential chain
  const eventBridge = new AWS.EventBridge({
    accessKeyId: 'XXXXX',
    secretAccessKey: 'XXXXX',
    region: 'us-east-1',
  });

  return eventBridge.putEvents({
    Entries: [
      {
        EventBusName: 'marketing',
        Source: 'acme.newsletter.campaign',
        DetailType: 'UserSignUp',
        Detail: `{ "E-Mail": "${email}" }`,
      },
    ],
  }).promise();
}
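Since the question mentions that the rest of the infrastructure is created with Terraform, the receiving side of this EventBridge approach could be sketched roughly as below. Everything here is illustrative: the bus name matches the marketing bus used above, the account ID is a placeholder, and the rule simply matches the UserSignUp events the snippet sends:

# Custom event bus in the account that should receive the events
resource "aws_cloudwatch_event_bus" "marketing" {
  name = "marketing"
}

# Allow the other AWS account (the one running the Lambda above) to publish to this bus
resource "aws_cloudwatch_event_permission" "allow_lambda_account" {
  principal      = "111111111111" # placeholder account ID
  statement_id   = "AllowLambdaAccountPutEvents"
  event_bus_name = aws_cloudwatch_event_bus.marketing.name
}

# Match the UserSignUp events published by the function above
resource "aws_cloudwatch_event_rule" "user_signup" {
  name           = "user-signup"
  event_bus_name = aws_cloudwatch_event_bus.marketing.name

  event_pattern = jsonencode({
    source        = ["acme.newsletter.campaign"]
    "detail-type" = ["UserSignUp"]
  })
}

A corresponding aws_cloudwatch_event_target pointing at whatever should consume the events would complete the picture.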

Serverless Framework: how to deploy with CloudFormation?

I am new to the serverless framework. Well, at least to the latest version, which depends heavily on CloudFormation.
I installed the framework globally on my computer using:
npm install -g serverless
I then created a service using:
serverless create --template aws-nodejs --path myService
Finally, I ran:
serverless deploy
Everything seems to deploy normally, it shows no error in the terminal.
I can see the CloudFormation files in a newly created, dedicated S3 bucket.
However, I cannot find the default hello Lambda function in the AWS Lambda console.
What am I missing? Are the CloudFormation files not supposed to create Lambda functions upon deployment?
The reason the default hello Lambda function is not listed in the AWS Lambda console is that your Lambda function was uploaded to the default region (us-east-1), while the Lambda console is displaying the functions of another region.
To set the correct region for your functions, you can use the region field of the serverless.yml file.
Make sure the region property is directly under the provider section, indented with 2 (or 4) spaces, like this:
provider:
  region: eu-west-1
Alternatively, you can specify the region at deployment time, like so:
sls deploy --region eu-west-1
Duh, I had made a super silly mistake: I did not properly set the AWS region, so I was looking for the Lambda function in the wrong region. Of course it could not be found! Before deploying, one must make sure to set the correct region.
UPDATE: Well actually, I had set the region in serverless.yml by providing:
region: eu-west-1
However, for some reason the default region was not overwritten, and the function was deployed to the wrong region. Odd, that.
Anyway, one easy way around this issue is to provide the region at deployment time:
sls deploy --region eu-west-1
