CloudFormation to Terraform conversion of Deployment Group

I am converting the following CloudFormation code to Terraform. I have converted most of it except the first few lines, up to TriggerConfigurations (Type, DependsOn, Properties):
CloudFormation:
CodeDeployDeploymentGroup:
  Type: 'AWS::CodeDeploy::DeploymentGroup'
  DependsOn:
    - CodeDeployApplication
    - CodeDeployRole
  Properties:
    ApplicationName: !Ref CodeDeployApplication
    ServiceRoleArn: !GetAtt
      - CodeDeployRole
      - Arn
    TriggerConfigurations:
      - TriggerEvents:
          - DeploymentStart
          - DeploymentSuccess
          - DeploymentFailure
          - DeploymentStop
          - DeploymentRollback
          - DeploymentReady
        TriggerName: SlackTarget
        TriggerTargetArn: !ImportValue
          'Fn::Sub': '${EnvName}CDNotificationTopicARN'
    DeploymentGroupName: !Join
      - '-'
      - - !Ref AppName
        - CodeDeploymentGroup
    DeploymentConfigName: CodeDeployDefault.LambdaAllAtOnce
    DeploymentStyle:
      DeploymentOption: !Ref DGDeploymentOption
      DeploymentType: !Ref DGDeploymentType
Terraform:
resource "aws_codedeploy_deployment_group" "CodeDeployDeploymentGroup" {
  app_name               = aws_codedeploy_app.example.name
  deployment_config_name = "CodeDeployDefault.LambdaAllAtOnce"
  deployment_group_name  = "${var.AppName}-CodeDeploymentGroup"
  service_role_arn       = aws_iam_role.example.arn

  deployment_style {
    deployment_option = "${var.DGDeploymentOption}"
    deployment_type   = "${var.DGDeploymentType}"
  }

  trigger_configuration {
    trigger_events     = ["DeploymentStart", "DeploymentSuccess", "DeploymentFailure", "DeploymentStop", "DeploymentRollback", "DeploymentReady"]
    trigger_name       = "SlackTarget"
    trigger_target_arn = ["${EnvName}CDNotificationTopicARN"]
  }
}
Please let me know how to convert these lines of code to Terraform.

I coded the properties as below:
resource "aws_codedeploy_deployment_group" "CodeDeployDeploymentGroup" {
  app_name               = ${var.CodeDeployApplication}
  deployment_config_name = "CodeDeployDefault.LambdaAllAtOnce"
  deployment_group_name  = "${var.AppName}-CodeDeploymentGroup"
  service_role_arn       = aws_iam_role.CodeDeployRole.arn
Let me know if this is right.
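One note on that snippet: app_name = ${var.CodeDeployApplication} is not valid HCL; a variable reference is written var.CodeDeployApplication, or better, a reference to the aws_codedeploy_app resource itself. For the first few lines generally: Type becomes the Terraform resource type, DependsOn is usually implicit because Terraform orders resources by their references, !Ref/!GetAtt become attribute references, and !ImportValue has no direct equivalent, so the exported topic ARN is typically passed in as a variable or looked up with a data source. A minimal sketch, assuming resource and variable names that may differ from yours:

resource "aws_codedeploy_app" "CodeDeployApplication" {
  compute_platform = "Lambda"
  name             = var.AppName
}

resource "aws_codedeploy_deployment_group" "CodeDeployDeploymentGroup" {
  # Referencing the app and role below gives Terraform the same creation
  # ordering that CloudFormation's DependsOn provided, so no explicit
  # depends_on is needed.
  app_name         = aws_codedeploy_app.CodeDeployApplication.name
  service_role_arn = aws_iam_role.CodeDeployRole.arn

  deployment_config_name = "CodeDeployDefault.LambdaAllAtOnce"
  deployment_group_name  = "${var.AppName}-CodeDeploymentGroup"

  trigger_configuration {
    trigger_events     = ["DeploymentStart", "DeploymentSuccess", "DeploymentFailure", "DeploymentStop", "DeploymentRollback", "DeploymentReady"]
    trigger_name       = "SlackTarget"
    # No !ImportValue in Terraform: pass the exported SNS topic ARN in as a
    # (hypothetical) variable, or read it from a data source.
    trigger_target_arn = var.CDNotificationTopicARN
  }
}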

How to provide expression as value inside an ssm document?

I would like to add a server to an autoscaling group using an SSM document: if the group has n instances running, I want to have n+1.
Since this stack is managed by CloudFormation, I just need to increase the 'DesiredCapacity' variable and update the stack, so I created a document with 2 steps:
get the current value of 'DesiredCapacity'
update the stack with the value of 'DesiredCapacity' + 1
I didn't find a way to express this simple operation; I guess I'm doing something wrong...
SSM Document:
schemaVersion: '0.3'
parameters:
  cfnStack:
    description: 'The cloudformation stack to be updated'
    type: String
mainSteps:
  - name: GetDesiredCount
    action: 'aws:executeAwsApi'
    inputs:
      Service: cloudformation
      Api: DescribeStacks
      StackName: '{{ cfnStack }}'
    outputs:
      - Selector: '$.Stacks[0].Outputs.DesiredCapacity'
        Type: String
        Name: DesiredCapacity
  - name: UpdateCloudFormationStack
    action: 'aws:executeAwsApi'
    inputs:
      Service: cloudformation
      Api: UpdateStack
      StackName: '{{ cfnStack }}'
      UsePreviousTemplate: true
      Parameters:
        - ParameterKey: WebServerCapacity
          ParameterValue: 'GetDesiredCount.DesiredCapacity' + 1 ### ERROR
          # ParameterValue: '{{ GetDesiredCount.DesiredCapacity }}' + 1 ### ERROR (trying to concat STR to INT)
          # ParameterValue: '{{ GetDesiredCount.DesiredCapacity + 1 }}' ### ERROR
There is a way to do calculations inside an SSM document using the Python runtime.
The additional Python step does the following:
The Python runtime receives its variables via the 'InputPayload' property.
The 'current' (str) key is added to the event object.
The Python function script_handler is called.
The 'current' value is extracted using event['current'].
The string is converted to an int and 1 is added.
A dictionary is returned with the 'desired_capacity' key and its value as a string.
The output is exposed ($.Payload.desired_capacity refers to the 'desired_capacity' key of the returned dictionary).
schemaVersion: '0.3'
parameters:
  cfnStack:
    description: 'The cloudformation stack to be updated'
    type: String
mainSteps:
  - name: GetDesiredCount
    action: 'aws:executeAwsApi'
    inputs:
      Service: cloudformation
      Api: DescribeStacks
      StackName: '{{ cfnStack }}'
    outputs:
      - Selector: '$.Stacks[0].Outputs.DesiredCapacity'
        Type: String
        Name: DesiredCapacity
  - name: Calculate
    action: 'aws:executeScript'
    inputs:
      Runtime: python3.6
      Handler: script_handler
      Script: |-
        def script_handler(events, context):
            desired_capacity = int(events['current']) + 1
            return {'desired_capacity': str(desired_capacity)}
      InputPayload:
        current: '{{ GetDesiredCount.DesiredCapacity }}'
    outputs:
      - Selector: $.Payload.desired_capacity
        Type: String
        Name: NewDesiredCapacity
  - name: UpdateCloudFormationStack
    action: 'aws:executeAwsApi'
    inputs:
      Service: cloudformation
      Api: UpdateStack
      StackName: '{{ cfnStack }}'
      UsePreviousTemplate: true
      Parameters:
        - ParameterKey: WebServerCapacity
          ParameterValue: '{{ Calculate.NewDesiredCapacity }}'
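To try the document out, an automation execution can be started with the AWS CLI along these lines (the document name and stack name here are placeholders, not values from the question):

aws ssm start-automation-execution \
  --document-name "IncreaseDesiredCapacity" \
  --parameters "cfnStack=my-app-stack"

The returned execution id can then be passed to aws ssm get-automation-execution to inspect each step's outputs.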

How can I pass map variable to Azure Devops pipeline job?

I'm learning Azure DevOps pipelines; my first project is to create a simple vnet with a subnet using Terraform. I figured out how to pass simple key-value variables, but the problem is how to pass, for example, a list of strings or, more importantly, a map variable to Terraform.
I'm using the map to create subnets with an each-key/each-value loop.
These are the files I'm using; I'm getting an error about syntax in pipeline.yaml for the VirtualNetworkAddressSpace and VirtualNetworkSubnets values.
Can you please help me with this one?
variables.tf
variable "RG_Name" {
  type = string
  #default = "TESTMS"
}
variable "RG_Location" {
  type = string
  #default = "West Europe"
}
variable "VirtualNetworkName" {
  type = string
  #default = "TESTSS"
}
variable "VirtualNetworkAddressSpace" {
  type = list(string)
  #default = ["10.0.0.0/16"]
}
variable "VirtualNetworkSubnets" {
  type = map
  #default = {
  #  "GatewaySubnet" = "10.0.255.0/27"
  #}
}
dev.tfvars
RG_Name = __rgNAME__
RG_Location = __rgLOCATION__
VirtualNetworkName = __VirtualNetworkName__
VirtualNetworkAddressSpace = __VirtualNetworkAddressSpace__
VirtualNetworkSubnets = __VirtualNetworkSubnets__
pipeline.yaml
resources:
  repositories:
    - repository: self
trigger:
  - feature/learning
stages:
  - stage: DEV
    jobs:
      - deployment: TERRAFORM
        displayName: 'Terraform deployment'
        pool:
          vmImage: 'ubuntu-latest'
        workspace:
          clean: all
        variables:
          - name: 'rgNAME'
            value: 'skwiera-rg'
          - name: 'rgLOCATION'
            value: 'West Europe'
          - name: 'VirtualNetworkName'
            value: 'SkwieraVNET'
          - name: 'VirtualNetworkAddressSpace'
            value: ['10.0.0.0/16']
          - name: 'VirtualNetworkSubnets'
            value: {'GatewaySubnet' : '10.0.255.0/27'}
        environment: 'DEV'
        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: self
                - task: qetza.replacetokens.replacetokens-task.replacetokens@3
                  displayName: 'Replace Terraform variables'
                  inputs:
                    targetFiles: '**/*.tfvars'
                    tokenPrefix: '__'
                    tokenSuffix: '__'
                - task: TerraformInstaller@0
                  displayName: 'Install Terraform'
                  inputs:
                    terraformVersion: '1.0.8'
                - task: TerraformTaskV2@2
                  displayName: 'Terraform Init'
                  inputs:
                    provider: 'azurerm'
                    command: 'init'
                    backendServiceArm: 'skwieralearning'
                    backendAzureRmResourceGroupName: 'skwiera-learning-rg'
                    backendAzureRmStorageAccountName: 'skwieralearningtfstate'
                    backendAzureRmContainerName: 'tfstate'
                    backendAzureRmKey: 'dev.tfstate'
                - task: TerraformTaskV2@2
                  displayName: 'Terraform Validate'
                  inputs:
                    provider: 'azurerm'
                    command: 'validate'
                - task: TerraformTaskV2@2
                  displayName: 'Terraform Plan'
                  inputs:
                    provider: 'azurerm'
                    command: 'plan'
                    environmentServiceNameAzureRM: 'skwieralearning'
                - task: TerraformTaskV2@2
                  displayName: 'Terraform Apply'
                  inputs:
                    provider: 'azurerm'
                    command: 'apply'
                    environmentServiceNameAzureRM: 'skwieralearning'
The Azure DevOps pipeline.yaml file expects the job variable's value to be a string, but if you use:
  - name: 'VirtualNetworkSubnets'
    value: {'GatewaySubnet' : '10.0.255.0/27'}
then the YAML parser sees that as a nested mapping under the value key, since YAML supports both key1: value and {key: value} syntax for mappings.
You can avoid it being read as a mapping by wrapping it in quotes so that it's read as a string literal (use double quotes inside so the substituted value is also valid HCL once it lands in dev.tfvars):
  - name: 'VirtualNetworkSubnets'
    value: '{"GatewaySubnet" = "10.0.255.0/27"}'
Separately, you can avoid the qetza.replacetokens.replacetokens-task.replacetokens@3 step and the tokenised values in dev.tfvars by prefixing the pipeline variables with TF_VAR_; the suffix must match the variable names declared in variables.tf, and Terraform picks them up from the environment:
stages:
  - stage: DEV
    jobs:
      - deployment: TERRAFORM
        displayName: 'Terraform deployment'
        pool:
          vmImage: 'ubuntu-latest'
        workspace:
          clean: all
        variables:
          - name: 'TF_VAR_RG_Name'
            value: 'skwiera-rg'
          - name: 'TF_VAR_RG_Location'
            value: 'West Europe'
          - name: 'TF_VAR_VirtualNetworkName'
            value: 'SkwieraVNET'
          - name: 'TF_VAR_VirtualNetworkAddressSpace'
            value: '["10.0.0.0/16"]'
          - name: 'TF_VAR_VirtualNetworkSubnets'
            value: '{"GatewaySubnet" = "10.0.255.0/27"}'

create log group and log stream using serverless framework

I have the following Terraform code. How can I implement the same in Serverless framework?
resource "aws_cloudwatch_log_group" "abc" {
name = logGroupName
tags = tags
}
resource "aws_cloudwatch_log_stream" "abc" {
depends_on = ["aws_cloudwatch_log_group.abc"]
name = logStreamName
log_group_name = logGroupName
}
My serverless.yml file looks like this. Basically I need to create a log group and a log stream with given names.
provider:
  name: aws
  runtime: python3.7
  cfnRole: arn:cfnRole
  iamRoleStatements:
    - Effect: 'Allow'
      Action:
        - lambda:InvokeFunction
      Resource: 'arn....'
functions:
  handle:
    handler: handler.handle
    events:
      - schedule:
          rate: rate(2 hours)
resources:
  Resources:
    IamRoleLambdaExecution:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
In your Resources you have to add an AWS::Logs::LogGroup and an AWS::Logs::LogStream.
Note, though, that tags on AWS::Logs::LogGroup are not supported.
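A minimal sketch of what that could look like in serverless.yml (the logical ids and names below are placeholders):

resources:
  Resources:
    AbcLogGroup:
      Type: AWS::Logs::LogGroup
      Properties:
        LogGroupName: my-log-group
    AbcLogStream:
      Type: AWS::Logs::LogStream
      # Mirrors the depends_on in the Terraform snippet above.
      DependsOn: AbcLogGroup
      Properties:
        LogGroupName: my-log-group
        LogStreamName: my-log-stream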

RDS, AWS Lambda, App client - type of set up

I've been racking my brain for days and can't find a solution.
I have an app written in Python and want to store the variables the user has input (via text, PNG and checkbox) in a database, but securely, using AWS Lambda instead of hard-coding the DB into the app.
I've set up all the instances in a VPC with the DBs inside it. I can create a deployment .py which can be invoked by AWS Lambda, but how can I use the client to provide variables for this deployment? Or is there another way to do this?
Many thanks.
P.S. The app also uses Cognito for auth (using warrant).
Here's an example of how to use Secrets Manager to hide DB connection info from your code. I set up RDS and Secrets Manager using a CloudFormation script.
DBSecret:
  Type: AWS::SecretsManager::Secret
  Properties:
    Name: !Sub '${AWS::StackName}-${MasterUsername}'
    Description: DB secret
    GenerateSecretString:
      SecretStringTemplate: !Sub '{"username": "${MasterUsername}"}'
      GenerateStringKey: "password"
      PasswordLength: 16
      ExcludeCharacters: '"#/\'
DBInstance:
  Type: AWS::RDS::DBInstance
  DependsOn: DBSecret
  DeletionPolicy: Delete
  Properties:
    DBInstanceClass: !FindInMap [InstanceSize, !Ref EnvironmentSize, DB]
    StorageType: !FindInMap [InstanceSize, !Ref EnvironmentSize, TYPE]
    AllocatedStorage: !FindInMap [InstanceSize, !Ref EnvironmentSize, STORAGE]
    AutoMinorVersionUpgrade: true
    AvailabilityZone: !Select [0, !Ref AvailabilityZones]
    BackupRetentionPeriod: !Ref BackupRetentionPeriod
    CopyTagsToSnapshot: false
    DBInstanceIdentifier: !Ref AWS::StackName
    DBSnapshotIdentifier: !If [isRestore, !Ref SnapToRestore, !Ref "AWS::NoValue"]
    DBSubnetGroupName: !Ref DBSubnets
    DeleteAutomatedBackups: true
    DeletionProtection: false
    EnableIAMDatabaseAuthentication: false
    EnablePerformanceInsights: false
    Engine: postgres
    EngineVersion: 10.5
    MasterUsername: !Join ['', ['{{resolve:secretsmanager:', !Ref DBSecret, ':SecretString:username}}']]
    MasterUserPassword: !Join ['', ['{{resolve:secretsmanager:', !Ref DBSecret, ':SecretString:password}}']]
    MonitoringInterval: 0
    MultiAZ: !If [isMultiAZ, true, false]
    PreferredBackupWindow: '03:00-03:30'
    PreferredMaintenanceWindow: 'mon:04:00-mon:04:30'
    PubliclyAccessible: false
    StorageEncrypted: !If [isMicro, false, true]
    VPCSecurityGroups: !Ref VPCSecurityGroups
DBSecretAttachment:
  Type: AWS::SecretsManager::SecretTargetAttachment
  Properties:
    SecretId: !Ref DBSecret
    TargetId: !Ref DBInstance
    TargetType: 'AWS::RDS::DBInstance'
The above script creates an RDS instance and a DB secret with all connection information. Please note the password is generated by the script and stored in the secret.
Sample code in Node.js to retrieve the secret value with a known secret id:
const AWS = require('aws-sdk');
const secretsmanager = new AWS.SecretsManager();

const params = {
  SecretId: <secret id>,
};
secretsmanager.getSecretValue(params, async function(err, data) {
  if (err) {
    console.log(err);
  } else {
    console.log(data.SecretString);
    // The secret string is JSON with the DB connection details.
    const data1 = JSON.parse(data.SecretString);
    dbPort = data1.port;
    dbUsername = data1.username;
    dbPassword = data1.password;
    dbName = data1.dbname;
    dbEndpoint = data1.host;
  }
});
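Since your app is in Python, a rough boto3 equivalent would be (the secret id is a placeholder):

import json
import boto3

# Fetch the secret created by the CloudFormation stack above.
client = boto3.client('secretsmanager')
response = client.get_secret_value(SecretId='<secret id>')

# The secret string is JSON with the DB connection details.
secret = json.loads(response['SecretString'])
db_endpoint = secret['host']
db_port = secret['port']
db_username = secret['username']
db_password = secret['password']
db_name = secret['dbname']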
Your RDS security group should allow inbound traffic on the database port, where the source is the security group of your Lambda.
Hope this helps.

AWS Cloudformation to enable Performance Insights

Does anyone know if enabling Performance Insights (for AWS Aurora) is available in CloudFormation?
It's available in Terraform as performance_insights_enabled, but I am not able to find the equivalent in CloudFormation.
Thanks
Support for enabling Performance Insights via CloudFormation is now available: https://aws.amazon.com/about-aws/whats-new/2018/11/aws-cloudformation-coverage-updates-for-amazon-secrets-manager--/
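With that coverage update, Performance Insights is just a set of properties on the DB instance resource. A minimal sketch, not a complete resource definition:

DBInstance:
  Type: AWS::RDS::DBInstance
  Properties:
    # ... engine, instance class, and cluster settings omitted ...
    EnablePerformanceInsights: true
    PerformanceInsightsRetentionPeriod: 7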
Not currently possible with native CFN, but since you can execute custom Lambda code inside CFN templates (i.e. Type: 'Custom::EnablePerformanceInsights'), you can do something like this in your template:
EnablePerformanceInsights:
  Type: 'Custom::EnablePerformanceInsights'
  Properties:
    ServiceToken: !Sub 'arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:enable-performance-insights-${LambdaStackGuid}'
    DBInstanceId: !Ref 'RDSInstance'
    PerformanceInsightsKMSKeyId: !Ref 'DefaultKMSKeyArn'
    PerformanceInsightsRetentionPeriod: 7
Your function and role definitions are likely to be:
ModifyRDSInstanceLambdaRole:
  Type: 'AWS::IAM::Role'
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - 'lambda.amazonaws.com'
          Action:
            - 'sts:AssumeRole'
    Path: '/'
    Policies:
      - PolicyName: 'AmazonLambdaServicePolicy'
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - 'logs:CreateLogGroup'
                - 'logs:CreateLogStream'
                - 'logs:PutLogEvents'
                - 'rds:*'
                - 'kms:*'
              Resource: '*'
EnablePerformanceInsightsLambda:
  Type: 'AWS::Lambda::Function'
  Properties:
    FunctionName: !Join ['-', ['enable-performance-insights', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
    Handler: 'enable-performance-insights.lambda_handler'
    Code:
      S3Bucket: !Ref 'S3Bucket'
      S3Key: !Sub 'lambda-functions/enable-performance-insights.zip'
    Runtime: python2.7
    Role: !GetAtt 'ModifyRDSInstanceLambdaRole.Arn'
    Description: 'Enable RDS Performance Insights.'
    Timeout: 300
The function code imports boto3 to call the AWS API:
import cfnresponse  # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html
import boto3
import os
from retrying import retry
from uuid import uuid4

resource_id = str(uuid4())
region = os.getenv('AWS_REGION')
profile = os.getenv('AWS_PROFILE')
if profile:
    session = boto3.session.Session(profile_name=profile)
    boto3.setup_default_session(profile_name=profile)
client = boto3.client('rds', region_name=region)

@retry(wait_exponential_multiplier=1000, wait_exponential_max=10000, stop_max_delay=300000)
def enable_performance_insights(DBInstanceId=None, PerformanceInsightsKMSKeyId=None, PerformanceInsightsRetentionPeriod=None):
    response = client.modify_db_instance(
        DBInstanceIdentifier=DBInstanceId,
        EnablePerformanceInsights=True,
        PerformanceInsightsKMSKeyId=PerformanceInsightsKMSKeyId,
        PerformanceInsightsRetentionPeriod=int(PerformanceInsightsRetentionPeriod),
        ApplyImmediately=True
    )
    assert response
    return response

@retry(wait_exponential_multiplier=1000, wait_exponential_max=10000, stop_max_delay=300000)
def disable_performance_insights(DBInstanceId=None):
    response = client.modify_db_instance(
        DBInstanceIdentifier=DBInstanceId,
        EnablePerformanceInsights=False,
        ApplyImmediately=True
    )
    assert response
    return response

def lambda_handler(event, context):
    print(event, context, boto3.__version__)
    try:
        DBInstanceIds = event['ResourceProperties']['DBInstanceId'].split(',')
    except:
        DBInstanceIds = []
    PerformanceInsightsKMSKeyId = event['ResourceProperties']['PerformanceInsightsKMSKeyId']
    PerformanceInsightsRetentionPeriod = event['ResourceProperties']['PerformanceInsightsRetentionPeriod']
    try:
        ResourceId = event['PhysicalResourceId']
    except:
        ResourceId = resource_id
    responseData = {}
    if event['RequestType'] == 'Delete':
        try:
            for DBInstanceId in DBInstanceIds:
                response = disable_performance_insights(DBInstanceId=DBInstanceId)
                print(response)
        except Exception as e:
            print(e)
        cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, physicalResourceId=ResourceId)
        return
    try:
        for DBInstanceId in DBInstanceIds:
            response = enable_performance_insights(
                DBInstanceId=DBInstanceId,
                PerformanceInsightsKMSKeyId=PerformanceInsightsKMSKeyId,
                PerformanceInsightsRetentionPeriod=PerformanceInsightsRetentionPeriod
            )
            print(response)
    except Exception as e:
        print(e)
    cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, physicalResourceId=ResourceId)
(copied/redacted from working stacks)
