Does anyone know if enabling Performance Insights (for AWS Aurora) is available in CloudFormation?
It's available in Terraform as performance_insights_enabled, but I can't find an equivalent in CloudFormation.
Thanks
Support for enabling Performance Insights via CloudFormation is now available (the EnablePerformanceInsights property on AWS::RDS::DBInstance): https://aws.amazon.com/about-aws/whats-new/2018/11/aws-cloudformation-coverage-updates-for-amazon-secrets-manager--/
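For a provisioned Aurora cluster the setting lives on each DB instance. A minimal sketch, assuming a hypothetical AuroraCluster resource and a DefaultKMSKeyArn parameter (names of my own, not from the question):

RDSInstance:
  Type: AWS::RDS::DBInstance
  Properties:
    DBClusterIdentifier: !Ref AuroraCluster               # placeholder cluster resource
    DBInstanceClass: db.r5.large
    Engine: aurora-mysql
    EnablePerformanceInsights: true
    PerformanceInsightsKMSKeyId: !Ref DefaultKMSKeyArn    # optional; omit to use the AWS-managed key
    PerformanceInsightsRetentionPeriod: 7                 # retention in days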
Not currently possible with native CFN, but since you can execute custom Lambda code inside CFN templates (e.g. Type: 'Custom::EnablePerformanceInsights'), you can do something like this in your template:
EnablePerformanceInsights:
  Type: 'Custom::EnablePerformanceInsights'
  Properties:
    ServiceToken: !Sub 'arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:enable-performance-insights-${LambdaStackGuid}'
    DBInstanceId: !Ref 'RDSInstance'
    PerformanceInsightsKMSKeyId: !Ref 'DefaultKMSKeyArn'
    PerformanceInsightsRetentionPeriod: 7
Your function and role definitions would look something like this:
ModifyRDSInstanceLambdaRole:
  Type: 'AWS::IAM::Role'
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - 'lambda.amazonaws.com'
          Action:
            - 'sts:AssumeRole'
    Path: '/'
    Policies:
      - PolicyName: 'AmazonLambdaServicePolicy'
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - 'logs:CreateLogGroup'
                - 'logs:CreateLogStream'
                - 'logs:PutLogEvents'
                - 'rds:*'
                - 'kms:*'
              Resource: '*'
EnablePerformanceInsightsLambda:
  Type: 'AWS::Lambda::Function'
  Properties:
    FunctionName: !Join [ '-', [ 'enable-performance-insights', !Select [ 2, !Split [ '/', !Ref 'AWS::StackId' ]]]]
    Handler: 'enable-performance-insights.lambda_handler'
    Code:
      S3Bucket: !Ref 'S3Bucket'
      S3Key: !Sub 'lambda-functions/enable-performance-insights.zip'
    Runtime: python2.7
    Role: !GetAtt ModifyRDSInstanceLambdaRole.Arn  # Role expects the role ARN, not its name
    Description: 'Enable RDS Performance Insights.'
    Timeout: 300
The function code uses boto3 to call the RDS API:
import cfnresponse  # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html
import boto3
import os
from retrying import retry
from uuid import uuid4

resource_id = str(uuid4())
region = os.getenv('AWS_REGION')
profile = os.getenv('AWS_PROFILE')
if profile:
    session = boto3.session.Session(profile_name=profile)
    boto3.setup_default_session(profile_name=profile)
client = boto3.client('rds', region_name=region)


# @retry(wait_exponential_multiplier=1000, wait_exponential_max=10000, stop_max_delay=300000)
def enable_performance_insights(DBInstanceId=None, PerformanceInsightsKMSKeyId=None, PerformanceInsightsRetentionPeriod=None):
    response = client.modify_db_instance(
        DBInstanceIdentifier=DBInstanceId,
        EnablePerformanceInsights=True,
        PerformanceInsightsKMSKeyId=PerformanceInsightsKMSKeyId,
        PerformanceInsightsRetentionPeriod=int(PerformanceInsightsRetentionPeriod),
        ApplyImmediately=True
    )
    assert response
    return response


# @retry(wait_exponential_multiplier=1000, wait_exponential_max=10000, stop_max_delay=300000)
def disable_performance_insights(DBInstanceId=None):
    response = client.modify_db_instance(
        DBInstanceIdentifier=DBInstanceId,
        EnablePerformanceInsights=False,
        ApplyImmediately=True
    )
    assert response
    return response


def lambda_handler(event, context):
    print(event, context, boto3.__version__)
    try:
        DBInstanceIds = event['ResourceProperties']['DBInstanceId'].split(',')
    except KeyError:
        DBInstanceIds = []
    PerformanceInsightsKMSKeyId = event['ResourceProperties']['PerformanceInsightsKMSKeyId']
    PerformanceInsightsRetentionPeriod = event['ResourceProperties']['PerformanceInsightsRetentionPeriod']
    try:
        ResourceId = event['PhysicalResourceId']
    except KeyError:
        ResourceId = resource_id
    responseData = {}
    if event['RequestType'] == 'Delete':
        try:
            for DBInstanceId in DBInstanceIds:
                response = disable_performance_insights(DBInstanceId=DBInstanceId)
                print(response)
        except Exception as e:
            print(e)
        cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, physicalResourceId=ResourceId)
        return
    try:
        for DBInstanceId in DBInstanceIds:
            response = enable_performance_insights(
                DBInstanceId=DBInstanceId,
                PerformanceInsightsKMSKeyId=PerformanceInsightsKMSKeyId,
                PerformanceInsightsRetentionPeriod=PerformanceInsightsRetentionPeriod
            )
            print(response)
    except Exception as e:
        print(e)
    cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, physicalResourceId=ResourceId)
(copied/redacted from working stacks)
Related
I have the following Terraform code. How can I implement the same thing in the Serverless framework?
resource "aws_cloudwatch_log_group" "abc" {
  name = logGroupName
  tags = tags
}

resource "aws_cloudwatch_log_stream" "abc" {
  depends_on     = ["aws_cloudwatch_log_group.abc"]
  name           = logStreamName
  log_group_name = logGroupName
}
My serverless.yml file looks like this. Basically, I need to create a log group and a log stream with those names.
provider:
  name: aws
  runtime: python3.7
  cfnRole: arn:cfnRole
  iamRoleStatements:
    - Effect: 'Allow'
      Action:
        - lambda:InvokeFunction
      Resource: 'arn....'

functions:
  handle:
    handler: handler.handle
    events:
      - schedule:
          rate: rate(2 hours)

resources:
  Resources:
    IamRoleLambdaExecution:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
In your Resources section you have to add an AWS::Logs::LogGroup and an AWS::Logs::LogStream, as in the sketch below.
Note that tags on AWS::Logs::LogGroup are not supported.
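A minimal sketch of those two resources in the serverless.yml resources section (the log group and stream names are placeholders of mine):

resources:
  Resources:
    AbcLogGroup:
      Type: AWS::Logs::LogGroup
      Properties:
        LogGroupName: my-log-group            # placeholder name
    AbcLogStream:
      Type: AWS::Logs::LogStream
      DependsOn: AbcLogGroup
      Properties:
        LogGroupName: my-log-group            # must match the log group above
        LogStreamName: my-log-stream          # placeholder name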
I have a use case where I need to call an API every 2 minutes to check for updates and store the result in a database. To do this, I am trying to schedule a Lambda function with the AWS SAM CLI using Python, but my Lambda function is not getting triggered. Below is my code:
app.py
def lambda_schedule(event, context):
    print("Lambda Schedule event started Successfully......")
    print("Lambda function ARN:", context.invoked_function_arn)
    print("CloudWatch log stream name:", context.log_stream_name)
    print("CloudWatch log group name:", context.log_group_name)
    print("Lambda Request ID:", context.aws_request_id)
    print("Lambda Schedule event ended Successfully......")
template.yaml
CronLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: hello_world/
    Handler: app.lambda_schedule
    Runtime: python3.8
    Events:
      PullBalanceScheduleRule:
        Type: AWS::Events::Rule
        Properties:
          EventPattern:
            source:
              - "aws.events"

PullBalanceScheduleRule:
  Type: AWS::Events::Rule
  Properties:
    Description: "PullBalanceScheduleRule"
    ScheduleExpression: "rate(2 minutes)"
    State: "ENABLED"
    Targets:
      - Arn: !GetAtt CronLambdaFunction.Arn
        Id: "CronLambdaFunction"

PermissionForEventsToInvokeLambda:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref "PullBalanceScheduleRule"
    Action: "lambda:InvokeFunction"
    Principal: "events.amazonaws.com"
    SourceArn:
      - Arn: !GetAtt PullBalanceScheduleRule.Arn
        Id: "PullBalanceScheduleRule"
Can anyone tell me what is wrong or missing in my code?
I found my mistakes; they were in the permissions section. I am posting the correct YAML configuration here so it can help others who are new to the AWS SAM CLI.
app.py
def lambda_schedule(event, context):
    print("Lambda Schedule event started Successfully......")
    print("Lambda function ARN:", context.invoked_function_arn)
    print("CloudWatch log stream name:", context.log_stream_name)
    print("CloudWatch log group name:", context.log_group_name)
    print("Lambda Request ID:", context.aws_request_id)
    print("Lambda Schedule event ended Successfully......")
template.yaml
CronLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: hello_world/
    Handler: app.lambda_schedule
    Runtime: python3.8
    Events:
      PullBalanceScheduleRule:
        Type: AWS::Events::Rule
        Properties:
          EventPattern:
            source:
              - "aws.events"

PullBalanceScheduleRule:
  Type: AWS::Events::Rule
  Properties:
    Description: "PullBalanceScheduleRule"
    ScheduleExpression: "rate(2 minutes)"
    State: "ENABLED"
    Targets:
      - Arn: !GetAtt CronLambdaFunction.Arn
        Id: "CronLambdaFunction"

PermissionForEventsToInvokeLambda:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref "CronLambdaFunction"
    Action: "lambda:InvokeFunction"
    Principal: "events.amazonaws.com"
    SourceArn:
      Fn::GetAtt:
        - "PullBalanceScheduleRule"
        - "Arn"
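As an aside, a simpler sketch (not from the original answer) would use SAM's built-in Schedule event type, which creates the schedule rule and the invoke permission for you:

CronLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: hello_world/
    Handler: app.lambda_schedule
    Runtime: python3.8
    Events:
      PullBalanceSchedule:
        Type: Schedule
        Properties:
          Schedule: rate(2 minutes)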
I have a resource for an SNS topic in the resources section of serverless.yml, something like this:
resources:
  Resources:
    SNSTopic:
      Type: AWS::SNS::Topic
      Properties:
        DisplayName: SNS Topic
        TopicName: ${self:service}-${self:provider.stage}-Topic
When I try to bind this SNS topic to my Lambda event as shown below, the Lambda is not triggered by the SNS event. When I check the AWS console for that Lambda function, the SNS event is bound to the wrong ARN value.
Function:
  handler: src/sample/file.lambdaHandler
  role: s3FullAccessRole
  events: SNSTopic
  Properties:
    Policies:
      - AWSLambdaExecute
      - Statement:
          - Effect: Allow
            Action:
              - 'lambda:InvokeFunction'
I have tried changing the event in all the different ways mentioned here: https://serverless.com/framework/docs/providers/aws/events/sns/. The only way I found that works is to hard-code the SNS topic ARN value in the Lambda event, which is not ideal for my situation.
Any help is really appreciated.
You could create a variable in the custom section with the ARN of the SNS topic:
custom:
  region: ${opt:region, self:provider.region}
  snsTopic: ${self:service}-${self:provider.stage}-Topic
  snsTopicArn: { "Fn::Join" : ["", ["arn:aws:sns:${self:custom.region}:", { "Ref" : "AWS::AccountId" }, ":${self:custom.snsTopic}" ] ] }
Then just use the ARN in the places you need it.
Or you can use the plugin https://github.com/silvermine/serverless-plugin-external-sns-events to reference the topic by name.
If you have only one serverless.yml and don't want a separate CloudFormation file, I would use the first option.
EDIT:
To use the ARN, follow the instructions in the Serverless docs: https://serverless.com/framework/docs/providers/aws/events/sns#using-a-pre-existing-topic
functions:
  dispatcher:
    handler: <handler>
    events:
      - sns:
          arn: ${self:custom.snsTopicArn}
Since you have the SNS topic in the same serverless.yml, you can skip the snsTopicArn variable and reference the topic directly with !Ref, which should be a better option for you:
functions:
  dispatcher:
    handler: <handler>
    events:
      - sns:
          arn: !Ref SNSTopic
          topicName: ${self:custom.snsTopic}
Full example:
service: testsns

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1

functions:
  hello:
    handler: handler.hello
    events:
      - sns:
          arn: !Ref SuperTopic
          topicName: MyCustomTopic
    Properties:
      Policies:
        - AWSLambdaExecute
        - Statement:
            - Effect: Allow
              Action:
                - 'lambda:InvokeFunction'

resources:
  Resources:
    SuperTopic:
      Type: AWS::SNS::Topic
      Properties:
        TopicName: MyCustomTopic
Finally got it!
I ended up removing my SNS topic declaration from the resources section of serverless.yml and added the following under iamRoleStatements:
iamRoleStatements:
  - Effect: Allow
    Action:
      - SNS:Publish
    Resource: { "Fn::Join" : ["", ["arn:aws:sns:${self:provider.region}:", { "Ref" : "AWS::AccountId" }, ":${self:custom.mySnsTopic}" ] ] }
And added these variables in the custom section:
custom:
  mySnsTopic: "${self:service}-${self:provider.stage}-sns-consume"
  mySnsTopicArn: { "Fn::Join" : ["", ["arn:aws:sns:${self:provider.region}:", { "Ref" : "AWS::AccountId" }, ":${self:custom.mySnsTopic}" ] ] }
Then mapped this to the Lambda function events:
Function:
  handler: src/sample/file.lambdaHandler
  role: s3FullAccessRole
  events: ${self:custom.mySnsTopicArn}
  Properties:
    Policies:
      - AWSLambdaExecute
For reference link
I've been racking my brain for days and can't find a solution.
I have an app written in Python and want to store the values the user enters (via text fields, PNG uploads, and checkboxes) in a database, but securely, using AWS Lambda instead of hard-coding the database connection into the app.
I've set up the VPC with the DB instances inside it. I can create a deployment .py which can be invoked by AWS Lambda, but how can the client provide variables for this deployment? Or is there another way to do this?
Many thanks,
P.S. The app also uses Cognito for auth (using warrant).
Here's an example of how to use Secrets Manager to keep DB connection info out of your code. I set up RDS and Secrets Manager using a CloudFormation script.
DBSecret:
  Type: AWS::SecretsManager::Secret
  Properties:
    Name: !Sub '${AWS::StackName}-${MasterUsername}'
    Description: DB secret
    GenerateSecretString:
      SecretStringTemplate: !Sub '{"username": "${MasterUsername}"}'
      GenerateStringKey: "password"
      PasswordLength: 16
      ExcludeCharacters: '"#/\'
DBInstance:
  Type: AWS::RDS::DBInstance
  DependsOn: DBSecret
  DeletionPolicy: Delete
  Properties:
    DBInstanceClass: !FindInMap [InstanceSize, !Ref EnvironmentSize, DB]
    StorageType: !FindInMap [InstanceSize, !Ref EnvironmentSize, TYPE]
    AllocatedStorage: !FindInMap [InstanceSize, !Ref EnvironmentSize, STORAGE]
    AutoMinorVersionUpgrade: true
    AvailabilityZone: !Select [0, !Ref AvailabilityZones]
    BackupRetentionPeriod: !Ref BackupRetentionPeriod
    CopyTagsToSnapshot: false
    DBInstanceIdentifier: !Ref AWS::StackName
    DBSnapshotIdentifier: !If [isRestore, !Ref SnapToRestore, !Ref "AWS::NoValue"]
    DBSubnetGroupName: !Ref DBSubnets
    DeleteAutomatedBackups: true
    DeletionProtection: false
    EnableIAMDatabaseAuthentication: false
    EnablePerformanceInsights: false
    Engine: postgres
    EngineVersion: 10.5
    MasterUsername: !Join ['', ['{{resolve:secretsmanager:', !Ref DBSecret, '::username}}' ]]
    MasterUserPassword: !Join ['', ['{{resolve:secretsmanager:', !Ref DBSecret, '::password}}' ]]
    MonitoringInterval: 0
    MultiAZ: !If [isMultiAZ, true, false]
    PreferredBackupWindow: '03:00-03:30'
    PreferredMaintenanceWindow: 'mon:04:00-mon:04:30'
    PubliclyAccessible: false
    StorageEncrypted: !If [isMicro, false, true]
    VPCSecurityGroups: !Ref VPCSecurityGroups
DBSecretAttachment:
  Type: AWS::SecretsManager::SecretTargetAttachment
  Properties:
    SecretId: !Ref DBSecret
    TargetId: !Ref DBInstance
    TargetType: 'AWS::RDS::DBInstance'
The above script creates an RDS instance and a DB secret holding all the connection information. Note that the password is generated by the template and stored in the secret.
Sample code in Node.js to retrieve the secret value given a known secret ID:
const AWS = require('aws-sdk');
const secretsmanager = new AWS.SecretsManager();

const params = {
  SecretId: '<secret id>',
};
secretsmanager.getSecretValue(params, async function(err, data) {
  if (err) {
    console.log(err);
  } else {
    console.log(data.SecretString);
    const data1 = JSON.parse(data.SecretString);
    dbPort = data1.port;
    dbUsername = data1.username;
    dbPassword = data1.password;
    dbName = data1.dbname;
    dbEndpoint = data1.host;
  }
});
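Since your app is in Python, a rough boto3 equivalent would look like this (a sketch; the secret name is a placeholder, and the keys assume the secret created by the stack above):

import json
import boto3

secrets_client = boto3.client('secretsmanager')

# 'my-db-secret' is a placeholder; use the name or ARN of the secret created by the stack.
response = secrets_client.get_secret_value(SecretId='my-db-secret')
secret = json.loads(response['SecretString'])

db_endpoint = secret['host']
db_port = secret['port']
db_username = secret['username']
db_password = secret['password']
db_name = secret['dbname']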
Your RDS security group should allow inbound access on the DB port where the source is the security group of your Lambda.
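As a rough CloudFormation sketch of that rule (the VpcId parameter and LambdaSecurityGroup resource are placeholders of mine; port 5432 assumes the Postgres instance above):

RDSSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow Postgres access from the Lambda security group
    VpcId: !Ref VpcId                                      # placeholder parameter
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 5432
        ToPort: 5432
        SourceSecurityGroupId: !Ref LambdaSecurityGroup    # placeholder Lambda security group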
Hope this helps.
I've got a lambda function like this:
import boto3
import os
import json

step_functions = boto3.client('stepfunctions')
workers_topic = boto3.resource('sns').Topic(os.environ.get("WORKERS_TOPIC_ARN"))

def test_push_to_workers_sns(event, context):
    activity_response = \
        step_functions.get_activity_task(
            activityArn=os.environ.get("ACKNOWLEDGE_ACTIVITY_ARN"),
            workerName='test_push_to_workers_sns'
        )
    task_token, input_ = activity_response['taskToken'], activity_response['input']
    print(f"Task token is {task_token}")
    print(f"Input is {input_}")
    if not task_token:
        print("No activity found")
        return
    workers_topic.publish(Message="blah blah")
When I start an execution of my step function and it reaches the activity, I've repeatedly checked that running aws stepfunctions get-activity-task --activity-arn <ACKNOWLEDGE_ACTIVITY_ARN> in my terminal returns a correct taskToken and input. However, this Lambda function seems to always time out, regardless of whether or not the activity is running (the timeout on the Lambda function is set to 1 min 15 sec, and the activity state's timeout in the step function is 1 hour).
I checked this case using the following CloudFormation template and it works:
AWSTemplateFormatVersion: "2010-09-09"
Description: Stack creating AWS Step Functions state machine and lambda function calling GetActivityTask.
Resources:
  LambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: "index.handler"
      Role: !GetAtt LambdaExecutionRole.Arn
      Code:
        ZipFile: |
          import boto3
          import os
          import json

          step_functions = boto3.client('stepfunctions')
          workers_topic = boto3.resource('sns').Topic(os.environ.get("WORKERS_TOPIC_ARN"))

          def handler(event, context):
              activity_response = step_functions.get_activity_task(
                  activityArn=os.environ.get("ACKNOWLEDGE_ACTIVITY_ARN"),
                  workerName='test_push_to_workers_sns'
              )
              if 'taskToken' not in activity_response:
                  return
              task_token, task_input = activity_response['taskToken'], activity_response['input']
              print(f"Task token is {task_token}")
              print(f"Input is {task_input}")
              workers_topic.publish(Message="blah blah")
              step_functions.send_task_success(
                  taskToken=task_token,
                  output=task_input
              )
      Runtime: "python3.6"
      Timeout: 25
      Environment:
        Variables:
          WORKERS_TOPIC_ARN: !Ref WorkersTopic
          ACKNOWLEDGE_ACTIVITY_ARN: !Ref AcknowledgeActivity
  StateMachine:
    Type: AWS::StepFunctions::StateMachine
    Properties:
      RoleArn: !GetAtt StatesExecutionRole.Arn
      DefinitionString: !Sub
        - >
          {
            "Comment": "State Machine for GetActivityTask testing purposes.",
            "StartAt": "FirstState",
            "States": {
              "FirstState": {
                "Type": "Task",
                "Resource": "${ACKNOWLEDGE_ACTIVITY_ARN}",
                "End": true
              }
            }
          }
        - ACKNOWLEDGE_ACTIVITY_ARN: !Ref AcknowledgeActivity
  AcknowledgeActivity:
    Type: AWS::StepFunctions::Activity
    Properties:
      Name: !Sub ${AWS::AccountId}-AcknowledgeActivity
  WorkersTopic:
    Type: AWS::SNS::Topic
  LambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: "/"
      Policies:
        - PolicyName: StepFunctionsAccess
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - states:GetActivityTask
                  - states:SendTaskFailure
                  - states:SendTaskSuccess
                Resource: arn:aws:states:*:*:*
        - PolicyName: SNSAccess
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - SNS:Publish
                Resource: arn:aws:sns:*:*:*
  StatesExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - !Sub states.${AWS::Region}.amazonaws.com
            Action: sts:AssumeRole
      Path: "/"
      Policies: []
I created an execution manually from the Step Functions console and invoked the Lambda manually from the Lambda console.
Keep in mind that after you take a taskToken using GetActivityTask, the execution associated with it waits for a response (SendTaskSuccess or SendTaskFailure) until it reaches its timeout (the default is very long). So if you have already taken the token, that execution is no longer available to GetActivityTask. You can look up the status of an execution in the Step Functions console by checking the events for that specific execution.
You should call SendTaskSuccess or SendTaskFailure from your code after getting the token from GetActivityTask (otherwise the execution will hang until it reaches its timeout or is stopped), as in the sketch below.
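A minimal boto3 sketch of both calls (the task token, the do_work callable, and the error name are placeholders):

import json
import boto3

step_functions = boto3.client('stepfunctions')

def report_result(task_token, do_work):
    # task_token comes from get_activity_task; do_work is a placeholder callable.
    try:
        result = do_work()
        # Report the task output back to the waiting execution.
        step_functions.send_task_success(
            taskToken=task_token,
            output=json.dumps(result)
        )
    except Exception as exc:
        # Report a failure so the execution does not hang until its timeout.
        step_functions.send_task_failure(
            taskToken=task_token,
            error='WorkerError',      # placeholder error name
            cause=str(exc)
        )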
Aside from the original question: GetActivityTask is not designed to be called from Lambda. You can pass a Lambda function as the resource of a state (instead of an activity) and it will be called when the execution reaches that state (the event in the handler will contain the execution state); a sketch of that follows below. Activities should be used only for long-running jobs on dedicated machines (EC2, ECS). I should also point out that there are service limits on GetActivityTask calls (25 RPS with a bucket size of 1000), whereas Lambda-based states are limited basically only by the state transition limit (400 per second with a bucket size of 800). You can read more about Step Functions limits here: https://docs.aws.amazon.com/step-functions/latest/dg/limits.html
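As an illustration of that suggestion (a sketch, not part of the original answer): in the template above you could point the state directly at the Lambda function instead of the activity. The states execution role would then also need lambda:InvokeFunction permission on that function.

StateMachine:
  Type: AWS::StepFunctions::StateMachine
  Properties:
    RoleArn: !GetAtt StatesExecutionRole.Arn
    DefinitionString: !Sub
      - >
        {
          "Comment": "Call the Lambda function directly instead of an activity.",
          "StartAt": "FirstState",
          "States": {
            "FirstState": {
              "Type": "Task",
              "Resource": "${LAMBDA_ARN}",
              "End": true
            }
          }
        }
      - LAMBDA_ARN: !GetAtt LambdaFunction.Arn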