I am using the Python boto3 code below to start an EC2 instance.
import boto3

region = 'us-east-1'
instance_id = 'i-06ce851edfXXXXXX'
ec2 = boto3.client('ec2', region_name=region)

def lambda_handler(event, context):
    resp = ec2.describe_instance_status(InstanceIds=[str(instance_id)],
                                        IncludeAllInstances=True)
    print("Response = ", resp)
    instance_status = resp['InstanceStatuses'][0]['InstanceState']['Code']
    print("Instance status =", instance_status)
    if instance_status == 80:
        ec2.start_instances(InstanceIds=[instance_id])
        print("Started instance with Instance_id", instance_id)
    elif instance_status == 16:
        ec2.stop_instances(InstanceIds=[instance_id])
        print("Stopped EC2 with Instance-ID", instance_id)
    else:
        print("No desired state found")
When the instance is in the running state, I am able to stop it by running this Lambda.
But when the instance is in the stopped state and I run the Lambda, I get the message below and it shows no error. When I check the console, however, the instance is still in the stopped state. I cannot figure out why the instance is not reaching the running state.
Instance status = 80
Started instance with Instance_id i-06ce851edfXXXXXX
Below is the IAM role policy used:
{
    "Action": [
        "ec2:StopInstances",
        "ec2:StartInstances",
        "ec2:RebootInstances"
    ],
    "Resource": [
        "arn:aws:ec2:us-east-1:2x83xxxxxxxxxx:instance/i-06ce851edfXXXXXX"
    ],
    "Effect": "Allow"
}
Your code is working. I verified it on my test instance with my Lambda.
I reformatted it a bit to be easier to read, but it worked without any changes (except the instance ID). I can stop a running instance, and then I can start the stopped instance.
One thing to note is that stopping and starting take time. If you execute your function too quickly, it won't be able to start an instance that is still in the stopping state. Maybe that's why you thought it did not work.
Also make sure you increase your Lambda's default timeout from 3 seconds to 10 or more.
import boto3

region = 'us-east-1'
instance_id = 'i-08a1e399b3d299c2d'
ec2 = boto3.client('ec2', region_name=region)

def lambda_handler(event, context):
    resp = ec2.describe_instance_status(
        InstanceIds=[str(instance_id)],
        IncludeAllInstances=True)
    print("Response = ", resp)
    instance_status = resp['InstanceStatuses'][0]['InstanceState']['Code']
    print("Instance status =", instance_status)
    if instance_status == 80:
        ec2.start_instances(InstanceIds=[instance_id])
        print("Started instance with Instance_id", instance_id)
    elif instance_status == 16:
        ec2.stop_instances(InstanceIds=[instance_id])
        print("Stopped EC2 with Instance-ID", instance_id)
    else:
        print("No desired state found")
I found the issue. The root volume of the EC2 instance was encrypted, so I added KMS permissions to the role and it worked.
Indeed, encryption of the root volume is the issue here.
You can add an inline policy to the role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:*"
            ],
            "Resource": "*"
        }
    ]
}
Note that this grants full access to KMS for all resources. If you want, you can restrict the scope of this policy to a specific resource.
More info about this problem here: https://aws.amazon.com/premiumsupport/knowledge-center/encrypted-volumes-stops-immediately/
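If you want to confirm up front whether the root volume is encrypted (and which KMS key it uses), a quick boto3 check along these lines should work; this is a sketch, and the instance ID is the placeholder from the question:
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
instance_id = 'i-06ce851edfXXXXXX'  # placeholder instance ID from the question

# List the volumes attached to the instance and show whether each one is
# encrypted and which KMS key it uses.
volumes = ec2.describe_volumes(
    Filters=[{'Name': 'attachment.instance-id', 'Values': [instance_id]}]
)
for vol in volumes['Volumes']:
    print(vol['VolumeId'], 'Encrypted =', vol['Encrypted'], 'KmsKeyId =', vol.get('KmsKeyId'))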
I am trying to create a Jenkins job that, when triggered, will update all the ElastiCache Redis replication groups with a certain tag.
The main workflow is that I find all the Redis replication groups in a region, for example us-east-1:
def findAllRedisReplicationGroups(label, region) {
    query = "ReplicationGroups[*].{arn: ARN}"
    output = sh(
        script: "aws --region ${region} elasticache describe-replication-groups --query '${query}' --output text",
        label: label,
        returnStdout: true
    )
    return output
}
The output will be a string, for example:
String a = """
arn:aws:elasticache:us-west-2:AccountID:replicationgroup:application-2-test
arn:aws:elasticache:us-west-2:AccountID:replicationgroup:application-1-test
"""
Then I split the string into a list, with each ARN being an element.
Then, using a for loop, I iterate through all the Redis replication groups and get their tags; if a tag matches, for example, Environment: test, then the ARN of that replication group is added to the list of ARNs.
def findCorrectEnvReplicationGroups(label, region, environment, redis_arns) {
    def arn_list = redis_arns.split();
    def correct_env_arn_list = [];
    for (def arn : arn_list) {
        def redisTags = getRedisTags(label, region, arn)
        def jsonSlurper = new groovy.json.JsonSlurper()
        def object = jsonSlurper.parseText(redisTags)
        EnvironmentFromTag = object.TagList.find { it.Key == "Environment" }
        if (EnvironmentFromTag.Value == environment) {
            correct_env_arn_list.add(arn)
        }
        break
    }
    return correct_env_arn_list
}
def getRedisTags(label, region, arn) {
    output = sh(
        script: "aws --region ${region} elasticache list-tags-for-resource --resource-name ${arn} --output json",
        label: label,
        returnStdout: true
    )
    return output
}
I get through one loop (tested by printing out the ARN for each cycle), but it crashes when trying to run the script in the getRedisTags method again.
The output should be a list of ARNs whose tags match.
Has anyone come across such an error, or does anyone have experience with Groovy who can help me figure out why the Jenkinsfile crashes when trying to run the AWS CLI command in a loop?
Many thanks.
I am still not totally sure why it didn't work, but by using Groovy's built-in iterators I got it working.
First I get a list of all the Redis replication groups.
Then I pass that into findEnvironmentReplicationGroups, where I use .findAll to feed the ARNs one by one into the getRedisTags method, which returns a map (for example [Environment: "test"]) each time.
Then I store that output in the variable tags and check whether it matches the environment given to the method.
If it matches, the ARN is added to correctArns, which is ultimately returned.
def findAllRedisReplicationGroups(region) {
    def query = "ReplicationGroups[*].ARN"
    def output = sh(
        script: "aws --region ${region} elasticache describe-replication-groups --query '${query}' --output json",
        label: "Find all Redis Replication groups in the region",
        returnStdout: true
    )
    readJSON(text: output)
}

def findEnvironmentReplicationGroups(region, environment, listOfRedisArns) {
    def correctArns = listOfRedisArns.findAll { arn ->
        def tags = getRedisTags(region, arn).collectEntries { [it.Key, it.Value] }
        tags.Environment == environment
    }
    correctArns
}

def getRedisTags(region, arn) {
    def query = "TagList[?Key==`Environment`]"
    def output = sh(
        script: "aws --region ${region} elasticache list-tags-for-resource --resource-name ${arn} --query '${query}' --output json",
        label: "Get tags for a Redis replication group",
        returnStdout: true
    )
    readJSON(text: output)
}
The main task is to protect video from downloading.
To achieve it, we decided to set up Video Streaming from S3.
The project has a PHP API and a client. The API generates a pre-signed URL for where the video should be uploaded in the S3 bucket. Then the client can request the video via a CDN URL. But with signed URLs, the video can be downloaded from the client.
We found an approach where the video is converted to MPEG-DASH with AWS Elemental MediaConvert. The job for MediaConvert can be created via the API. Then it should be streamed via AWS Elemental MediaPackage and CloudFront.
The problems are:
How can we tell when the video upload is finished, in order to start the MediaConvert job?
MPEG-DASH output has a .mpd manifest, but MediaPackage requires a .smil manifest. How can this file be auto-generated from the .mpd?
P.S. If I'm wrong somewhere, please correct me.
How can we tell when the video upload is finished, in order to start the MediaConvert job?
This can be achieved with the following workflow:
the ingest user uploads a video to the watchfolder bucket in S3
the s3:ObjectCreated:Put event triggers a Lambda function that calls MediaConvert to convert the video
converted videos are stored in S3 by MediaConvert
High-level instructions follow.
Create an Amazon S3 bucket to use for uploading videos to be converted. Bucket name example: vod-watchfolder-firstname-lastname
Create an Amazon S3 bucket to use for storing converted video outputs from MediaConvert (enable public read, static website hosting, and CORS; the CORS configuration is shown below, followed by a scripted alternative):
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
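If you prefer to apply the same CORS configuration from code rather than the console, a boto3 sketch could look like this (the bucket name is a hypothetical placeholder for your output bucket):
import boto3

s3 = boto3.client('s3')

# Apply the CORS rules shown above to the output bucket
s3.put_bucket_cors(
    Bucket='vod-media-output-firstname-lastname',  # hypothetical output bucket name
    CORSConfiguration={
        'CORSRules': [{
            'AllowedOrigins': ['*'],
            'AllowedMethods': ['GET'],
            'AllowedHeaders': ['*'],
            'MaxAgeSeconds': 3000,
        }]
    }
)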
Create an IAM role to pass to MediaConvert. Use the IAM console to create a new role. Name it MediaConvertRole and select AWS Lambda for the role type. Use inline policies to grant permissions to other resources needed for the Lambda to execute.
Create an IAM role for your Lambda function. Use the IAM console to create a role. Name it VODLambdaRole and select AWS Lambda for the role type. Attach the managed policy called AWSLambdaBasicExecutionRole to this role to grant the necessary CloudWatch Logs permissions. Use inline policies to grant permissions to other resources needed for the Lambda to execute:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*",
            "Effect": "Allow",
            "Sid": "Logging"
        },
        {
            "Action": [
                "iam:PassRole"
            ],
            "Resource": [
                "ARNforMediaConvertRole"
            ],
            "Effect": "Allow",
            "Sid": "PassRole"
        },
        {
            "Action": [
                "mediaconvert:*"
            ],
            "Resource": [
                "*"
            ],
            "Effect": "Allow",
            "Sid": "MediaConvertService"
        },
        {
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "*"
            ],
            "Effect": "Allow",
            "Sid": "S3Service"
        }
    ]
}
Create a Lambda function for converting videos. Use the AWS Lambda console to create a new Lambda function called VODLambdaConvert that will process the API requests. Use the provided convert.py example implementation for your function code:
#!/usr/bin/env python

import glob
import json
import os
import uuid
import boto3
import datetime
import random
from urllib.parse import urlparse
import logging

from botocore.client import ClientError

logger = logging.getLogger()
logger.setLevel(logging.INFO)

S3 = boto3.resource('s3')

def handler(event, context):
    '''
    Watchfolder handler - this lambda is triggered when video objects are uploaded to the
    SourceS3Bucket/inputs folder.

    It will look for two sets of file inputs:
        SourceS3Bucket/inputs/SourceS3Key:
            the input video to be converted

        SourceS3Bucket/jobs/*.json:
            job settings for MediaConvert jobs to be run against the input video. If
            there are no settings files in the jobs folder, then the default job will be run
            from the job.json file in the lambda environment.

    Output paths stored in outputGroup['OutputGroupSettings']['DashIsoGroupSettings']['Destination']
    are constructed from the name of the job settings files as follows:
        s3://<MediaBucket>/<basename(job settings filename)>/<basename(input)>/<Destination value from job settings file>
    '''
    assetID = str(uuid.uuid4())
    sourceS3Bucket = event['Records'][0]['s3']['bucket']['name']
    sourceS3Key = event['Records'][0]['s3']['object']['key']
    sourceS3 = 's3://' + sourceS3Bucket + '/' + sourceS3Key
    destinationS3 = 's3://' + os.environ['DestinationBucket']
    mediaConvertRole = os.environ['MediaConvertRole']
    application = os.environ['Application']
    region = os.environ['AWS_DEFAULT_REGION']
    statusCode = 200
    jobs = []
    job = {}

    # Use MediaConvert SDK UserMetadata to tag jobs with the assetID
    # Events from MediaConvert will have the assetID in UserMetadata
    jobMetadata = {}
    jobMetadata['assetID'] = assetID
    jobMetadata['application'] = application
    jobMetadata['input'] = sourceS3

    try:
        # Build a list of jobs to run against the input. Use the settings files in WatchFolder/jobs
        # if any exist. Otherwise, use the default job.
        jobInput = {}

        # Iterate through all the objects in the jobs folder of the WatchFolder bucket, with the
        # pagination handled for you. Each obj contains a jobSettings JSON.
        bucket = S3.Bucket(sourceS3Bucket)
        for obj in bucket.objects.filter(Prefix='jobs/'):
            if obj.key != "jobs/":
                jobInput = {}
                jobInput['filename'] = obj.key
                logger.info('jobInput: %s', jobInput['filename'])

                jobInput['settings'] = json.loads(obj.get()['Body'].read())
                logger.info(json.dumps(jobInput['settings']))

                jobs.append(jobInput)

        # Use the default job settings in the lambda zip file in the current working directory
        if not jobs:
            with open('job.json') as json_data:
                jobInput['filename'] = 'Default'
                logger.info('jobInput: %s', jobInput['filename'])

                jobInput['settings'] = json.load(json_data)
                logger.info(json.dumps(jobInput['settings']))

                jobs.append(jobInput)

        # Get the account-specific mediaconvert endpoint for this region
        mediaconvert_client = boto3.client('mediaconvert', region_name=region)
        endpoints = mediaconvert_client.describe_endpoints()

        # Add the account-specific endpoint to the client session
        client = boto3.client('mediaconvert', region_name=region, endpoint_url=endpoints['Endpoints'][0]['Url'], verify=False)

        for j in jobs:
            jobSettings = j['settings']
            jobFilename = j['filename']

            # Save the name of the settings file in the job userMetadata
            jobMetadata['settings'] = jobFilename

            # Update the job settings with the source video from the S3 event
            jobSettings['Inputs'][0]['FileInput'] = sourceS3

            # Update the job settings with the destination paths for converted videos. We want to replace the
            # destination bucket of the output paths in the job settings, but keep the rest of the path.
            destinationS3 = 's3://' + os.environ['DestinationBucket'] + '/' \
                + os.path.splitext(os.path.basename(sourceS3Key))[0] + '/' \
                + os.path.splitext(os.path.basename(jobFilename))[0]

            for outputGroup in jobSettings['OutputGroups']:
                logger.info("outputGroup['OutputGroupSettings']['Type'] == %s", outputGroup['OutputGroupSettings']['Type'])

                if outputGroup['OutputGroupSettings']['Type'] == 'FILE_GROUP_SETTINGS':
                    templateDestination = outputGroup['OutputGroupSettings']['FileGroupSettings']['Destination']
                    templateDestinationKey = urlparse(templateDestination).path
                    logger.info("templateDestinationKey == %s", templateDestinationKey)
                    outputGroup['OutputGroupSettings']['FileGroupSettings']['Destination'] = destinationS3 + templateDestinationKey

                elif outputGroup['OutputGroupSettings']['Type'] == 'HLS_GROUP_SETTINGS':
                    templateDestination = outputGroup['OutputGroupSettings']['HlsGroupSettings']['Destination']
                    templateDestinationKey = urlparse(templateDestination).path
                    logger.info("templateDestinationKey == %s", templateDestinationKey)
                    outputGroup['OutputGroupSettings']['HlsGroupSettings']['Destination'] = destinationS3 + templateDestinationKey

                elif outputGroup['OutputGroupSettings']['Type'] == 'DASH_ISO_GROUP_SETTINGS':
                    templateDestination = outputGroup['OutputGroupSettings']['DashIsoGroupSettings']['Destination']
                    templateDestinationKey = urlparse(templateDestination).path
                    logger.info("templateDestinationKey == %s", templateDestinationKey)
                    outputGroup['OutputGroupSettings']['DashIsoGroupSettings']['Destination'] = destinationS3 + templateDestinationKey

                elif outputGroup['OutputGroupSettings']['Type'] == 'MS_SMOOTH_GROUP_SETTINGS':
                    templateDestination = outputGroup['OutputGroupSettings']['MsSmoothGroupSettings']['Destination']
                    templateDestinationKey = urlparse(templateDestination).path
                    logger.info("templateDestinationKey == %s", templateDestinationKey)
                    outputGroup['OutputGroupSettings']['MsSmoothGroupSettings']['Destination'] = destinationS3 + templateDestinationKey

                elif outputGroup['OutputGroupSettings']['Type'] == 'CMAF_GROUP_SETTINGS':
                    templateDestination = outputGroup['OutputGroupSettings']['CmafGroupSettings']['Destination']
                    templateDestinationKey = urlparse(templateDestination).path
                    logger.info("templateDestinationKey == %s", templateDestinationKey)
                    outputGroup['OutputGroupSettings']['CmafGroupSettings']['Destination'] = destinationS3 + templateDestinationKey

                else:
                    logger.error("Exception: Unknown Output Group Type %s", outputGroup['OutputGroupSettings']['Type'])
                    statusCode = 500

            logger.info(json.dumps(jobSettings))

            # Convert the video using AWS Elemental MediaConvert
            job = client.create_job(Role=mediaConvertRole, UserMetadata=jobMetadata, Settings=jobSettings)

    except Exception as e:
        logger.error('Exception: %s', e)
        statusCode = 500
        raise

    finally:
        return {
            'statusCode': statusCode,
            'body': json.dumps(job, indent=4, sort_keys=True, default=str),
            'headers': {'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*'}
        }
Make sure to configure your function to use the VODLambdaRole IAM role you created in the previous section.
Create an S3 event trigger for your convert Lambda. Use the AWS Lambda console to add an ObjectCreated (PUT) trigger from the vod-watchfolder-firstname-lastname S3 bucket to the VODLambdaConvert Lambda (a scripted alternative is sketched after these steps).
Test the watchfolder automation. You can use your own video or the test.mp4 video included in this folder to test the workflow.
For detail, please refer to this document https://github.com/aws-samples/aws-media-services-vod-automation/blob/master/MediaConvert-WorkflowWatchFolderAndNotification/README-tutorial.md
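If you would rather wire up the S3 trigger from code instead of the console, the notification configuration could be set roughly like this; the function ARN, account ID, and statement ID are placeholders, and note that put_bucket_notification_configuration replaces the bucket's whole notification configuration:
import boto3

s3 = boto3.client('s3')
lambda_client = boto3.client('lambda')

bucket = 'vod-watchfolder-firstname-lastname'  # watchfolder bucket from the steps above
function_arn = 'arn:aws:lambda:us-east-1:111122223333:function:VODLambdaConvert'  # placeholder ARN

# Allow S3 to invoke the convert Lambda
lambda_client.add_permission(
    FunctionName='VODLambdaConvert',
    StatementId='AllowS3Invoke',  # hypothetical statement ID
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn='arn:aws:s3:::' + bucket,
)

# Trigger the Lambda whenever an object lands under the inputs/ prefix
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': function_arn,
            'Events': ['s3:ObjectCreated:*'],
            'Filter': {'Key': {'FilterRules': [{'Name': 'prefix', 'Value': 'inputs/'}]}},
        }]
    },
)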
MPEG-DASH output has a .mpd manifest, but MediaPackage requires a .smil manifest. How can this file be auto-generated from the .mpd?
As of today, MediaConvert has no function to auto-generate a .smil file. Therefore, you could either change the output to HLS and ingest it into MediaPackage, or create the .smil file manually. Reference documents are below:
HLS VOD ingest to Mediapackage: https://github.com/aws-samples/aws-media-services-simple-vod-workflow/blob/master/13-VODMediaPackage/README-tutorial.md
Creating smil file: https://docs.aws.amazon.com/mediapackage/latest/ug/supported-inputs-vod-smil.html
I have a bit of an issue. I have this code that I created months ago and I am trying to modify it to get more info; this is where I am stuck.
I am calling describe_instances and iterating through it to get the info I need, but I also need the encryption details of each EC2 instance volume. I believe that is under describe_volumes.
How would I add this so my prints look seamless? Is it possible?
response = client.describe_instances(Filters=[{'Name': 'tag-key', 'Values': ['Name']}])
ec2tags = client.describe_tags()
# pprint(response)
for item in response['Reservations']:
    # pprint(item['Instances'])
    pprint("AWS Account ID: {}".format(item['OwnerId']))
    for instance_id in item['Instances']:
        # print(instance_id)
        Tags = instance_id['Tags']
        tag_name_value = ""
        for tag in Tags:
            if tag['Key'] == "Name":
                tag_name_value = tag["Value"]
                break
        # Tags = instance_id['Tags']['Value']
        State = instance_id['State']['Name']
        # print("EC2 Name: {}".format(Tags))
        print("EC2 Name: {}".format(tag_name_value))
        print("Instance Id is: {}\nInstance Type is: {}".format(instance_id['InstanceId'], instance_id['InstanceType']))
        print("EC2 State is: {}".format(State))
        if 'VpcId' in instance_id:
            print("VPC Id is: {}".format(instance_id['VpcId']))
        for volumes in instance_id['BlockDeviceMappings']:
            vol_list = [vol['Ebs']['VolumeId'] for vol in instance_id['BlockDeviceMappings']]
When I run it, I get this:
'AWS Account ID: 123456789012'
EC2 Name: ec2_web
Instance Id is: i-0d3c64d8771ru57574
Instance Type is: t2.small
EC2 State is: stopped
VPC Id is: vpc-026efa5966396
I want it to look like this
'AWS Account ID: 123456789012'
EC2 Name: ec2_web
Instance Id is: i-0d3c64d8771ru57574
Instance Type is: t2.small
EC2 State is: stopped
VPC Id is: vpc-026efa5966396
Volume Id: ['vol-054f5ef5eeb2025b0']
Volume Encrypt: true or false
You can use describe_volumes.
response = client.describe_instances(Filters=[{'Name': 'tag-key', 'Values': ['Name']}])
ec2tags = client.describe_tags()
# pprint(response)
for item in response['Reservations']:
    # pprint(item['Instances'])
    pprint("AWS Account ID: {}".format(item['OwnerId']))
    for instance_id in item['Instances']:
        # print(instance_id)
        Tags = instance_id['Tags']
        tag_name_value = ""
        for tag in Tags:
            if tag['Key'] == "Name":
                tag_name_value = tag["Value"]
                break
        # Tags = instance_id['Tags']['Value']
        State = instance_id['State']['Name']
        # print("EC2 Name: {}".format(Tags))
        print("EC2 Name: {}".format(tag_name_value))
        print("Instance Id is: {}\nInstance Type is: {}".format(instance_id['InstanceId'], instance_id['InstanceType']))
        print("EC2 State is: {}".format(State))
        if 'VpcId' in instance_id:
            print("VPC Id is: {}".format(instance_id['VpcId']))
        for volumes in instance_id['BlockDeviceMappings']:
            vol_list = [vol['Ebs']['VolumeId'] for vol in instance_id['BlockDeviceMappings']]
            volume_infos = client.describe_volumes(VolumeIds=vol_list)
            for vol in volume_infos['Volumes']:
                print(f"Volume Id: {vol['VolumeId']}")
                print(f"Volume Encrypt: {vol['Encrypted']}")
Note that there are some indentation issues in your code, so you have to fix them as well.
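As a side note, since the list comprehension already collects every volume ID for an instance, you could call describe_volumes once per instance rather than once per block-device mapping, which avoids printing the same volumes repeatedly. A minimal sketch of just that inner part, using the same variable names as above:
# Inside the per-instance loop: gather all volume IDs once, then describe them in a single call
vol_list = [vol['Ebs']['VolumeId'] for vol in instance_id['BlockDeviceMappings']]
if vol_list:
    volume_infos = client.describe_volumes(VolumeIds=vol_list)
    for vol in volume_infos['Volumes']:
        print(f"Volume Id: {vol['VolumeId']}")
        print(f"Volume Encrypt: {vol['Encrypted']}")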
I am trying to use terraform to scale RDS cluster for Aurora.
I am setting up an RDS cluster with 3 servers: 1 writer and 2 read replicas. Here are my requirements:
When any of the servers fails, add a new server so that the cluster always has a minimum of 3 servers.
When the CPU usage of any host exceeds 50%, add a new server to the cluster. The maximum number of servers is 4.
Is it possible to create a policy such that when any of the 3 servers fails, a new server is created for that cluster? If yes, how do I monitor server failure?
Do I need to use Application Auto Scaling, Auto Scaling, or both?
This is the link that matches my use case:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/appautoscaling_policy
I developed an example Terraform config file for your question. It is ready to use but should be treated as an example only, for learning and testing purposes. It was tested in the us-east-1 region using a default VPC, with Terraform 0.13 and AWS provider 3.6.
The key resources created by the example terraform config file are:
A public Aurora MySQL cluster with 1 writer and 2 replicas.
An application auto-scaling policy for the Aurora replicas based on CPU utilization (50%), with min and max capacity of 2 and 4 respectively.
An SNS topic and an SQS queue subscribed to the topic. With the queue, it's easy to view the SNS messages without needing to configure email or a Lambda.
Two RDS event subscriptions: one for cluster-level events (e.g. failures) and a second for instance-level events. In both cases the events are published to the SNS topic and are then available in SQS for viewing.
Below I expand on the questions asked and the example config file.
Aurora MySQL cluster with 1 writer and 2 replicas
The cluster will be provisioned with 1 writer and 2 replicas.
Autoscaling policy for the replicas
An application auto-scaling policy based on TargetTrackingScaling for RDSReaderAverageCPUUtilization. The scaling policy tracks the overall CPU utilization of the replicas (50%), not that of individual replicas.
This is good practice, as Aurora replicas are load balanced automatically at the connection level. This means that new connections will be spread roughly equally across the available replicas, provided that you are using the reader endpoint.
Also, any alarm or scaling policy that you apply to individual replicas will become void once those replicas get replaced by scale-in/out activities or failures. This is because such a policy would be bound to a specific DB instance; once the instance is gone, the alarm will not work.
The alarms that AWS creates on your behalf for the policy can be viewed in the CloudWatch Alarms console.
Aurora db instance failures
If any DB instance fails, Aurora will automatically proceed with fixing the problem, which can include restarting the DB instance, promoting a read replica to be the new master, restarting MySQL, or fully replacing a failed instance.
You can simulate these events yourself to some extent, as described in Testing Amazon Aurora Using Fault Injection Queries.
Test failover to read replica
aws rds failover-db-cluster --db-cluster-identifier aurora-cluster-demo
Test crash of master instance
This will result in an automated restart of the instance.
mysql -h <endpoint> -u root -e "ALTER SYSTEM CRASH INSTANCE;"
Test crash of reader instance
This will result in restarting MySQL.
mysql -h <endpoint> -u root -e "ALTER SYSTEM SIMULATE 100 PERCENT READ REPLICA FAILURE TO ALL FOR INTERVAL 10 MINUTE;"
Test replacement of the reader
You can simulate a total failure of the reader instance by manually deleting it in the console. Once deleted, Aurora will provision a replacement automatically.
Monitor cluster failure
You can use Amazon RDS Event Notification to automatically detect and respond to a variety of events associated with your Aurora cluster and its instances. Failures are one of the event types captured by the RDS event notification mechanism.
You can subscribe to the categories of events you are interested in and receive notifications in SNS. Once the events are detected and published to SNS, you can do what you want with them. Examples are: invoke a Lambda function to analyze the event and the current state of your Aurora cluster, execute corrective actions, or send email notifications.
For example, when you manually force a failover as above, you will get a message with the following info (only a fragment is shown):
\"Event Message\":\"Started cross AZ failover to DB instance: aurora-cluster-demo-1\"
and later:
\"Event Message\":\"Completed failover to DB instance: aurora-cluster-demo-1\"}"
The example Terraform config file subscribes to a number of categories, so you would have to fine-tune them to exactly what you require. You could also subscribe to all of them and have a Lambda function analyze the events as they happen and decide whether they should only be archived or whether the function should execute some automated procedure.
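For example, a small boto3 script can poll the SQS queue created further below and print the RDS event messages. This is only a sketch for inspecting events (the queue name matches the aws_sqs_queue resource in the config file), not part of the Terraform config itself:
import json
import boto3

sqs = boto3.client('sqs', region_name='us-east-1')
queue_url = sqs.get_queue_url(QueueName='aurora-notifications')['QueueUrl']

# Long-poll the queue and print the RDS event carried in each SNS envelope
while True:
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=10)
    for msg in resp.get('Messages', []):
        envelope = json.loads(msg['Body'])  # SNS notification envelope
        print(envelope.get('Message'))      # the RDS event itself
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])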
AppAutoScaling or AutoScaling
Aurora read replicas are scaled using Application Auto Scaling, not Auto Scaling (I assume here that you mean EC2 Auto Scaling). EC2 Auto Scaling is used only for regular EC2 instances, not for RDS.
Example terraform config file
provider "aws" {
# YOUR DATA
region = "us-east-1"
}
data "aws_vpc" "default" {
default = true
}
resource "aws_rds_cluster" "default" {
cluster_identifier = "aurora-cluster-demo"
engine = "aurora-mysql"
engine_version = "5.7.mysql_aurora.2.03.2"
database_name = "myauroradb"
master_username = "root"
master_password = "bar4343sfdf233"
vpc_security_group_ids = [aws_security_group.allow_mysql.id]
backup_retention_period = 1
skip_final_snapshot = true
}
resource "aws_rds_cluster_instance" "cluster_instances" {
count = 3
identifier = "aurora-cluster-demo-${count.index}"
cluster_identifier = aws_rds_cluster.default.id
instance_class = "db.t2.small"
publicly_accessible = true
engine = aws_rds_cluster.default.engine
engine_version = aws_rds_cluster.default.engine_version
}
resource "aws_security_group" "allow_mysql" {
name = "allow_mysql"
description = "Allow Mysql inbound Internet traffic"
vpc_id = data.aws_vpc.default.id
ingress {
description = "Mysql poert"
from_port = 3306
to_port = 3306
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_appautoscaling_target" "replicas" {
service_namespace = "rds"
scalable_dimension = "rds:cluster:ReadReplicaCount"
resource_id = "cluster:${aws_rds_cluster.default.id}"
min_capacity = 2
max_capacity = 4
}
resource "aws_appautoscaling_policy" "replicas" {
name = "cpu-auto-scaling"
service_namespace = aws_appautoscaling_target.replicas.service_namespace
scalable_dimension = aws_appautoscaling_target.replicas.scalable_dimension
resource_id = aws_appautoscaling_target.replicas.resource_id
policy_type = "TargetTrackingScaling"
target_tracking_scaling_policy_configuration {
predefined_metric_specification {
predefined_metric_type = "RDSReaderAverageCPUUtilization"
}
target_value = 50
scale_in_cooldown = 300
scale_out_cooldown = 300
}
}
resource "aws_sns_topic" "default" {
name = "rds-events"
}
resource "aws_sqs_queue" "default" {
name = "aurora-notifications"
}
resource "aws_sns_topic_subscription" "user_updates_sqs_target" {
topic_arn = aws_sns_topic.default.arn
protocol = "sqs"
endpoint = aws_sqs_queue.default.arn
}
resource "aws_sqs_queue_policy" "test" {
queue_url = aws_sqs_queue.default.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "sqspolicy",
"Statement": [
{
"Sid": "First",
"Effect": "Allow",
"Principal": "*",
"Action": "sqs:SendMessage",
"Resource": "${aws_sqs_queue.default.arn}",
"Condition": {
"ArnEquals": {
"aws:SourceArn": "${aws_sns_topic.default.arn}"
}
}
}
]
}
POLICY
}
resource "aws_db_event_subscription" "cluster" {
name = "cluster-events"
sns_topic = aws_sns_topic.default.arn
source_type = "db-cluster"
event_categories = [
"failover", "failure", "deletion", "notification"
]
}
resource "aws_db_event_subscription" "instances" {
name = "instances-events"
sns_topic = aws_sns_topic.default.arn
source_type = "db-instance"
event_categories = [
"availability",
"deletion",
"failover",
"failure",
"low storage",
"maintenance",
"notification",
"read replica",
"recovery",
"restoration",
]
}
output "endpoint" {
value = aws_rds_cluster.default.endpoint
}
output "reader-endpoint" {
value = aws_rds_cluster.default.reader_endpoint
}
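After terraform apply, you can verify that the scaling target and policy were registered. A small boto3 check, assuming the cluster identifier from the config above:
import boto3

aas = boto3.client('application-autoscaling', region_name='us-east-1')

# Confirm the registered scalable target (min 2 / max 4 replicas) and the CPU-based policy
targets = aas.describe_scalable_targets(
    ServiceNamespace='rds',
    ResourceIds=['cluster:aurora-cluster-demo'],
    ScalableDimension='rds:cluster:ReadReplicaCount',
)
policies = aas.describe_scaling_policies(
    ServiceNamespace='rds',
    ResourceId='cluster:aurora-cluster-demo',
)
print(targets['ScalableTargets'])
print([p['PolicyName'] for p in policies['ScalingPolicies']])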
I can create a VPC really quick like this:
import boto3 as boto

inst = boto.Session(profile_name='myprofile').resource('ec2')

def createVpc(nid, az='us-west-2'):
    '''Create the VPC'''
    vpc = inst.create_vpc(CidrBlock='10.' + str(nid) + '.0.0/16')
    vpc.create_tags(
        Tags=[{'Key': 'Name', 'Value': 'VPC-' + nid}, ]
    )
    vpc.wait_until_available()

createVpc('111')
How can I check whether a VPC with CidrBlock 10.111.0.0/16 or Name VPC-111 already exists before it gets created? I actually want to do the same check prior to creating any AWS resource, but the VPC is a start. Best!
EDIT:
I found that vpcs.filter can be used to query a given VPC's tags; e.g.:
fltr = [{'Name':'tag:Name', 'Values':['VPC-'+str(nid)]}]
list(inst.vpcs.filter(Filters=fltr))
which returns a list object like this: [ec2.Vpc(id='vpc-43e56b3b')]. A list with length 0 (zero) is a good indication of a non-existent VPC, but I was wondering if there is a more boto/AWS-native way of detecting that.
Yes, you need to use filters with the describe_vpcs API.
The code below will list all VPCs that match both the Name tag value and the CIDR block:
import boto3

client = boto3.client('ec2', region_name='us-east-1')

response = client.describe_vpcs(
    Filters=[
        {
            'Name': 'tag:Name',
            'Values': [
                '<Enter your VPC name here>',
            ]
        },
        {
            'Name': 'cidr-block-association.cidr-block',
            'Values': [
                '10.0.0.0/16',  # Enter your cidr block here
            ]
        },
    ]
)

resp = response['Vpcs']
if resp:
    print(resp)
else:
    print('No vpcs found')
The CIDR block is the primary check for a VPC. I would suggest using the CIDR filter alone instead of combining it with the Name tag, as that way you can prevent creating VPCs with the same CIDR block.
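To make the creation itself idempotent, you could fold the CIDR check into the create function from the question. A minimal sketch, using the same profile and region assumptions as the question:
import boto3

session = boto3.Session(profile_name='myprofile')
ec2_client = session.client('ec2', region_name='us-west-2')
ec2_resource = session.resource('ec2', region_name='us-west-2')

def createVpcIfAbsent(nid):
    '''Create VPC 10.<nid>.0.0/16 only if no VPC already uses that CIDR block.'''
    cidr = '10.' + str(nid) + '.0.0/16'
    existing = ec2_client.describe_vpcs(
        Filters=[{'Name': 'cidr-block-association.cidr-block', 'Values': [cidr]}]
    )['Vpcs']
    if existing:
        print('VPC with CIDR', cidr, 'already exists:', existing[0]['VpcId'])
        return existing[0]['VpcId']
    vpc = ec2_resource.create_vpc(CidrBlock=cidr)
    vpc.create_tags(Tags=[{'Key': 'Name', 'Value': 'VPC-' + str(nid)}])
    vpc.wait_until_available()
    return vpc.id

createVpcIfAbsent('111')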