UnauthorizedOperation when calling the CreateSnapshot operation from AWS Lambda function - python-3.x

An error occurred (UnauthorizedOperation) when calling the CreateSnapshot operation: You are not authorized to perform this operation. Encoded authorization failure message: jL5ZYRDd52Y_Xpt7xet7GIyJZkUpGhgJGwCsg
The Lambda function's execution role has the following policies attached:
AWSLambdaBasicExecutionRole (AWS managed): Provides write permissions to CloudWatch Logs.
s3-read-and-write-policy (Customer inline)
ebs-cloudtrail-read-policy (Customer inline)
ebs-ssm-read-write-policy (Customer inline)
ebs-volume-and-snapshot-read-policy
from datetime import date

def createEBSSnapshots(volumes_to_delete, ec2Client, ssmClient):
    print('Initiating create snapshot requests')
    for volume in volumes_to_delete:
        # TODO: write code...
        print('Creating snapshot for ', volume)
        today = str(date.today())
        # print("Today's date:", today)
        ec2Client.create_snapshot(
            Description='This snapshot is generated for volume which was not utilized since last '
                        + str(timeWindowDeleteVol) + ' hours.',
            # OutpostArn='string',
            VolumeId=volume,
            TagSpecifications=[
                {
                    'ResourceType': 'snapshot',
                    'Tags': [
                        {
                            'Key': 'unusedEBSSnapshot',
                            'Value': 'true'
                        },
                        {
                            'Key': 'unusedVolumeID',
                            'Value': volume
                        },
                        {
                            'Key': 'creationDate',
                            'Value': today
                        },
                    ]
                },
            ],
            # DryRun=True
        )
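The error says the execution role is not allowed to call ec2:CreateSnapshot, and the policy names listed above suggest only read access to volumes and snapshots. The encoded failure message can be decoded to confirm exactly which action and resource were denied; a minimal sketch, assuming the caller also has the sts:DecodeAuthorizationMessage permission and that the full encoded string (truncated above) is at hand:

import json

import boto3

sts_client = boto3.client('sts')

# Paste the complete "Encoded authorization failure message" here
# (it is truncated in the error shown above).
encoded_message = 'jL5ZYRDd52Y_Xpt7xet7GIyJZkUpGhgJGwCsg...'

decoded = sts_client.decode_authorization_message(EncodedMessage=encoded_message)
# DecodedMessage is a JSON document naming the denied action
# (here ec2:CreateSnapshot), the resource, and the evaluated policies.
print(json.dumps(json.loads(decoded['DecodedMessage']), indent=2))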

Related

Name error - "name 'ssm_parameter_namee' is not defined"

I am trying to update a parameter in the SSM Parameter Store and got the error below. What mistake am I making? Please clarify.
Lambda Code:
# Lambda code
import json
import logging

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)
ssm_client = boto3.client('ssm')
parameter_name = ''

def lambda_handler(event, context):
    logger.info('Printing event: {}'.format(event))
    process_sns_event(event)
    return None

def process_sns_event(event):
    for record in (event['Records']):
        event_message = record['Sns']['Message']
        # convert the event message to json
        message_json = json.loads(event_message)
        # obtain the image state
        image_state = (message_json['state']['status'])
        # obtain the image name
        image_name = (message_json['name'])
        # assign SSM parameter based on image_name
        #parameter_name = f'/ec2-image-builder/{{image_name}}/latest'
        def path(imagename):
            first = "/ec2-image-builder/"
            last = "/latest"
            result = first + imagename + last
            return result
        parameter_name = path(image_name)
        logger.info('image_name: {}'.format(image_name))
        logger.info('ssm_parameter_name: {}'.format(parameter_name))
        # update the SSM parameter if the image state is available
        if (image_state == 'AVAILABLE'):
            logger.info('Image is available')
            # obtain ami id
            ami = message_json['outputResources']['amis'][0]
            recipe_name = message_json['name']
            logger.info('AMI ID: {}'.format(ami['image']))
            # update SSM parameter
            response = ssm_client.put_parameter(
                #Name=parameter_name,
                Name='/ec2-image-builder/linux/latest',
                Description='Latest AMI ID',
                Value=ami['image'],
                Type='String',
                Overwrite=True,
                Tier='Standard'
            )
            logger.info('SSM Updated: {}'.format(response))
            # add tags to the SSM parameter
            ssm_client.add_tags_to_resource(
                ResourceType='Parameter',
                ResourceId=ssm_parameter_namee,
                Tags=[
                    {
                        'Key': 'Source',
                        'Value': 'EC2 Image Builder'
                    },
                    {
                        'Key': 'AMI_REGION',
                        'Value': ami['region']
                    },
                    {
                        'Key': 'AMI_ID',
                        'Value': ami['image']
                    },
                    {
                        'Key': 'AMI_NAME',
                        'Value': ami['name']
                    },
                    {
                        'Key': 'RECIPE_NAME',
                        'Value': recipe_name
                    },
                    {
                        'Key': 'SOURCE_PIPELINE_ARN',
                        'Value': message_json['sourcePipelineArn']
                    },
                ],
            )
    return None
Error output
Response on test:
{
  "errorMessage": "name 'ssm_parameter_namee' is not defined",
  "errorType": "NameError",
  "requestId": "54ad245c-84f3-4c46-9e9b-1798f86a8bce",
  "stackTrace": [
    "  File \"/var/task/lambda_function.py\", line 19, in lambda_handler\n    process_sns_event(event)\n",
    "  File \"/var/task/lambda_function.py\", line 71, in process_sns_event\n    ResourceId=ssm_parameter_namee,\n"
  ]
}
The answer is in your error ...
Typo: name or namee? Is it ssm_parameter_namee or ssm_parameter_name?
I highly recommend using an IDE; it will point you straight at simple mistakes like this :)
logger.info('ssm_parameter_name: {}'.format(parameter_name))
ResourceId=ssm_parameter_namee
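The variable that actually exists is parameter_name, so the tagging call references a name that was never defined. A minimal sketch of the corrected call, assuming the intent is to tag the parameter that was just written (here the hard-coded name passed to put_parameter):

ssm_client.add_tags_to_resource(
    ResourceType='Parameter',
    ResourceId='/ec2-image-builder/linux/latest',  # same Name used in put_parameter above
    Tags=[
        {
            'Key': 'Source',
            'Value': 'EC2 Image Builder'
        },
    ],
)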

boto3 change_resource_record_sets with multiple IP addresses

How do I add an A record for a load balancer where I'm able to add more than one IP address to the same record? In the console it is possible, but I'm not sure how to do it in Python. Below is my attempt, which only adds the last Value to the record.
#!/usr/bin/env python3
import boto3

# TODO: use env variables for names, Values and zonename
def lambda_handler(event, context):
    client = boto3.client('route53')
    response = client.change_resource_record_sets(
        HostedZoneId='Z03115902SB93XHRQS9LT',
        ChangeBatch={
            'Changes': [
                {
                    'Action': 'UPSERT',
                    'ResourceRecordSet': {
                        'Name': 'web-staging-lb.ggnp3ggdjcvwpfpqhsuwda.soemdomain.com',
                        'Type': 'A',
                        'TTL': 60,
                        'ResourceRecords': [
                            {
                                'Value': '10.201.11.246',
                                'Value': '10.201.10.12',
                            },
                        ],
                    },
                },
            ],
            'Comment': 'Record to acces private ips of alb',
        },
    )
ResourceRecords is a list that can contain multiple elements, one {'Value': ...} dict per IP address:
'ResourceRecords': [
    {
        'Value': '10.201.10.12',
    },
    {
        'Value': '10.201.11.246',
    },
],
Managing an A record with multiple IPs in Route53 with python boto3
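Putting it together, a self-contained sketch of the corrected UPSERT (hosted zone ID, record name, and addresses are the placeholders from the question):

import boto3

client = boto3.client('route53')

# One dict per IP address inside ResourceRecords produces a single A record
# that resolves to both addresses.
client.change_resource_record_sets(
    HostedZoneId='Z03115902SB93XHRQS9LT',
    ChangeBatch={
        'Comment': 'Record to access private ips of alb',
        'Changes': [
            {
                'Action': 'UPSERT',
                'ResourceRecordSet': {
                    'Name': 'web-staging-lb.ggnp3ggdjcvwpfpqhsuwda.soemdomain.com',
                    'Type': 'A',
                    'TTL': 60,
                    'ResourceRecords': [
                        {'Value': '10.201.10.12'},
                        {'Value': '10.201.11.246'},
                    ],
                },
            },
        ],
    },
)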

Boto3 list of AWS services

I'm currently using the Filter option in the get_cost_and_usage method, but I can't filter by service code, so I need a list of the services, and I can't find one.
What I need is a way to get the names of all the services in the form Amazon Elastic Compute Cloud - Compute.
Here's an example of my code:
servicio = "Amazon Elastic Compute Cloud - Compute"
billing = boto3.client(
    'ce', region_name=region, aws_access_key_id=access_key,
    aws_secret_access_key=secret_key)
response = billing.get_cost_and_usage(
    TimePeriod={
        'Start': str(fecha_inicio),
        'End': str(fecha_fin)
    },
    Filter={
        'Dimensions': {
            'Key': 'SERVICE',
            'Values': [servicio, ],
        },
    },
    Granularity='MONTHLY',
    Metrics=[
        'AmortizedCost',
    ],
)
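One way to enumerate the exact strings the SERVICE dimension accepts is the Cost Explorer get_dimension_values call; a minimal sketch, assuming the same region, credential, and date variables used above:

import boto3

billing = boto3.client(
    'ce', region_name=region, aws_access_key_id=access_key,
    aws_secret_access_key=secret_key)

# Returns every service name Cost Explorer knows for the period,
# e.g. "Amazon Elastic Compute Cloud - Compute".
response = billing.get_dimension_values(
    TimePeriod={
        'Start': str(fecha_inicio),
        'End': str(fecha_fin)
    },
    Dimension='SERVICE',
    Context='COST_AND_USAGE',
)
service_names = [item['Value'] for item in response['DimensionValues']]
print(service_names)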

How to create an S3 bucket with logging enabled and make it private using boto3?

I want to create a bucket with
Logging
Encryption
Private access, and
An alert when it is accessed without HTTPS.
How can I achieve this?
I have tried a few lines using boto3 but I'm getting an error on the logging step:
def create_S3_Bucket(env, filepath):
    s3_client = AWSresourceconnect(filepath, 's3')
    bucket_name = "s3bucket123"
    print(bucket_name)
    try:
        s3_bucket = s3_client.create_bucket(Bucket=bucket_name)
        print('bucket created')
        print(s3_bucket)
        response = s3_client.put_bucket_encryption(
            Bucket=bucket_name,
            ServerSideEncryptionConfiguration={
                'Rules': [
                    {
                        'ApplyServerSideEncryptionByDefault': {
                            'SSEAlgorithm': 'AES256'
                        }
                    },
                ]
            }
        )
        print("response of encryption")
        print(response)  # prints metadata successfully
        responselogging = s3_client.put_bucket_logging(
            Bucket=bucket_name,
            BucketLoggingStatus={
                'LoggingEnabled': {
                    'TargetBucket': bucket_name,
                    'TargetGrants': [
                        {
                            'Grantee': {
                                'Type': 'Group',
                                'URI': 'http://acs.amazonaws.com/groups/global/AllUsers',
                            },
                            'Permission': 'READ',
                        },
                    ],
                    'TargetPrefix': 'test/',
                },
            },
        )
        print("response of logging")
        print(responselogging)
        Output = bucket_name
    except Exception as e:
        Output = "error:" + str(e)
        print(e)  # error as An error occurred (InvalidTargetBucketForLogging) when calling the PutBucketLogging operation: You must give the log-delivery group WRITE and READ_ACP permissions to the target bucket
        bucket_name = ''
    return Output
I want to enable:
Logging
Private bucket and objects
Encryption
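The InvalidTargetBucketForLogging error means the target bucket's ACL does not yet grant the S3 log-delivery group the WRITE and READ_ACP permissions it asks for. A minimal sketch of one way to satisfy that before calling put_bucket_logging, using the canned log-delivery-write ACL (this assumes ACLs are enabled on the bucket, i.e. object ownership is not set to "bucket owner enforced"; the bucket name is the placeholder from the question):

import boto3

s3_client = boto3.client('s3')
bucket_name = 's3bucket123'  # placeholder name from the question

# The canned 'log-delivery-write' ACL grants the log-delivery group
# the WRITE and READ_ACP permissions the error message mentions.
s3_client.put_bucket_acl(Bucket=bucket_name, ACL='log-delivery-write')

# Keep the bucket private by blocking all public access.
s3_client.put_public_access_block(
    Bucket=bucket_name,
    PublicAccessBlockConfiguration={
        'BlockPublicAcls': True,
        'IgnorePublicAcls': True,
        'BlockPublicPolicy': True,
        'RestrictPublicBuckets': True,
    },
)

# With the ACL in place, server access logging into the same bucket should be accepted.
s3_client.put_bucket_logging(
    Bucket=bucket_name,
    BucketLoggingStatus={
        'LoggingEnabled': {
            'TargetBucket': bucket_name,
            'TargetPrefix': 'test/',
        },
    },
)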

How to configure "Use AWS Glue Data Catalog for table metadata" for EMR cluster option through boto library?

I am trying to create an EMR cluster from an AWS Lambda function written in Python with the boto3 library. I am able to create the cluster, but I want to use "AWS Glue Data Catalog for table metadata" so that Spark can read directly from the Glue Data Catalog. When creating an EMR cluster through the AWS console I normally tick a checkbox ("Use AWS Glue Data Catalog for table metadata") which solves my purpose, but I can't work out how to achieve the same thing through the boto3 library.
Below is the Python code I am using to create the EMR cluster:
import logging

import boto3

logger = logging.getLogger()

try:
    connection = boto3.client(
        'emr',
        region_name='xxx'
    )
    cluster_id = connection.run_job_flow(
        Name='EMR-LogProcessing',
        LogUri='s3://somepath/',
        ReleaseLabel='emr-5.21.0',
        Applications=[
            {
                'Name': 'Spark'
            },
        ],
        Instances={
            'InstanceGroups': [
                {
                    'Name': "MasterNode",
                    'Market': 'SPOT',
                    'InstanceRole': 'MASTER',
                    'BidPrice': 'xxx',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 1,
                },
                {
                    'Name': "SlaveNode",
                    'Market': 'SPOT',
                    'InstanceRole': 'CORE',
                    'BidPrice': 'xxx',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 2,
                }
            ],
            'Ec2KeyName': 'xxx',
            'KeepJobFlowAliveWhenNoSteps': True,
            'TerminationProtected': False
        },
        VisibleToAllUsers=True,
        JobFlowRole='EMR_EC2_DefaultRole',
        ServiceRole='EMR_DefaultRole',
        Tags=[
            {
                'Key': 'Name',
                'Value': 'EMR-LogProcessing',
            },
            {
                'Key': 'env',
                'Value': 'dev',
            },
        ],
    )
    print('cluster created with the step...', cluster_id['JobFlowId'])
except Exception as exp:
    logger.info("Exception occurred in createEMRcluster!!! %s", str(exp))
I am not finding any clue how I can achieve it. Please help.
Specify the value for hive.metastore.client.factory.class using the hive-site configuration classification
[
  {
    "Classification": "hive-site",
    "Properties": {
      "hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
    }
  }
]
The above snippet can be passed to boto3's run_job_flow function via its Configurations parameter.
Reference:
https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hive-metastore-glue.html
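For illustration, a minimal sketch of passing that classification through run_job_flow's Configurations parameter (region, instance sizing, and roles are placeholders taken from the question; for Spark SQL specifically, the spark-hive-site classification takes the same property):

import boto3

connection = boto3.client('emr', region_name='xxx')  # placeholder region from the question

# Python form of the JSON classification shown in the answer above.
glue_catalog_config = [
    {
        'Classification': 'hive-site',
        'Properties': {
            'hive.metastore.client.factory.class':
                'com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory'
        }
    },
]

cluster_id = connection.run_job_flow(
    Name='EMR-LogProcessing',
    ReleaseLabel='emr-5.21.0',
    Applications=[{'Name': 'Spark'}],
    Configurations=glue_catalog_config,
    Instances={
        'InstanceGroups': [
            {
                'Name': 'MasterNode',
                'InstanceRole': 'MASTER',
                'InstanceType': 'm3.xlarge',
                'InstanceCount': 1,
            },
        ],
        'KeepJobFlowAliveWhenNoSteps': True,
        'TerminationProtected': False,
    },
    JobFlowRole='EMR_EC2_DefaultRole',
    ServiceRole='EMR_DefaultRole',
    VisibleToAllUsers=True,
)
print(cluster_id['JobFlowId'])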
