Boto3 list of AWS services - python-3.x

I'm currently using the filter option in the get_cost_and_usage method, but I can't filter by service code, so I need a list of the services and I can't find one.
What I need is a way to get the names of all the services, like this: Amazon Elastic Compute Cloud - Compute.
Here's an example of my code:
servicio = "Amazon Elastic Compute Cloud - Compute"

billing = boto3.client(
    'ce', region_name=region, aws_access_key_id=access_key,
    aws_secret_access_key=secret_key)

response = billing.get_cost_and_usage(
    TimePeriod={
        'Start': str(fecha_inicio),
        'End': str(fecha_fin)
    },
    Filter={
        'Dimensions': {
            'Key': 'SERVICE',
            'Values': [servicio],
        },
    },
    Granularity='MONTHLY',
    Metrics=[
        'AmortizedCost',
    ],
)
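One way to enumerate the service names that Cost Explorer knows about is its get_dimension_values call on the same ce client. A minimal sketch, reusing the billing client and date variables from the snippet above; the helper name and the pagination loop are assumptions, not AWS-provided code:

def list_service_names(billing, start, end):
    # Sketch: enumerate the SERVICE dimension values that Cost Explorer knows about.
    names = []
    token = None
    while True:
        kwargs = {
            'TimePeriod': {'Start': str(start), 'End': str(end)},
            'Dimension': 'SERVICE',
            'Context': 'COST_AND_USAGE',
        }
        if token:
            kwargs['NextPageToken'] = token
        page = billing.get_dimension_values(**kwargs)
        names.extend(value['Value'] for value in page['DimensionValues'])
        token = page.get('NextPageToken')
        if not token:
            break
    return names

# Each entry is a display name such as "Amazon Elastic Compute Cloud - Compute",
# which is exactly the string the SERVICE filter in get_cost_and_usage expects.
for name in list_service_names(billing, fecha_inicio, fecha_fin):
    print(name)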

Related

Google Analytics Core Reporting API - Cannot get information with batchGet without entering a metric

I'm trying to retrieve data from Google Analytics to build a specific report.
This report does not include any metrics, only dimensions.
The problem is that batchGet requires a metric in order to send a request properly.
This is how the data is requested via Data Studio.
As you can see, the metrics section is empty.
And this is how I run the batchGet request (using Python):
analytics.reports().batchGet(
    body={
        'reportRequests': [
            {
                'viewId': VIEW_ID,
                'dateRanges': [{'startDate': '2022-01-01', 'endDate': '2022-01-01'}],
                'metrics': [],
                'dimensions': [{'name': 'ga:date'},
                               {'name': 'ga:transactionId'},
                               {'name': 'ga:adContent'},
                               {'name': 'ga:source'},
                               {'name': 'ga:medium'},
                               {'name': 'ga:campaign'},
                               {'name': 'ga:keyword'}]
            }]
    }).execute()
When I run this code I get an error:
"Selected dimensions and metrics cannot be queried together."
And that's because I request both dimensions {'name': 'ga:transactionId'} and {'name': 'ga:adContent'} in the same request.
How can I request this data without any errors? I know I can see it in Data Studio, but I cannot request it via the Google Analytics Core Reporting API.
Thanks in advance,
Tom
Not all dimensions and metrics can be queried together.
Selected dimensions and metrics cannot be queried together.
Google Analytics has different scopes for dimensions and metrics: Hits and Sessions. So when you build your report and you are calling hits (or 'hit-level metrics'), you must call dimensions only within that scope.
There is no workaround for this error other than removing one of the offending items. You can use the Dimensions and Metrics reference to see which ones match.
Actually, I finally found a solution.
I use two different requests with a unique key that maps between the transactionId field and the adContent field.
def get_campaign_report(VIEW_ID, analytics, specific_day):
    return analytics.reports().batchGet(
        body={
            'reportRequests': [
                {
                    'viewId': VIEW_ID,
                    'dateRanges': [{'startDate': specific_day, 'endDate': specific_day}],
                    'metrics': [{'expression': 'ga:transactions'}],
                    'dimensions': [{'name': 'ga:clientID'},
                                   {'name': 'ga:visitLength'},
                                   {'name': 'ga:date'},
                                   {'name': 'ga:source'},
                                   {'name': 'ga:medium'},
                                   {'name': 'ga:campaign'},
                                   {'name': 'ga:adContent'},
                                   {'name': 'ga:keyword'}]
                }]
        }
    ).execute()


def get_transaction_id(VIEW_ID, analytics, specific_day):
    return analytics.reports().batchGet(
        body={
            'reportRequests': [
                {
                    'viewId': VIEW_ID,
                    'dateRanges': [{'startDate': specific_day, 'endDate': specific_day}],
                    'metrics': [{'expression': 'ga:transactions'}],
                    'dimensions': [{'name': 'ga:clientID'},
                                   {'name': 'ga:visitLength'},
                                   {'name': 'ga:transactionId'}]
                }]
        }
    ).execute()
Then I created a dictionary for each request, where the key is [{ga:clientID}{ga:visitLength}], which is a fairly unique ID, and then I could map between {ga:transactionId} and {ga:adContent}.
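A rough sketch of that mapping step, assuming the usual Reporting API v4 response shape ('reports' -> 'data' -> 'rows', with the 'dimensions' list in the order requested); the helper name and the exact dimension indexes are illustrative:

def rows(response):
    # Pull the raw rows out of a Reporting API v4 batchGet response.
    return response['reports'][0]['data'].get('rows', [])

campaign_resp = get_campaign_report(VIEW_ID, analytics, '2022-01-01')
transaction_resp = get_transaction_id(VIEW_ID, analytics, '2022-01-01')

# Key both reports by (ga:clientID, ga:visitLength) -- the first two dimensions
# in each request -- which is unique enough to join the two result sets on.
ad_content_by_key = {
    (r['dimensions'][0], r['dimensions'][1]): r['dimensions'][6]  # ga:adContent
    for r in rows(campaign_resp)
}
transaction_by_key = {
    (r['dimensions'][0], r['dimensions'][1]): r['dimensions'][2]  # ga:transactionId
    for r in rows(transaction_resp)
}

# Map transactionId -> adContent through the shared key.
transaction_to_ad_content = {
    tx_id: ad_content_by_key[key]
    for key, tx_id in transaction_by_key.items()
    if key in ad_content_by_key
}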

UnauthorizedOperation when calling the CreateSnapshot operation from AWS Lambda function

An error occurred (UnauthorizedOperation) when calling the CreateSnapshot operation: You are not authorized to perform this operation. Encoded authorization failure message: jL5ZYRDd52Y_Xpt7xet7GIyJZkUpGhgJGwCsg

The policies attached to the Lambda's execution role are:
AWSLambdaBasicExecutionRole (AWS managed): Provides write permissions to CloudWatch Logs.
s3-read-and-write-policy (customer inline)
ebs-cloudtrail-read-policy (customer inline)
ebs-ssm-read-write-policy (customer inline)
ebs-volume-and-snapshot-read-policy
def createEBSSnapshots(volumes_to_delete, ec2Client, ssmClient):
    print('Initiating create snapshot requests')
    for volume in volumes_to_delete:
        # TODO: write code...
        print('Creating snapshot for ', volume)
        today = str(date.today())
        # print("Today's date:", today)
        ec2Client.create_snapshot(
            Description='This snapshot is generated for volume which was not utilized since last ' + str(timeWindowDeleteVol) + ' hours.',
            # OutpostArn='string',
            VolumeId=volume,
            TagSpecifications=[
                {
                    'ResourceType': 'snapshot',
                    'Tags': [
                        {
                            'Key': 'unusedEBSSnapshot',
                            'Value': 'true'
                        },
                        {
                            'Key': 'unusedVolumeID',
                            'Value': volume
                        },
                        {
                            'Key': 'creationDate',
                            'Value': today
                        },
                    ]
                },
            ],
            # DryRun=True
        )
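Since the error only exposes an encoded failure message, one way to see exactly which permission is missing (most likely ec2:CreateSnapshot and ec2:CreateTags, given that the EBS policies listed above are read-only) is to decode it with STS. A minimal sketch, assuming the full encoded string from the exception is available in the placeholder variable encoded_message and that the caller is allowed to call sts:DecodeAuthorizationMessage:

import json

import boto3

# encoded_message is a placeholder for the full encoded string from the exception.
sts = boto3.client('sts')
decoded = sts.decode_authorization_message(EncodedMessage=encoded_message)
# DecodedMessage is a JSON document naming the denied action and resource.
print(json.dumps(json.loads(decoded['DecodedMessage']), indent=2))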

Adding multiple filters in boto3

Hi, I have a requirement to fetch EC2 instance details for instances with the following tags:
prod = monitor
test = monitor
The objective is to list only instances with these tags. I was able to add one filter, but I'm not sure how to use multiple filters in ec2.instances.filter(Filters=...).
from collections import defaultdict

import boto3

# Connect to EC2
ec2 = boto3.resource('ec2')

# Get information for all running instances
running_instances = ec2.instances.filter(Filters=[{
    'Name': 'instance-state-name',
    'Values': ['running'],
    'Name': 'tag:prod',
    'Values': ['monitor']}])

ec2info = defaultdict()
for instance in running_instances:
    for tag in instance.tags:
        if 'Name' in tag['Key']:
            name = tag['Value']
    # Add instance info to a dictionary
    ec2info[instance.id] = {
        'Name': name,
        'Type': instance.instance_type,
        'State': instance.state['Name'],
        'Private IP': instance.private_ip_address,
        'Public IP': instance.public_ip_address,
        'Launch Time': instance.launch_time
    }

attributes = ['Name', 'Type', 'State', 'Private IP', 'Public IP', 'Launch Time']
for instance_id, instance in ec2info.items():
    for key in attributes:
        print("{0}: {1}".format(key, instance[key]))
    print("------")
Your syntax does not quite seem correct. You should be supplying a list of dictionaries. You should be able to duplicate tags, too:
Filters=[
    {'Name': 'instance-state-name', 'Values': ['running']},
    {'Name': 'tag:prod', 'Values': ['monitor']},
    {'Name': 'tag:test', 'Values': ['monitor']},
]
This should return instances with both of those tags.
If you want instances with either of the tags, then I don't think you can filter it in a single call. Instead, use ec2.instances.all(), then loop through the returned instances in Python and apply your logic (a sketch follows below).
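A rough sketch of that "either tag" approach, looping over all instances and keeping those tagged prod=monitor or test=monitor; the helper name is illustrative:

import boto3

ec2 = boto3.resource('ec2')

def has_monitor_tag(instance):
    # True if the instance carries prod=monitor OR test=monitor.
    tags = {t['Key']: t['Value'] for t in (instance.tags or [])}
    return tags.get('prod') == 'monitor' or tags.get('test') == 'monitor'

for instance in ec2.instances.all():
    if has_monitor_tag(instance):
        print(instance.id, instance.instance_type, instance.state['Name'])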
Try this, for example:
response = ce.get_cost_and_usage(
    Granularity='MONTHLY',
    TimePeriod={
        'Start': start_date,
        'End': end_date
    },
    GroupBy=[
        {
            'Type': 'DIMENSION',
            'Key': 'SERVICE'
        },
    ],
    # Combine the two dimension filters with "And"; a plain dict with two
    # "Dimensions" keys would silently keep only the last one.
    Filter={
        "And": [
            {"Dimensions": {"Key": "LINKED_ACCOUNT", "Values": [awslinkedaccount[0]]}},
            {"Dimensions": {"Key": "RECORD_TYPE", "Values": ["Usage"]}},
        ]
    },
    Metrics=[
        'BlendedCost',  # get_cost_and_usage expects the CamelCase metric names
    ],
)
print(response)
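To pull the service names out of that grouped response, you can walk ResultsByTime and collect the group keys. A small sketch, assuming the response variable above:

# Collect the distinct service names from the grouped response above.
services = set()
for period in response['ResultsByTime']:
    for group in period['Groups']:
        services.add(group['Keys'][0])  # 'Keys' holds the SERVICE dimension value

for name in sorted(services):
    print(name)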

Boto3: Get autoscaling group name based on multiple tags

I have a requirement to get the name of the autoscaling group based on its tags.
I have tried following code:
kwargsAsgTags = {
    'Filters': [
        {
            'Name': 'key',
            'Values': ['ApplicationName']
        },
        {
            'Name': 'value',
            'Values': ['my-app-name']
        }
    ]
}
By using the above filter I can get the autoscaling group name, but since the same 'ApplicationName' tag is used in multiple environments (dev/qa/uat), the output prints the autoscaling groups belonging to all environments. How do I filter on the environment name as well?
For that I've tried the following, but this time it prints all auto-scaling groups belonging to the 'dev' environment as well.
kwargsAsgTags = {
    'Filters': [
        {
            'Name': 'key',
            'Values': ['ApplicationName', 'EnvName']
        },
        {
            'Name': 'value',
            'Values': ['my-app-name', 'dev']
        }
    ]
}
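One way to narrow this is to fetch the groups and match both tag pairs client-side rather than relying on the key/value filters, which combine as "any key in the list AND any value in the list". A sketch using the standard describe_auto_scaling_groups paginator; the tag keys and values are taken from the question:

import boto3

asg = boto3.client('autoscaling')

# Both tag/value pairs the group must carry (taken from the question).
wanted = {'ApplicationName': 'my-app-name', 'EnvName': 'dev'}

paginator = asg.get_paginator('describe_auto_scaling_groups')
for page in paginator.paginate():
    for group in page['AutoScalingGroups']:
        tags = {t['Key']: t['Value'] for t in group.get('Tags', [])}
        if all(tags.get(k) == v for k, v in wanted.items()):
            print(group['AutoScalingGroupName'])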

How to configure "Use AWS Glue Data Catalog for table metadata" for EMR cluster option through boto library?

I am trying to create an EMR cluster from an AWS Lambda function using the Python boto3 library. I am able to create the cluster, but I want to use the "AWS Glue Data Catalog for table metadata" so that Spark can read directly from the Glue Data Catalog. When creating the EMR cluster through the AWS console I usually tick a checkbox ("Use AWS Glue Data Catalog for table metadata") which solves my purpose, but I can't figure out how to achieve the same through the boto library.
Below is the Python code I am using to create the EMR cluster:
try:
    connection = boto3.client(
        'emr',
        region_name='xxx'
    )
    cluster_id = connection.run_job_flow(
        Name='EMR-LogProcessing',
        LogUri='s3://somepath/',
        ReleaseLabel='emr-5.21.0',
        Applications=[
            {
                'Name': 'Spark'
            },
        ],
        Instances={
            'InstanceGroups': [
                {
                    'Name': "MasterNode",
                    'Market': 'SPOT',
                    'InstanceRole': 'MASTER',
                    'BidPrice': 'xxx',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 1,
                },
                {
                    'Name': "SlaveNode",
                    'Market': 'SPOT',
                    'InstanceRole': 'CORE',
                    'BidPrice': 'xxx',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 2,
                }
            ],
            'Ec2KeyName': 'xxx',
            'KeepJobFlowAliveWhenNoSteps': True,
            'TerminationProtected': False
        },
        VisibleToAllUsers=True,
        JobFlowRole='EMR_EC2_DefaultRole',
        ServiceRole='EMR_DefaultRole',
        Tags=[
            {
                'Key': 'Name',
                'Value': 'EMR-LogProcessing',
            },
            {
                'Key': 'env',
                'Value': 'dev',
            },
        ],
    )
    print('cluster created with the step...', cluster_id['JobFlowId'])
except Exception as exp:
    logger.info("Exception Occured in createEMRcluster!!! %s", str(exp))
I haven't found any clue how I can achieve this. Please help.
Specify the value for hive.metastore.client.factory.class using the hive-site configuration classification:
[
  {
    "Classification": "hive-site",
    "Properties": {
      "hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
    }
  }
]
The above snippet can be passed to boto3's run_job_flow function via the Configurations parameter (see the sketch after the reference link below).
Reference:
https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hive-metastore-glue.html
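A minimal sketch of that classification as a Python structure ready to hand to the run_job_flow call from the question; including the spark-hive-site classification as well is an assumption based on the EMR Spark/Glue documentation, since the question's cluster runs Spark:

# The same classification from the answer, as a Python structure that can be
# passed to the run_job_flow call from the question via Configurations=...
glue_catalog_config = [
    {
        "Classification": "hive-site",
        "Properties": {
            "hive.metastore.client.factory.class":
                "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
        }
    },
    # The EMR docs describe the same property under the "spark-hive-site"
    # classification when Spark SQL should read the Glue Data Catalog.
    {
        "Classification": "spark-hive-site",
        "Properties": {
            "hive.metastore.client.factory.class":
                "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
        }
    },
]

# e.g. connection.run_job_flow(..., Configurations=glue_catalog_config, ...)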
