Unable to delete Snapshot using boto3 with Python 3

I'm using boto3 with Python 3 to delete a snapshot, and I get the error below while trying to remove it (this syntax worked with Python 2 and the old boto):
Traceback (most recent call last):
  File "./snapshotcleanup.py", line 158, in <module>
    s.delete()
AttributeError: 'dict' object has no attribute 'delete'
Code:
connection = myinternalclient(User, pass)
# Custom function for connection; you may consider ec2 = boto3.client('ec2')
res = connection.describe_snapshots(OwnerIds=[XX], Filters=[{'Name': 'tag:Name', 'Values': ["nonimp*"]}])
for s in res['Snapshots']:
    for tag in s['Tags']:
        if 'nonprod' in tag.value():
            s.delete()
            print("[Deleted Snapshot]: %s" % s['SnapshotId'])
Is this syntax not supported in boto3?

To delete a snapshot, you can use the delete_snapshot method.
For example:
ec2 = boto3.client('ec2')
for s in res['Snapshots']:
    for tag in s['Tags']:
        if tag['Value'] == 'nonprod':
            ec2.delete_snapshot(SnapshotId=s['SnapshotId'])
            print("[Deleted Snapshot]: %s" % s['SnapshotId'])
Please double-check the code before running it, as mistakes are possible and one can delete the wrong snapshots by accident.
The above assumes that the tags have the following form (the Key is not checked in the code above):
{
    'Key': 'env',
    'Value': 'nonprod'
}
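A slightly safer variant of the answer above (my own sketch, not from the original post) checks the tag Key as well as the Value, and passes boto3's DryRun flag so the API validates the request without actually deleting anything:

```python
def matching_snapshots(snapshots, key="env", value="nonprod"):
    """Yield the IDs of snapshots carrying the exact Key/Value tag pair."""
    for s in snapshots:
        tags = s.get("Tags", [])
        if any(t.get("Key") == key and t.get("Value") == value for t in tags):
            yield s["SnapshotId"]

def cleanup(dry_run=True):
    import boto3  # local import so the helper above works without AWS access
    ec2 = boto3.client("ec2")
    res = ec2.describe_snapshots(OwnerIds=["self"])
    for snap_id in matching_snapshots(res["Snapshots"]):
        # With DryRun=True the API validates permissions and raises
        # DryRunOperation instead of deleting the snapshot.
        ec2.delete_snapshot(SnapshotId=snap_id, DryRun=dry_run)
```

Set dry_run=False only once the dry run confirms the right snapshots are selected.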


AttributeError: 'ImageCollection' object has no attribute 'tag' Docker sdk 2.0

I want to build, tag and push a docker image into an ECR repository using python3, from an EC2 instance.
I'm using docker and boto3:
docker==4.0.1
Here I am building the image, and it turns out OK:
response = self._get_docker_client().images.build(
    path=build_info.context,
    dockerfile=build_info.dockerfile,
    tag=tag,
    buildargs=build_info.arguments,
    encoding=DOCKER_ENCODING,
    quiet=False,
    forcerm=True)
Here it fails because it can't find the attribute "tag":
self._get_docker_client().images.tag(repository=repo, tag=tag, force=True)
Another way to get the same error: I tried giving the method a target image ID, taken from the build response, to make the tag. In my IDE (IntelliJ) I see two different methods for tagging, one on "ImageApiMixin" and another on "Image", so I tried a different approach:
for i in response[1]:
    print(i)
    if 'Successfully built' in i.get('stream', ''):
        print('JECN BUILD data[stream]')
        print(i['stream'])
        image = i['stream'].strip().split()[-1]
        print('JECN BUILD image')
        print(image)
        self._get_docker_client().images.tag(self, image=image, repository='999335850108.dkr.ecr.us-east-2.amazonaws.com/jenkins/alpine', tag='3.13.1', force=True)
In both cases I get the same error (I burned some debug code into the last try):
'ImageCollection' object has no attribute 'tag'
amazon-ebs: Process Process-1:2:
amazon-ebs: Traceback (most recent call last):
amazon-ebs: File "/root/.local/lib/python3.7/site-packages/bulldocker/services/build_service.py", line 25, in perform
amazon-ebs: build_info.image_id = self.__build(build_info)
amazon-ebs: File "/root/.local/lib/python3.7/site-packages/bulldocker/services/build_service.py", line 114, in __build
amazon-ebs: self._get_docker_client().images.tag(self, image=image, repository='999335850108.dkr.ecr.us-east-2.amazonaws.com/jenkins/alpine', tag='3.13.1', force=True)
amazon-ebs: AttributeError: 'ImageCollection' object has no attribute 'tag'
I still don't get why the library is confused here; when I look into ImageCollection it only appears inside the scope used by the docker client and models library, but I've really run out of ideas.
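For what it's worth, in the docker SDK tag() is a method on the Image model, not on the ImageCollection returned by client.images, which is why images.tag(...) raises AttributeError. A sketch of tagging through the Image object that build() returns (the path, repository and tag arguments are placeholders):

```python
def build_and_tag(path, repository, tag):
    """Build an image, then tag it via the Image model (sketch)."""
    import docker  # local import keeps the sketch self-contained

    client = docker.from_env()
    # images.build() returns a tuple: (Image object, build-log generator)
    image, logs = client.images.build(path=path, tag=tag)
    # tag() is defined on the Image model, not on client.images
    image.tag(repository, tag=tag)
    return image
```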
Here is how I build my docker client:
def get_ecr_docker_client():
    print('JECN GETTING DOCKER CLIENT')
    access_key_id, secret_access_key, aws_external_id = get_param_from_store()
    aws_region = DEFAULT_AWS_REGION
    print(os.environ)
    print(access_key_id)
    print(secret_access_key)
    docker_client = docker.from_env()
    client = boto3.client(
        service_name='sts',
        aws_access_key_id=access_key_id,
        aws_secret_access_key=secret_access_key,
        region_name=aws_region,
    )
    assumed_role_object = client.assume_role(
        RoleArn="arn:aws:iam::999335850108:role/adl-pre-ops-jenkins-role",
        RoleSessionName="AssumeRoleSession1",
        ExternalId=aws_external_id
    )
    credentials = assumed_role_object['Credentials']
    ecr_client = boto3.client(
        service_name='ecr',
        aws_access_key_id=credentials['AccessKeyId'],
        aws_secret_access_key=credentials['SecretAccessKey'],
        aws_session_token=credentials['SessionToken'],
        region_name=aws_region
    )
    ecr_credentials = \
        (ecr_client.get_authorization_token(registryIds=[ECR_OPERATIONAL_REGISTRY_ID]))['authorizationData'][0]
    ecr_username = 'AWS'
    ecr_password = (base64.b64decode(ecr_credentials['authorizationToken']).replace(b'AWS:', b'').decode('utf-8'))
    ecr_url = ecr_credentials['proxyEndpoint']
    docker_client.login(
        username=ecr_username, password=ecr_password, registry=ecr_url)
    return docker_client

odoo create database via xmlrpc

I'm trying to script the creation of a new database plus an import of data from other sources in Odoo.
I am at the first step, creating a new database.
I have the following code, but it doesn't work:
import xmlrpc.client

print("Db name : ", end="")
db_name = input()
with xmlrpc.client.ServerProxy('127.0.0.1:8070/xmlrpc/2/db') as mod:
    RES = mod.create_database(db_name, False, 'en_US')
(note that my test server does run on localhost port 8070)
The result is :
$ python3 baseodoo.py
Db name : please_work
Traceback (most recent call last):
  File "baseodoo.py", line 5, in <module>
    with xmlrpc.client.ServerProxy('127.0.0.1:8070/xmlrpc/2/db') as mod:
  File "/usr/lib/python3.8/xmlrpc/client.py", line 1419, in __init__
    raise OSError("unsupported XML-RPC protocol")
OSError: unsupported XML-RPC protocol
I am unsure about the URL ending in /db; I got it from dispatch_rpc(…) in http.py, which tests service_name for "common", "db" and "object".
Also, in dispatch(…) from db.py a method name is prefixed with "exp_", so calling create_database should execute the exp_create_database function in db.py.
I guess my reasoning is flawed, but I don't know where. Help!
EDIT :
OK, I'm stupid: the URL should start with "http://". Still, I now get
xmlrpc.client.Fault: <Fault 3: 'Access Denied'>
EDIT2 :
There was a typo in the password I gave, so closing the question now.
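Putting the two edits together, a working call might look like the sketch below. The master-password argument and its position are my assumption, based on the dispatch(…) in db.py described above, which treats the first parameter of protected methods as the admin password:

```python
import xmlrpc.client

# The URL must include the scheme; a bare "127.0.0.1:8070/..." makes
# ServerProxy raise OSError("unsupported XML-RPC protocol").
URL = "http://127.0.0.1:8070/xmlrpc/2/db"

def create_db(master_password, db_name):
    with xmlrpc.client.ServerProxy(URL) as db:
        # create_database is dispatched to exp_create_database() in db.py;
        # a wrong master password yields Fault 3: 'Access Denied'.
        return db.create_database(master_password, db_name, False, "en_US")
```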

How to get billing item details using the SoftLayer Python client?

How do I use the Python SoftLayer client (v5.7.1) to determine the location (e.g. dal10) for an NFS billing item (endurance storage)?
I used some other examples here on SO and came up with this, but the call failed:
objectFilter = {"billingItem": {"id": {"operation": "12345"}}}
account.getAllBillingItems(filter=objectFilter)
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/SoftLayer/transports.py", line 240, in __call__
    raise _es(ex.faultCode, ex.faultString)
SoftLayer.exceptions.SoftLayerAPIError: SoftLayerAPIError(SOAP-ENV:Server): Internal Error
Try the following Python script to get the billing item detail, including its location.
import json
import SoftLayer

API_USERNAME = 'set me'
API_KEY = 'set me'

client = SoftLayer.create_client_from_env(username=API_USERNAME, api_key=API_KEY)
billingItemId = 1234
mask = "mask[location]"
try:
    response = client['SoftLayer_Billing_Item'].getObject(mask=mask, id=billingItemId)
    print(response)
except SoftLayer.SoftLayerAPIError as e:
    # If the SoftLayer API returned an error, bail out with the message.
    print("Unable to retrieve the billing item information. faultCode=%s, faultString=%s"
          % (e.faultCode, e.faultString))
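Once the call succeeds, the datacenter short name (e.g. dal10) sits under the location relation of the result. A hypothetical response (the values below are invented, not real API output) illustrates the lookup:

```python
# Hypothetical shape of a SoftLayer_Billing_Item.getObject result when
# requested with mask="mask[location]"; values are made up for illustration.
response = {
    "id": 1234,
    "description": "Endurance Storage",
    "location": {"id": 1441195, "name": "dal10", "longName": "Dallas 10"},
}

datacenter = response["location"]["name"]
print(datacenter)  # prints: dal10
```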

Why importing data to Zoho Analytics API causes error?

My goal is to write a script in Python3 that will push data in existing table in Zoho Analytics, the script will be used by a scheduler once a week.
What I have tried so far:
I can successfully import some data using cURL commands. Like so
curl -X POST \ 'https://analyticsapi.zoho.com/api/OwnerEmail/Workspace/TableName?ZOHO_ACTION=IMPORT& ZOHO_OUTPUT_FORMAT=JSON&ZOHO_ERROR_FORMAT=JSON&ZOHO_API_VERSION=1.0&ZOHO_IMPORT_TYPE=APPEND&ZOHO_AUTO_IDENTIFY=True&ZOHO_ON_IMPORT_ERROR=ABORT&ZOHO_CREATE_TABLE=False' \ -H 'Authorization: Zoho-oauthtoken *******' \ -H 'content-type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW' \ -F ZOHO_FILE='path_to_csv'
What I found out is that the ReportClient provided by the Zoho Analytics team
(Zoho Report Client for Python) is not compatible with Python 3. Hence, I installed a wrapper for this ReportClient from here (https://pypi.org/project/zoho-analytics-connector).
Following the sample examples on the Zoho website and the tests in the wrapper's GitHub repo, I implemented something like this:
Have a class to keep my ENV variables
import os
from zoho_analytics_connector.report_client import ReportClient, ServerError
from zoho_analytics_connector.enhanced_report_client import EnhancedZohoAnalyticsClient

class ZohoTracking:
    LOGINEMAILID = os.getenv("ZOHOANALYTICS_LOGINEMAIL")
    REFRESHTOKEN = os.getenv("ZOHOANALYTICS_REFRESHTOKEN")
    CLIENTID = os.getenv("ZOHOANALYTICS_CLIENTID")
    CLIENTSECRET = os.getenv("ZOHOANALYTICS_CLIENTSECRET")
    DATABASENAME = os.getenv("ZOHOANALYTICS_DATABASENAME")
    OAUTH = True
    TABLENAME = "My Table"
Instantiate the Client Class
def get_enhanced_zoho_analytics_client(self) -> EnhancedZohoAnalyticsClient:
    assert (not self.OAUTH and self.AUTHTOKEN) or (self.OAUTH and self.REFRESHTOKEN)
    rc = EnhancedZohoAnalyticsClient(
        # Just setting email, token, etc. using the class above
        ...
    )
    return rc
Then I have a method to upload data to the existing table; the data_upload() function is where the problem occurs.
def enhanced_data_upload(self):
    enhanced_client = self.get_enhanced_zoho_analytics_client()
    try:
        with open("./import/tracking3.csv", "r") as f:
            import_content = f.read()
            print(type(import_content))
    except Exception as e:
        print(f"Error: Check if file exists in the import directory {str(e)}")
        return
    res = enhanced_client.data_upload(import_content=import_content, table_name=ZohoTracking.TABLENAME)
    assert res
Traceback (most recent call last):
  File "push2zoho.py", line 106, in <module>
    sample.enhanced_data_upload()
  File "push2zoho.py", line 100, in enhanced_data_upload
    res = enhanced_client.data_upload(import_content=import_content, table_name=ZohoTracking.TABLENAME)
  File "/Users/.../zoho_analytics_connector/enhanced_report_client.py", line 99, in data_upload
    matching_columns=matching_columns)
  File "/Users/.../site-packages/zoho_analytics_connector/report_client.py", line 464, in importData_v2
    r=self.__sendRequest(url=url,httpMethod="POST",payLoad=payload,action="IMPORT",callBackData=None)
  File "/Users/.../zoho_analytics_connector/report_client.py", line 165, in __sendRequest
    raise ServerError(respObj)
  File "/Users/.../zoho_analytics_connector/report_client.py", line 1830, in __init__
    contHeader = urlResp.headers["Content-Type"]
TypeError: 'NoneType' object is not subscriptable
That is the error I receive. What am I missing in this puzzle? Help is appreciated.
In Feb 2021 I changed this inherited Zoho code in my library.
Now it is:
contHeader = urlResp.headers.get("Content-Type",None)
which avoids the final exception you had.
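The difference between the two lookups can be shown on a plain dict standing in for the response headers (a simplified illustration, not the library's actual response object):

```python
# Hypothetical response headers that lack a Content-Type entry
headers = {"Content-Length": "0"}

# Dict-style indexing raises KeyError when the key is absent;
# .get() returns the supplied default instead.
content_type = headers.get("Content-Type", None)
print(content_type)  # prints: None
```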

Moto seems to stop mocking after upgrading boto3

I have upgraded boto3 from 1.7.48 to 1.13.11, and this has broken all of my tests that use moto. Worryingly, it looks like the mock has stopped working altogether and the code is trying to actually access S3. Here is an example test helper that was previously working:
def upload_video(self, video):
    s3 = boto3.client("s3")
    s3.create_bucket(Bucket=settings.AWS_STORAGE_BUCKET_NAME)
    for media_key in video.upload_media_keys:
        s3.upload_file(
            os.path.join(
                os.path.dirname(os.path.realpath(__file__)), "assets/test.mp4"
            ),
            settings.AWS_STORAGE_BUCKET_NAME,
            media_key,
        )
But it now gives this error:
File "{path}", line 52, in upload_video
s3.create_bucket(Bucket=settings.AWS_STORAGE_BUCKET_NAME)
File "{path}/lib/python3.7/site-packages/botocore/client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "{path}/lib/python3.7/site-packages/botocore/client.py", line 635, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (IllegalLocationConstraintException) when calling the CreateBucket operation: The unspecified location constraint is incompatible for the region specific endpoint this request was sent to.
Any help would be greatly appreciated. Here is the list of upgrades:
Before:
boto3 == 1.7.48
botocore == 1.10.84
moto == 1.3.6
After:
boto3==1.13.11
botocore==1.16.11
moto==1.3.14
I have no idea what changed, but I also ran into this. Here's my workaround:
Previously I had:
conn = boto3.resource("s3", region_name="ca-central-1")
conn.create_bucket(Bucket=os.environ["S3_BUCKET_NAME"])
but I changed the region to us-east-1 and everything worked:
conn = boto3.resource("s3", region_name="us-east-1")
conn.create_bucket(Bucket=os.environ["S3_BUCKET_NAME"])
Since this is just a fake bucket for testing, I see no harm in using a different region if it makes things work.
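An alternative that keeps the original region is to send the location constraint explicitly; newer botocore releases require it for any region other than us-east-1. A sketch (bucket and region names are placeholders):

```python
def create_test_bucket(name, region="ca-central-1"):
    """Create a bucket, passing the LocationConstraint botocore now expects."""
    import boto3  # local import so defining the function needs no AWS access

    s3 = boto3.resource("s3", region_name=region)
    if region == "us-east-1":
        # us-east-1 is the default region and must NOT be sent as a constraint
        return s3.create_bucket(Bucket=name)
    return s3.create_bucket(
        Bucket=name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )
```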
