How to handle edge cases in creation of an EFS resource on AWS, using boto3 - python-3.x

I'm creating AWS Elastic File System resources using the boto3 SDK.
In the boto3 docs for EFS (linked above), there are no waiters (unlike for other actions such as launching EC2 instances). So I can't call a waiter to hold execution until the resource is created, and have to write my own. There are also a bunch of edge cases that spring to mind, and I can't find examples that handle them.
client = ...  # Attach credentials and create an EFS boto3 client

def find_or_create_file_system(self, a_token):
    fs = self.client.create_file_system(CreationToken=a_token, PerformanceMode='generalPurpose')
    # Returns either:
    # {
    #     'OwnerId': 'string',
    #     'CreationToken': 'string',
    #     'FileSystemId': 'string',
    #     'CreationTime': datetime(2015, 1, 1),
    #     'LifeCycleState': 'creating'|'available'|'deleting'|'deleted',
    #     'Name': 'string',
    #     'NumberOfMountTargets': 123,
    #     'SizeInBytes': {
    #         'Value': 123,
    #         'Timestamp': datetime(2015, 1, 1)
    #     },
    #     'PerformanceMode': 'generalPurpose'|'maxIO'
    # }
    # Or, if a file system with that creation token already exists, the call above
    # returns an error. According to the boto3 docs, the error will contain the
    # existing file system id. Is this an error I need to manage with try/except?
    # What is the syntax to get the id out of the error?
    if there_is_an_error:  # pseudocode
        # EFS already exists
        if fs['LifeCycleState'] == 'creating':
            # Need to wait until it's created, then return its id
            ...
        elif fs['LifeCycleState'] != 'available':
            # It is being / has been deleted.
            # What now? Is that token never usable again? Does it eventually disappear
            # so I can reuse it? How long do I have to wait before recreating it?
            ...

    # Wait until available
    # TODO figure out whether there's a waiter for this
    fs_desc = self.client.describe_file_systems(FileSystemId=fs['FileSystemId'])
    while fs_desc['FileSystems'][0]['LifeCycleState'] == 'creating':
        time.sleep(5)
        # Re-describe to refresh the metadata
        fs_desc = self.client.describe_file_systems(FileSystemId=fs['FileSystemId'])
        print("EFS state: {0}".format(fs_desc['FileSystems'][0]['LifeCycleState']))
    return fs['FileSystemId']
Question 1 Am I correct that I have to write my own waiter? Could I hijack/repurpose a waiter from elsewhere in the API, or are there undocumented waiters?
Question 2 How do I catch the error that occurs when a file system with that token already exists? And how do I get the id out of the error message to handle that case?
Question 3 Can tokens be reused once a file system is deleted (i.e. does AWS eventually clear up, or does that token persist)?
The reason I ask Q3 is that there are no Filter={} options in client.describe_file_systems(). So at present I'm using a token containing a simple unique text handle to create, and later retrieve, an EFS unique to a customer. I could use a random UUID token, then tag with the organisation name... but I can't retrieve based on a tag!
Question 4 Is that while loop robust? i.e. is there a circumstance in which AWS will perpetually return 'creating' status (which would throw me into an infinite loop)?
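For concreteness, here is the rough shape of the handling I have in mind for Questions 1, 2 and 4 (a sketch only: the FileSystemAlreadyExists error code is what I see in the EFS docs, while the describe_file_systems(CreationToken=...) fallback and the retry cap are my own guesses):

import time
import boto3
from botocore.exceptions import ClientError

client = boto3.client('efs')

def find_or_create_file_system(a_token, max_polls=60):
    try:
        fs = client.create_file_system(CreationToken=a_token,
                                       PerformanceMode='generalPurpose')
        fs_id = fs['FileSystemId']
    except ClientError as e:
        # A file system with this creation token already exists
        if e.response['Error']['Code'] != 'FileSystemAlreadyExists':
            raise
        # Recover the existing id by looking the token up explicitly
        existing = client.describe_file_systems(CreationToken=a_token)
        fs_id = existing['FileSystems'][0]['FileSystemId']

    # Hand-rolled "waiter": poll until the file system is available,
    # with a cap so a stuck 'creating' state can't loop forever
    for _ in range(max_polls):
        state = client.describe_file_systems(FileSystemId=fs_id)['FileSystems'][0]['LifeCycleState']
        if state == 'available':
            return fs_id
        if state in ('deleting', 'deleted'):
            raise RuntimeError('File system {0} is {1}'.format(fs_id, state))
        time.sleep(5)
    raise TimeoutError('Timed out waiting for file system {0}'.format(fs_id))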
Thanks for any help!

Related

How to update an existing model in AWS sagemaker >= 2.0

I have an XGBoost model currently in production using AWS sagemaker and making real time inferences. After a while, I would like to update the model with a newer one trained on more data and keep everything as is (e.g. same endpoint, same inference procedure, so really no changes aside from the model itself).
The current deployment procedure is the following:
from sagemaker.xgboost.model import XGBoostModel
from sagemaker.xgboost.model import XGBoostPredictor

xgboost_model = XGBoostModel(
    model_data = <S3 url>,
    role = <sagemaker role>,
    entry_point = 'inference.py',
    source_dir = 'src',
    code_location = <S3 url of other dependencies>,
    framework_version = '1.5-1',
    name = model_name)

xgboost_model.deploy(
    instance_type='ml.c5.large',
    initial_instance_count=1,
    endpoint_name = model_name)
Now that I have updated the model a few weeks later, I would like to re-deploy it. I am aware that the .deploy() method creates an endpoint and an endpoint configuration, so it does it all. I cannot simply re-run my script, since I would encounter an error.
In previous versions of sagemaker I could update the model by passing an extra argument, update_endpoint=True, to the .deploy() method. In sagemaker >= 2.0 this is a no-op, and I instead need to use the predictor object, as stated in the documentation. So I try the following:
predictor = XGBoostPredictor(model_name)
predictor.update_endpoint(model_name=model_name)
This does update the endpoint according to a new endpoint configuration. However, I do not know what it is updating... nowhere in those two lines of code do I specify that the new xgboost_model trained on more data should be used... so where do I tell the update to take a more recent model?
Thank you!
Update
I believe that I need to be looking at production variants, as stated in their documentation here. However, their whole tutorial is based on the Amazon SDK for Python (boto3), which has artifacts that are hard to manage when I have different entry points for each model variant (e.g. different inference.py scripts).
Since I found an answer to my own question, I will post it here for those who encounter the same problem.
I ended up re-coding my whole deployment script using the boto3 SDK rather than the sagemaker SDK (or a mix of both, as some documentation suggests).
Here's the whole script that shows how to create a sagemaker model object, an endpoint configuration and an endpoint to deploy the model on for the first time. In addition, it shows how to update the endpoint with a newer model (which was my main question).
Here's the code to do all 3, in case you want to bring your own model and update it safely in production using sagemaker:
import boto3
import sagemaker
import time
from datetime import datetime
from sagemaker import image_uris
from fileManager import *  # this is a local script for helper functions

# name of zipped model and zipped inference code
CODE_TAR = 'your_inference_code_and_other_artifacts.tar.gz'
MODEL_TAR = 'your_saved_xgboost_model.tar.gz'

# sagemaker params
smClient = boto3.client('sagemaker')
smRole = <your_sagemaker_role>
bucket = sagemaker.Session().default_bucket()
region = boto3.Session().region_name
INSTANCE_TYPE = 'ml.c5.large'  # endpoint instance type

# deploy algorithm
class Deployer:

    def __init__(self, modelName, deployRetrained=False):
        self.modelName = modelName
        self.deployRetrained = deployRetrained
        self.prefix = <S3_model_path_prefix>

    def deploy(self):
        '''
        Main method to create a sagemaker model, create an endpoint configuration and deploy the model.
        If the deployRetrained param is set to True, this method will update an already existing endpoint.
        '''
        # define model name and endpoint name to be used for model deployment/update
        model_name = self.modelName + <any_suffix>
        endpoint_config_name = self.modelName + '-%s' % datetime.now().strftime('%Y-%m-%d-%HH%M')
        endpoint_name = self.modelName

        # deploy model for the first time
        if not self.deployRetrained:
            print('Deploying for the first time')
            # here you should copy and zip the model dependencies that you may have
            # (such as preprocessors, inference code, config code...)
            # mine were zipped into the file called CODE_TAR

            # upload model and model artifacts needed for inference to S3
            uploadFile(list_files=[MODEL_TAR, CODE_TAR], prefix=self.prefix)

            # create sagemaker model and endpoint configuration
            self.createSagemakerModel(model_name)
            self.createEndpointConfig(endpoint_config_name, model_name)

            # deploy model and wait while endpoint is being created
            self.createEndpoint(endpoint_name, endpoint_config_name)
            self.waitWhileCreating(endpoint_name)

        # update model
        else:
            print('Updating existing model')
            # upload model and model artifacts needed for inference (here the old ones are replaced)
            # make sure to make a backup in S3 if you would like to keep the older models
            # we replace the old ones and keep the same names to avoid having to recreate
            # a sagemaker model with a different name for the update!
            uploadFile(list_files=[MODEL_TAR, CODE_TAR], prefix=self.prefix)

            # create a new endpoint config that takes the new model
            self.createEndpointConfig(endpoint_config_name, model_name)

            # update endpoint
            self.updateEndpoint(endpoint_name, endpoint_config_name)

            # wait while endpoint updates, then delete outdated endpoint configs once it is InService
            self.waitWhileCreating(endpoint_name)
            self.deleteOutdatedEndpointConfig(model_name, endpoint_config_name)

    def createSagemakerModel(self, model_name):
        '''
        Create a new sagemaker Model object with an xgboost container and an entry point for inference, using the boto3 API.
        '''
        # Retrieve the inference image (container)
        docker_container = image_uris.retrieve(region=region, framework='xgboost', version='1.5-1')
        # Relative S3 path to the pre-trained model, used to build the S3 model URI
        model_s3_key = f'{self.prefix}/' + MODEL_TAR
        # Combine bucket name, model file name, and relative S3 path to create the S3 model URI
        model_url = f's3://{bucket}/{model_s3_key}'
        # S3 path to the necessary inference code
        code_url = f's3://{bucket}/{self.prefix}/{CODE_TAR}'
        # Create a sagemaker Model object with all its artifacts
        smClient.create_model(
            ModelName=model_name,
            ExecutionRoleArn=smRole,
            PrimaryContainer={
                'Image': docker_container,
                'ModelDataUrl': model_url,
                'Environment': {
                    'SAGEMAKER_PROGRAM': 'inference.py',  # inference.py is at the root of my zipped CODE_TAR
                    'SAGEMAKER_SUBMIT_DIRECTORY': code_url,
                }
            }
        )

    def createEndpointConfig(self, endpoint_config_name, model_name):
        '''
        Create an endpoint configuration (only needed for the boto3 sdk procedure) and set the production variant parameters.
        Each retraining procedure will induce a new variant name based on the endpoint configuration name.
        '''
        smClient.create_endpoint_config(
            EndpointConfigName=endpoint_config_name,
            ProductionVariants=[
                {
                    'VariantName': endpoint_config_name,
                    'ModelName': model_name,
                    'InstanceType': INSTANCE_TYPE,
                    'InitialInstanceCount': 1
                }
            ]
        )

    def createEndpoint(self, endpoint_name, endpoint_config_name):
        '''
        Deploy the model to an endpoint.
        '''
        smClient.create_endpoint(
            EndpointName=endpoint_name,
            EndpointConfigName=endpoint_config_name)

    def deleteOutdatedEndpointConfig(self, name_check, current_endpoint_config):
        '''
        Automatically detect and delete endpoint configurations that contain the string 'name_check'. This method can be used
        after a retrain procedure to delete all previous endpoint configurations while keeping the current one, named 'current_endpoint_config'.
        '''
        # get a list of all available endpoint configurations
        all_configs = smClient.list_endpoint_configs()['EndpointConfigs']

        # loop over the names of endpoint configs
        names_list = []
        for config_dict in all_configs:
            endpoint_config_name = config_dict['EndpointConfigName']
            # keep only endpoint configs that contain name_check in them and save their names to a list
            if name_check in endpoint_config_name:
                names_list.append(endpoint_config_name)

        # remove the current endpoint configuration from the list (we do not want to delete this one since it is live)
        names_list.remove(current_endpoint_config)

        for name in names_list:
            try:
                smClient.delete_endpoint_config(EndpointConfigName=name)
                print('Deleted endpoint configuration for %s' % name)
            except:
                print('INFO : No endpoint configuration was found for %s' % name)

    def updateEndpoint(self, endpoint_name, endpoint_config_name):
        '''
        Update the existing endpoint with a new retrained model.
        '''
        smClient.update_endpoint(
            EndpointName=endpoint_name,
            EndpointConfigName=endpoint_config_name,
            RetainAllVariantProperties=True)

    def waitWhileCreating(self, endpoint_name):
        '''
        While the endpoint is being created or updated, sleep for 60 seconds between status checks.
        '''
        # wait while creating or updating endpoint
        status = smClient.describe_endpoint(EndpointName=endpoint_name)['EndpointStatus']
        print('Status: %s' % status)
        while status != 'InService' and status != 'Failed':
            time.sleep(60)
            status = smClient.describe_endpoint(EndpointName=endpoint_name)['EndpointStatus']
            print('Status: %s' % status)

        # in case of a deployment failure raise an error
        if status == 'Failed':
            raise ValueError('Endpoint failed to deploy')


if __name__ == "__main__":
    deployer = Deployer('MyDeployedModel', deployRetrained=True)
    deployer.deploy()
Final comments:
The sagemaker documentation mentions all this but fails to state that you can provide an 'entry_point' to the create_model method as well as a 'source_dir' for inference dependencies (e.g. normalization artifacts). It can be done as seen in the PrimaryContainer argument above.
My fileManager.py script just contains basic functions to make tar files and to upload and download to and from my S3 paths. To keep the class simple, I have not included them here.
The method deleteOutdatedEndpointConfig may seem like a bit of overkill, with unnecessary loops and checks; I do it this way because I have multiple endpoint configurations to handle and wanted to remove the ones that aren't live AND contain the string name_check (I do not know the exact name of the configuration since there is a datetime suffix). Feel free to simplify it or remove it altogether.
Hope it helps.
In your model_name you specify the name of a SageMaker Model object, on which you can set the image_uri, model_data, etc.
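As a rough sketch of that route (the model name, role ARN, region, S3 URIs and endpoint name below are hypothetical placeholders), you would register the retrained artifacts as a new SageMaker Model and then point update_endpoint at it:

import boto3
from sagemaker import image_uris
from sagemaker.xgboost.model import XGBoostPredictor

sm = boto3.client('sagemaker')

# 1) Register the retrained artifacts as a NEW SageMaker Model
new_model_name = 'my-xgboost-model-v2'  # hypothetical name
sm.create_model(
    ModelName=new_model_name,
    ExecutionRoleArn='arn:aws:iam::123456789012:role/my-sagemaker-role',  # hypothetical role
    PrimaryContainer={
        'Image': image_uris.retrieve(region='eu-west-1', framework='xgboost', version='1.5-1'),
        'ModelDataUrl': 's3://my-bucket/path/to/new-model.tar.gz',
        'Environment': {
            'SAGEMAKER_PROGRAM': 'inference.py',
            'SAGEMAKER_SUBMIT_DIRECTORY': 's3://my-bucket/path/to/code.tar.gz',
        }
    }
)

# 2) Point the live endpoint at that model; update_endpoint builds a new
#    endpoint configuration behind the scenes
predictor = XGBoostPredictor('my-endpoint-name')  # hypothetical existing endpoint
predictor.update_endpoint(
    model_name=new_model_name,
    initial_instance_count=1,
    instance_type='ml.c5.large')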

how to launch a cloud dataflow pipeline when particular set of files reaches Cloud storage from a google cloud function

I have a requirement to create a Cloud Function which should check for a set of files in a GCS bucket, and only once all of those files have arrived in the bucket should it launch the Dataflow templates for all of them.
My existing Cloud Function code launches a Dataflow job for each file which comes into the GCS bucket. It runs different Dataflow templates for different files based on a naming convention. This existing code is working fine, but my intention is not to trigger Dataflow for each uploaded file directly.
It should check for the set of files and, once all the files have arrived, launch the Dataflow jobs for those files.
Is there a way to do this using Cloud Functions or is there an alternative way of achieving the desired result ?
from googleapiclient.discovery import build
import time

def df_load_function(file, context):
    filesnames = [
        'Customer_',
        'Customer_Address',
        'Customer_service_ticket'
    ]

    # Check the uploaded file and run related dataflow jobs.
    for i in filesnames:
        if 'inbound/{}'.format(i) in file['name']:
            print("Processing file: {filename}".format(filename=file['name']))

            project = 'xxx'
            inputfile = 'gs://xxx/inbound/' + file['name']
            job = 'df_load_wave1_{}'.format(i)
            template = 'gs://xxx/template/df_load_wave1_{}'.format(i)
            location = 'asia-south1'

            dataflow = build('dataflow', 'v1b3', cache_discovery=False)
            request = dataflow.projects().locations().templates().launch(
                projectId=project,
                gcsPath=template,
                location=location,
                body={
                    'jobName': job,
                    "environment": {
                        "workerRegion": "asia-south1",
                        "tempLocation": "gs://xxx/temp"
                    }
                }
            )

            # Execute the dataflow job
            response = request.execute()
            job_id = response["job"]["id"]
I've written the below code for the above functionality. The Cloud Function runs without any error, but it is not triggering any Dataflow job. Not sure what is happening, as the logs show no errors.
from googleapiclient.discovery import build
import time
import os

def df_load_function(file, context):
    filesnames = [
        'Customer_',
        'Customer_Address_',
        'Customer_service_ticket_'
    ]
    paths = ['Customer_', 'Customer_Address_', 'Customer_service_ticket_']

    for path in paths:
        if os.path.exists('gs://xxx/inbound/') == True:
            # Check the uploaded file and run related dataflow jobs.
            for i in filesnames:
                if 'inbound/{}'.format(i) in file['name']:
                    print("Processing file: {filename}".format(filename=file['name']))

                    project = 'xxx'
                    inputfile = 'gs://xxx/inbound/' + file['name']
                    job = 'df_load_wave1_{}'.format(i)
                    template = 'gs://xxx/template/df_load_wave1_{}'.format(i)
                    location = 'asia-south1'

                    dataflow = build('dataflow', 'v1b3', cache_discovery=False)
                    request = dataflow.projects().locations().templates().launch(
                        projectId=project,
                        gcsPath=template,
                        location=location,
                        body={
                            'jobName': job,
                            "environment": {
                                "workerRegion": "asia-south1",
                                "tempLocation": "gs://xxx/temp"
                            }
                        }
                    )

                    # Execute the dataflow job
                    response = request.execute()
                    job_id = response["job"]["id"]
        else:
            exit()
Could someone please help me with the above Python code?
Also, my file names contain the current date at the end, as these are incremental files which I get from different source teams.
If I'm understanding your question correctly, the easiest thing to do is to write basic logic in your function that determines if the entire set of files is present. If not, exit the function. If yes, run the appropriate Dataflow pipeline. Basically implementing what you wrote in your first paragraph as Python code.
If it's a small set of files, it shouldn't be an issue to have a function run on each upload to check set completeness. Even if it's, for example, 10,000 files a month, the cost is extremely small for this service, assuming:
Your function isn't using lots of bandwidth to transfer data
The code for each function invocation doesn't take a long time to run.
Even in scenarios where you can't meet these requirements, Functions is still pretty cheap to run.
If you're worried about costs I would recommend checking out the Google Cloud Pricing Calculator to get an estimate.
Edit with updated code:
I would highly recommend using the Google Cloud Storage Python client library for this. Using os.path likely won't work as there are additional underlying steps required to search a bucket...and probably more technical details there than I fully understand.
To use the Python client library, add google-cloud-storage to your requirements.txt. Then, use something like the following code to check the existence of an object. This example is based off an HTTP trigger, but the gist of the code to check object existence is the same.
from google.cloud import storage

def hello_world(request):
    # Instantiate GCS client
    client = storage.client.Client()

    # Instantiate bucket definition
    bucket = storage.bucket.Bucket(client, name="bucket-name")

    # Search for the objects (filenames is the list of expected object names, defined elsewhere)
    for file in filenames:
        if storage.blob.Blob(file, bucket).exists() and "name_modifier" in file:
            pass  # Run name_modifier Dataflow job
        elif storage.blob.Blob(file, bucket).exists() and "name_modifier_2" in file:
            pass  # Run name_modifier_2 Dataflow job
        else:
            return "File not found"
This code isn't exactly what you want from a logic standpoint, but it should get you started. You'll probably want to first make sure all of the objects can be found, and then move to another step where you start running the corresponding Dataflow jobs for each file once they have all been found.
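Building on that, one possible sketch that combines the completeness check with the Dataflow launches (the bucket name 'xxx', the prefixes and the template paths are simply carried over from the question, and client.list_blobs replaces the per-object checks):

from google.cloud import storage
from googleapiclient.discovery import build

EXPECTED_PREFIXES = ['Customer_', 'Customer_Address_', 'Customer_service_ticket_']

def df_load_function(file, context):
    client = storage.Client()

    # Names of every object currently under inbound/
    uploaded = [blob.name for blob in client.list_blobs('xxx', prefix='inbound/')]

    # Only continue if every expected file type has arrived
    for expected in EXPECTED_PREFIXES:
        if not any(name.startswith('inbound/' + expected) for name in uploaded):
            print('Still waiting for {}'.format(expected))
            return

    # All files present: launch one templated Dataflow job per file type
    dataflow = build('dataflow', 'v1b3', cache_discovery=False)
    for expected in EXPECTED_PREFIXES:
        dataflow.projects().locations().templates().launch(
            projectId='xxx',
            gcsPath='gs://xxx/template/df_load_wave1_{}'.format(expected),
            location='asia-south1',
            body={
                'jobName': 'df_load_wave1_{}'.format(expected),
                'environment': {
                    'workerRegion': 'asia-south1',
                    'tempLocation': 'gs://xxx/temp'
                }
            }
        ).execute()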

Can't list bucket objects on Scaleway using boto3

I saw a few similar posts, but unfortunately none helped me.
I have an s3 bucket (on Scaleway), and I'm trying to simply list all objects contained in that bucket, using the boto3 s3 client as follows:
s3 = boto3.client('s3',
                  region_name=AWS_S3_REGION_NAME,
                  endpoint_url=AWS_S3_ENDPOINT_URL,
                  aws_access_key_id=AWS_ACCESS_KEY_ID,
                  aws_secret_access_key=AWS_SECRET_ACCESS_KEY
                  )

all_objects = s3.list_objects_v2(Bucket=AWS_STORAGE_BUCKET_NAME)
This simple piece of code responds with an error:
botocore.errorfactory.NoSuchKey: An error occurred (NoSuchKey) when calling the ListObjects operation: The specified key does not exist.
First, the error seems inappropriate to me since I'm not specifying any key to search. I also tried to pass a Prefix argument to this method to narrow the search down to a specific subdirectory, but I get the same error.
Second, I tried to achieve the same thing using a boto3 Resource rather than a Client, as follows:
session = boto3.Session(
    region_name=AWS_S3_REGION_NAME,
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY
)

resource = session.resource(
    's3',
    endpoint_url=AWS_S3_ENDPOINT_URL,
)

for bucket in resource.buckets.all():
    print(bucket.name)
That code produces absolutely nothing. One weird thing that strikes me is that I don't pass the bucket_name anywhere here, which seems to be normal according to the aws documentation.
There's no chance that I misconfigured the client, since I'm able to use the put_object method perfectly with that same client. One strange thing though: when I want to put a file, I pass the whole path to put_object as Key (as I found it to be the way to go), but the object is inserted with the bucket name prepended to it. So let's say I call put_object(Key='/path/to/myfile.ext'), the object will end up as /bucket-name/path/to/myfile.ext.
Is this strange behavior the key to my problem? How can I investigate what's happening, or is there another way I could try to list bucket files?
Thank you
EDIT: So, after logging the request that the boto3 client is sending, I noticed that the bucket name is appended to the url, so instead of requesting https://<bucket_name>.s3.<region>.<provider>/, it requests https://<bucket_name>.s3.<region>.<provider>/<bucket-name>/, which is leading to the NoSuchKey error.
I took a look into the botocore library, and I found this:
url = _urljoin(endpoint_url, r['url_path'], host_prefix)
in botocore.awsrequest line 252, where r['url_path'] contains /skichic-bucket?list-type=2. So from here, I should be able to easily patch the library core to make it work for me.
Plus, the Prefix argument is not working: whatever I pass into it, I always receive the whole bucket content, but I guess I can easily patch this too.
Still, this is not satisfying, since there's no issue related to this on GitHub, and I can't believe that the library contains such a bug and that I'm the first one to encounter it.
Can anyone explain this whole mess? >.<
For those who are facing the same issue, try changing your endpoint_url parameter in your boto3 client or resource instantiation from https://<bucket_name>.s3.<region>.<provider> to https://s3.<region>.<provider>; i.e. for Scaleway: https://s3.<region>.scw.cloud.
You can then set the Bucket parameter to select the bucket you want.
list_objects_v2(Bucket=<bucket_name>)
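For example, with Scaleway the client instantiation and listing call would look roughly like this (reusing the same placeholder variables as in the question):

import boto3

# Endpoint without the bucket name in the hostname
s3 = boto3.client(
    's3',
    region_name=AWS_S3_REGION_NAME,  # e.g. 'fr-par'
    endpoint_url=f'https://s3.{AWS_S3_REGION_NAME}.scw.cloud',
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
)

# The bucket is now selected per call, not in the endpoint
all_objects = s3.list_objects_v2(Bucket=AWS_STORAGE_BUCKET_NAME, Prefix='path/to/')
for obj in all_objects.get('Contents', []):
    print(obj['Key'])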
You can try this. You'll have to use your resource instead of my s3sr.
s3sr = resource('s3')
bucket = 'your-bucket'
prefix = 'your-prefix/'  # if no prefix, pass ''

def get_keys_from_prefix(bucket, prefix):
    '''gets list of keys for given bucket and prefix'''
    keys_list = []
    paginator = s3sr.meta.client.get_paginator('list_objects_v2')
    # use Delimiter to limit search to that level of hierarchy
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix, Delimiter='/'):
        keys = [content['Key'] for content in page.get('Contents')]
        print('keys in page: ', len(keys))
        keys_list.extend(keys)
    return keys_list

keys_list = get_keys_from_prefix(bucket, prefix)
After looking more closely into things, I've found out that a lot of botocore service endpoint patterns start with the bucket name. For example, here's the definition of the list_objects_v2 service:
"ListObjectsV2":{
"name":"ListObjectsV2",
"http":{
"method":"GET",
"requestUri":"/{Bucket}?list-type=2"
},
My guess is that in the standard implementation of AWS S3, there's a generic endpoint_url (which explains @jordanm's comment) and the targeted bucket is addressed in the request path.
Now, in the case of Scaleway, there's an endpoint_url for each bucket, with the bucket name contained in that url (e.g. https://<bucket_name>.s3.<region>.<provider>), so any request path should start directly with an object Key.
I made a fork of botocore where I rewrote every endpoint to remove the bucket name, if that can help someone in the future.
Thanks again to all contributors!

Is there any way or workaround to schedule Amazon Mechanical Turk HITs?

I need a specific HIT to run every Friday morning. Is there any way to do this, or any workaround with an external platform (IFTTT and Zapier both don't work)? It seems to me like a very fundamental feature.
FWIW, I figured out how to use Zapier with MTurk. If you are on a paid plan you can leverage the AWS Lambda app to trigger some code that will create a HIT on MTurk. To do this you need an AWS account that's linked to your MTurk account. Once you have that you can create a Lambda function that contains the following code for creating a HIT on MTurk:
import json
import boto3

def lambda_handler(event, context):
    print(event)

    ###################################
    # Step 1: Create a client
    ###################################
    endpoint = "https://mturk-requester.us-east-1.amazonaws.com"
    mturk = boto3.client(
        service_name='mturk',
        region_name='us-east-1',
        endpoint_url=endpoint)

    ###################################
    # Step 2: Define the task
    ###################################
    html = '''
    <**********************************
    My task HTML
    ***********************************>
    '''.format(event['<my parameter>'])

    question_xml = '''
    <HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
        <HTMLContent><![CDATA[{}]]></HTMLContent>
        <FrameHeight>0</FrameHeight>
    </HTMLQuestion>'''.format(html)

    task_attributes = {
        'MaxAssignments': 3,
        'LifetimeInSeconds': 60 * 60 * 5,        # Stay active for 5 hours
        'AssignmentDurationInSeconds': 60 * 10,  # Workers have 10 minutes to respond
        'Reward': '0.03',
        'Title': '<Task title>',
        'Keywords': '<keywords>',
        'Description': '<Task description>'
    }

    ###################################
    # Step 3: Create the HIT
    ###################################
    response = mturk.create_hit(
        **task_attributes,
        Question=question_xml
    )
    hit_type_id = response['HIT']['HITTypeId']
    print('Created HIT {} in HITType {}'.format(response['HIT']['HITId'], hit_type_id))
Note you'll need to give the role your Lambda is using access to MTurk. From there you can create an IAM user for Zapier to use when calling your Lambda and link it to your Zapier account. Now you can set up your Action to call that Lambda function with whatever parameters you want to pass in the event.
If you want to get the results of the HIT back into your Zap it will be more complicated because Zapier isn't well suited to the asynchronous nature of MTurk HITs. I've put together a blog post on how to do this below:
https://www.daveschultzconsulting.com/2019/07/18/using-mturk-with-zapier/
There is no built-in feature in the MTurk API to accomplish scheduled launch of HITs. It must be done through custom programming.
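One possible shape of that custom programming (my own suggestion, not an MTurk feature): schedule the HIT-creating Lambda from the earlier answer with an EventBridge cron rule. The function name and ARN below are hypothetical placeholders.

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

LAMBDA_ARN = 'arn:aws:lambda:us-east-1:123456789012:function:create-mturk-hit'  # hypothetical

# Fire every Friday at 09:00 UTC
rule_arn = events.put_rule(
    Name='weekly-mturk-hit',
    ScheduleExpression='cron(0 9 ? * FRI *)',
    State='ENABLED'
)['RuleArn']

# Allow EventBridge to invoke the Lambda
lambda_client.add_permission(
    FunctionName='create-mturk-hit',
    StatementId='weekly-mturk-hit-invoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn
)

# Point the rule at the Lambda
events.put_targets(
    Rule='weekly-mturk-hit',
    Targets=[{'Id': 'create-mturk-hit', 'Arn': LAMBDA_ARN}]
)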
If you are looking for a turn-key solution, scheduling can be done via TurkPrime using the Scheduled Launch Time found in tab 5 (Setup Hit and Payments).

How to get CommonPrefixes w/o usage of low-level Client in boto3?

According to this answer, one can retrieve immediate "subdirectories" by querying by prefix and then obtaining the CommonPrefixes of the result of the Client.list_objects() method.
Unfortunately, Client is part of the so-called "low level" API.
I am using a different API:
session = Session(aws_access_key_id=access_key,
                  aws_secret_access_key=secret_key)
s3 = session.resource('s3')
my_bucket = s3.Bucket(bucket_name)
result = my_bucket.objects.filter(Prefix=prefix)
and this method does not return a dictionary.
Is it possible to obtain common prefixes with the higher-level API in boto3?
As noted in this answer, it seems that the Resource doesn't handle Delimiter well. It is often annoying, when your entire stack relies on Resource, to be told that, ah, you should have instantiated a Client instead...
Fortunately, a Resource object, such as your Bucket above, contains a client as well.
So, instead of the last line in your code sample, do:
paginator = my_bucket.meta.client.get_paginator('list_objects')
for resp in paginator.paginate(Bucket=my_bucket.name, Prefix=prefix, Delimiter='/', ...):
    for x in resp.get('CommonPrefixes', []):
        print(x['Prefix'])
You can access the client from the session:
session.client('s3').list_objects(Bucket=bucket_name, Prefix=prefix)
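For completeness, a short sketch of pulling the common prefixes out of that client call (reusing the same placeholder variables as in the question):

import boto3

session = boto3.Session(aws_access_key_id=access_key,
                        aws_secret_access_key=secret_key)

resp = session.client('s3').list_objects(
    Bucket=bucket_name,
    Prefix=prefix,
    Delimiter='/')

# Each entry in CommonPrefixes is one immediate "subdirectory" under the prefix
subdirs = [cp['Prefix'] for cp in resp.get('CommonPrefixes', [])]
print(subdirs)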
