Python3 threading on AWS Lambda - python-3.x

I am using Flask, and have a route that sends emails to people. I am using threading to send them faster. When I run it on my local machine it takes about 12 seconds to send 300 emails, but when I run it on Lambda through API Gateway it times out.
Here's my code:
import threading

def async_mail(app, msg):
    with app.app_context():
        mail.send(msg)

def mass_mail_sender(order, user, header):
    html = render_template('emails/pickup_mail.html', bruger_order=order.ordre, produkt=order.produkt)
    msg = Message(recipients=[user],
                  sender=('Sender', 'infor@example.com'),
                  html=html,
                  subject=header)
    thread = threading.Thread(target=async_mail, args=[create_app(), msg])
    thread.start()
    return thread
@admin.route('/lager/<url_id>/opdater', methods=['POST'])
def update_stock(url_id):
    start = time.time()
    if current_user.navn != 'Admin':
        abort(403)
    if request.method == 'POST':
        produkt = Produkt.query.filter_by(url_id=url_id)
        nyt_antal = int(request.form['bestilt_hjem'])
        produkt.balance = nyt_antal
        produkt.bestilt_hjem = nyt_antal
        db.session.commit()
        orders = OrdreBog.query.filter(OrdreBog.produkt.has(func.lower(Produkt.url_id == url_id))) \
            .filter(OrdreBog.produkt_status == 'Ikke klar').all()
        threads = []
        for order in orders:
            if order.antal <= nyt_antal:
                nyt_antal -= order.antal
                new_thread = mass_mail_sender(order, order.ordre.bruger.email, f'Din bog {order.produkt.titel} er klar til afhentning')
                threads.append(new_thread)
                order.produkt_status = 'Klar til afhentning'
                db.session.commit()
        for thread in threads:
            try:
                thread.join()
            except Exception:
                pass
        end = time.time()
        print(end - start)
        return 'Emails sendt'
    return ''

AWS Lambda functions are designed to run within these constraints:
Memory – The amount of memory available to the function during execution. Choose an amount between 128 MB and 3,008 MB in 64-MB increments.
Lambda allocates CPU power linearly in proportion to the amount of memory configured. At 1,792 MB, a function has the equivalent of one full vCPU (one vCPU-second of credits per second).
Timeout – The amount of time that Lambda allows a function to run before stopping it. The default is 3 seconds. The maximum allowed value is 900 seconds.
Applied to your multi-threaded mail-sending code, this means the function terminates either when execution completes successfully or when it reaches the configured timeout, whichever comes first.
I understand you want a single Python function to send n emails "concurrently". To achieve this with Lambda, try the "Concurrency" setting and trigger your Lambda function from a local script, from S3-hosted HTML/JS triggered by CloudWatch, or from an EC2 instance.
Concurrency – Reserve concurrency for a function to set the maximum number of simultaneous executions for a function. Provision concurrency to ensure that a function can scale without fluctuations in latency.
https://docs.aws.amazon.com/lambda/latest/dg/configuration-console.html
Important: all of the above settings significantly affect your Lambda execution cost, so plan and compare before applying them.
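If it helps, here is a minimal sketch using boto3 (the function name and values are hypothetical) of applying these settings and triggering the function asynchronously from a local script instead of API Gateway:
import json
import boto3

lam = boto3.client('lambda')

# Raise the execution ceiling: more memory also buys proportionally more CPU
lam.update_function_configuration(
    FunctionName='mass-mail-sender',   # hypothetical function name
    Timeout=120,                       # seconds, maximum 900
    MemorySize=1792,                   # MB, roughly one full vCPU at this level
)

# Reserve concurrency so bursts of invocations are not throttled
lam.put_function_concurrency(
    FunctionName='mass-mail-sender',
    ReservedConcurrentExecutions=10,
)

# Fire-and-forget invocation from a local script, which avoids the API Gateway
# integration timeout entirely; StatusCode 202 means the event was queued
resp = lam.invoke(
    FunctionName='mass-mail-sender',
    InvocationType='Event',
    Payload=json.dumps({'url_id': 'some-book'}),
)
print(resp['StatusCode'])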
If you need any more help, let me know.
Thank you.

Related

SimPy resource unavailability

I am trying to make resources unavailable for a certain time in SimPy. The issue is that with my timeout approach the resource is still active and serving during the time it should be unavailable. Can anyone help me with this in case you have encountered such a problem? Thanks a lot!
import numpy as np
import simpy

def interarrival():
    return(np.random.exponential(10))

def servicetime():
    return(np.random.exponential(20))

def servicing(env, servers_1):
    i = 0
    while(True):
        i = i+1
        yield env.timeout(interarrival())
        print("Customer "+str(i)+ " arrived in the process at "+str(env.now))
        state = 0
        env.process(items(env, i, servers_array, state))

def items(env, customer_id, servers_array, state):
    with servers_array[state].request() as request:
        yield request
        t_arrival = env.now
        print("Customer "+str(customer_id)+ " arrived in "+str(state)+ " at "+str(t_arrival))
        yield env.timeout(servicetime())
        t_depart = env.now
        print("Customer "+str(customer_id)+ " departed from "+str(state)+ " at "+str(t_depart))
    if (state == 1):
        print("Customer exists")
    else:
        state = 1
        env.process(items(env, customer_id, servers_array, state))

def delay(env, servers_array):
    while(True):
        if (env.now%1440 >= 540 and env.now <= 1080):
            yield(1080 - env.now%1440)
        else:
            print(str(env.now), "resources will be blocked")
            resource_unavailability_dict = dict()
            resource_unavailability_dict[0] = []
            resource_unavailability_dict[1] = []
            for nodes in resource_unavailability_dict:
                for _ in range(servers_array[nodes].capacity):
                    resource_unavailability_dict[nodes].append(servers_array[nodes].request())
            print(resource_unavailability_dict)
            for nodes in resource_unavailability_dict:
                yield env.all_of(resource_unavailability_dict[nodes])
            if (env.now < 540):
                yield env.timeout(540)
            else:
                yield env.timeout((int(env.now/1440)+1)*1440+540 - env.now)
            for nodes in resource_unavailability_dict:
                for request in resource_unavailability_dict[nodes]:
                    servers_array[nodes].release(request)
            print(str(env.now), "resources are released")

env = simpy.Environment()
servers_array = []
servers_array.append(simpy.Resource(env, capacity = 5))
servers_array.append(simpy.Resource(env, capacity = 7))
env.process(servicing(env, servers_array))
env.process(delay(env, servers_array))
env.run(until=2880)
The code is given above. I have two nodes, 0 and 1, where the server capacities are 5 and 7 respectively. The servers are unavailable before 9 AM (540 minutes from midnight) and after 6 PM every day. I am trying to create the unavailability using timeout, but it is not working. Can you suggest how to modify the code to incorporate it?
I am getting the error AttributeError: 'int' object has no attribute 'callbacks', and I can't figure out why.
The problem with SimPy resources is that the capacity is a read-only attribute. To get around this, you need something to seize and hold the resource to take it off line. So in essence I have two types of users: the ones that do "real work" and the ones that control the capacity. I am using a plain Resource, which means the queue at the scheduled time will get processed before the capacity change occurs. Using a priority resource means the current users of a resource can finish their processes before the capacity change occurs, or you can use a pre-emptive resource to interrupt users holding resources at the scheduled time. Here is my code:
"""
one way to change a resouce capacity on a schedule
note the the capacity of a resource is a read only atribute
Programmer: Michael R. Gibbs
"""
import simpy
import random
def schedRes(env, res):
"""
Performs maintenance at time 100 and 200
waits till all the resources have been seized
and then spend 25 time units doing maintenace
and then release
since I am using a simple resource, maintenance
will wait of all request that are already in
the queue when maintenace starts to finish
you can change this behavior with a priority resource
or pre-emptive resource
"""
# wait till first scheduled maintenance
yield env.timeout(100)
# build a list of requests for each resource
# then wait till all requests are filled
res_maint_list = []
print(env.now, "Starting maintenance")
for _ in range(res.capacity):
res_maint_list.append(res.request())
yield env.all_of(res_maint_list)
print(env.now, "All resources seized for maintenance")
# do maintenance
yield env.timeout(25)
print(env.now, "Maintenance fisish")
# release all the resources
for req in res_maint_list:
res.release(req)
print(env.now,"All resources released from maint")
# wait till next scheduled maintenance
dur_to_next_maint = 200 -env.now
if dur_to_next_maint > 0:
yield env.timeout(dur_to_next_maint)
# do it all again
res_maint_list = []
print(env.now, "Starting maintenance")
for _ in range(res.capacity):
res_maint_list.append(res.request())
yield env.all_of(res_maint_list)
print(env.now, "All resources seized for maintenance")
yield env.timeout(25)
print(env.now, "Maintenance fisish")
for req in res_maint_list:
res.release(req)
print(env.now,"All resources released from maint")
def use(env, res, dur):
"""
Simple process of a user seizing a resource
and keeping it for a little while
"""
with res.request() as req:
print(env.now, f"User is in queue of size {len(res.queue)}")
yield req
print(env.now, "User has seized a resource")
yield env.timeout(dur)
print(env.now, "User has released a resource")
def genUsers(env,res):
"""
generate users to seize resources
"""
while True:
yield env.timeout(10)
env.process(use(env,res,21))
# set up
env = simpy.Environment()
res = simpy.Resource(env,capacity=2) # may want to use a priority or preemtive resource
env.process(genUsers(env,res))
env.process(schedRes(env, res))
# start
env.run(300)
One way to do this is with preemptive resources. When it is time to make resources unavailable, issue a batch of requests at the highest priority to seize idle resources and preempt resources currently in use. These requests then release the resources when it is time to make them available again. Note that you will need to add some logic for how the preempted processes resume once the resources become available. If you do not need to preempt processes, you can use priority resources instead of preemptive resources.
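A minimal sketch of that idea, assuming simpy.PreemptiveResource and made-up times (how preempted users resume is left as a stub):
import simpy

def take_offline(env, res, at, dur):
    # At time `at`, seize every unit with a high-priority, preempting request,
    # hold them for `dur` time units, then release so normal users can resume.
    yield env.timeout(at)
    reqs = [res.request(priority=-1, preempt=True) for _ in range(res.capacity)]
    yield env.all_of(reqs)
    print(env.now, "all units off line")
    yield env.timeout(dur)
    for r in reqs:
        res.release(r)
    print(env.now, "units back on line")

def user(env, res, work=30):
    with res.request(priority=0) as req:
        yield req
        try:
            yield env.timeout(work)
            print(env.now, "user finished")
        except simpy.Interrupt:
            # Preempted mid-service: decide here whether to requeue, retry, etc.
            print(env.now, "user preempted")

env = simpy.Environment()
res = simpy.PreemptiveResource(env, capacity=2)
env.process(take_offline(env, res, at=20, dur=25))
for _ in range(3):
    env.process(user(env, res))
env.run(100)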

Locust Performance different from time() function

I wrote a FastAPI app and am trying to perform load tests using different tools. I have found that the performance reported by Locust is vastly different from the time() Python function:
Locust shows min=17ms, max=2469ms, 99%=2000ms
time() function shows min=3ms, max=1739ms
Can someone please shed light on why that is? Which one is more accurate?
Below are my programs:
FastAPI Function:
app = FastAPI()

@app.post('/predict/')
def predict(request: PredictRequest):
    logger.info('Invocation triggered')
    start_time = time.time()
    response = adapter.predict(request.dict())
    latency_time = (time.time() - start_time) * 1000
    latency_logger.info(f'Predict call latency: {latency_time} ms')
    return response
Locust parameters:
-u 500 -t 10 -r 500
Locust File:
class User(HttpUser):
    wait_time = between(1, 2.5)
    host = "http://0.0.0.0:80"

    @task
    def generate_predict(self):
        self.client.post("/predict/",
                         json={"cid": [],
                               "user_id": 5768586,
                               "store_ids": [2725, 2757],
                               "device_type": "ios"},
                         name='predict')
Locust and time are measuring two different things. time is measuring how long it takes to run only your adapter.predict function, server side. Locust measures the time it takes a client to get a response from your server route, which includes not only your adapter.predict call but everything else that happens before and after it (request parsing, queuing, serialization, and the network). "Which is more accurate" depends on what you are trying to measure. If you just want to know how long it takes to call adapter.predict, then time is more accurate. If you want to know how long it takes a client to get the results of your /predict route, Locust is more accurate.
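To see the gap directly, here is a small sketch (endpoint and payload taken from the question) that measures the same call from the client side, the way Locust effectively does:
import time
import requests

payload = {"cid": [], "user_id": 5768586,
           "store_ids": [2725, 2757], "device_type": "ios"}

start = time.perf_counter()
resp = requests.post("http://0.0.0.0:80/predict/", json=payload)
elapsed_ms = (time.perf_counter() - start) * 1000

# This includes connection setup, request/response serialization, any queuing
# in the server, and network transfer -- roughly what Locust reports -- while
# the time.time() delta inside predict() covers only adapter.predict().
print(f"client-observed latency: {elapsed_ms:.1f} ms, status {resp.status_code}")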

Multiprocess : Persistent Pool?

I have code like the one below:
def expensive(self, c, v):
    .....

def inner_loop(self, c, collector):
    self.db.query('SELECT ...', (c,))
    for v in self.db.cursor.fetchall():
        collector.append(self.expensive(c, v))

def method(self):
    # create a Pool
    # join the Pool ??
    self.db.query('SELECT ...')
    for c in self.db.cursor.fetchall():
        collector = []
        # RUN the whole cycle in parallel in separate processes
        self.inner_loop(c, collector)
        # do stuff with the collector
    #! close the pool ?
Both the outer and the inner loop are thousands of steps...
I think I understand how to run a Pool of a couple of processes; all the examples I found show more or less that.
But in my case I need to launch a persistent Pool and then feed it the data (the c values). Once an inner-loop process has finished, I have to supply the next available c value, keep the processes running, and collect the results.
How do I do that?
A clunky idea I have is:
def method(self):
    ws = 4
    with Pool(processes=ws) as pool:
        cs = []
        for i, c in enumerate(..):
            cs.append(c)
            if i % ws == 0:
                res = [pool.apply(self.inner_loop, (c)) for i in range(ws)]
                cs = []
                collector.append(res)
Will this keep the same pool running, i.e. not launch a new process every time?
Do I need the 'if i % ws == 0' part, or can I use imap() / map_async() so that the Pool object blocks the loop when the available workers are exhausted and continues when some are freed?
Yes, the way that multiprocessing.Pool works is:
Worker processes within a Pool typically live for the complete duration of the Pool’s work queue.
So simply submitting all your work to the pool via imap should be sufficient:
with Pool(processes=4) as pool:
    initial_results = db.fetchall("SELECT c FROM outer")
    # imap feeds each c to the next free worker and yields results in order
    results = list(pool.imap(self.inner_loop, initial_results))
That said, if you really are doing this to fetch things from the DB, it may make more sense to move more processing down into that layer (bring the computation to the data rather than bringing the data to the computation).
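As a self-contained sketch of that pattern (with a stand-in inner_loop defined at module level, since bound methods do not always pickle cleanly), imap_unordered keeps the same four workers fed with the next available c and yields each result as soon as it finishes:
from multiprocessing import Pool

def inner_loop(c):
    # stand-in for the real per-c work (DB query plus expensive())
    return c * c

if __name__ == "__main__":
    cs = range(10_000)                 # stand-in for the outer SELECT
    collector = []
    with Pool(processes=4) as pool:    # the same 4 workers persist for the whole run
        # chunksize batches the dispatch; results arrive as each chunk completes
        for res in pool.imap_unordered(inner_loop, cs, chunksize=16):
            collector.append(res)
    print(len(collector))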

Is there some way to speed up requests and/or timeout errors when using the Python requests library?

I am doing an assignment to send requests to 1000 specific websites (some of which seem to no longer exist) in Python (3) with the HEAD method and report statistics about their response headers. The script has to finish in five minutes.
Obviously you can make requests take less time by reducing the timeout, but the more you reduce the timeout the more timeout errors there are, and catching them seems to be very expensive. For example, when the timeout was 0.3 seconds there were 700 good requests and 300 timeout errors, and the total time spent catching the timeout errors was by itself greater than five minutes. Reducing the timeout does reduce the time to catch each timeout error, because requests has to wait for the timeout before throwing the error, but the number of timeouts also increases. I was only able to get the total time spent catching timeout errors below five minutes at timeout=0.05 and timeout=0.03, but the total time including the time spent on requests was still greater than five minutes. timeout=0.02 resulted in only 20 sites being reachable with a total error handling time of 5:17, and timeout=0.01 resulted in no sites reachable.
The person who gave the assignment insists that it is possible, so I must be doing something wrong. I tried using a requests.Session object but that didn't result in any noticeable speedup. What else can I do to speed things up?
The real answer is to use asynchronous HTTP requests. But in order to answer this question ethically, I must insist on a low per-domain limit for simultaneous requests, otherwise you can overload servers (and get blacklisted).
Below is an (untested) example implementation using aiohttp that supports a configurable maximum total parallelism as well as a maximum parallelism per domain.
import aiohttp
import asyncio
from collections import Counter
from urllib.parse import urlparse

NUM_PARALLEL = 64
MAX_PARALLEL_PER_DOMAIN = 4
TIMEOUT = aiohttp.ClientTimeout(total=60)

async def fetch_url(url, session):
    try:
        async with session.get(url) as response:
            # Whatever you want.
            return {
                "url": url,
                "status": response.status,
                "content-type": response.headers["content-type"]
            }
    except aiohttp.ServerTimeoutError:
        return {"url": url, "status": "timeout"}
    except Exception as e:
        return {"url": url, "status": "uncaught_exception", "exception": e}

domain_num_inflight = Counter()
domain_semaphore = {}

async def worker(urls, results):
    async with aiohttp.ClientSession(timeout=TIMEOUT) as session:
        while urls:
            url = urls.pop()
            domain = urlparse(url).netloc
            if domain_num_inflight[domain] == 0:
                domain_semaphore[domain] = asyncio.Semaphore(MAX_PARALLEL_PER_DOMAIN)
            domain_num_inflight[domain] += 1
            async with domain_semaphore[domain]:
                results.append(await fetch_url(url, session))
            domain_num_inflight[domain] -= 1
            if domain_num_inflight[domain] == 0:  # Prevent memory leak.
                del domain_semaphore[domain]
                del domain_num_inflight[domain]

urls = [...]
worklist = urls[:]
results = []
workers = [worker(worklist, results) for _ in range(NUM_PARALLEL)]
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(*workers))
print(results)
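Since the assignment only needs the response headers, the GET in fetch_url could plausibly be swapped for a HEAD request (same aiohttp session API, just a different method); a variant of fetch_url under that assumption:
async def fetch_url(url, session):
    try:
        # HEAD returns only the headers, which is all the assignment needs
        # and is cheaper than downloading the full body with GET
        async with session.head(url, allow_redirects=True) as response:
            return {"url": url,
                    "status": response.status,
                    "headers": dict(response.headers)}
    except aiohttp.ServerTimeoutError:
        return {"url": url, "status": "timeout"}
    except Exception as e:
        return {"url": url, "status": "uncaught_exception", "exception": e}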

Python asyncio wait() with cumulative timeout

I am writing a job scheduler where I schedule M jobs across N co-routines (N < M). As soon as one job finishes, I add a new job so that it can start immediately and run in parallel with the other jobs. Additionally, I would like to ensure that no single job takes more than a certain fixed amount of time. Any jobs that take too long should be cancelled. I have something pretty close, like this:
def update_run_set(waiting, running, max_concurrency):
    number_to_add = min(len(waiting), max_concurrency - len(running))
    for i in range(0, number_to_add):
        next_one = waiting.pop()
        running.add(next_one)

async def _run_test_invocations_asynchronously(jobs: List[MyJob], max_concurrency: int, timeout_seconds: int):
    running = set()  # These tasks are actively being run
    waiting = set()  # These tasks have not yet started
    waiting = {_run_job_coroutine(job) for job in jobs}
    update_run_set(waiting, running, max_concurrency)
    while len(running) > 0:
        done, running = await asyncio.wait(running, timeout=timeout_seconds,
                                           return_when=asyncio.FIRST_COMPLETED)
        if not done:
            timeout_count = len(running)
            [r.cancel() for r in running]  # Start cancelling the timed out jobs
            done, running = await asyncio.wait(running)  # Wait for cancellation to finish
            assert(len(done) == timeout_count)
            assert(len(running) == 0)
        else:
            for d in done:
                job_return_code = await d
        if len(waiting) > 0:
            update_run_set(waiting, running, max_concurrency)
            assert(len(running) > 0)
The problem here is this: say my timeout is 5 seconds, and I'm scheduling 3 jobs across 4 co-routines. Job A takes 2 seconds, job B takes 6 seconds and job C takes 7 seconds.
We have something like this:
t=0 t=1 t=2 t=3 t=4 t=5 t=6 t=7
-------|-------|-------|-------|-------|-------|-------|-------|
AAAAAAAAAAAAAAA
BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
However, at t=2 the asyncio.wait() call returns because A completed. It then loops back up to the top and runs again. At this point B has already been running for 2 seconds, but since the countdown starts over and B only needs 4 more seconds to complete, B will appear to be successful. So after 4 seconds we return again, B is successful, then we start the loop over and now C completes.
How do I make it so that B and C both fail? I somehow need the elapsed time to be preserved across calls to asyncio.wait().
One idea I had is to do my own bookkeeping of how much time each job is allowed to keep running, and pass the minimum of these into asyncio.wait(). Then, when something times out, I can cancel only those jobs whose remaining time was equal to the value I passed in for timeout_seconds.
This requires a lot of manual bookkeeping on my part, though, and I can't help but wonder about floating-point problems causing me to decide that it's not time to cancel a job even though it really is. So I can't help but think there's something easier. Would appreciate any ideas.
You can wrap each job into a coroutine that checks its timeout, e.g. using asyncio.wait_for. Limiting the number of parallel invocations could be done in the same coroutine using an asyncio.Semaphore. With those two combined, you only need one call to wait() or even just gather(). For example (untested):
# Run the job, limiting concurrency and time. This code could likely
# be part of _run_job_coroutine, omitted from the question.
async def _run_job_with_limits(job, sem, timeout):
    async with sem:
        try:
            await asyncio.wait_for(_run_job_coroutine(job), timeout)
        except asyncio.TimeoutError:
            # timed out and canceled, decide what you want to return
            pass

async def _run_test_invocations_async(jobs, max_concurrency, timeout):
    sem = asyncio.Semaphore(max_concurrency)
    return await asyncio.gather(
        *(_run_job_with_limits(job, sem, timeout) for job in jobs)
    )
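As a quick self-contained check of the timeout behaviour (with a stand-in _run_job_coroutine and the made-up durations from the question: A=2s, B=6s, C=7s with a 5-second cap), A succeeds while B and C both time out:
import asyncio

async def _run_job_coroutine(duration):
    # stand-in job: just sleep for `duration` seconds and report success
    await asyncio.sleep(duration)
    return f"job({duration}s) ok"

async def _run_job_with_limits(job, sem, timeout):
    async with sem:
        try:
            return await asyncio.wait_for(_run_job_coroutine(job), timeout)
        except asyncio.TimeoutError:
            return f"job({job}s) timed out"

async def main():
    sem = asyncio.Semaphore(4)
    print(await asyncio.gather(*(_run_job_with_limits(d, sem, 5) for d in (2, 6, 7))))

asyncio.run(main())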
