In a gunicorn app, I need to allow only a certain number of connections and reject the rest with an error. I have this test config:
timeout = 60
graceful_timeout = 60
workers = 1
worker_connections = 1
backlog = 1
worker_class = "gevent"
max_requests = 1000
max_requests_jitter = 42
preload_app = True
bind = "0.0.0.0:8080"
loglevel = "debug"
accesslog = "-" # Send access log to stdout.
which I expected to accept only one connection at a time and reject the rest. But when I send multiple requests at once, they are queued and processed one by one. For testing purposes, each request takes 10 seconds to process, to make sure there is exactly one active connection at a time.
Using gunicorn version 19.9.0
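Note that backlog is only a hint for the kernel's listen queue, and a gevent worker will still queue connections beyond worker_connections rather than refuse them. If hard rejection is the goal, one option is to enforce the cap at the application layer; below is a minimal sketch (the ConnectionLimitMiddleware name and the 503 response are my own choices, not gunicorn features):

```python
import threading

class ConnectionLimitMiddleware:
    """WSGI middleware that rejects requests beyond a concurrency cap."""

    def __init__(self, app, limit=1):
        self.app = app
        self.semaphore = threading.BoundedSemaphore(limit)

    def __call__(self, environ, start_response):
        # Refuse immediately instead of queuing when the cap is reached.
        if not self.semaphore.acquire(blocking=False):
            start_response('503 Service Unavailable',
                           [('Content-Type', 'text/plain')])
            return [b'Too many connections\n']
        try:
            # Materialize the response so we release only after it is built.
            return list(self.app(environ, start_response))
        finally:
            self.semaphore.release()
```

You would wrap your WSGI app with it (app = ConnectionLimitMiddleware(app, limit=1)) and keep gunicorn's own settings relaxed, since the rejection now happens in your code.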
We have a QLDB ingestion process that consists of a Lambda function triggered by SQS.
We want to make sure our pipeline is airtight so if a failure or error occurs during driver execution, we don't lose that data if the data fails to commit to QLDB.
In our testing we noticed that if there's a failure within the Lambda itself, it automatically resends the message to the queue, but if the driver fails, the data is lost.
I understand that the default behavior for the driver is to retry four times after the initial failure. My question is, if I wrap qldb_driver.execute_lambda() in a try statement, will that allow the driver to retry upon failure or will it instantly return as a failure and be handled by the except statement?
Here is how I've written the first half of the function:
import json
import boto3
import datetime
from pyqldb.driver.qldb_driver import QldbDriver
from utils import upsert, resend_to_sqs, delete_from_sqs

queue_url = 'https://sqs.XXX/'
sqs = boto3.client('sqs', region_name='us-east-1')
ledger = 'XXXXX'
table = 'XXXXX'
qldb_driver = QldbDriver(ledger_name=ledger, region_name='us-east-1')

def lambda_handler(event, context):
    # Simple counter to identify messages
    i = 0
    # Error flag
    error = False
    # Empty list to store message send status as well as body or receipt_handle
    batch_messages = []
    for record in event['Records']:
        payload = json.loads(record["body"])
        payload['update_ts'] = str(datetime.datetime.now())
        try:
            qldb_driver.execute_lambda(lambda executor: upsert(executor, ledger=ledger, table_name=table, data=payload))
            # If the message commits successfully, give it status 200 and add the receipt_handle to our list
            # so in case an error occurs later, we can delete this message from the queue.
            message_info = {f'message_{i}': 200, 'receiptHandle': record['receiptHandle']}
            batch_messages.append(message_info)
        except Exception as e:
            print(e)
            # Flip error flag to True
            error = True
            # If the commit fails, set status 400 and add the message's body to our list.
            # This will allow us to send the message back to the queue during error handling.
            message_info = {f'message_{i}': 400, 'body': record['body']}
            batch_messages.append(message_info)
        i += 1
Assuming that this try/except allows the driver to retry upon failure, I've written an additional step that uses the message data from our batch to delete successful commits and send failures back to the queue:
# Begin error handling
if error:
    count = 0
    for j in range(len(batch_messages)):
        # If a message was sent successfully, delete it from the queue
        if batch_messages[j][f'message_{j}'] == 200:
            receipt_handle = batch_messages[j]['receiptHandle']
            delete_from_sqs(sqs, queue_url, receipt_handle)
        # If the message failed to commit to QLDB, send it back to the queue
        else:
            body = batch_messages[j]['body']
            resend_to_sqs(sqs, queue_url, body)
            count += 1
    print(f"ERROR(S) DETECTED - {count} MESSAGES RETURNED TO QUEUE")
else:
    print("BATCH PROCESSING SUCCESSFUL")
Thank you for your insight!
The QLDB Python driver can be configured for more or fewer retries if you need. I'm not sure if you wanted it to try only once, or if you were asking whether the driver will try the transaction 4 times before triggering the try/except. The driver will still retry up to 4 times before raising the exception.
You can follow the example here to modify the retry amount. Also, note that the default retry backoff is a random millisecond jitter, not exponential. With QLDB you shouldn't need to wait long between retries, since it uses optimistic concurrency control.
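For reference, lowering the retry count looks roughly like this with the pyqldb driver (assuming a 3.x driver; retry_limit=2 is an arbitrary example value, and the ledger/region values are placeholders):

```python
from pyqldb.config.retry_config import RetryConfig
from pyqldb.driver.qldb_driver import QldbDriver

# Retry failed transactions at most twice instead of the default 4 times.
retry_config = RetryConfig(retry_limit=2)
qldb_driver = QldbDriver(ledger_name='XXXXX',
                         region_name='us-east-1',
                         retry_config=retry_config)
```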
Also, with your design of sending the failed message back into the queue, you might want to consider sending it to a dead-letter queue instead. A dead-letter queue would prevent problem messages from retrying indefinitely, unless that's your goal.
(edit/additionally)
Observe that the QLDB driver exhausts its retries before raising an exception.
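To make that control flow concrete, here is a minimal pure-Python sketch of the retry-then-raise pattern (the names execute_with_retries and flaky_transaction are illustrative, not part of the pyqldb API): the wrapped call is retried internally, and the caller's except block fires only once retries are exhausted.

```python
def execute_with_retries(fn, retry_limit=4):
    """Call fn, retrying up to retry_limit times after the initial
    failure, then re-raise the last exception (driver-style retry)."""
    last_exc = None
    for attempt in range(1 + retry_limit):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
    raise last_exc

attempts = []

def flaky_transaction():
    attempts.append(1)
    raise RuntimeError("commit failed")

try:
    execute_with_retries(flaky_transaction)
except RuntimeError:
    # Reached only after 1 initial try + 4 retries.
    print(len(attempts))  # prints 5
```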
I have a Django application that works fine when running it with python manage.py runserver. After I added uwsgi, I frequently started to encounter the error Cannot operate on a closed database. The very same endpoint that raises this error works fine if I call it manually from a browser. The errors usually occur after a few hundred or thousand calls (coming in really fast) made by another service.
Here's my uwsgi settings:
[uwsgi]
chdir = ./src
http = :8000
enable-threads = false
master = true
module = config.wsgi:application
workers = 5
thunder-lock = true
vacuum = true
workdir = ./src
add-header = Connection: Keep-Alive
http-keepalive = 65000
max-requests = 50000
max-requests-delta = 10000
max-worker-lifetime = 360000000000 ; Restart workers after this many seconds
reload-on-rss = 2048 ; Restart workers after this much resident memory
worker-reload-mercy = 60 ; How long to wait before forcefully killing workers
lazy-apps = true
ignore-sigpipe = true
ignore-write-errors = true
http-auto-chunked = true
disable-write-exception = true
Note: this is a private project and it will never reach production. My goal is to have a fast way for django to handle multiple requests using sqlite. Even a dirty solution would be acceptable.
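One knob worth experimenting with is Django's database connection handling: with sqlite and multiple uwsgi workers, keeping connections non-persistent and giving sqlite a longer lock timeout sometimes helps with connection errors under load. A sketch of the relevant settings.py fragment (the values here are assumptions to try, not a verified fix for this specific error):

```python
# settings.py (fragment)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'db.sqlite3',
        'CONN_MAX_AGE': 0,   # close the connection after each request
        'OPTIONS': {
            'timeout': 20,   # seconds to wait on a locked database
        },
    }
}
```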
I would like to use OpenResty with the Lua interpreter.
I can't make the OpenResty framework handle two concurrent requests to two separate endpoints. I simulate one request doing some hard calculations by running a long loop:
local function busyWaiting()
    local self = coroutine.running()
    local i = 1
    while i < 9999999 do
        i = i + 1
        coroutine.yield(self)
    end
end

local self = coroutine.running()
local thread = ngx.thread.spawn(busyWaiting)
while (coroutine.status(thread) ~= 'zombie') do
    coroutine.yield(self)
end
ngx.say('test1!')
The other endpoint just sends response immediately.
ngx.say('test2')
I send a request to the first endpoint and then a second request to the second endpoint. However, OpenResty is blocked by the first request, so I receive both responses at almost the same time.
Setting the nginx parameter worker_processes 1; to a higher number does not help either, and I would like to have only a single worker process anyway.
What is the proper way to let OpenResty handle additional requests and not to get blocked by the first request?
local function busyWaiting()
    local self = ngx.coroutine.running()
    local i = 1
    while i < 9999999 do
        i = i + 1
        ngx.coroutine.yield(self)
    end
end

local thread = ngx.thread.spawn(busyWaiting)
while (ngx.coroutine.status(thread) ~= 'dead') do
    ngx.coroutine.resume(thread)
end
ngx.say('test1!')
When configuring watchers, what would be the purpose of including both of these settings under a watcher:
singleton = True
numprocess = 1
The documentation states that setting singleton has the following effect:
singleton:
If set to True, this watcher will have at the most one process. Defaults to False.
I read that as negating the need to specify numprocesses; however, in the GitHub repository they provide an example:
https://github.com/circus-tent/circus/blob/master/examples/example6.ini
Included here as well, where they specify both:
[circus]
check_delay = 5
endpoint = tcp://127.0.0.1:5555
pubsub_endpoint = tcp://127.0.0.1:5556
stats_endpoint = tcp://127.0.0.1:5557
httpd = True
debug = True
httpd_port = 8080
[watcher:swiss]
cmd = ../bin/python
args = -u flask_app.py
warmup_delay = 0
numprocesses = 1
singleton = True
stdout_stream.class = StdoutStream
stderr_stream.class = StdoutStream
So I would assume they do something different and in some way work together?
numprocesses is the initial number of processes for a given watcher. In the example you provided it is set to 1, but a user can typically add more processes as needed.
singleton only allows a maximum of 1 process running for a given watcher, so it forbids you from incrementing the number of processes dynamically.
The code below, from the circus test suite, describes it well:
@tornado.testing.gen_test
def test_singleton(self):
    # yield self._stop_runners()
    yield self.start_arbiter(singleton=True, loop=get_ioloop())
    cli = AsyncCircusClient(endpoint=self.arbiter.endpoint)
    # adding more than one process should fail
    yield cli.send_message('incr', name='test')
    res = yield cli.send_message('list', name='test')
    self.assertEqual(len(res.get('pids')), 1)
    yield self.stop_arbiter()
So my code works fine except that when iterating through arrays and sending a response to multiple chat clients, the latency between each client's reception of the response is nearly a second. I'm running the server and the clients on my own computer, so there shouldn't be any real latency, right? I know Ruby isn't this slow. Also, why does my computer's fan spin up when running this? There's a bit more code; I can include it if it would be helpful.
# Creates a thread per client that listens for any messages and relays them to the server viewer and all the other clients.
create_client_listener_threads = Thread.new do
  x = nil
  client_quantity = 0
  # Loops indefinitely
  until x != nil
    # Checks to see if clients have joined since last check.
    if @client_join_order_array.size > client_quantity
      # Derives number of new arrivals.
      number_of_new_arrivals = @client_join_order_array.size - client_quantity
      # Updates number of clients in client_quantity.
      client_quantity = @client_join_order_array.size
      if number_of_new_arrivals != 0
        # Passes new arrivals into client for their thread creation.
        @client_join_order_array[-1 * number_of_new_arrivals..-1].each do |client|
          # Creates thread to handle receiving of each client's text.
          client_thread = Thread.new do
            loop do
              text = client.acception.gets
              # Displays text for server viewer.
              puts "#{client.handle} @ #{Time.now} said: #{text}"
              @client_hash.each_value do |value|
                if value.handle != client.handle
                  # Displays text for everyone except server viewer and person who spoke.
                  value.acception.puts "#{client.handle} @ #{Time.now} said: #{text}"
                end
              end
            end
          end
        end
      end
    end
  end
end
Instead of testing whether @client_join_order_array.size > client_quantity, and doing nothing but burning CPU when it is false, you should be accepting the new connection at that point and blocking until there is one. In other words, move the code that accepts connections and adds them to the array here.