How to close a Python process on program end - python-3.x

I can't properly close a serial connection that runs in a separate process at the end of the program (on Windows/VS Code, exiting with Ctrl+C).
I get an error message, and most of the time the port is still open at the next start of the program.
Do I have to finish the run process first?
class serialOne(Process):
    def __init__(self, serial_port, debug, baudrate=57600, timeout=1):
        ...

    def terminate(self):
        print("close ports")
        self.active = False
        self.ser.close()

    def run(self):
        while self.active:
            self.initCom()
            self.readCom()
            time.sleep(0.005)
and my main:
def main():
global processList
global debug
while True:
if debug == True:
print("main") # actually doing nothing
time.sleep(1.0)
for process in processList:
process.terminate()
That's the error message:
Process serialOne-1:
Traceback (most recent call last):
  File "C:\Users\dgapp\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "e:\_python\rfid_jacky\simple_multiprocess_rfid_02.py", line 129, in run
    time.sleep(0.005)
KeyboardInterrupt
Traceback (most recent call last):
  File "c:\Users\dgapp\.vscode\extensions\ms-python.python-2018.12.1\pythonFiles\ptvsd_launcher.py", line 45, in <module>
    main(ptvsdArgs)
  File "c:\Users\dgapp\.vscode\extensions\ms-python.python-2018.12.1\pythonFiles\lib\python\ptvsd\__main__.py", line 265, in main
    wait=args.wait)
  File "c:\Users\dgapp\.vscode\extensions\ms-python.python-2018.12.1\pythonFiles\lib\python\ptvsd\__main__.py", line 258, in handle_args
    debug_main(addr, name, kind, *extra, **kwargs)
  File "c:\Users\dgapp\.vscode\extensions\ms-python.python-2018.12.1\pythonFiles\lib\python\ptvsd\_local.py", line 45, in debug_main
    run_file(address, name, *extra, **kwargs)
  File "c:\Users\dgapp\.vscode\extensions\ms-python.python-2018.12.1\pythonFiles\lib\python\ptvsd\_local.py", line 79, in run_file
    run(argv, addr, **kwargs)
  File "c:\Users\dgapp\.vscode\extensions\ms-python.python-2018.12.1\pythonFiles\lib\python\ptvsd\_local.py", line 140, in _run
    _pydevd.main()
  File "c:\Users\dgapp\.vscode\extensions\ms-python.python-2018.12.1\pythonFiles\lib\python\ptvsd\_vendored\pydevd\pydevd.py", line 1925, in main
    debugger.connect(host, port)
  File "c:\Users\dgapp\.vscode\extensions\ms-python.python-2018.12.1\pythonFiles\lib\python\ptvsd\_vendored\pydevd\pydevd.py", line 1283, in run
    return self._exec(is_module, entry_point_fn, module_name, file, globals, locals)
  File "c:\Users\dgapp\.vscode\extensions\ms-python.python-2018.12.1\pythonFiles\lib\python\ptvsd\_vendored\pydevd\pydevd.py", line 1290, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "c:\Users\dgapp\.vscode\extensions\ms-python.python-2018.12.1\pythonFiles\lib\python\ptvsd\_vendored\pydevd\_pydev_imps\_pydev_execfile.py", line 25, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "e:\_python\rfid_jacky\simple_multiprocess_rfid_02.py", line 161, in <module>
    main()
  File "e:\_python\rfid_jacky\simple_multiprocess_rfid_02.py", line 140, in main
    time.sleep(1.0)
KeyboardInterrupt

When you press Ctrl+C, a KeyboardInterrupt exception is raised and interrupts your infinite sleep loop. But since you don't catch this exception, the code after this loop (with process.terminate()) is never called, which probably causes your issue.
So you have several options:
catch KeyboardInterrupt and use that to exit the infinite loop:
def main():
    global processList
    global debug
    try:
        while True:
            if debug == True:
                print("main")  # actually doing nothing
            time.sleep(1.0)
    except KeyboardInterrupt:
        pass
    for process in processList:
        process.terminate()
Which is simple and very readable.
register an exit handler that will be run when your program exits:
import atexit

@atexit.register
def shutdown():
    global processList
    for process in processList:
        process.terminate()

def main():
    global debug
    while True:
        if debug == True:
            print("main")  # actually doing nothing
        time.sleep(1.0)
Which is more reliable, since the handler also runs when your program exits for another reason, such as an unhandled exception or sys.exit() (though not if the process is killed outright).
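If you also want a clean shutdown when the process receives, say, SIGTERM, a minimal sketch is to translate the signal into a normal exit, so the atexit handler still fires (assumes a platform where SIGTERM is delivered to Python):

import signal
import sys

def _handle_sigterm(signum, frame):
    sys.exit(0)  # raises SystemExit, so atexit handlers still run

signal.signal(signal.SIGTERM, _handle_sigterm)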

The KeyboardInterrupt happens when the user hits the interrupt key.
A simple solution would be to catch the exception:
while True:
    if debug == True:
        print("main")  # actually doing nothing
    try:
        ...  # do your things
    except KeyboardInterrupt:
        print("program was interrupted by user")
        break
You could also use the finally keyword to properly end your program:
try:
while True:
# do your things
except KeyboardInterrupt:
print("program was interrupted by user")
break
finally:
close() # this will always happen, even if an exception was raised
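Put together with the main() from the question, that looks roughly like this (a sketch reusing the asker's names):

def main():
    global processList
    global debug
    try:
        while True:
            if debug == True:
                print("main")  # actually doing nothing
            time.sleep(1.0)
    except KeyboardInterrupt:
        print("program was interrupted by user")
    finally:
        for process in processList:
            process.terminate()  # runs even if an exception was raised
            process.join()       # wait until the child has actually exited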

Related

OS Error while doing requests for images with multiple threads in python

I'm making a program that scrapes info about games from a website, including images. Since I'm trying to download info for all the games on that site, using a single thread on a 1 Mbps connection would be very painful, so I decided to spawn one thread per letter of the alphabet that a game's title starts with (games can be filtered that way). Inside the function that downloads the image for a given game, as long as I have more than one thread running, sooner or later an error is raised; then inside the except block that handles it, another exception is raised, and so on, over and over. This quickly brings the threads to an end, but once I'm left with only a single thread to rely on, that thread carries on just fine without any trouble.
Question:
How to solve this, and why is it happening?
Deduction:
I think that when multiple threads reach the requests.get line inside the download_image function (where the problem must lie), it may fail because of the simultaneous requests; that is as far as I can guess.
I really don't have the least idea how to solve this, so I would appreciate any help. Thanks in advance.
I removed all the functions that have nothing to do with the problem described above.
I spawn the threads at the end of the program, and each thread's target function is get_all_games_from_letter.
CODE
from bs4 import BeautifulSoup
from string import ascii_lowercase
from datetime import date
from vandal_constants import *
from PIL import Image
from requests.exceptions import ConnectionError
from exceptions import NoTitleException
from validator_collection import url as url_check
from rawgpy import RAWG
from io import BytesIO
import traceback
import requests
import threading
import sqlite3
import concurrent.futures
### GLOBALS #####
FROM_RAWG = False
INSERT_SQL = ''
# CONSTANTS ########
rawg = RAWG('A Collector')
#################
def download_image(tag=None, game=None, rawg_game=None):
    if tag:
        return sqlite3.Binary(requests.get(url).content) if (url := tag['data-src']) else None
    elif game:
        global FROM_RAWG
        img_tag = game.select_one(IMG_TAG_SELECTOR)
        if img_tag and img_tag.get('data-src', None):
            try:
                if url_check(img_tag['data-src']):
                    return sqlite3.Binary(requests.get(img_tag['data-src']).content)
                print(f"{img_tag['data-src']} is NOT a valid url")
            except ConnectionError:
                try:
                    print('Error While downloading from "Vandal.elespannol.com" website:')
                    traceback.print_exc()
                except Exception:
                    print('Another Exception Ocurred')
                    traceback.print_exc()
            except OSError:
                print('Error en el Handshake parece')
                traceback.print_exc()
        FROM_RAWG = True
        if rawg_game and getattr(rawg_game, 'background_image', None):
            try:
                print('Continue to download from RAWG')
                return sqlite3.Binary(requests.get(rawg_game.background_image).content)
            except ConnectionError:
                print('Error While downloading from RAWG:')
                traceback.print_exc()
    return None
def prepare_game_record(game, db_games_set):
    global INSERT_SQL
    title = getattr(game.select_one(TITLE_TAG_SELECTOR), 'text', None)
    if not title:
        raise NoTitleException()
    if title in db_games_set:
        print(f'Already Have {title} in database')
        return None
    description = game.select_one(DESCRIPTION_TAG_SELECTOR)
    rawg_game = None
    try:
        rawg_game = rawg.search(title)[0]
    except Exception as err:
        print('No rawg')
        traceback.print_exc()
    game_data = {
        'nombre': title,
        'descripcion': description.text if description else rawg_game.description if rawg_game else '',
        'genero': genres if (genres := translate_genres(game.select_one(GENRES_TAG_SELECTOR).contents[1].strip().split(' / '))) else '',
        'fondo': resize_image(img) if (img := download_image(game=game, rawg_game=rawg_game)) and not FROM_RAWG else img,
        'year': None,
    }
    if not INSERT_SQL:
        INSERT_SQL = construct_sql_insert(**game_data)
    if hasattr(rawg_game, 'released'):
        game_data['year'] = date.fromisoformat(rawg_game.released).year
    return tuple(game_data.values())
def get_all_games_from_letter(letter):
    global FROM_RAWG
    counter = 36
    hashes_set = set()
    with sqlite3.connect('/media/l0new0lf/LocalStorage/data.db') as connection:
        cursor = connection.cursor()
        cursor.execute(f'SELECT nombre FROM juegos where nombre like "{letter.upper()}%"')
        db_games_set = []
        for row in cursor:
            db_games_set.append(row[0])
        db_games_set = set(db_games_set)
        while True:
            try:
                prepared_games = []
                rq = requests.get(
                    f'https://vandal.elespanol.com/juegos/13/pc/letra/{letter}/inicio/{counter}')
                if rq:
                    print('Request GET: from ' +
                          f'https://vandal.elespanol.com/juegos/13/pc/letra/{letter}/inicio/{counter}' + ' Got Workable HTML !')
                else:
                    print('Request GET: from ' +
                          f'https://vandal.elespanol.com/juegos/13/pc/letra/{letter}/inicio/{counter}' + ' Not Working !!, getting next page!')
                    continue
                if rq.status_code == 301 or rq.status_code == 302 or rq.status_code == 303 or rq.status_code == 304:
                    print(f'No more games in letter {letter}\n**REDIRECTING TO **')
                    break
                counter += 1
                soup = BeautifulSoup(rq.content, 'lxml')
                main_table = soup.select_one(GAME_SEARCH_RESULTS_TABLE_SELECTOR)
                if hash(main_table.get_text()) not in hashes_set:
                    hashes_set.add(hash(main_table.get_text()))
                else:
                    print('Repeated page ! I\'m done with this letter.')
                    break
                game_tables = main_table.find_all(
                    'table', {'class': GAME_TABLES_CLASS})
                print('entering game_tables loop')
                for game in game_tables:
                    FROM_RAWG = False
                    try:
                        game_record = prepare_game_record(game, db_games_set)
                    except NoTitleException:
                        print('There is no title for this game, DISCARDING!')
                        continue
                    except Exception as err:
                        print('Unknown ERROR in prepare_games_record function')
                        traceback.print_exc()
                        continue
                    if not game_record:
                        continue
                    prepared_games.append(game_record)
                    print('Game successfully prepared !')
                if prepared_games:
                    print(f'Thread, Writing to Database')
                    try:
                        cursor.executemany(INSERT_SQL, prepared_games)
                        connection.commit()
                    except Exception as err:
                        print(err)
                print('done')
            except Exception as err:
                print('TRULY UNEXPECTED EXCEPTION')
                print(err)
                traceback.print_exc()
                continue

# get_all_games_from_letter('c')  # You use a single thread?, no trouble at all!!
with concurrent.futures.ThreadPoolExecutor(len(ascii_lowercase)) as executor:
    for letter in ascii_lowercase:
        executor.submit(get_all_games_from_letter, letter)
Error Stack Trace:
Note: this is only part of the error output; the rest is just the same thing repeated.
Game successfully prepared !
Error While downloading from "Vandal.elespannol.com" website:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 665, in urlopen
    httplib_response = self._make_request(
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 376, in _make_request
    self._validate_conn(conn)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 996, in _validate_conn
    conn.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 366, in connect
    self.sock = ssl_wrap_socket(
  File "/usr/lib/python3/dist-packages/urllib3/util/ssl_.py", line 370, in ssl_wrap_socket
    return context.wrap_socket(sock, server_hostname=server_hostname)
  File "/usr/lib/python3.8/ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "/usr/lib/python3.8/ssl.py", line 1040, in _create
    self.do_handshake()
  File "/usr/lib/python3.8/ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
OSError: [Errno 0] Error

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 439, in send
    resp = conn.urlopen(
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 719, in urlopen
    retries = retries.increment(
  File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 400, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/lib/python3/dist-packages/six.py", line 702, in reraise
    raise value.with_traceback(tb)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 665, in urlopen
    httplib_response = self._make_request(
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 376, in _make_request
    self._validate_conn(conn)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 996, in _validate_conn
    conn.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 366, in connect
    self.sock = ssl_wrap_socket(
  File "/usr/lib/python3/dist-packages/urllib3/util/ssl_.py", line 370, in ssl_wrap_socket
    return context.wrap_socket(sock, server_hostname=server_hostname)
  File "/usr/lib/python3.8/ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "/usr/lib/python3.8/ssl.py", line 1040, in _create
    self.do_handshake()
  File "/usr/lib/python3.8/ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
urllib3.exceptions.ProtocolError: ('Connection aborted.', OSError(0, 'Error'))
To solve the problem, all that is needed is a global lock, so that when a thread is about to requests.get an image, it first has to check whether another thread is already downloading one. In other words, downloading an image is restricted to one thread at a time, across all threads.
####### GLOBALS ####
lock = threading.Lock()  # add this to the global variables
##################

def download_image(tag=None, game=None, rawg_game=None):
    if tag:
        return sqlite3.Binary(requests.get(url).content) if (url := tag['data-src']) else None
    elif game:
        global FROM_RAWG
        img_tag = game.select_one(IMG_TAG_SELECTOR)
        if img_tag and img_tag.get('data-src', None):
            try:
                if url_check(img_tag['data-src']):
                    # Acquire the lock for downloading: other threads must wait
                    # until the one holding it finishes. Using "with" guarantees
                    # the lock is released even if the request raises.
                    with lock:
                        temp = sqlite3.Binary(requests.get(img_tag['data-src']).content)
                    return temp
                print(f"{img_tag['data-src']} is NOT a valid url")
            except ConnectionError:
                try:
                    print('Error While downloading from "Vandal.elespannol.com" website:')
                    traceback.print_exc()
                except Exception:
                    print('Another Exception Ocurred')
                    traceback.print_exc()
            except OSError:
                print('Error en el Handshake parece')
                traceback.print_exc()
        FROM_RAWG = True
        if rawg_game and getattr(rawg_game, 'background_image', None):
            try:
                print('Continue to download from RAWG')
                with lock:  # same idea: one download at a time across all threads
                    temp = sqlite3.Binary(requests.get(rawg_game.background_image).content)
                return temp
            except ConnectionError:
                print('Error While downloading from RAWG:')
                traceback.print_exc()
    return None
And done, no more trouble with downloading images in multiple threads... but I still don't actually know why I need to make sure that only one requests.get runs at a time across all threads; I thought the OS handled this with queues or something.
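As an aside, if serializing every download turns out to be too slow, a counting semaphore gives the same kind of protection with a tunable degree of concurrency. A sketch (the limit of 4 is arbitrary, and fetch_image_bytes is a hypothetical helper; the same "with" pattern can wrap the requests.get calls in download_image directly):

import threading
import requests

download_slots = threading.BoundedSemaphore(4)  # at most 4 downloads at once

def fetch_image_bytes(url):
    with download_slots:  # blocks while 4 other threads are downloading
        return requests.get(url, timeout=30).content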

How to send multiple messages in parallel and consume them at one shot using RabbitMQ?

I'm new to RabbitMQ. After browsing multiple sites, I put together the following program to send multiple messages in parallel.
sender.py
import pika
from threading import Thread
from queue import Queue
import multiprocessing

class MetaClass(type):
    _instance = {}

    def __call__(cls, *args, **kwargs):
        """
        Singleton design pattern:
        if the instance already exists, don't create one!
        """
        if cls not in cls._instance:
            cls._instance[cls] = super(MetaClass, cls).__call__(*args, **kwargs)
        return cls._instance[cls]

class RabbitMQConfigure(metaclass=MetaClass):
    def __init__(self, queue='durable_task_queue', host="localhost", routing_key="durable_task_queue", exchange=""):
        """
        Configure RabbitMQ server
        """
        self.queue = queue
        self.host = host
        self.routing_key = routing_key
        self.exchange = exchange

class RabbitMQ(Thread):
    def __init__(self, rabbit_mq_server, queue1):
        Thread.__init__(self)
        self.rabbit_mq_server = rabbit_mq_server
        self.queue1 = queue1
        self._connection = pika.BlockingConnection(
            pika.ConnectionParameters(self.rabbit_mq_server.host))
        self._channel = self._connection.channel()
        self._channel.queue_declare(queue=self.rabbit_mq_server.queue, durable=True)

    def __enter__(self):
        print("__enter__")
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        print("__exit__")
        self._connection.close()

    def publish(self, message=""):
        print("Inside publish method...")
        self._channel.basic_publish(exchange=self.rabbit_mq_server.exchange,
                                    routing_key=self.rabbit_mq_server.routing_key, body=message)
        print(" [x] Sent %r" % message)

    def run(self):
        i = 0
        while i <= 5:
            # Get the work from the queue and expand the tuple
            message = self.queue1.get()
            print("Inside run method...")
            print("Going to call publish method...")
            print("Message value:" + message)
            self.publish(message=message)
            i += 1

if __name__ == "__main__":
    rabbit_mq_server = RabbitMQConfigure(queue='durable_task_queue', host="localhost",
                                         routing_key='durable_task_queue', exchange="")
    # with RabbitMQ(rabbit_mq_server, message="Hello World!") as rabbitmq:
    #     rabbitmq.publish()
    queue1 = Queue()
    no_of_CPUs = multiprocessing.cpu_count()
    messages = []
    for i in range(5):
        messages.append("Hello world1" + str(i))
    for x in range(2):
        with RabbitMQ(rabbit_mq_server, queue1) as rabbitmq:
            # rabbitmq.daemon = True
            rabbitmq.start()
    # Put the tasks into the queue as a tuple
    for message in messages:
        queue1.put(message)
    # Causes the main thread to wait for the queue to finish processing all the tasks
    queue1.join()
But this program always produces the following output, without sending any messages:
E:\rabbitmq\venv\Scripts\python.exe E:/rabbitmq/work_queues/new_task.py
__enter__
__exit__
__enter__
__exit__
Inside run method...
Going to call publish method...
Message value:Hello world10
Inside publish method...
Exception in thread Thread-1:
Traceback (most recent call last):
  File "C:\Python38\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "E:/rabbitmq/work_queues/new_task.py", line 79, in run
    self.publish(message=message)
  File "E:/rabbitmq/work_queues/new_task.py", line 67, in publish
    self._channel.basic_publish(exchange=self.rabbit_mq_server.exchange,
  File "E:\rabbitmq\venv\lib\site-packages\pika\adapters\blocking_connection.py", line 2242, in basic_publish
    self._impl.basic_publish(
  File "E:\rabbitmq\venv\lib\site-packages\pika\channel.py", line 421, in basic_publish
    self._raise_if_not_open()
  File "E:\rabbitmq\venv\lib\site-packages\pika\channel.py", line 1389, in _raise_if_not_open
    raise exceptions.ChannelWrongStateError('Channel is closed.')
pika.exceptions.ChannelWrongStateError: Channel is closed.

(The identical traceback is raised in Thread-3 for "Hello world11"; the output of the two threads is interleaved.)
Is it possible to send multiple messages in parallel?
Is it possible to consume all those messages in one shot?
Queues are independent of each other, and so are the individual messages inside each queue. You can control which consumer is subscribed to which queue, but that's it.
If the messages are sent and consumed together anyway, why not create one big message containing all the payloads?
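A minimal sketch of that idea (assuming a broker on localhost and the queue name from the question): pack all payloads into one JSON body, publish it with a single basic_publish call, and have the consumer json.loads the body to get every payload in one shot.

import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="durable_task_queue", durable=True)

payloads = ["Hello world1" + str(i) for i in range(5)]
channel.basic_publish(exchange="",
                      routing_key="durable_task_queue",
                      body=json.dumps(payloads))  # one message, all payloads
connection.close()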

Keyboard Interrupting an asyncio.run() raises CancelledError and keeps running the code

I have looked at this SO question on how to gracefully close an asyncio process. However, when I apply it to my code:
async def ob_main(product_id: str, freq: int) -> None:
    assert freq >= 1, f'The minimum frequency is 1s. Adjust your value: {freq}.'
    save_loc = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data', 'ob', product_id)
    while True:
        close = False
        try:
            full_save_path = create_save_location(save_loc)
            file = open(full_save_path, 'a', encoding='utf-8')
            await ob_collector(product_id, file)
            await asyncio.sleep(freq)
        except KeyboardInterrupt:
            close = True
            task.cancel()
            loop.run_forever()
            task.exception()
        except:
            exc_type, exc_value, exc_traceback = sys.exc_info()
            error_msg = repr(traceback.format_exception(exc_type, exc_value, exc_traceback))
            print(error_msg)
            logger.warning(f'[1]-Error encountered collecting ob data: {error_msg}')
        finally:
            if close:
                loop.close()
                cwow()
                exit(0)
I get the following traceback printed in terminal:
^C['Traceback (most recent call last):\n', ' File "/anaconda3/lib/python3.7/asyncio/runners.py", line 43, in run\n return loop.run_until_complete(main)\n', ' File "/anaconda3/lib/python3.7/asyncio/base_events.py", line 555, in run_until_complete\n self.run_forever()\n', ' File "/anaconda3/lib/python3.7/asyncio/base_events.py", line 523, in run_forever\n self._run_once()\n', ' File "/anaconda3/lib/python3.7/asyncio/base_events.py", line 1722, in _run_once\n event_list = self._selector.select(timeout)\n', ' File "/anaconda3/lib/python3.7/selectors.py", line 558, in select\n kev_list = self._selector.control(None, max_ev, timeout)\n', 'KeyboardInterrupt\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', ' File "coinbase-collector.py", line 98, in ob_main\n await asyncio.sleep(freq)\n', ' File "/anaconda3/lib/python3.7/asyncio/tasks.py", line 564, in sleep\n return await future\n', 'concurrent.futures._base.CancelledError\n']
and the code keeps running.
task and loop are the variables from the global scope, defined in the __main__:
loop = asyncio.get_event_loop()
task = asyncio.run(ob_main(args.p, 10))
Applying this question's method solves the issue. So in the above case:
try:
    loop.run_until_complete(ob_main(args.p, 10))
except KeyboardInterrupt:
    cwow()
    exit(0)
However, I do not understand why that works.

How to gracefully timeout with asyncio

Before adding the try/except block, my event loop closed gracefully when the process ran for less than 5 minutes, but after adding it I started getting this error whenever the process exceeded 5 minutes:
async def run_check(shell_command):
    p = await asyncio.create_subprocess_shell(shell_command,
                                              stdin=PIPE, stdout=PIPE, stderr=STDOUT)
    fut = p.communicate()
    try:
        pcap_run = await asyncio.wait_for(fut, timeout=5)
    except asyncio.TimeoutError:
        pass

def get_coros():
    for pcap_loc in print_dir_cointent():
        for pcap_check in get_pcap_executables():
            tmp_coro = (run_check('{args}'
                                  .format(e=sys.executable, args=args)))
            if tmp_coro != False:
                coros.append(tmp_coro)
    return coros

async def main(self):
    p_coros = get_coros()
    for f in asyncio.as_completed(p_coros):
        res = await f

loop = asyncio.get_event_loop()
loop.run_until_complete(get_coros())
loop.close()
Traceback:
Exception ignored in: <bound method BaseSubprocessTransport.__del__ of <_UnixSubprocessTransport closed pid=171106 running stdin=<_UnixWritePipeTransport closing fd=8 open> stdout=<_UnixReadPipeTransport fd=9 open>>>
Traceback (most recent call last):
  File "/usr/lib/python3.5/asyncio/base_subprocess.py", line 126, in __del__
  File "/usr/lib/python3.5/asyncio/base_subprocess.py", line 101, in close
  File "/usr/lib/python3.5/asyncio/unix_events.py", line 568, in close
  File "/usr/lib/python3.5/asyncio/unix_events.py", line 560, in write_eof
  File "/usr/lib/python3.5/asyncio/base_events.py", line 497, in call_soon
  File "/usr/lib/python3.5/asyncio/base_events.py", line 506, in _call_soon
  File "/usr/lib/python3.5/asyncio/base_events.py", line 334, in _check_closed
RuntimeError: Event loop is closed
The traceback occurs after the last line in my code is executed.
Debug logs:
DEBUG:asyncio:Using selector: EpollSelector
DEBUG:asyncio:run shell command '/local/p_check w_1.pcap --json' stdin=<pipe> stdout=stderr=<pipe>
DEBUG:asyncio:process '/local/p_check w_1.pcap --json' created: pid 171289
DEBUG:asyncio:Write pipe 8 connected: (<_UnixWritePipeTransport fd=8 idle bufsize=0>, <WriteSubprocessPipeProto fd=0 pipe=<_UnixWritePipeTransport fd=8 idle bufsize=0>>)
DEBUG:asyncio:Read pipe 9 connected: (<_UnixReadPipeTransport fd=9 polling>, <ReadSubprocessPipeProto fd=1 pipe=<_UnixReadPipeTransport fd=9 polling>>)
INFO:asyncio:run shell command '/local/p_check w_1.pcap --json': <_UnixSubprocessTransport pid=171289 running stdin=<_UnixWritePipeTransport fd=8 idle bufsize=0> stdout=<_UnixReadPipeTransport fd=9 polling>>
DEBUG:asyncio:<Process 171289> communicate: read stdout
INFO:asyncio:poll 4997.268 ms took 5003.093 ms: timeout
DEBUG:asyncio:Close <_UnixSelectorEventLoop running=False closed=False debug=True>
loop.run_until_complete accepts something awaitable: a coroutine or a future. You pass it the result of a plain function call, which is not an awaitable.
First, make sure get_coros() actually returns the list of coros:
def get_coros():
    ...
    return coros
Then wrap that list in an awaitable that executes the jobs one by one (or in parallel, if you want). For example:
async def main():
    for coro in get_coros():
        await coro

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()
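If you would rather run the jobs in parallel, asyncio.gather wraps the whole list in a single awaitable (a sketch):

async def main():
    await asyncio.gather(*get_coros())  # schedules all coroutines concurrently

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()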
Update:
I can't test my guess right now, but here it is: while asyncio.wait_for(fut, timeout=5) cancels the task after 5 seconds, it doesn't terminate the underlying process. You can do that manually:
try:
    await asyncio.wait_for(fut, timeout=5)
except asyncio.TimeoutError:
    p.kill()
    await p.communicate()
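Folded into run_check from the question, that would look roughly like this (a sketch):

async def run_check(shell_command):
    p = await asyncio.create_subprocess_shell(shell_command,
                                              stdin=PIPE, stdout=PIPE, stderr=STDOUT)
    try:
        await asyncio.wait_for(p.communicate(), timeout=5)
    except asyncio.TimeoutError:
        p.kill()               # cancelling only stops the waiting; terminate the process too
        await p.communicate()  # reap it so its transport is closed before the loop shuts down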

Different behaviour when using context class and contextmanager method decorator in Python

I get different behaviour in the following two scenarios when using Python 3.5's context managers. I'm trying to handle KeyboardInterrupt exceptions in my threaded program gracefully by using a proper shutdown procedure in combination with a context manager, but I can't get this to work in the second case and I can't see why not.
Common to both cases is a generic "job" task that uses threading:
import threading

class Job(threading.Thread):
    def run(self):
        self.active = True
        while self.active:
            continue

    def stop(self):
        self.active = False
Once started with start (a method provided by the threading.Thread parent class, which calls run internally), it can be stopped by calling stop.
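For example (a minimal sketch):

job = Job()
job.start()  # Thread.start() spawns a new thread, which calls run()
# ... do other work ...
job.stop()   # clears the flag, so the loop in run() exits
job.join()   # wait for the thread to finish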
The first way I tried to do this was to use the built-in __enter__ and __exit__ methods so as to take advantage of Python's with statement support:
class Context(object):
    def __init__(self):
        self.active = False

    def __enter__(self):
        print("Entering context")
        self.job = Job()
        self.job.start()
        return self.job

    def __exit__(self, exc_type, exc_value, traceback):
        print("Exiting context")
        self.job.stop()
        self.job.join()
        print("Job stopped")
I run it using the following code:
with Context():
    while input() != "stop":
        continue
This waits until the user types "stop" and presses enter. If during this loop the user instead presses Ctrl+C to create a KeyboardInterrupt, the __exit__ method is still called:
Entering context
^CExiting context
Job stopped
Traceback (most recent call last):
  File "tmp2.py", line 48, in <module>
    while input() != "stop":
KeyboardInterrupt
The second way I tried to do this was to create a function using the @contextmanager decorator:
from contextlib import contextmanager

@contextmanager
def job_context():
    print("Entering context")
    job = Job()
    job.start()
    yield job
    print("Exiting context")
    job.stop()
    job.join()
    print("Job stopped")
I again run it using the with statement:
with job_context():
    while input() != "stop":
        continue
But when I run it, and press Ctrl+C, the code after the yield - the equivalent of the __exit__ method in the first example, is not executed. Instead, the Python script continues to run in the infinite loop. To stop the program I have to press Ctrl+C a second time, at which point the code after yield is not executed:
Entering context
^CTraceback (most recent call last):
  File "tmp2.py", line 42, in <module>
    while input() != "stop":
KeyboardInterrupt
^CException ignored in: <module 'threading' from '/usr/lib/python3.5/threading.py'>
Traceback (most recent call last):
  File "/usr/lib/python3.5/threading.py", line 1288, in _shutdown
    t.join()
  File "/usr/lib/python3.5/threading.py", line 1054, in join
    self._wait_for_tstate_lock()
  File "/usr/lib/python3.5/threading.py", line 1070, in _wait_for_tstate_lock
    elif lock.acquire(block, timeout):
KeyboardInterrupt
You can see the ^C symbols where I pressed Ctrl+C to create the interrupts. What's different about the second case that it doesn't perform the shutdown code equivalent to __exit__ in the first case?
Per the documentation:
If an unhandled exception occurs in the block, it is reraised inside the generator at the point where the yield occurred. Thus, you can use a try...except...finally statement to trap the error (if any), or ensure that some cleanup takes place.
In your case, this would look like:
@contextmanager
def job_context():
    print("Entering context")
    job = Job()
    job.start()
    try:
        yield job
    finally:
        print("Exiting context")
        job.stop()
        job.join()
        print("Job stopped")
