I am running a pystray icon which starts a console program. When I try to stop it with this code:
def stop_rtd():
    print("Stopping RTD")
    sys.exit()
    icon.stop()
The system throws this error:
An error occurred when calling message handler
Traceback (most recent call last):
File "C:\Users\ramac\AppData\Local\Programs\Python\Python37\lib\site-packages\pystray\_win32.py", line 389, in _dispatcher
uMsg, lambda w, l: 0)(wParam, lParam) or 0)
File "C:\Users\ramac\AppData\Local\Programs\Python\Python37\lib\site-packages\pystray\_win32.py", line 209, in _on_notify
descriptors[index - 1](self)
File "C:\Users\ramac\AppData\Local\Programs\Python\Python37\lib\site-packages\pystray\_base.py", line 267, in inner
callback(self)
File "C:\Users\ramac\AppData\Local\Programs\Python\Python37\lib\site-packages\pystray\_base.py", line 368, in __call__
return self._action(icon, self)
File "C:\Users\ramac\AppData\Local\Programs\Python\Python37\lib\site-packages\pystray\_base.py", line 463, in wrapper0
return action()
File "C:/Tej/GitHub/Python/YahooDLs/try_systray.py", line 16, in stop_rtd
sys.exit()
SystemExit
RTD is the console program, which runs perpetually. It stops, but the icon persists and the console does not close. Only when I close the console manually does the program exit and the icon disappear.
I am running this on Windows 10.
Please help me solve this problem.
I found an alternative way to exit the program. The os module also has an exit function, called _exit. It takes one argument, the exit/status code, which should usually be 0. Unlike sys.exit(), which raises the SystemExit exception seen in your traceback, os._exit() terminates the process immediately without any cleanup. So your function will look like this:
import os

def stop_rtd():
    icon.stop()
    os._exit(0)
Also note that you have to stop the icon before exiting the program, since os._exit() gives it no chance to clean up afterwards.
I'm currently creating a discord bot that contains two task loops called check_members and check_music.
When a user enters the offline command, I'd like to gracefully stop these loops. I wrote this piece of code in my Cog class:
class MusicBot(commands.Cog):
    # function called when the bot is closing; see
    # https://discordpy.readthedocs.io/en/stable/ext/commands/api.html?highlight=cog_unload#discord.ext.commands.Cog.cog_unload
    def cog_unload(self):
        print("Debug")
        self.check_members.cancel()
        self.check_music.cancel()
        print(self.check_members.is_running())
        print(self.check_music.is_running())

    # example of a task loop I have:
    @tasks.loop(seconds=5)
    async def check_members(self):
        [code...]
In another script, I call the bot.close() function as follows:
await ctx.send("Going offline! See ya later.")
if self.voice is not None:
    await self.disconnect()
await self.bot.close()
sys.exit(0)
When a user calls the offline command, that's what the bot prints out:
Debug
True
Task exception was never retrieved
future: <Task finished name='discord.py: on_message' coro=<Client._run_event() done, defined at /home/liuk23/.local/lib/python3.10/site-packages/discord/client.py:401> exception=SystemExit(0)>
Traceback (most recent call last):
File "/home/liuk23/Desktop/coding/Discord-bot-3/main.py", line 67, in <module>
loop.run_forever()
File "/usr/lib/python3.10/asyncio/base_events.py", line 600, in run_forever
self._run_once()
File "/usr/lib/python3.10/asyncio/base_events.py", line 1896, in _run_once
handle._run()
File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/home/liuk23/.local/lib/python3.10/site-packages/discord/client.py", line 409, in _run_event
await coro(*args, **kwargs)
File "/home/liuk23/.local/lib/python3.10/site-packages/discord/ext/commands/bot.py", line 1392, in on_message
await self.process_commands(message)
File "/home/liuk23/.local/lib/python3.10/site-packages/discord/ext/commands/bot.py", line 1389, in process_commands
await self.invoke(ctx) # type: ignore
File "/home/liuk23/.local/lib/python3.10/site-packages/discord/ext/commands/bot.py", line 1347, in invoke
await ctx.command.invoke(ctx)
File "/home/liuk23/.local/lib/python3.10/site-packages/discord/ext/commands/core.py", line 986, in invoke
await injected(*ctx.args, **ctx.kwargs) # type: ignore
File "/home/liuk23/.local/lib/python3.10/site-packages/discord/ext/commands/core.py", line 190, in wrapped
ret = await coro(*args, **kwargs)
File "/home/liuk23/Desktop/coding/Discord-bot-3/music.py", line 223, in offline
await self.functions.offline(ctx)
File "/home/liuk23/Desktop/coding/Discord-bot-3/funcitons.py", line 241, in offline
sys.exit(0)
SystemExit: 0
As you can see, the Debug text gets printed, so the cog_unload function is successfully called.
Although I am cancelling the loops, I still get the "Task exception was never retrieved" error. Am I misunderstanding the error?
From the sys.exit documentation:
Raise a SystemExit exception, signaling an intention to exit the interpreter.
What is happening is that the offline task, by raising SystemExit, stops, but since no other task is awaiting it, that exception is never retrieved.
The underlying problem is that, to quit the application, rather than raising SystemExit via sys.exit, it is better to stop the running loop cleanly. For example, if the loop was started with loop.run_until_complete(some_future), you end it by resolving that future with some_future.set_result(some_result), as in the sketch below.
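A minimal sketch of that pattern, with hypothetical names stop_future and offline_command standing in for the bot's shutdown future and the offline command:

import asyncio

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)

# Future the main script waits on; resolving it ends run_until_complete.
stop_future = loop.create_future()

async def offline_command():
    # ... close the bot, disconnect from voice, etc. ...
    stop_future.set_result(None)  # instead of sys.exit(0)

loop.create_task(offline_command())
loop.run_until_complete(stop_future)  # returns cleanly once the result is set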
I fixed it by stopping the main loop that runs the whole bot.
In my main script I instantiate the bot in the following manner:
try:
    # loop runs until stop() is called
    loop.run_forever()
except KeyboardInterrupt:
    pass
# the finally block is called when the loop is being stopped
finally:
    # stop and close the loop
    loop.stop()
    sys.exit(0)
By calling loop.stop() in another script, run_forever() returns, so the finally block in main.py is executed and successfully closes the loop and ends the whole script.
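For reference, a sketch of how the offline command from the question might end under this approach; asyncio.get_running_loop() (Python 3.7+) is one way of reaching the loop started in main.py:

import asyncio

async def offline(self, ctx):
    await ctx.send("Going offline! See ya later.")
    if self.voice is not None:
        await self.disconnect()
    await self.bot.close()
    # Stop the main loop instead of raising SystemExit inside a task;
    # run_forever() then returns and the finally block in main.py runs.
    asyncio.get_running_loop().stop()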
Thanks to @Ulisse Bordignon for your answer.
This snippet of code (a minimal server running in a thread, code taken from here) works fine with Python 3.8.3 but raises an error with Python 3.9.0:
import asyncio
import threading

from aiohttp import web

def aiohttp_server():
    def say_hello(request):
        return web.Response(text='Hello, world')

    app = web.Application()
    app.add_routes([web.get('/', say_hello)])
    runner = web.AppRunner(app)
    return runner

def run_server(runner):
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(runner.setup())
    site = web.TCPSite(runner, 'localhost', 8080)
    loop.run_until_complete(site.start())
    loop.run_forever()

t = threading.Thread(target=run_server, args=(aiohttp_server(),))
t.start()
The error message:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
self.run()
File "/usr/lib/python3.9/threading.py", line 892, in run
self._target(*self._args, **self._kwargs)
File "/home/alkhinoos/nikw/nikw/z2.py", line 21, in run_server
loop.run_until_complete(site.start())
File "/usr/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/usr/lib/python3.9/site-packages/aiohttp/web_runner.py", line 121, in start
self._server = await loop.create_server(
File "/usr/lib/python3.9/asyncio/base_events.py", line 1460, in create_server
infos = await tasks.gather(*fs, loop=self)
File "/usr/lib/python3.9/asyncio/base_events.py", line 1400, in _create_server_getaddrinfo
infos = await self._ensure_resolved((host, port), family=family,
File "/usr/lib/python3.9/asyncio/base_events.py", line 1396, in _ensure_resolved
return await loop.getaddrinfo(host, port, family=family, type=type,
File "/usr/lib/python3.9/asyncio/base_events.py", line 856, in getaddrinfo
return await self.run_in_executor(
File "/usr/lib/python3.9/asyncio/base_events.py", line 809, in run_in_executor
executor = concurrent.futures.ThreadPoolExecutor(
File "/usr/lib/python3.9/concurrent/futures/__init__.py", line 49, in __getattr__
from .thread import ThreadPoolExecutor as te
File "/usr/lib/python3.9/concurrent/futures/thread.py", line 37, in <module>
threading._register_atexit(_python_exit)
File "/usr/lib/python3.9/threading.py", line 1374, in _register_atexit
raise RuntimeError("can't register atexit after shutdown")
RuntimeError: can't register atexit after shutdown
What's going on? The same problem appears with Python 3.9.1. Is this problem solved in Python 3.9.2? Maybe a related issue here.
As mentioned in the Python manual, under Thread Objects:
Other threads can call a thread’s join() method. This blocks the calling thread until the thread whose join() method is called is terminated.
After t.start() returns, the main thread exits and the interpreter begins shutting down; at that point the child thread can no longer register atexit handlers, which is why creating the ThreadPoolExecutor fails with "can't register atexit after shutdown".
If you want the child thread to run forever, or until it exits on its own, call t.join() in the main thread after t.start(), as in the sketch below.
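A minimal sketch of the fix, reusing the names from the question's snippet:

t = threading.Thread(target=run_server, args=(aiohttp_server(),))
t.start()
t.join()  # block the main thread until the server thread terminates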
Not sure what you did, but I used 127.0.0.1 instead of localhost and the error was resolved!
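In the question's snippet that change would be the line below; presumably it helps because a numeric address lets asyncio skip the executor-backed getaddrinfo lookup that appears in the traceback.

site = web.TCPSite(runner, '127.0.0.1', 8080)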
I managed to get this sorted by importing concurrent.futures.ThreadPoolExecutor before calling any of my main application code, so at the top of my main.py start-up script:
thread_pool_ref = concurrent.futures.ThreadPoolExecutor
Just the act of importing the module early (before any threads are initialised) was enough to fix the error. There is an ongoing issue around how this module must be imported in the main thread, before any child threads import or use any related threading library code.
The inspiration for this fix came from this post on the Python bug tracker. My issue was specifically with the boto3 library, but the fix is applicable across the board, as sketched below.
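A sketch of the ordering, assuming main.py is the entry point (from app import start is a hypothetical stand-in for the rest of the application):

# main.py -- run this import before any code that might spawn threads
import concurrent.futures

thread_pool_ref = concurrent.futures.ThreadPoolExecutor  # force early loading

from app import start  # hypothetical application entry point
start()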
I have tried a lot of different ways but I can't figure out this problem. I am not an expert in Python; can anyone explain how I can solve it?
command = "netsh wlan show profile"
ssid = subprocess.check_output(command, shell=True)
ssid = ssid.decode("utf-8")
ssid_list = re.findall('(?:Profile\s*:\s)(.*)', ssid)
for ssid_name in ssid_list:
subprocess.check_output(["netsh","wlan","show","profile",ssid_name,"key=clear"])
and it gives this error:
Traceback (most recent call last):
File "C:/Users/Prakash/Desktop/Hacking tool/Execute_sys_cmd_payload.py", line 18, in <module>
subprocess.check_output(["netsh","wlan","show","profile",ssid_name,"key=clear"])
File "C:\Users\Prakash\AppData\Local\Programs\Python\Python38\lib\subprocess.py", line 411, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "C:\Users\Prakash\AppData\Local\Programs\Python\Python38\lib\subprocess.py", line 512, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['netsh', 'wlan', 'show', 'profile', 'Prakash_WiFi\r', 'key=clear']' returned non-zero exit status 1.
Process finished with exit code 1
With check_output, this exception is raised whenever a command exits abnormally, as indicated by a non-zero return code.
A possible solution is to use subprocess.run instead. This will run the process and return a CompletedProcess instance, which has a returncode attribute that you can check:
import subprocess as sp

# ssid_name comes from the question's loop over ssid_list
command = ["netsh", "wlan", "show", ssid_name, "key=clear"]
netsh = sp.run(command, stdout=sp.PIPE)
if netsh.returncode != 0:
    print(f'netsh error: {netsh.returncode}')
else:
    ...  # process netsh.stdout
From reading this page, it seems that the command you should use is netsh wlan show (without profile).
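For what it's worth, here is a combined sketch that keeps the question's original show profile form. Note the .strip() on each profile name: the traceback shows the captured name as 'Prakash_WiFi\r', i.e. with a trailing carriage return, which is enough to make netsh fail.

import re
import subprocess as sp

output = sp.check_output("netsh wlan show profile", shell=True).decode("utf-8")
for ssid_name in re.findall(r"(?:Profile\s*:\s)(.*)", output):
    ssid_name = ssid_name.strip()  # drop the trailing '\r' seen in the traceback
    netsh = sp.run(["netsh", "wlan", "show", "profile", ssid_name, "key=clear"],
                   stdout=sp.PIPE)
    if netsh.returncode != 0:
        print(f"netsh error for {ssid_name}: {netsh.returncode}")
    else:
        print(netsh.stdout.decode("utf-8", errors="replace"))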
Good day,
I am running some long-running async jobs using PubSub to trigger a function. Occasionally, the task may fail. In such cases, I simply want to log the exception, acknowledge the message, and restart the subscription to ensure that the subscriber is still pulling new messages after the failure has occurred.
I have placed some simplified code below to demonstrate my current setup:
try:
    while True:
        streaming_pull_future = workers.subscriber.subscribe(
            subscription_path,
            callback=worker_task,  # includes logic to ack() the message if it has failed before
        )
        print(f'Listening for messages on {subscription_path}')
        try:
            streaming_pull_future.result()
        except Exception as e:
            print(streaming_pull_future.cancelled())  # <-- this evaluates to False
            streaming_pull_future.cancel()  # <-- this raises RuntimeError: set_result can only be called once.
            print(e)
except KeyboardInterrupt:  # seems to be an issue as per PubSub GitHub issue #17: no keyboard interrupt
    streaming_pull_future.cancel()
I keep seeing RuntimeError: set_result can only be called once when I execute streaming_pull_future.cancel() in the exception handler. I checked whether the subscriber had already been cancelled, but when I logged the status it evaluated to False. Yet when I then call the cancel() method, I get the error. I want to ensure that any threads are cleaned up before making a new subscription in the case where I could have several errors. Does anyone know why this is happening and a way around it?
I am running Python 3.7.4 with PubSub 1.2.0 and grpcio 1.27.1.
Update:
As per comments, please see a reproducible example. The stack trace raised is included:
Listening for messages on projects/trigger-web-app/subscriptions/load-job-sub
968432700946405
Top-level exception occurred in callback while processing a message
Traceback (most recent call last):
File "C:\..\lib\site-packages\google\cloud\pubsub_v1\subscriber\_protocol\streaming_pull_manager.py", line
71, in _wrap_callback_errors
callback(message)
File "test.py", line 19, in worker_task
a = 1/0 # cause an exception to be raised
ZeroDivisionError: division by zero
968424309156485
Top-level exception occurred in callback while processing a message
Traceback (most recent call last):
File "C:\...\lib\site-packages\google\cloud\pubsub_v1\subscriber\_protocol\streaming_pull_manager.py", line
71, in _wrap_callback_errors
callback(message)
File "test.py", line 19, in worker_task
a = 1/0 # cause an exception to be raised
ZeroDivisionError: division by zero
Traceback (most recent call last):
File "test.py", line 29, in main
streaming_pull_future.result()
File "C:...\lib\site-packages\google\cloud\pubsub_v1\futures.py", line 105, in result
raise err
File "C:\...\lib\site-packages\google\cloud\pubsub_v1\subscriber\_protocol\streaming_pull_manager.py", line
71, in _wrap_callback_errors
callback(message)
File "test.py", line 19, in worker_task
a = 1/0 # cause an exception to be raised
ZeroDivisionError: division by zero
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 35, in <module>
main()
File "test.py", line 31, in main
streaming_pull_future.cancel()
File "C:\...\lib\site-packages\google\cloud\pubsub_v1\subscriber\futures.py", line 46, in cancel
return self._manager.close()
File "C:\...\lib\site-packages\google\cloud\pubsub_v1\subscriber\_protocol\streaming_pull_manager.py", line
496, in close
callback(self, reason)
File "C:\...\lib\site-packages\google\cloud\pubsub_v1\subscriber\futures.py", line 37, in _on_close_callback
self.set_result(True)
File "C:\...\lib\site-packages\google\cloud\pubsub_v1\futures.py", line 155, in set_result
raise RuntimeError("set_result can only be called once.")
RuntimeError: set_result can only be called once.
import os

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
project_id = os.environ['GOOGLE_CLOUD_PROJECT']
subscription_name = os.environ['GOOGLE_CLOUD_PUBSUB_SUBSCRIPTION_NAME']
subscription_path = f'projects/{project_id}/subscriptions/{subscription_name}'

def worker_task(message):
    job_id = message.message_id
    print(job_id)
    a = 1 / 0  # cause an exception to be raised
    message.ack()

def main():
    streaming_pull_future = subscriber.subscribe(
        subscription_path, callback=worker_task
    )
    print(f'Listening for messages on {subscription_path}')
    try:
        streaming_pull_future.result()
    except Exception as e:
        streaming_pull_future.cancel()  # if an exception was raised in the callback, this raises a RuntimeError
        print(e)

if __name__ == '__main__':
    main()
Thank you.
I am trying to write code that asks for user input and, if that input is not given within a set amount of time, assumes a default value and continues through the rest of the program without requiring the user to hit Enter. I am running Python 3.5.1 on Windows 10.
I have looked through Keyboard input with timeout in Python, How to set time limit on raw_input, Timeout on a function call, and Python 3 Timed Input, treating the answers as black boxes, but none of them are suitable: they either rely on signal.SIGALRM (which is only available on Linux) or require the user to hit Enter in order to exit the input.
Based upon those answers, however, I have attempted to scrape together a solution using multiprocessing which (as I think it should work) creates one process to ask for the input and another process to terminate the first after the timeout period.
import multiprocessing
from time import time, sleep

def wait(secs):
    if secs == 0:
        return
    end = time() + secs
    current = time()
    while end > current:
        current = time()
        sleep(.1)
    return

def delay_terminate_process(process, delay):
    wait(delay)
    process.terminate()
    process.join()

def ask_input(prompt, term_queue, out_queue):
    command = input(prompt)
    process = term_queue.get()
    process.terminate()
    process.join()
    out_queue.put(command)

##### this doesn't even remotely work.....
def input_with_timeout(prompt, timeout=15.0):
    print(prompt)
    astring = 'no input'
    out_queue = multiprocessing.Queue()
    term_queue = multiprocessing.Queue()
    worker1 = multiprocessing.Process(target=ask_input, args=(prompt, term_queue, out_queue))
    worker2 = multiprocessing.Process(target=delay_terminate_process, args=(worker1, timeout))
    worker1.daemon = True
    worker2.daemon = True
    term_queue.put(worker2)
    print('Through overhead')
    if __name__ == '__main__':
        print('I am in if statement')
        worker2.start()
        worker1.start()
        astring = out_queue.get()
    else:
        print('I have no clue what happened that would cause this to print....')
        return
    print('returning')
    return astring

please = input_with_timeout('Does this work?', timeout=10)
But this fails miserably and yields:
Does this work?
Through overhead
I am in if statement
Traceback (most recent call last):
File "C:\Anaconda3\lib\multiprocessing\queues.py", line 241, in _feed
obj = ForkingPickler.dumps(obj)
File "C:\Anaconda3\lib\multiprocessing\reduction.py", line 50, in dumps
cls(buf, protocol).dump(obj)
File "C:\Anaconda3\lib\multiprocessing\queues.py", line 58, in __getstate__
context.assert_spawning(self)
File "C:\Anaconda3\lib\multiprocessing\context.py", line 347, in assert_spawning
' through inheritance' % type(obj).__name__
RuntimeError: Queue objects should only be shared between processes through inheritance
Does this work?
Through overhead
I have no clue what happened that would cause this to print....
Does this work?Process Process-1:
Traceback (most recent call last):
File "C:\Anaconda3\lib\multiprocessing\queues.py", line 241, in _feed
obj = ForkingPickler.dumps(obj)
File "C:\Anaconda3\lib\multiprocessing\reduction.py", line 50, in dumps
cls(buf, protocol).dump(obj)
File "C:\Anaconda3\lib\multiprocessing\process.py", line 287, in __reduce__
'Pickling an AuthenticationString object is '
TypeError: Pickling an AuthenticationString object is disallowed for security reasons
Traceback (most recent call last):
File "C:\Anaconda3\lib\multiprocessing\process.py", line 254, in _bootstrap
self.run()
File "C:\Anaconda3\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "C:\Anaconda3\saved_programs\a_open_file4.py", line 20, in ask_input
command = input(prompt)
EOFError: EOF when reading a line
Does this work?
Through overhead
I have no clue what happened that would cause this to print....
Traceback (most recent call last):
File "C:\Anaconda3\lib\multiprocessing\queues.py", line 241, in _feed
obj = ForkingPickler.dumps(obj)
File "C:\Anaconda3\lib\multiprocessing\reduction.py", line 50, in dumps
cls(buf, protocol).dump(obj)
File "C:\Anaconda3\lib\multiprocessing\queues.py", line 58, in __getstate__
context.assert_spawning(self)
File "C:\Anaconda3\lib\multiprocessing\context.py", line 347, in assert_spawning
' through inheritance' % type(obj).__name__
RuntimeError: Queue objects should only be shared between processes through inheritance
Process Process-2:
Traceback (most recent call last):
File "C:\Anaconda3\lib\multiprocessing\process.py", line 254, in _bootstrap
self.run()
File "C:\Anaconda3\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "C:\Anaconda3\saved_programs\a_open_file4.py", line 16, in delay_terminate_process
process.terminate()
File "C:\Anaconda3\lib\multiprocessing\process.py", line 113, in terminate
self._popen.terminate()
AttributeError: 'NoneType' object has no attribute 'terminate'
I really don't understand the multiprocessing module well, and although I have read the official docs I am unsure why this error occurred or why the function call appears to have run three times. Any help on how to resolve the error, or to achieve an optional user input in a cleaner manner, would be much appreciated by a noob programmer. Thanks!