I have a discord.py bot that runs on both Windows and Linux. It's a fairly normal bot that displays messages, has commands, etc., but I also added a background task that reads text files and later sends their contents as Discord messages.
Apparently I messed something up with asyncio, because on Linux some users end up with lots of empty subprocesses that just sit there doing nothing. The bot itself works fine for them, though.
I asked about this in the Discord API server and was told that discord.py doesn't cause this, so the issue must be somewhere in my code. That's quite possible, since I'm not a Python pro and getting the async code to work cross-platform was hard.
What causes this, and how do I fix it?
import os, sys, math, re, time, threading, asyncio, signal, json, discord, psutil
from pathlib import Path
from mcrcon import MCRcon
from discord.ext import commands

# ..Lots of code..

async def update():
    while runApp:
        try:
            # Collect data for the bot to display
            await attemptReading()
            await asyncio.sleep(settings['update-interval'])
        except asyncio.CancelledError:  # Never occurs for some reason
            break
    print("'update' task and event loop stopped")
    asyncio.get_event_loop().stop()

def ask_exit():
    print("Stop tasks..")
    for task in asyncio.Task.all_tasks():
        task.cancel()

loop = asyncio.get_event_loop()
updateTask = loop.create_task(update())

# Run bot
print("Ctrl+C to stop the bot")
try:
    loop.run_until_complete(bot.start(settings['token']))
except KeyboardInterrupt:
    print("Shutting down..")
    loop.run_until_complete(bot.logout())
    botIsReady = False
    runApp = False
    print("Cancel tasks")
    # Cancel all tasks
    try:
        for sig in (signal.SIGINT, signal.SIGTERM):
            loop.add_signal_handler(sig, ask_exit)
    except NotImplementedError:  # Windows has no loop signal handlers
        pass
    loop.run_forever()
finally:
    print("Shut down")
    loop.close()
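Since the code already imports psutil, a small diagnostic sketch of my own (not from the original post) can show whether those idle processes are unreaped zombie children of the bot:

import os
import psutil

def report_children():
    # List direct and indirect children of this process. On Linux, a
    # status of 'zombie' means the child exited but was never wait()ed on.
    me = psutil.Process(os.getpid())
    for child in me.children(recursive=True):
        print(child.pid, child.status())

If these report as zombies, something in the bot (for example, whatever attemptReading runs) is spawning processes without waiting on them; for an asyncio subprocess, awaiting proc.wait() lets the event loop reap the child.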
I have code that is architecturally close to what's posted below (unfortunately I can't post the full version because it's proprietary). I have a self-updating executable and I'm trying to test this feature. We assume that the full path to the file will be in A.some_path after the input() call. My problem is that the assertion fails because, on the second call, os.stat still returns the previous file stats (I suppose it assumes nothing could have changed, so re-reading is unnecessary). I have launched the self-update manually and it works completely fine: the file really is removed and recreated, and its stats change. Is there a guaranteed way to force os.stat to re-read the stats of a file at the same path, or some alternative that makes this work (other than recreating the A object)?
from pathlib import Path
import unittest
import os

class A:
    some_path = Path()

    def __init__(self, _some_path):
        self.some_path = Path(_some_path)

    def get_path(self):
        return self.some_path

class TestKit(unittest.TestCase):
    def setUp(self):
        pass

    def check_body(self, a):
        some_path = a.get_path()
        modification_time = os.stat(some_path).st_mtime
        # Launching self-updating executable
        self.assertTrue(modification_time < os.stat(some_path).st_mtime)

    def check(self):
        a = A(input('Enter the file path\n'))
        self.check_body(a)

def Tests():
    suite = unittest.TestSuite()
    suite.addTest(TestKit('check'))
    return suite

def main():
    tests_suite = Tests()
    unittest.TextTestRunner().run(tests_suite)

if __name__ == "__main__":
    main()
I have found the origin of the problem: I launched the self-update via os.system, which waits until the process is done. But first, during the self-update we launch several detached processes and really need to wait until all of them have finished; and second, even the signal that the process has exited doesn't mean the OS has completely released the file, so at the assertTrue we are not yet done with all the routines. For my task I simply used sleep, but a proper solution should inspect the running processes on the system and wait for them to finish, or at least make several attempts with a delay between them.
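A minimal sketch of that retry-with-delay approach (the function name, timeout, and poll interval are my own choices, not from the original post):

import os
import time

def wait_for_mtime_change(path, old_mtime, timeout=10.0, interval=0.2):
    # Poll os.stat until the file's mtime moves past the old value,
    # giving the detached updater processes time to release the file.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            new_mtime = os.stat(path).st_mtime
        except FileNotFoundError:
            # The updater may have removed the file and not yet recreated it.
            new_mtime = old_mtime
        if new_mtime > old_mtime:
            return new_mtime
        time.sleep(interval)
    raise TimeoutError(f"{path} was not updated within {timeout} seconds")

In check_body this would replace the bare comparison: record the old mtime, launch the updater, then assert that wait_for_mtime_change returns without timing out.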
I'm very curious about this and need some advice on how it can happen. Yesterday I tried to implement multiprocessing in a Python script running in Spyder on a Windows PC. Here is the code I tried first.
import multiprocessing
import time

start = time.perf_counter()

def do_something():
    print('Sleeping 1 second...')
    time.sleep(1)
    print('Done sleeping')

p1 = multiprocessing.Process(target=do_something)
p2 = multiprocessing.Process(target=do_something)
p1.start()
p2.start()
p1.join()
p2.join()

finish = time.perf_counter()
print(f'Finished in {round(finish-start,2)} second(s)')
It returned an error:
AttributeError: Can't get attribute 'do_something' on <module '__main__' (built-in)>
Then I searched for a way out of this problem (as did my boss) and found this suggestion:
Python's multiprocessing doesn't work in Spyder IDE
So I followed it, installed PyCharm, and tried to run the code there. It seemed to work: I didn't get the AttributeError. However, I got this new one instead:
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
I googled again and finally found this:
RuntimeError on windows trying python multiprocessing
What I had to do was add the line
if __name__ == '__main__':
before starting the processes.
import multiprocessing
import time

start = time.perf_counter()

def do_something():
    print('Sleeping 1 second...')
    time.sleep(1)
    print('Done sleeping')

if __name__ == '__main__':
    p1 = multiprocessing.Process(target=do_something)
    p2 = multiprocessing.Process(target=do_something)
    p1.start()
    p2.start()
    p1.join()
    p2.join()

    finish = time.perf_counter()
    print(f'Finished in {round(finish-start,2)} second(s)')
And it works now. Moreover, it no longer works only in PyCharm: I can now run this code in Spyder too. That is what makes me so curious: how come it works in Spyder as well? The behaviour is consistent, too, because I ran this code on my other PC, a Windows Server 2016 machine with Spyder, and saw the same thing.
Can anyone explain what is happening here and why it works now?
Thank you.
There's a lot to unpack here, so I'll just give a brief overview. There's also some missing information, like how you have Spyder/PyCharm configured and what operating system you use, so I'll have to make some assumptions...
Based on the error messages, you are probably using macOS or Windows, which means the default way Python creates a child process is called spawn. This starts a completely new process from the Python executable ("python.exe" on Windows, for example) and then sends the new process a message telling it what function to execute (target), and optionally what arguments to call that function with. However, the new process has to import the main file to get access to that function, so if you are running the Python interpreter in interactive mode, there is no "main" file to import, and you get the first error message: AttributeError.
The second error is also related to the importing of the "main" file. When you import a file, it basically just runs the file like any other Python script. If you were to create a new child process during import, that child would then also create a new child when it imports the same file. You would end up recursively creating infinite child processes until the computer crashed, so Python disallows creating additional child processes during the import phase of a child process, hence the RuntimeError.
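A small experiment of my own (not from the question or the answer) makes the re-import visible: under the spawn start method, the module-level print runs once in the parent and once in the child, while the guarded block runs only in the parent.

import multiprocessing as mp
import os

# Runs in the parent AND in every spawned child, because each child
# re-imports this module during its bootstrapping phase.
print(f"module imported in process {os.getpid()}")

def work():
    print(f"child {os.getpid()} doing work")

if __name__ == '__main__':
    # Runs only in the parent. Without this guard, each child would try
    # to start its own child while importing the module, which is
    # exactly what the RuntimeError prevents.
    mp.set_start_method('spawn')  # the default on Windows and macOS
    p = mp.Process(target=work)
    p.start()
    p.join()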
I want to launch a Python file when I send an 'on' request and kill that Python process once I send the 'off' request. I am able to send the on and off requests, but I am not able to run the other Python file, or kill it, from the program I have written.
I could make a subprocess call, but I think there should be a way to call other Python scripts inside a Python script, and also a way to kill those scripts once their purpose is fulfilled.
I suggest using a thread.
Write all the code in your Python script in a function doit (except the import statements), and then import the script.
Content of thescript.py:
import time

def doit(athread):
    while not athread.stopped():
        print("Hello World")
        time.sleep(1)
Your main program should look like this:
import threading
import time
import thescript

class FuncThread(threading.Thread):
    def __init__(self, target):
        self.target = target
        super(FuncThread, self).__init__()
        self._stop_event = threading.Event()

    def stop(self):
        self._stop_event.set()

    def stopped(self):
        return self._stop_event.is_set()

    def run(self):
        self.target(self)

t1 = FuncThread(thescript.doit)
t1.start()
time.sleep(5)
t1.stop()
t1.join()
You can stop the thread at any time; here I just waited 5 seconds and then called the stop() method.
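Since the question also mentions subprocess: if the script really must run as a separate process that can be killed outright, a minimal sketch with subprocess.Popen (the script name here is hypothetical):

import subprocess
import sys

# Start the script as a child process using the same interpreter.
proc = subprocess.Popen([sys.executable, 'thescript.py'])

# ... later, when the 'off' request arrives ...
proc.terminate()  # ask the process to exit (SIGTERM on Unix)
proc.wait()       # reap it so it doesn't linger as a zombie

Unlike the thread approach, this stops the script even if it never checks a stop flag.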
I am a beginner in Python, ZMQ, networking, and even coding in general, so please pardon my mistakes.
I am trying to send an instruction from my desktop to my laptop to open notepad.exe, like this:
MAIN SERVER
import zmq
import subprocess

try:
    raw_input
except NameError:
    raw_input = input  # for Python 3

# This will handle all the sockets we will open on the mainServer
context = zmq.Context()

# Socket 1 - used for instructions
instructions = context.socket(zmq.PUSH)
instructions.bind("tcp://*:5555")

# Socket 2 - used to signal the end of finished instructions
#doneWork = context.socket(zmq.PULL)
#doneWork.bind("tcp://*:5556")

# Now we will press Enter when the workers are ready
print("Press Enter when you want to send the instructions. Make sure test devices are ready")
_ = raw_input()
print("Sending tasks to test device. . .")

# Raw string so the backslash survives ('\n' would otherwise become a newline)
instruction_One = r"subprocess.call(['C:\notepad.exe'])"
instructions.send_string('%s' % instruction_One)
and
CLIENT
import zmq
import sys

context = zmq.Context()
instructions = context.socket(zmq.PULL)
instructions.connect("tcp://192.168.0.12:5555")

while True:
    instruction_One = instructions.recv()
    string_of_instruction = instruction_One.decode("utf-8")
    sys.stdout.write(string_of_instruction)
    sys.stdout.flush()
I am sending the instruction as a string, which is encoded to bytes as it goes through the socket. But on the client side (the laptop), whatever I fetch cannot be executed through the command line. What is the stupid mistake I am making?
I found out the fix.
Instead of writing the received string to sys.stdout, I execute it with subprocess:
subprocess.call(command, shell=True)
Thanks
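For context, a sketch of the fixed client loop under that approach (note that executing arbitrary received strings is only safe on a trusted test network, and the server would then send a plain shell command such as notepad.exe rather than a line of Python source):

import zmq
import subprocess

context = zmq.Context()
instructions = context.socket(zmq.PULL)
instructions.connect("tcp://192.168.0.12:5555")

while True:
    command = instructions.recv().decode("utf-8")
    # Run the received instruction as a shell command instead of printing it.
    subprocess.call(command, shell=True)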
import multiprocessing as mp
import time

"""
1. Send items via a pipe.
2. Receive them on the other end with a generator.
3. If the pipe is closed on the sending side, retrieve
   all items left and then quit.
"""

def foo(conn):
    for i in range(7):
        time.sleep(.3)
        conn.send(i)
    conn.close()

def bar(conn):
    while True:
        try:
            yield conn.recv()
        except EOFError:
            break

if __name__ == '__main__':
    """Choose which start method is used"""
    recv_conn, send_conn = mp.Pipe(False)
    p = mp.Process(target=foo, args=(send_conn,))  # f can only send msg.
    p.start()
    # send_conn.close()
    for i in bar(recv_conn):
        print(i)
I'm using Python 3.4.1 on Ubuntu 14.04 and this code is not working. At the end of the program there is no EOFError, which should terminate the loop, even though the Pipe has been closed. Closing the Pipe inside the sending function does not seem to close it for the reader. Why is this the case?
Uncomment your send_conn.close() line. You should be closing pipe ends in processes that don't need them. The issue is that once you launch the subprocess, the kernel is tracking two open references to the send connection of the pipe: one in the parent process, one in your subprocess.
The send connection object is only being closed in your subprocess, leaving it open in the parent process, so your conn.recv() call won't raise EOFError. The pipe is still open.
This answer may be useful to you as well.
I verified that this code works in Python 2.7.6 if you uncomment the send_conn.close() call.
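To make the reference-counting point concrete, here is the question's main block with that one line uncommented (otherwise unchanged):

if __name__ == '__main__':
    recv_conn, send_conn = mp.Pipe(False)
    p = mp.Process(target=foo, args=(send_conn,))
    p.start()
    # Drop the parent's reference to the send end. The child still holds
    # one; once the child's foo() closes it, no open writers remain and
    # conn.recv() in bar() raises EOFError instead of blocking forever.
    send_conn.close()
    for i in bar(recv_conn):
        print(i)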