I've made a class in Python 3.x that acts as a server. One method manages sending and receiving data via UDP/IP using the socket module (the data is stored in self.cmd and self.msr, respectively). I want to be able to modify the self.msr and self.cmd variables from within the Python interpreter while the server is running. For example:
>>> from myserver import MyServer
>>> s = MyServer()
>>> s.background_recv_send() # runs in the background, constantly calling s.recv_msr(), s.send_cmd()
>>> process_data(s.msr) # I use the latest received data
>>> s.cmd[0] = 5 # this will be sent automatically
>>> s.msr # I can see what the newest data is
So far, s.background_recv_send() does not exist. I need to manually call s.recv_msr() each time I want to update the value of s.msr (s.recv_msr uses a blocking socket), and then call s.send_cmd() to send s.cmd.
In this particular case, which module makes more sense: multiprocessing or threading?
Any hints on how I could best solve this? I have no experience with either processes or threads (I've just read a lot, but I am still unsure which way to go).
In this case, threading makes the most sense. In short, multiprocessing is for running CPU-bound work on different processor cores, while threading is well suited to doing I/O-bound things like this in the background.
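For example, background_recv_send() could just start a daemon thread. A minimal sketch; the running flag and the loop body are assumptions, not your actual code:

import threading

class MyServer:
    # ... socket setup, recv_msr(), send_cmd() as in the question ...

    def background_recv_send(self):
        # Start a daemon thread that keeps self.msr fresh and sends self.cmd.
        def loop():
            while self.running:
                self.recv_msr()   # blocking receive; updates self.msr
                self.send_cmd()   # send the current self.cmd
        self.running = True
        threading.Thread(target=loop, daemon=True).start()

Since recv_msr() blocks, the loop paces itself on incoming data. If you also poke s.cmd from the interpreter while the thread is sending it, consider guarding those accesses with a lock.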
I have a Python 3.7 program that gets run by another program every 5 to 7 seconds and then closes. Before it closes, I want it to pass a single string variable to a second Python program that runs in a continuous loop, keeps track of that information, and passes it back to the first program when it starts again.
I realize this could be handled by writing the information to a file and then reading it; however, I am running the system off flash memory, and that would be an excessive amount of writing and rewriting considering how often the information changes.
Nothing yet, as there is a power outage and the system is down. Besides, I'm new to Python and I'm not sure of the best way to handle this.
Could it be done with something like this?
Memory.py
#!/usr/bin/env python
import time

Tracking = "Home"

def Memory():
    return Tracking

while 0 == 0:
    from vortex import Memory
    Memory = Tracking()
    print(Memory)
    time.sleep(1)
vortex.py
#!/usr/bin/env python
from Memory import Tracking

Tracking = Memory()

def Memory():
    return Tracking
The Python program I have working so far controls a second system, but the controls are limited to up, down, right, left, back, and select. The options on the second system's menu are Home, FrontCamera, FrontDoorCamera, BackdoorCamera, BackyardCamera, and Weather. I want to keep track of which one it's on (the first system).
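Note that importing between the two scripts, as sketched above, won't share live state: each program gets its own copy of the module's variables at import time. One way to keep everything in memory and off the flash is a local socket via multiprocessing.connection; a minimal sketch, where the file names, port, authkey, and the "GET" convention are all illustrative assumptions:

tracker.py
#!/usr/bin/env python
# Long-running program: holds the current value in memory and serves it.
from multiprocessing.connection import Listener

tracking = "Home"
listener = Listener(("localhost", 6000), authkey=b"tracking")
while True:
    with listener.accept() as conn:
        msg = conn.recv()          # either "GET" or a new value to remember
        if msg == "GET":
            conn.send(tracking)
        else:
            tracking = msg

menu.py
#!/usr/bin/env python
# Short-lived program, run every 5 to 7 seconds: read the value, then update it.
from multiprocessing.connection import Client

with Client(("localhost", 6000), authkey=b"tracking") as conn:
    conn.send("GET")
    tracking = conn.recv()         # e.g. "Home"
# ... navigate the menu, decide the new state ...
with Client(("localhost", 6000), authkey=b"tracking") as conn:
    conn.send("FrontCamera")       # remember the new state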
I'm writing a game in Rust where each player can submit Python scripts to the server in order to automate various tasks in the game. I plan on using pyo3 to run the Python from Rust.
However, I can see an issue arising if a player submits a script like this:
def on_event(e):
    while True:
        pass
Now, when the server calls the function (using something like PyAny::call1()), the thread will hang when it reaches the infinite loop.
My first thought was to have pyo3 execute the Python one statement at a time, so the server could bail out once a script has been running for over a certain threshold, but I don't think pyo3 supports this.
My next idea was to give each player their own thread to run their own scripts on; that way, if one of their scripts got stuck, it would only affect their gameplay. However, I still have the issue of not being able to kill a thread when it gets stuck in an infinite loop: if a lot of players submitted scripts that just looped, lots of threads would start eating CPU time.
All I need is a way to execute Python scripts such that if one of them does loop, it does not affect the server's performance at all.
Thanks :)
One solution is to restrict the time that you give each user script to run.
You can do it via PyThreadState_SetAsyncExc; see here for some code. It uses C-level calls into the interpreter, which you can probably access from Rust (with some PyO3 FFI magic).
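For reference, the commonly cited pure-Python version of that trick looks roughly like this (raise_in_thread is my own helper name; the target thread must be running Python bytecode for the exception to be delivered):

import ctypes
import threading
import time

def raise_in_thread(thread, exc_type):
    # Ask the interpreter to raise exc_type asynchronously in the given thread.
    tid = ctypes.c_long(thread.ident)
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(exc_type))
    if res > 1:
        # More than one thread state was affected: undo the request.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, None)
        raise RuntimeError("PyThreadState_SetAsyncExc failed")

def user_script():
    try:
        while True:        # the problematic infinite loop
            pass
    except TimeoutError:
        print("script interrupted")

t = threading.Thread(target=user_script)
t.start()
time.sleep(1.0)            # allow one second of runtime
raise_in_thread(t, TimeoutError)
t.join()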
Another way would be to do it at the OS level: spawn a process for each user script, and kill it when it runs for too long. This might be more secure if you also limit what the process can access (with some OS calls), but it requires some boilerplate for communication between the host and the script process.
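A minimal sketch of the process-per-script idea in Python terms (run_user_script and the one-second budget are illustrative assumptions; a real server would also sandbox the child and marshal results back):

import multiprocessing

def run_user_script(source):
    # Hypothetical runner: executes the submitted script text.
    exec(source, {})

if __name__ == "__main__":
    looping = "def on_event(e):\n    while True:\n        pass\non_event(None)"
    p = multiprocessing.Process(target=run_user_script, args=(looping,))
    p.start()
    p.join(timeout=1.0)     # the script's time budget
    if p.is_alive():
        p.terminate()       # kill the stuck script; the server keeps running
        p.join()
        print("user script timed out and was killed")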
I am designing a program in Python which:
1. reads data via USB from an Arduino into an SQLite table at two-second intervals (128 kB per readout),
2. processes the incoming data and stores the results in another table,
3. finally queries the data from that table, shows it in a GUI created with tkinter, and sends the same data over the network to a server.
The question is: for which parts should I use multiprocessing or threading? Do I even need them? If I run the first part from a separate Python file in the background, does it necessarily use a different CPU core?
EDIT:
I found out about pickling; now the question is:
is it a good idea to pickle a 1 kB string every 3 seconds (in a RAM drive, of course) and unpickle it in another script?
I have already tested this with two scripts and it works, but I am not sure whether this solution can be relied on for long-term running.
It looks promising, especially since I don't see myself getting stuck in the multithreading or multiprocessing modules, and it seems the OS will assign the necessary cores and threads.
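A minimal sketch of that handoff, assuming a RAM-drive path like /mnt/ramdrive and a hypothetical read_from_arduino() stub; the writer dumps to a temporary file and renames it into place, so the reader never sees a half-written pickle:

writer.py
#!/usr/bin/env python
import os
import pickle
import tempfile
import time

PATH = "/mnt/ramdrive/latest.pkl"    # assumed RAM-drive location

def read_from_arduino():
    return "sensor reading"          # placeholder for the real USB acquisition

while True:
    data = read_from_arduino()
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(PATH))
    with os.fdopen(fd, "wb") as f:
        pickle.dump(data, f)
    os.replace(tmp, PATH)            # atomic rename: readers see old or new, never partial
    time.sleep(3)

reader.py
#!/usr/bin/env python
import pickle
import time

while True:
    with open("/mnt/ramdrive/latest.pkl", "rb") as f:
        data = pickle.load(f)
    # ... process data, update the GUI, send to the server ...
    time.sleep(3)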
Here I have created a producer-consumer program: the parent process (producer) creates many child processes (consumers), then the parent process reads a file and passes the data to the child processes.
However, there is a performance problem: passing messages between processes costs too much time (I think).
For example, with 200 MB of original data, reading and preprocessing in the parent process takes less than 8 seconds, then just passing the data to the child processes via multiprocessing.Pipe takes another 8 seconds, while the child processes finish the remaining work in only another 3 to 4 seconds.
So a complete workflow takes less than 18 seconds, with more than 40% of that time spent on communication between processes; that is much more than I had expected. I tried multiprocessing.Queue and Manager, and they are worse.
I am working with Windows 7 / Python 3.4.
I have googled for several days, and POSH may be a good solution, but it can't be built with Python 3.4.
So I have three questions:
1. Is there any way to share Python objects directly between processes in Python 3.4, as POSH does?
or
2. Is it possible to pass the "pointer" of an object to a child process, so that the child process can recover the "pointer" back into a Python object?
or
3. multiprocessing.Array may be a valid solution, but if I want to share a complex data structure, such as a list, how does it work? Should I make a new class based on it that provides list-like interfaces?
Edit 1:
I tried the third way, but it performs even worse.
I defined these values:
import multiprocessing

p_pos = multiprocessing.Value('i')               # producer write position
c_pos = multiprocessing.Value('i')               # consumer read position
databuff = multiprocessing.Array('c', buff_len)  # shared buffer (buff_len defined elsewhere)
and two functions:
send_data(msg)
get_data()
In the send_data function (parent process), it copies msg into databuff and sends the start and end positions (two integers) to the child process via a pipe.
Then, in the get_data function (child process), it receives the two positions and copies msg back out of databuff.
In the end, it costs twice as much as just using a pipe. #_#
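For clarity, a simplified sketch of the scheme described above (synchronization, buffer wraparound, and passing the pipe ends to the child at Process creation are all omitted):

import multiprocessing

buff_len = 1 << 20                                  # assumed buffer size
databuff = multiprocessing.Array('c', buff_len)     # shared byte buffer
parent_conn, child_conn = multiprocessing.Pipe()

def send_data(msg, pos=0):
    # parent: copy msg into the shared buffer, then send only its bounds
    start, end = pos, pos + len(msg)
    databuff[start:end] = msg
    parent_conn.send((start, end))

def get_data():
    # child: receive the bounds, then copy the bytes back out of the buffer
    start, end = child_conn.recv()
    return databuff[start:end]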
Edit 2:
Yes, I tried Cython, and the result looks good.
I just changed my Python script's suffix to .pyx and compiled it, and the program sped up by 15%.
Unsurprisingly, I met the "Unable to find vcvarsall.bat" and "The system cannot find the file specified" errors; I spent a whole day solving the first one, only to be blocked by the second.
Finally, I found Cyther, and all my troubles were gone ^_^
I was in your place five months ago. I looked around a few times, but my conclusion is that multiprocessing with Python has exactly the problem you describe:
Pipes and Queues are good, but not for big objects, in my experience.
Manager() proxy objects are slow, except for arrays, and even those are limited. If you want to share a complex data structure, use a Namespace as is done here (see the sketch after this list): multiprocessing in python - sharing large object (e.g. pandas dataframe) between multiple processes
Manager() also has the shared list you are looking for: https://docs.python.org/3.6/library/multiprocessing.html
There are no pointers or real memory management in Python, so you can't share selected memory cells.
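A minimal sketch of the Manager()/Namespace approach (note that a Namespace attribute must be reassigned, not mutated in place, for the change to reach other processes):

import multiprocessing

def worker(ns, lock):
    with lock:
        ns.data = ns.data + [len(ns.data)]   # reassign; in-place mutation is not propagated

if __name__ == "__main__":
    mgr = multiprocessing.Manager()
    ns = mgr.Namespace()
    ns.data = [0, 1, 2]
    lock = mgr.Lock()
    p = multiprocessing.Process(target=worker, args=(ns, lock))
    p.start()
    p.join()
    print(ns.data)   # [0, 1, 2, 3]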
I solved this kind of problem by learning C++, but it's probably not what you want to read...
To pass data (especially big numpy arrays) to a child process, I think mpi4py can be very efficient, since it can work directly on buffer-like objects.
An example of using mpi4py to spawn processes and communicate (also using trio, but that is another story) can be found here.
I have written a nice parallel job processor that accepts jobs (functions, their arguments, timeout information, etc.) and submits them to a Python multiprocessing pool. I can provide the full (long) code if requested, but the key step (as I see it) is the asynchronous application to the pool:
job.resultGetter = self.pool.apply_async(
    func=job.workFunction,
    kwds=job.workFunctionKeywordArguments
)
I am trying to use this parallel job processor with a large body of legacy code and, perhaps naturally, have run into pickling problems:
PicklingError: Can't pickle <type 'instancemethod'>: attribute lookup __builtin__.instancemethod failed
This type of problem is observable when I try to submit a problematic object as an argument for a work function. The real problem is that this is legacy code and I am advised that I can make only very minor changes to it. So... is there some clever trick or simple modification I can make somewhere that could allow my parallel job processor code to cope with these traditionally unpicklable objects? I have total control over the parallel job processor code, so I am open to, say, wrapping every submitted function in another function. For the legacy code, I should be able to add the occasional small method to objects, but that's about it. Is there some clever approach to this type of problem?
Use dill and pathos.multiprocessing instead of pickle and multiprocessing.
See here:
What can multiprocessing and dill do together?
http://matthewrocklin.com/blog/work/2013/12/05/Parallelism-and-Serialization/
How to pickle functions/classes defined in __main__ (python)
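For reference, a minimal sketch of the swap, assuming pathos is installed (pathos serializes with dill, which can handle the bound instance methods that the standard pickle rejects):

from pathos.multiprocessing import ProcessingPool

class LegacyObject:
    # stands in for an unmodifiable legacy class with instance methods
    def __init__(self, base):
        self.base = base
    def work(self, x):
        return self.base + x

if __name__ == "__main__":
    obj = LegacyObject(10)
    pool = ProcessingPool(nodes=4)
    print(pool.map(obj.work, [1, 2, 3]))   # [11, 12, 13]; dill pickles the bound method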