Python Multiprocessing Pipe hang - python-3.x

I'm trying to build a program that sends a string to the processes tangki and tangki2, each of which then sends a small array of data to the process outdata, but it doesn't seem to work correctly. When I disable the gate to outdata, everything works flawlessly.
Here is the example code:
import os
from multiprocessing import Process, Pipe
from time import sleep
import cv2

def outdata(input1, input2):
    while True:
        room = input1.recv()
        room2 = input2.recv()

def tangki(keran1, selang1):  ##============ tangki1
    a = None
    x, y, degree, tinggi = 0, 0, 0, 0
    dout = []
    while True:
        frame = keran1.recv()
        dout.append([x, y, degree, tinggi])
        selang1.send(dout)
        print("received from: {}".format(frame))

def tangki2(keran3, selang2):  ##================= tangki2
    x, y, degree, tinggi = 0, 0, 0, 0
    dout2 = []
    while True:
        frame = keran3.recv()
        dout2.append([x, y, degree, tinggi])
        selang2.send(dout2)
        print("received from: {}".format(frame))

def pompa(gate1, gate2):
    count = 0
    while True:
        count += 1
        gate1.send("gate 1, val{}".format(count))
        gate2.send("gate 2, val{}".format(count))

if __name__ == '__main__':
    pipa1, pipa2 = Pipe()
    pipa3, pipa4 = Pipe()
    tx1, rx1 = Pipe()
    tx2, rx2 = Pipe()
    ptangki = Process(target=tangki, args=(pipa2, tx1))
    ptangki2 = Process(target=tangki2, args=(pipa4, tx2))
    ppompa = Process(target=pompa, args=(pipa1, pipa3))
    keran = Process(target=outdata, args=(rx1, rx2))
    ptangki.start()
    ptangki2.start()
    ppompa.start()
    keran.start()
    ptangki.join()
    ptangki2.join()
    ppompa.join()
    keran.join()
At exactly count 108 the process hangs and stops responding. When I top it, the python3 process is gone; it seems that selang1 and selang2 are causing the problem. From searching Google it appears to be a pipe deadlock. So the question is how to prevent this from happening, given that I already drain all the data in the pipes by repeatedly reading from both input1 and input2.
Edit: it seems that the only problem is the communication from tangki and tangki2 to outdata.

It was indeed the pipe's buffer size limit: dout and dout2 gain one entry per loop iteration, and the whole ever-growing list is sent every time, so the sends eventually overrun what the pipe can buffer. Resetting the lists to a minimal size, e.g. assigning dout=[x,y,degree,tinggi] and dout2=[x,y,degree,tinggi] (or dout=[0,0,0,0] and dout2=[0,0,0,0]) right after selang1.send(dout) and selang2.send(dout2), keeps the payload small and removes the hang.
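Applied to tangki above, the reset described here looks like this (a minimal sketch of my reading of the fix; tangki2 is changed the same way):

def tangki(keran1, selang1):
    x, y, degree, tinggi = 0, 0, 0, 0
    dout = []
    while True:
        frame = keran1.recv()
        dout.append([x, y, degree, tinggi])
        selang1.send(dout)
        dout = [0, 0, 0, 0]  # reset right after the send so the list never grows without bound
        print("received from: {}".format(frame))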

Related

queue.get(block=True) while start_background_task (flask-socketio) is running and doing queue.put()

I have an issue related to a queue used with a background task that never ends (it runs continuously to grab real-time data).
What I want to achieve:
Start the server via flask-socketio (eventlet),
monkey_patch(),
using start_background_task, run a function from another file that grabs data in real time,
while this background task is running (indefinitely), store incoming data in a queue via queue.put(),
still while this task is running, watch for new data in the queue from the main program and process it, meaning socketio.emit() here.
What works: my program works well if, in the background task file, the while loop ends (while count < 100: for instance). In this case, I can access the queue from the main file and emit data.
What doesn't work: if this while loop is while True:, the program blocks somewhere; I can't access the queue from the main program, as it seems to wait until the background task returns or stops.
So I guess I'm missing something here... if you can help me with that, or give me some clues, that would be awesome.
Here are some relevant parts of the code:
main.py
from threading import Thread
from threading import Lock
from queue import Queue
from get_raw_program import get_raw_data
from flask import Flask, send_from_directory, Response, jsonify, request, abort
from flask_socketio import SocketIO
import eventlet

eventlet.patcher.monkey_patch(select=True, socket=True)

app = Flask(__name__, static_folder=static_folder, static_url_path='')
app.config['SECRET_KEY'] = 'secret_key'
socketio = SocketIO(app, binary=True, async_mode="eventlet", logger=True, engineio_logger=True)
thread = None
thread_lock = Lock()
data_queue = Queue()

[...]

@socketio.on('WebSocket_On')
def grab_raw_data(test):
    global thread
    with thread_lock:
        if thread is None:
            socketio.emit('raw_data', {'msg': 'Thread is None:'})
            socketio.emit('raw_data', {'msg': 'Starting Thread... '})
            thread = socketio.start_background_task(target=get_raw_data(data_queue, test['mode']))
    while True:
        if not data_queue.empty():
            data = data_queue.get(block=True, timeout=0.05)
            socketio.emit('raw_data', {'msg': data})
        socketio.sleep(0.0001)
get_raw_program.py (which works, can access queue from main.py)
import time

def get_raw_data(data_queue, test):
    count = 0
    while count < 100:
        data_queue.put(b'\xe5\xce\x04\x00\xfe\xd2\x04\x00')
        time.sleep(0.001)
        count += 1
get_raw_program.py (which DOESN'T work, can't access queue from main.py)
import time

def get_raw_data(data_queue, test):
    count = 0
    while True:
        data_queue.put(b'\xe5\xce\x04\x00\xfe\xd2\x04\x00')
        time.sleep(0.001)
        count += 1
I tried with a regular Thread instead of start_background_task, and it works well. Thanks again for your help, greatly appreciated :-)
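One thing worth noting (my observation, not from the original thread): in main.py the background function is called at the point where only the callable should be passed, so with a never-ending loop the handler blocks on that line before the background task ever starts. flask-socketio's start_background_task takes the function and its arguments separately; a minimal sketch of that call:

# Pass the function object and its arguments separately;
# flask-socketio then runs get_raw_data in a background task itself.
thread = socketio.start_background_task(get_raw_data, data_queue, test['mode'])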

Queue and threads from file: customizing the number of worker threads

I am planning to write a Python script that reads URLs from a file and checks their status codes using requests. To speed up the process, my intention is to use multiple threads at the same time.
import threading
import queue

q = queue.Queue()

def CheckUrl():
    while True:
        project = q.get()
        # Do the URL checking here
        q.task_done()

threading.Thread(target=CheckUrl, daemon=True).start()

file = open("TextFile.txt", "r")
while True:
    next_line = file.readline()
    q.put(next_line)
    if not next_line:
        break
file.close()

print('project requests sent\n', end='')
q.join()
print('projects completed')
My problem: the code currently reads all the text at once, queueing as many items as there are lines in the text file, if I understand correctly. I would like to do something like: check 20 URLs at the same time, and as soon as one or more checks finish, move on to the next ones.
Is there something like
threading.Thread(target=CheckUrl, daemon=True, THREADSATSAMETIME=20).start()
It seems I have to stick with this one:
def threads_run():
    for i in range(20):  # create 20 threads
        threading.Thread(target=CheckUrl, daemon=True).start()

threads_run()
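For a fixed number of concurrent checks, concurrent.futures gives the same pattern with less plumbing. A minimal sketch, assuming the same TextFile.txt of URLs and an arbitrary 5-second timeout:

import concurrent.futures
import requests

def check_url(url):
    # Return (url, status code), or (url, None) if the request failed.
    try:
        return url, requests.head(url, timeout=5).status_code
    except requests.RequestException:
        return url, None

with open("TextFile.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

# At most 20 requests are in flight at any given time.
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    for url, status in pool.map(check_url, urls):
        print(url, status)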

Python3 ZMQ, Interrupt function and calling another on each new message received

Here is my problem: I have two programs communicating via ZeroMQ on an arbitrary TCP port.
When #1 receives a message from #2, it has to call some function.
If #1 receives a message before the current function ends, I'd like #1 to interrupt the current function and call the new one.
I tried to use threading.Event to interrupt the function.
I don't know if zmq is the right option for my needs, or if the socket types are fine.
To simplify, I show the simplest version possible; here is what I tried:
p1.py
import time  # needed for time.sleep below
import zmq
from threading import Event

port_p2 = "6655"
context = zmq.Context()
socket = context.socket(zmq.PAIR)
socket.connect("tcp://localhost:%s" % port_p2)
print("port 6655")

__exit1 = Event()
__exit2 = Event()

def action1():
    __exit1.clear()
    __exit2.set()
    while not __exit1.is_set():
        for i in range(1, 20):
            print(i)
            time.sleep(1)
        __exit1.set()

def action2():
    __exit2.clear()
    __exit1.set()
    while not __exit2.is_set():
        for i in range(1, 20):
            print(i * 100)
            time.sleep(1)
        __exit2.set()

if __name__ == "__main__":
    try:
        while True:
            try:
                string = socket.recv(flags=zmq.NOBLOCK)
                # message received, process it
                string = str(string, 'utf-8')
                if "Action1" in string:
                    action1()
                if "Action2" in string:
                    action2()
            except zmq.Again as e:
                # No messages waiting to be processed
                pass
            time.sleep(0.1)
    except (KeyboardInterrupt, SystemExit):
        print("exit")
and p2.py
import time
import random
import zmq  # missing in the original snippet, but used below

port_p1 = "6655"
context = zmq.Context()
socket_p1 = context.socket(zmq.PAIR)
socket_p1.bind("tcp://*:%s" % port_p1)
print("port 6655")

if __name__ == "__main__":
    while True:
        i = random.choice(range(1, 10))
        print(i)
        try:
            if random.choice([True, False]):
                print("Action 1")
                socket_p1.send(b'Action1')
            else:
                socket_p1.send(b'Action2')
                print("Action 2")
        except zmq.Again as e:
            pass
        time.sleep(i)
For my purposes I didn't want to / can't use system signals.
I'd appreciate any input, and don't hesitate to ask for clarification; I have to confess that I had trouble writing this down.
Thank you
Q : …like #1 to interrupt the current function…
Given that you have ruled out signals, #1 can only passively signal the function (whether over the present ZeroMQ infrastructure or not) to stop and return prematurely. The function therefore has to be modified to do this active re-checking itself, ideally in reasonably fine-grained steps: it regularly checks whether #1 has (passively) told it to return early, for whatever reason and by whatever means #1 chooses to do so.
The other option is to extend the already present ZeroMQ infrastructure (the Context() instance(s)) with a socket monitor and have the function .connect() directly to the socket-monitor resources, so that it learns autonomously about any new message arriving at #1 (i.e. without #1's initiative) and can decide to return prematurely in those cases where that is feasible under your application logic.
For the socket-monitor case, the API documentation has all the details needed for the implementation, which would otherwise go well beyond the scope of a Stack Overflow post.
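A minimal sketch of the first approach (my own illustration, with names not taken from the original post): the worker does its job in small slices and, between slices, peeks at the socket with zmq.NOBLOCK; if a new command has arrived it returns early and hands the message back to the dispatch loop:

import time
import zmq

context = zmq.Context()
socket = context.socket(zmq.PAIR)
socket.connect("tcp://localhost:6655")

def interruptible_action(label):
    # Work in small slices; between slices, peek at the socket and
    # return early (handing back the new message) if one arrived.
    for i in range(1, 20):
        try:
            return socket.recv(flags=zmq.NOBLOCK)  # new command: stop early
        except zmq.Again:
            pass  # nothing pending, keep working
        print(label, i)
        time.sleep(1)
    return None

pending = None
while True:
    if pending is None:
        try:
            pending = socket.recv(flags=zmq.NOBLOCK)
        except zmq.Again:
            time.sleep(0.1)
            continue
    msg = str(pending, 'utf-8')
    pending = interruptible_action(msg)  # returns the interrupting message, if any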

Thread-Safe Serial Connection in Telepot (Python)

I have a serial device (an Arduino) regularly outputting log data, which should be written to a log file. The device also takes spontaneous commands over serial. I send the commands to a Raspberry Pi over Telegram; they are handled and sent to the Arduino by Telepot, which runs in a separate thread.
How can I make sure that the two threads get along with each other?
I am a complete beginner in multithreading.
Here is a shortened version of my code:
import time
import datetime
import telepot
import os
import serial
from time import sleep

ser = None
bot = None

def log(data):
    with open('logfile', 'w') as f:
        f.write("Timestamp" + data)

# The handle function is called by the telepot thread
# whenever a message is received from Telegram.
def handle(msg):
    chat_id = msg['chat']['id']
    command = msg['text']
    print('Command Received: %s' % command)
    if command == '/start':
        bot.sendMessage(chat_id, 'welcome')
    elif command == 'close_door':
        # This serial write could happen while a ser.readline()
        # is executed, which would crash my program.
        ser.write("Close Door")
    elif command == 'LOG':
        # Here I should make sure that nothing is waiting from the
        # Arduino, so that the next two serial lines are the Arduino's
        # response to the "LOG" command, and that handle is the only
        # function talking to the serial port now.
        ser.write("LOG")
        response = ser.readline()
        response += "\0000000A" + ser.readline()
        # The Arduino's response is now saved as one string
        # and sent to the user.
        bot.sendMessage(chat_id, response)
    print("Command Processed.")

bot = telepot.Bot('BOT TOKEN')
bot.message_loop(handle)
ser = serial.Serial("Arduino Serial Port", 9600)
print('I am listening ...')

while True:
    # anything to make it not run at full speed (recommendations welcome);
    # the log updates are only once an hour.
    sleep(10)
    # Here I need to make sure it does not collide with the other thread.
    while ser.in_waiting > 0:
        data = ser.readline()
        log(data)
This code is not my actual code, but it should represent exactly what I'm trying to do.
My last resort would be to put the serial code in the thread's loop function, but this would require me to change the library, which would be ugly.
I looked up some things about queues in asyncio and locking functions, but I don't really understand how to apply them. Also, I don't use the async version of telepot.
After reading more on locking and threads, I found an answer with the help of the links provided in this question: Locking a method in Python?
It was often recommended to use queues; however, I don't know how.
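For completeness, the queue-based pattern that kept coming up in those links would look roughly like this (a sketch under my own assumptions, not taken from the linked answer): one worker thread owns the serial port exclusively, and every other thread submits commands through a Queue instead of touching the port directly:

import queue
import threading

serial_jobs = queue.Queue()

def serial_worker(ser):
    # The only code that ever touches the serial port.
    while True:
        command, reply_callback = serial_jobs.get()
        ser.write(command)
        if reply_callback is not None:
            reply_callback(ser.readline())
        serial_jobs.task_done()

# Started once the port is open:
# threading.Thread(target=serial_worker, args=(ser,), daemon=True).start()
# Any thread may then enqueue work, e.g. from the Telegram handler:
# serial_jobs.put((b"Close Door", None))
# serial_jobs.put((b"LOG", lambda line: bot.sendMessage(chat_id, line)))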
My solution (code may have errors, but the principle works)
import time
import random
import datetime
import telepot
import os
import serial
from time import sleep
# we need to import the Lock from threading
from threading import Lock

ser = None
bot = None

def log(data):
    with open('logfile', 'w') as f:
        f.write("Timestamp" + data)

# create a lock:
ser_lock = Lock()

# The handle function is called by the telepot thread
# whenever a message is received from Telegram.
def handle(msg):
    # let the handle function use the same lock:
    global ser_lock
    chat_id = msg['chat']['id']
    command = msg['text']
    print('Command Received: %s' % command)
    if command == '/start':
        bot.sendMessage(chat_id, 'welcome')
    elif command == 'close_door':
        # This serial write could happen while a ser.readline()
        # is executed, which would crash my program.
        with ser_lock:
            ser.write("Close Door")
    elif command == 'LOG':
        # The lock is only free when no other thread is using the port;
        # this thread will wait until it can acquire it.
        with ser_lock:
            # Should there be any old data waiting, just write it to the
            # file, so that the next two serial lines are the Arduino's
            # response to the "LOG" command.
            while ser.in_waiting > 0:
                data = ser.readline()
                log(data)
            # Now I can safely execute serial writes and reads.
            ser.write("LOG")
            response = ser.readline()
            response += "\0000000A" + ser.readline()
        # The Arduino's response is now saved as one string
        # and sent to the user.
        bot.sendMessage(chat_id, response)
    print("Command Processed.")

bot = telepot.Bot('BOT TOKEN')
bot.message_loop(handle)
ser = serial.Serial("Arduino Serial Port", 9600)
print('I am listening ...')

while True:
    # anything to make it not run at full speed (recommendations welcome);
    # the log updates are only once an hour.
    sleep(10)
    # Here the lock makes sure we don't collide with the other thread.
    with ser_lock:
        while ser.in_waiting > 0:
            data = ser.readline()
            log(data)

Asynchronously writing to console from stdin and other sources

I'm trying to write some kind of renderer for the command line that should be able to print data from stdin and from another data source, using asyncio and blessed, which is an improved version of python-blessings.
Here is what I have so far:
import asyncio
from blessed import Terminal

@asyncio.coroutine
def render(term):
    while True:
        received = yield
        if received:
            print(term.bold + received + term.normal)

async def ping(renderer):
    while True:
        renderer.send('ping')
        await asyncio.sleep(1)

async def input_reader(term, renderer):
    while True:
        with term.cbreak():
            val = term.inkey()
            if val.is_sequence:
                renderer.send("got sequence: {0}.".format((str(val), val.name, val.code)))
            elif val:
                renderer.send("got {0}.".format(val))

async def client():
    term = Terminal()
    renderer = render(term)
    render_task = asyncio.ensure_future(renderer)
    pinger = asyncio.ensure_future(ping(renderer))
    inputter = asyncio.ensure_future(input_reader(term, renderer))
    done, pending = await asyncio.wait(
        [pinger, inputter, renderer],
        return_when=asyncio.FIRST_COMPLETED,
    )
    for task in pending:
        task.cancel()

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(client())
    asyncio.get_event_loop().run_forever()
For learning and testing purposes there is just a dumb ping that sends 'ping' each second, and another routine that should grab key input and also send it to my renderer.
But with this code, ping only appears once in the command line, while input_reader works as expected. When I replace input_reader with a pong coroutine similar to ping, everything is fine.
This is how it looks when typing 'pong', even if it takes ten seconds to type it:
$ python async_term.py
ping
got p.
got o.
got n.
got g.
It seems that blessed is not built to work with asyncio: inkey() is a blocking method, so it blocks every other coroutine.
You can use a loop with kbhit() and await asyncio.sleep() to yield control to other coroutines, but this is not a clean asyncio solution.
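A minimal sketch of that polling workaround (my own illustration of the suggestion, relying on blessed's documented timeout arguments for kbhit() and inkey()):

async def input_reader(term, renderer):
    with term.cbreak():
        while True:
            # kbhit(timeout=0) returns immediately instead of blocking,
            # so the other coroutines keep running between polls.
            if term.kbhit(timeout=0):
                val = term.inkey(timeout=0)
                if val.is_sequence:
                    renderer.send("got sequence: {0}.".format((str(val), val.name, val.code)))
                elif val:
                    renderer.send("got {0}.".format(val))
            await asyncio.sleep(0.05)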
