Multithreaded gRPC server broadcasts messages to all clients - python-3.x

I need to implement bidirectional streaming in gRPC with Python. What I am implementing is a bit different from the example in gRPC's tutorial: in my implementation I need to keep sending responses to the client after the initial subscribe request, until the client unsubscribes. I have provided a sample implementation below. The main problem I face is that with one client the implementation works normally; however, the moment I run two clients in parallel, the responses are sent to the clients alternately. Is there any way to stop this behavior?
Server Code
import time
from concurrent import futures
from queue import SimpleQueue
from threading import Thread, Lock

import grpc

import route_guide_pb2_grpc

_ONE_DAY_IN_SECONDS = 60 * 60 * 24


class ChatServicer(route_guide_pb2_grpc.ChatServicer):
    """Provides methods that implement functionality of route guide server."""

    def __init__(self):
        self.db = None  # route_guide_resources.read_route_guide_database()
        self.prev_notes = []
        self._watch_response_queue = SimpleQueue()
        self._watch_response_queue2 = []

    def process_request_note(self, note):
        while True:
            time.sleep(1.0)
            self._watch_response_queue.put(note)

    def RouteChat(self, request_iterator, context):
        print("RouteChat")
        temp = None
        for new_note in request_iterator:
            temp = new_note
            for prev_note in self.prev_notes:
                if prev_note.location == new_note.location:
                    yield prev_note
            self.prev_notes.append(new_note)
        process_request_thread = Thread(
            target=self.process_request_note,
            args=(temp,),
        )
        process_request_thread.start()
        lock = Lock()
        while True:
            with lock:
                while not self._watch_response_queue.empty():
                    yield self._watch_response_queue.get(block=True, timeout=2.0)
                # while len(self._watch_response_queue2) > 0:
                #     yield self._watch_response_queue2.pop()
            time.sleep(1.0)


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    route_guide_pb2_grpc.add_ChatServicer_to_server(
        ChatServicer(), server)
    server.add_insecure_port('[::]:50051')
    server.start()
    print("server started")
    try:
        while True:
            time.sleep(_ONE_DAY_IN_SECONDS)
    except KeyboardInterrupt:
        server.stop(0)
Client Code
def generate_messages():
    messages = [
        make_route_note("First", 0, 0),
        make_route_note("Second", 0, 1),
        make_route_note("Third", 1, 0),
        make_route_note("Fourth", 0, 0),
        make_route_note("Fifth", 1, 0),
    ]
    for msg in messages:
        print("Sending %s at %s" % (msg.message, msg.location))
        yield msg


def guide_route_chat(stub):
    responses = stub.RouteChat(generate_messages())
    ctr = 0
    for response in responses:
        ctr += 1
        print(f"Received message{ctr} %s at %s" %
              (response.message, response.location))


def run():
    # NOTE(gRPC Python Team): .close() is possible on a channel and should be
    # used in circumstances in which the with statement does not fit the needs
    # of the code.
    with grpc.insecure_channel('localhost:50051') as channel:
        stub = route_guide_pb2_grpc.ChatStub(channel)
        print("-------------- RouteChat --------------")
        guide_route_chat(stub)
For the second client I use the same code as the client above.
TL;DR: The gRPC server sends responses alternately to multiple clients, whereas the desired behavior is that each message is received by only one client at a time.
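One direction I am considering, sketched below, is to give each RouteChat call its own queue instead of the shared self._watch_response_queue, so that each stream only drains notes produced by its own requests. This is only a rough sketch; the helper name and the context.is_active() loop condition are my additions, not part of the code above:

class ChatServicer(route_guide_pb2_grpc.ChatServicer):

    def _repeat_note(self, note, response_queue):
        # Re-enqueue the note once per second, onto this call's private queue only.
        while True:
            time.sleep(1.0)
            response_queue.put(note)

    def RouteChat(self, request_iterator, context):
        response_queue = SimpleQueue()  # one queue per connected client stream
        for new_note in request_iterator:
            Thread(target=self._repeat_note,
                   args=(new_note, response_queue),
                   daemon=True).start()
        # Drain only this stream's queue, so other clients never receive these notes.
        while context.is_active():
            yield response_queue.get()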

Related

Running web.run_app() along with another async program : Python

Currently I am working on a project that involves the use of asynchronous functions, because of the set of libraries I depend on. My code runs fine as long as I don't integrate web-socket server functionality into it.
But I wish to stream the output 'Result' continuously over a websocket. So I tried integrating a websocket from the socketio library as an AsyncServer.
First, I want to gather all my inputs and keep displaying the running Result in a terminal. Once my inputs are finalized, I want the result to be streamed over the websocket.
Initially, I just tried running web.run_app() in an asynchronous task on the main thread. Refer to the code below with the #Type-1 comments (make sure the lines with the #Type-2 comments are commented out). But I get the exception "This event loop is already running".
I thought that if I ran web.run_app() in a separate thread, this issue might not come up, so I changed my implementation slightly. Refer to the code below with the #Type-2 comments (make sure the lines with the #Type-1 comments are commented out). Now I get another error: "set_wakeup_fd only works in main thread of the main interpreter".
Can someone please help me solve this issue and let me know how I should use web.run_app()?
Here is the code:
import os, sys
import asyncio
import platform
import threading
import socketio
import json
from aioconsole import ainput
from aiohttp import web
from array import *

Result = -1
Inputs_Required = True
Input_arr = array('i')

sio = socketio.AsyncServer()
app = web.Application()
sio.attach(app)

Host = "192.168.0.7"
Port = 8050

async def IOBlock():
    global Input_arr
    global Inputs_Required
    while(True):
        response = input("Enter new input? (y/n): ")
        if('y' == response or 'Y' == response):
            Input = input("Enter number to be computed: ")
            Input_arr.append(int(Input))
            break
        elif('n' == response or 'N' == response):
            Inputs_Required = False
            break
        else:
            print("Invalid response.")

async def main():
    global Result
    global Inputs_Required
    global Input_arr
    WebSocketStarted = False
    #WebSocketThread = threading.Thread(target = WebStreaming, daemon = True)  #Type-2
    try:
        while True:
            if(Inputs_Required == True):
                Task_AddInput = asyncio.create_task(IOBlock())
                await Task_AddInput
            elif (WebSocketStarted == False):
                WebSocketStarted = True
                #WebSocketThread.start()  #Type-2
                WebTask = asyncio.create_task(WebStreaming())  #Type-1
                await WebTask  #Type-1
            if(len(Input_arr) > 0):
                Task_PrintResult = asyncio.create_task(EvaluateResult())
                await Task_PrintResult
    except Exception as x:
        print(x)
    finally:
        await Cleanup()

async def WebStreaming():  #Type-1
#def WebStreaming():  #Type-2
    print("Starting web-socket streaming of sensor data..")
    Web_loop = asyncio.new_event_loop()  #Type-1 or 2
    asyncio.set_event_loop(Web_loop)  #Type-1 or 2
    web.run_app(app, host=Host, port=Port)

async def EvaluateResult():
    global Input_arr
    global Result
    Result = 0
    for i in range(0, len(Input_arr)):
        Result += Input_arr[i]
    print(f"The sum of inputs fed so far = {Result}.")
    await asyncio.sleep(5)

async def Cleanup():
    global Input_arr
    global Inputs_Required
    global Result
    print("Terminating program....")
    Result = -1
    Inputs_Required = True
    for i in reversed(range(len(Input_arr))):
        del Input_arr[i]

@sio.event
async def connect(sid, environ):
    print("connect ", sid)

@sio.event
async def OnClientMessageReceive(sid, data):
    global Result
    print("Client_message : ", data)
    while True:
        msg = json.dumps(Result)
        print(msg)
        await sio.send('OnServerMessageReceive', msg)

@sio.event
def disconnect(sid):
    print('disconnect ', sid)

if __name__ == "__main__":
    asyncio.run(main())
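One alternative I have seen suggested, sketched below under the assumption that everything can stay in the single main event loop: instead of web.run_app() (which wants to create and run a loop of its own), start the site with aiohttp's AppRunner and TCPSite, which are ordinary coroutines that can be awaited from main():

from aiohttp import web

async def WebStreaming():
    # Start the aiohttp/socketio app inside the already-running loop;
    # AppRunner/TCPSite do not try to create or run a loop of their own.
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, host=Host, port=Port)
    await site.start()
    print("Web-socket streaming of sensor data started.")

With this version, await WebTask in main() returns as soon as the server is listening, so the input/result loop keeps running.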

Python socket recv doesn't give good result

I am trying to build a program for my IT course. The point of the program is to have a client app to send commands to the server. It seemd to work pretty well until today where, after a few calls, when I receive a response from the server it is not up to date.
eg : I send a few commands that all work fine. But then send another command and receive the response from the previous one.
I checked the command sent by the client and it is the one I type and in the server part, when I receive a command from the client it is the one actually sent by the client (not the previous one)
Here is the Shell classes (in the server and client) that I use to send and receive messages aswell as an example on how I use it.
Server :
class Shell:
    command = ""
    next_command = True

    def __init__(self, malware_os):
        self._os = malware_os
        self._response = ""

    def receive(self):
        self.command = distant_socket.recv(4096).decode("utf-8")

    def execute_command(self):
        if self.command[:2] == "cd":
            os.chdir(self.command[3:])
            if self._os == "Windows":
                self.result = Popen("cd", shell=True, stdout=PIPE)
            else:
                self.result = Popen("pwd", shell=True, stdout=PIPE)
        else:
            self.result = Popen(self.command, shell=True, stdout=PIPE)
        self._response = self.result.communicate()

    def send(self):
        self._response = self._response[0]
        self._response = self._response.decode("utf-8", errors="ignore")
        self._response = self._response + " "
        self._response = self._response.encode("utf-8")
        distant_socket.send(self._response)
        self._response = None
Use in the server:
shell.receive()
shell.execute_command()
shell.send()
Client:
class Shell:
    def __init__(self):
        self._history = []
        self._command = ""

    def send(self):
        self._history.append(self._command)
        s.send(self._command.encode("utf-8"))

    def receive(self):
        content = s.recv(4096).decode("utf-8", errors="ignore")
        if content[2:] == "cd":
            malware_os.chdir(self._command[3:].decode("utf-8", errors="ignore"))
        print(content)

    def history(self):
        print("The history of your commands is:")
        print("----------------------")
        for element in self._history:
            print(element)

    def get_command(self):
        return self._command

    def set_command(self, command):
        self._command = command
Use in the client:
shell.set_command(getinfo.get_users())
shell.send()
shell.receive()
Thank you in advance for your help,
Cordially,
Sasquatch
Since you said the response is not up to date, I'm guessing you used TCP (you didn't post the socket creation). As the comment mentioned, there are two things you aren't doing right:
Protocol: TCP gives you a stream, which is divided into packets as the OS sees fit. When transferring data over the network, the receiving end must know when it has a complete transmission. The easiest way to do that is to send the length of the transmission, in a fixed format (say 4 bytes, big endian), before the transmission itself. Also, use sendall. For example:
import struct

def send_message(sock, message_str):
    message_bytes = message_str.encode("utf-8")
    size_prefix = struct.pack("!I", len(message_bytes))  # "!I" means a 4-byte integer in big endian
    sock.sendall(size_prefix)
    sock.sendall(message_bytes)
Since TCP is a stream socket, the receiving end might return from recv before the entire message was received. You need to call it in a loop, checking the return value at every iteration to correctly handle disconnects. Something such as:
def recv_message_str(sock):
    # First, get the message size, assuming you used the send above.
    size_buffer = b""
    while len(size_buffer) != 4:
        recv_ret = sock.recv(4 - len(size_buffer))
        if len(recv_ret) == 0:
            # The other side disconnected, do something (raise an exception or similar)
            raise Exception("socket disconnected")
        size_buffer += recv_ret
    size = struct.unpack("!I", size_buffer)[0]
    # Loop again, for the message string
    message_buffer = b""
    while len(message_buffer) != size:
        recv_ret = sock.recv(size - len(message_buffer))
        if len(recv_ret) == 0:
            # The other side disconnected, do something (raise an exception or similar)
            raise Exception("socket disconnected")
        message_buffer += recv_ret
    return message_buffer.decode("utf-8", errors="ignore")
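As a usage sketch (assuming s is the connected socket from your client code), the client's Shell methods could then sit on top of these two helpers, so every command and every response travels as exactly one framed message; the server would do the same with distant_socket:

class Shell:
    def __init__(self):
        self._history = []
        self._command = ""

    def send(self):
        self._history.append(self._command)
        send_message(s, self._command)    # length-prefixed send

    def receive(self):
        content = recv_message_str(s)     # reads exactly one complete message
        print(content)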

Use zmq.Poller() to add timeout for my REQ/REP zmqclient, but the function does not return anything

I want to add a timeout to my 0MQ client.
I tried zmq.Poller(). It seemed to work at first, but when I moved the code into a function, I found that it doesn't return anything; it just gets stuck there.
I have two print lines.
First print: I successfully print the result zmq_Response inside the function, right before it returns.
Second print: nothing is printed after the function call; I guess that's why my last print does not work.
def send_message():
    context = zmq.Context()
    zmq_Socket = context.socket(zmq.REQ)
    zmq_Socket.connect('tcp://localhost:5000')
    zmq_Data = {'Register': 'default'}
    zmq_Socket.send_string(json.dumps(zmq_Data), flags=0, encoding='utf8')
    poller = zmq.Poller()
    poller.register(zmq_Socket, flags=zmq.POLLIN)
    if poller.poll(timeout=1000):
        zmq_Response = zmq_Socket.recv_json()
    else:
        # raise IOError("Timeout processing auth request")
        zmq_Response = {'test': 'test'}
    poller.unregister(zmq_Socket)
    print(zmq_Response)  # **This print works!**
    return zmq_Response

res = send_message()
print(res)
It is expected to print zmq_Response but it does not.
I have solved it now.
It seems that when zmq.LINGER has its default value of -1, the context waits until pending messages have been sent successfully before allowing termination.
So I set zmq.LINGER to 1 in the timeout branch.
It works for now.
def send_message():
    context = zmq.Context()
    zmq_Socket = context.socket(zmq.REQ)
    zmq_Socket.connect('tcp://localhost:5000')
    zmq_Data = {'Register': 'default'}
    zmq_Socket.send_string(json.dumps(zmq_Data), flags=0, encoding='utf8')
    poller = zmq.Poller()
    poller.register(zmq_Socket, flags=zmq.POLLIN)
    if poller.poll(timeout=1000):
        zmq_Response = zmq_Socket.recv_json()
    else:
        # --------------------------------------------
        # I change the value of zmq.LINGER here.
        zmq_Socket.setsockopt(zmq.LINGER, 1)
        # --------------------------------------------
        zmq_Response = {'test': 'test'}
    poller.unregister(zmq_Socket)
    print(zmq_Response)
    return zmq_Response

res = send_message()
print(res)
I have implemented zmq.Poller() for timeout functionality with zmq.REQ and zmq.REP sockets to handle the deadlock. Have a look at the code.
For more explanation, check out my repo.
Client Code
#author: nitinashu1995#gmail.com
#client.py
import zmq

context = zmq.Context()

# Socket to talk to server
print("Connecting to hello world server…")
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")
socket.setsockopt(zmq.LINGER, 0)

# use poll for timeouts:
poller = zmq.Poller()
poller.register(socket, zmq.POLLIN)

# Do 10 requests, waiting each time for a response
for request in range(10):
    print("Sending request %s …" % request)
    socket.send(b"Hello")
    '''
    We have set a response timeout of 6 sec.
    '''
    if poller.poll(6*1000):
        message = socket.recv()
        print("Received reply %s [ %s ]" % (request, message))
    else:
        print("Timeout processing auth request {}".format(request))
        print("Terminating socket for old request {}".format(request))
        socket.close()
        context.term()
        context = zmq.Context()
        socket = context.socket(zmq.REQ)
        socket.connect("tcp://localhost:5555")
        socket.setsockopt(zmq.LINGER, 0)
        poller.register(socket, zmq.POLLIN)
        print("socket has been re-registered for request {}".format(request+1))
Server Code
#server.py
import time
import zmq

context = zmq.Context()
#socket = context.socket(zmq.PULL)
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")
#socket.RCVTIMEO = 6000
i = 0
while True:
    # Wait for next request from client
    message = socket.recv()
    print("Received request: %s" % message)
    # Get the reply.
    time.sleep(i)
    i += 1
    # Send reply back to client
    socket.send(b"World")

multiple events in paho mqtt with python

#!/usr/bin/env python
import sys
from arduino.Arduino import Arduino
import paho.mqtt.client as mqtt
import serial
import time

pin = 13
broker_adress = "10.0.2.190"
sys.path.append("/home/hu/Schreibtisch/Arduino_BA_2.0/Probe_Programmierung/Python-Arduino-Proto-API-v2/arduino")
ser = serial.Serial('/dev/ttyACM0', 9600)
b = Arduino('/dev/ttyACM0')
b.output([pin])
b.setLow(pin)
gassensor_value = "no default_value"
sensor_value = [['/Deutschland/Osnabrueck/Coffee-bike-1/Sensor_1', gassensor_value]]

#########################################################################
# Callback_1 for relay
# on_connect1, on_disconnect1, on_subscribe1, on_message_1
#########################################################################
def on_connect(mqttrelay, obj, flags, rc):
    if rc != 0:
        exit(rc)
    else:
        mqttrelay.subscribe("qos0/test", 0)

def on_disconnect(mqttrelay, obj, rc):
    obj = rc

def on_subscribe(mqttrelay, obj, mid, granted_qos):
    print(mqttrelay.subscribe("qos0/test", 0))
    print("Waiting for the subscribed messages")

def on_message(mqttrelay, userdata, message):
    a = str(message.payload.decode("utf-8"))
    print(a)
    if (a == "1" or a == "0"):
        if (a == "1"):
            b.setHigh(13)
            time.sleep(10)
        else:
            b.setLow(13)
            time.sleep(10)
    else:
        print("please publish the message 1 or 0")

#########################################################################
# Callback_2 for gassensor
# on_connect2, on_publish2
#########################################################################
def on_publish(mqttgassensor, obj, mid):
    print("mid: " + str(mid))

def on_connect(mqttgassensor, userdata, flags, rc):
    print("Connected with result code " + str(rc))

# create new instance to subscribe to the situation of the relay
mqttrelay = mqtt.Client("relay_K_12", 1)
# create new instance to publish the situation of the gassensor
mqttgassensor = mqtt.Client("gassensor", 1)

# the events and callbacks of instance mqttrelay are associated with each other:
mqttrelay.on_message = on_message
mqttrelay.on_connect = on_connect
mqttrelay.on_subscribe = on_subscribe
mqttrelay.on_disconnect = on_disconnect
mqttrelay.connect(broker_adress)

# the events and callbacks of instance gassensor are associated with each other:
mqttgassensor.on_connect = on_connect
mqttgassensor.on_publish = on_publish
mqttgassensor.connect(broker_adress)

while True:
    mqttrelay.loop_start()
    time.sleep(2)
    mqttrelay.loop_stop()
    print("relay loop starting")
    mqttgassensor.loop_start()
    mqttgassensor.loop()
    time.sleep(1)
    sensor_value[0][1] = ser.readline()
    if (sensor_value[0][1] != "no default_value" or sensor_value[0][1] != b''):
        print(sensor_value[0])
        mqttgassensor.publish("/Deutschland/Osnabrueck/Coffee-bike-1/Sensor_1", sensor_value[0][1])
    mqttgassensor.loop_stop()
Hello, everyone. I want to run two client instances in this script.
Through publishing we can get the data from the gas sensor: at the top of this script I have imported the serial module, with which we handle the communication between the Arduino and the Raspberry Pi.
I want to use the subscription to get the command (1 or 0) from the server; the number 1 activates the relay and 0 deactivates it.
I have implemented the two parts separately and successfully, but the combination gives me no reply.
You only need one instance of the MQTT client, since there is only one broker. You can subscribe and publish to multiple topics from a single client.
You should connect this one instance and then start the network loop in the background with client.loop_start().
You can then run your own loop to read from the serial port, as sketched below.
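A minimal sketch of that single-client layout, reusing the broker address, topics and serial setup from your question (the client id and the one-second sleep are arbitrary choices of mine):

import time
import serial
import paho.mqtt.client as mqtt

broker_adress = "10.0.2.190"
ser = serial.Serial('/dev/ttyACM0', 9600)

def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))
    client.subscribe("qos0/test", 0)          # relay commands arrive here

def on_message(client, userdata, message):
    command = message.payload.decode("utf-8")
    print(command)
    # switch the relay here based on "1"/"0", exactly as in your on_message

client = mqtt.Client("coffee_bike_1")
client.on_connect = on_connect
client.on_message = on_message
client.connect(broker_adress)
client.loop_start()                           # network loop runs in a background thread

while True:
    value = ser.readline()                    # gas sensor data from the Arduino
    if value != b'':
        client.publish("/Deutschland/Osnabrueck/Coffee-bike-1/Sensor_1", value)
    time.sleep(1)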

python3.5 asyncio Protocol

I want to build a chat demo, but apart from the first exchange after startup, I cannot receive what the server sends. Does anyone know why?
The code is based on https://docs.python.org/3.4/library/asyncio-protocol.html#tcp-echo-client-protocol
Server.py
import asyncio

class EchoServerClientProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        peername = transport.get_extra_info('peername')
        print('Connection from {}'.format(peername))
        self.transport = transport

    def data_received(self, data):
        message = data.decode()
        print('Data received: {!r}'.format(message))
        print('Send: {!r}'.format(message))
        self.transport.write(data)

loop = asyncio.get_event_loop()
# Each client connection will create a new protocol instance
coro = loop.create_server(EchoServerClientProtocol, '127.0.0.1', 8888)
server = loop.run_until_complete(coro)

# Serve requests until Ctrl+C is pressed
print('Serving on {}'.format(server.sockets[0].getsockname()))
try:
    loop.run_forever()
except KeyboardInterrupt:
    pass

# Close the server
server.close()
loop.run_until_complete(server.wait_closed())
loop.close()
Client.py
import asyncio

class EchoClientProtocol(asyncio.Protocol):
    def __init__(self, message, loop):
        self.message = message
        self.loop = loop
        self.transport = None

    def connection_made(self, transport):
        self.transport = transport
        transport.write(self.message.encode())
        print('Data sent: {!r}'.format(self.message))
        # while 1:
        #     message = input('please input the message:')
        #     transport.write(message.encode())
        #     print('Data sent: {!r}'.format(message))

    def data_received(self, data):
        # print('data_received')
        print('Data received: {!r}'.format(data.decode()))
        while 1:
            message = input('please input the message:')
            self.transport.write(message.encode())
            print('Data sent: {!r}'.format(message))

    def connection_lost(self, exc):
        print('The server closed the connection')
        print('Stop the event loop')
        self.loop.stop()

loop = asyncio.get_event_loop()
message = 'Hello World!'
coro = loop.create_connection(lambda: EchoClientProtocol(message, loop),
                              '127.0.0.1', 8888)
loop.run_until_complete(coro)
loop.run_forever()
loop.close()
Result: the client never shows 'Data received: ...' again after the first time; it is as if data_received is only called once.
Does anyone have a solution?
Result screenshot: https://i.stack.imgur.com/IoqA9.png
You turned EchoClientProtocol.data_received() into a blocking function. A message from the server can only be delivered to data_received() when the event loop gets a chance to process it, and a blocking function prevents that.
This code
while 1:  # more Pythonic would be: while True
    message = input('please input the message:')
    self.transport.write(message.encode())
gets a message from the user and sends it to the server; up to this point everything is fine. But then it starts the next iteration of the loop, and control never gets back to the event loop, so the incoming message can't be processed.
You can edit the client code like this:
def data_received(self, data):
    print('Data received: {!r}'.format(data.decode()))
    message = input('please input the message:')
    self.transport.write(message.encode())
data_received in the client is first called when you receive 'Hello World!' back from the server (the 'Hello World!' sent from connection_made). The processing then goes as follows:
1. It prints the received message (in the first call that is 'Hello World!').
2. It gets a new message from the user.
3. It sends it to the server.
4. The function returns and control is given back to the event loop.
5. The server receives the new message and sends it back to the client.
6. The event loop on the client calls data_received.
7. Go to step 1.
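Note that input() still blocks the event loop while it waits for the user, which is fine for this strict ping-pong flow but hides anything the server sends in the meantime. If you want the client to keep receiving while waiting for input, a sketch using run_in_executor (standard asyncio; the _send_reply helper name is mine) could look like this:

class EchoClientProtocol(asyncio.Protocol):
    def __init__(self, message, loop):
        self.message = message
        self.loop = loop
        self.transport = None

    def connection_made(self, transport):
        self.transport = transport
        transport.write(self.message.encode())

    def data_received(self, data):
        print('Data received: {!r}'.format(data.decode()))
        # Read the next line in a worker thread so the event loop keeps running.
        fut = self.loop.run_in_executor(None, input, 'please input the message:')
        fut.add_done_callback(self._send_reply)

    def _send_reply(self, fut):
        self.transport.write(fut.result().encode())

    def connection_lost(self, exc):
        print('The server closed the connection')
        self.loop.stop()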
