I have defined a bidirectional gRPC streaming RPC for exchanging configuration information with the server.
Server: implemented in C++
Client: implemented in Python 3.10
The client opens a channel to the server using the code below:
grpc.channel_ready_future(self.channel).result(timeout=5)
Whenever the configuration changes on the client, it builds a gRPC message and yields the message to the server by calling the bidirectional RPC (named "Push"). The client never explicitly closes the channel by calling close. But as soon as the server receives the message, the channel gets reset, so the next message from the client uses another channel.
My question is: what am I doing wrong in the client, and why does the channel get closed? One thing to note is that the server doesn't actually send anything back, so if I have a for loop as below, the client code hangs:
responses = self.stub.Push(make_client_msg(msg), timeout=timeout)
for response in responses:
    logging.info("response is {}".format(response))
so I removed the for loop and just print responses, but the channel still gets reset:
responses = self.stub.Push(make_client_msg(msg), timeout=timeout)
logging.info("response is {}".format(responses))
I hope I was able to explain the problem without going too deep into the details.
++++++++
Update:
I was able to solve the channel reset issue by creating two subprocesses: the first, a synchronous process, builds the gRPC message and pushes it into a message queue; the second, an asynchronous process, reads from the queue and writes the messages to the gRPC channel in a while loop. I never call done_writing() after writing to the gRPC channel, which keeps the channel alive, so I don't need to create a new channel for every write, which is an expensive operation. I had earlier implemented the whole thing using a Python iterator, but it sent an end-of-messages signal when the iterator was exhausted, and that reset my gRPC channel. There aren't many examples around this scenario, but thanks for some helpful comments.
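For reference, the same "never-ending request iterator" idea can be sketched in a single process with a blocking queue. This is a minimal sketch, not my actual code: the stub class, the generated module config_pb2_grpc, and the target address are placeholders for whatever your .proto produces.

import queue
import threading
import grpc
import config_pb2_grpc  # placeholder for your generated gRPC module

class ConfigPusher:
    """Keeps one bidirectional stream open by feeding the request
    iterator from a queue that is only exhausted on shutdown."""

    def __init__(self, target="localhost:50051"):
        self._queue = queue.Queue()
        self._channel = grpc.insecure_channel(target)
        grpc.channel_ready_future(self._channel).result(timeout=5)
        self._stub = config_pb2_grpc.ConfigStub(self._channel)

    def _requests(self):
        # Blocking on the queue instead of returning means gRPC never
        # sees end-of-stream, so the channel is not reset between pushes.
        while True:
            msg = self._queue.get()
            if msg is None:  # sentinel: end the stream cleanly
                return
            yield msg

    def start(self):
        responses = self._stub.Push(self._requests())
        # Drain any server responses in the background so flow control
        # keeps moving even though we don't expect replies.
        threading.Thread(target=lambda: list(responses), daemon=True).start()

    def push(self, msg):
        self._queue.put(msg)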
I want to write a synchronous program in which the cloud sends an MQTT message to a device and then uses subscribe.simple() to wait for a response, to judge whether it succeeded. It needs a timeout, say 5 seconds, after which the app considers the operation failed. The keepalive parameter of the MQTT simple API seems to have no effect, but most likely I am using or understanding it incorrectly.
I would very much appreciate any advice.
print("----before simple")
msg = subscribe.simple("paho/test/simple", hostname="39.100.79.76",port=1883,keepalive=5,will = {'topic': "paho/test/disconnect", 'payload':"network or device anomaly", 'qos':2, 'retain':0})
print("----after simple")
Then I run it, and the simple API never returns:
----before simple
...and it hangs there indefinitely.
You have misunderstood what the keepalive property of an MQTT client is for.
The keepalive interval tells the broker how often to expect traffic from the client. If the client has nothing else to send, it sends an MQTT ping request before the keepalive interval elapses; if the broker hears nothing from the client within one and a half times the keepalive period, it disconnects the client and publishes any Last Will & Testament message that the client may have set.
The Paho client library handles MQTT Ping messages in the background with no need for the user to be involved.
The code sample you have provided will wait indefinitely for a response.
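If you need an application-level "give up after 5 seconds" behaviour, one option is the full client API rather than subscribe.simple(), which has no timeout parameter. A minimal sketch, using the paho-mqtt 1.x style API and your topic and broker address:

import time
import paho.mqtt.client as mqtt

response = None

def on_message(client, userdata, msg):
    global response
    response = msg

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("39.100.79.76", 1883)
client.subscribe("paho/test/simple", qos=2)
client.loop_start()  # run the network loop in a background thread

deadline = time.time() + 5  # the 5-second application timeout
while response is None and time.time() < deadline:
    time.sleep(0.1)
client.loop_stop()

if response is None:
    print("failed: no response within 5 seconds")
else:
    print("succeeded:", response.payload)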
I am having serious problems making message delivery fail-proof in a chat system.
With several node.js servers and live communication via WebSocket to the clients, I use RabbitMQ to call back the correct consumer on a specific node.
I declare my queues as {durable: true, prefetch: 1, expires: 2*3600*1000, autoDelete: true}
The consumer options are {noAck: false, exclusive: false}
Once I receive a message from the server, I call back the server, get the message, and use message.ack(false)
Sometimes a message ends up with a pending ACK in RabbitMQ and, as I would expect, the consumers stop being called back.
Here is my overall strategy:
1- When the socket disconnects, I recover the queue using queue.recover() during the reconnection/connection (more frequent).
2- When I send a message to the server and don't receive it back, I send a message to the server to recover the queue.
3- I use the socket callback function to send the ack confirmation. The server keeps a hashmap {[ackCode: string]: RabbitMessage}; I send the ackCode back to the server so it can retrieve the correct message and ack it with message.ack(false) (see the sketch after this list).
4- If the client is not receiving any message for 2 minutes, I ask the server to recover the queue.
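A sketch of the ack-code bookkeeping from step 3, in Python for illustration (my stack is node.js/TypeScript, so the socket.emit and message.ack shapes below are assumptions, not a specific library's API):

import uuid

pending = {}  # {ack_code: broker_message} for delivered-but-unacked messages

def deliver(socket, broker_message):
    # Park the broker message under a random code and send the code
    # along with the payload; the client echoes the code back.
    ack_code = uuid.uuid4().hex
    pending[ack_code] = broker_message
    socket.emit("chat", {"ack": ack_code, "body": broker_message.body})

def on_client_ack(ack_code):
    # The client confirmed receipt: look the message up and ack it.
    msg = pending.pop(ack_code, None)
    if msg is not None:
        msg.ack(False)  # multiple=False: ack just this delivery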
Step 4 should not exist, but even with it, sometimes I send a recover-queue request to the server, the server executes the command, and yet nothing happens and the chat freezes.
These events are very difficult to debug. I am using a TypeScript library that has had no commits for 3 years, and this could be one of the causes.
Regarding the strategy, is it correct? Any idea on what I could be facing?
What I've learned, and why I think I couldn't use RabbitMQ to solve the specific problem mentioned in the original post.
The domain: a "chat" where message order is very important (some messages form chains) and we must be sure that a message will be delivered if/when the client is online.
The problem: we have several node.js servers, and sockets are spread among them. Sockets drop all the time, and it is common for a client that was connected to the first server to reconnect to another one. We don't use cookies, and session affinity by IP won't handle the issue.
Limitations: that being said, I can't activate a consumer that is currently active on another server, so if a customer's queue is tied to server 1, I can't activate it on server 2. And all the messages that need to be sent are tied to this specific queue.
Another limitation is that I don't have an easy way to consume queues, re-queue, know in advance how many unacked messages I have in the queue, aggregate them, and bulk-send them via socket.
The solution: I am no longer using {noAck: false}; instead, I control the acks in a Redis queue. Thus I use RabbitMQ purely as pub/sub, to call back the correct consumer to send the message over the socket. When RabbitMQ wakes me up, the first thing I do is put the message at the end of a Redis queue. When I send a message via socket, I always start sending from the beginning of the queue, regardless of the message that just woke me up. I send the message and wait for the callback event; if it is not OK, I re-queue the messages.
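A minimal sketch of that pattern in Python with redis-py, for illustration (my production code is node.js; send_via_socket_and_wait_ack is a placeholder for the socket round trip):

import json
import redis

r = redis.Redis()

def on_broker_wakeup(client_id, message):
    # First thing on wake-up: append the new message to the end of
    # the client's Redis list, then try to drain from the head.
    r.rpush(f"chat:{client_id}", json.dumps(message))
    drain(client_id)

def drain(client_id):
    # Always send from the head of the queue, regardless of which
    # message triggered the wake-up, so ordering is preserved.
    key = f"chat:{client_id}"
    while True:
        head = r.lindex(key, 0)
        if head is None:
            return
        if send_via_socket_and_wait_ack(client_id, json.loads(head)):
            r.lpop(key)  # our "ack": remove only after confirmation
        else:
            return  # leave the message queued and retry later

def send_via_socket_and_wait_ack(client_id, message):
    # Placeholder for the socket send + ack-callback round trip.
    raise NotImplementedError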
After decoupling the pub/sub from the queue/ack control, I can now easily move my RabbitMQ pub/sub from one server to another (declaring it using socket.id and no longer the client queue), with no concern about losing any message. Also, I am now capable of much more advanced operations on my queue.
As my use case doesn't allow me to use the full power of exchanges/bindings (I have complex routing rules), I am evaluating the possibility of switching from RabbitMQ to Redis pub/sub, but in that case I would still keep the pub/sub separate from the queue.
After more than a month trying to make RabbitMQ work in this scenario, I think I was using a good technology for the wrong use case. It is much simpler now.
The following situation:
Web client: Using JavaScript socketio to listen for incoming messages (= JavaScript).
Web server: Using flask-socketio with eventlet to send data (= Python).
Everything works if the client sends a message to the server. The server receives the messages. Example:
socketio = SocketIO(app, engineio_logger=True, async_mode="eventlet")

@socketio.on("mymsg")
def handle_event(message):
    print("received message: " + str(message))
Unfortunately, the other way around does not work, at least to some extent. I have a thread producing live data 5 to 10 times a second that the web frontend should display. It should be sent to the client.
First: it does not work at all if the thread producing the data tries to invoke socketio.emit() directly. The reason for that is unclear to me but somehow plausible, as flask-socketio with eventlet follows a different async model, as the documentation says.
Second: decoupling classic threads from the async model of Flask/eventlet works to some extent. I attempted to use an eventlet queue for that. All status data my thread produces is put into the queue like this:
statusQueue.put(statusMsg)
This works fine. Debugging messages show that this is performed all the time, adding data after data to the queue.
As the Flask-SocketIO documentation advises, I use socketio.start_background_task() in order to get a running "thread" compatible with the async model socketio uses. So I am using this code:
def emitStatus():
    print("Beginning to emit ...")
    while True:
        msg = statusQueue.get()
        print("Sending status packet: " + str(msg))
        socketio.emit("status", msg, broadcast=True)
        statusQueue.task_done()
        print("Sending status packet done.")
    print("Terminated.")

socketio.start_background_task(emitStatus)
The strange thing, and where I'm asking you for help, is this: the first call to statusQueue.get() blocks as expected, since initially the queue is empty. The first message is taken from the queue and sent via socketio. Debug messages on the client show that the web client receives this message; debug messages on the server show that the message is sent successfully. But as soon as the next statusQueue.get() is invoked, the call blocks indefinitely, regardless of how many messages get put into the queue.
I'm not sure if this helps, but some additional information: the socketio communication is perfectly intact. If the client sends data, everything works. Additionally, I can see the ping-pongs both client and server exchange to keep the connection alive.
My question is: How can I properly implement a server that is capable of sending messages to the client asynchronously?
Have a look at https://github.com/jkpubsrc/experiment-python-flask-socketio for a minimalistic code example featuring the Python-Flask server process and a JQuery based JavaScript client.
(FYI: as these are status messages, not every message necessarily needs to arrive. But I would very much like to receive at least some of the messages, not only the very first one and then nothing.)
Thank you for your responses.
I left two solutions that make the code work as pull requests.
Basically, the answer is: choose one technology and stick with it throughout the process:
Going async_mode=threading? Great, use the stdlib Queue. Don't import eventlet unless you have to.
Going async_mode=eventlet? Also great, use the eventlet Queue, and don't forget that stdlib time.sleep or socket I/O will block everything else; fix that with eventlet.monkey_patch().
If you must use both eventlet and threading, the best approach is to let them live in separate OS processes and communicate via a local socket. It's extra work, but it is very robust, and you know how it works and why it will not break.
With good knowledge of both eventlet and native threads you can carefully mix them into working code. As of 2018-09, mixing doesn't work in friendly obvious way, as you already found. Sorry. Patches are welcome.
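For the eventlet route specifically, here is a minimal sketch of the question's code adjusted as described (same app and event names as above; this is an illustration, not the exact pull request):

import eventlet
eventlet.monkey_patch()  # patch stdlib before other imports rely on it

from flask import Flask
from flask_socketio import SocketIO
from eventlet.queue import Queue  # eventlet-aware queue, not queue.Queue

app = Flask(__name__)
socketio = SocketIO(app, async_mode="eventlet")
statusQueue = Queue()

def emitStatus():
    # Runs as a green thread; this Queue.get() yields to the eventlet
    # hub while waiting instead of blocking the whole process.
    while True:
        msg = statusQueue.get()
        socketio.emit("status", msg)

socketio.start_background_task(emitStatus)

if __name__ == "__main__":
    socketio.run(app)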
Situation:
A user sends an image and, right after it, a text message. Until the second user has received the picture, the text message will not be sent.
How can I make messages send normally, like in a regular chat?
I have found that there is an "async" module for node.js, but how do I use it with Socket.IO?
You could simply pass every message through a queue, so each message must wait for the previous one to be sent before moving on to the next.
However, in your case, I don't think waiting for an image to be sent is wise; this would make your chat unresponsive.
Rather, send a simple text "image incoming" message first. Once you receive it, put a placeholder in the chat where you'll load the image when it arrives (displaying a loader in the meantime). This allows the chat to continue without being blocked by a long IO process, as in the sketch below.
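A sketch of that placeholder idea in Python (python-socketio) just to make the flow concrete; your stack is node.js, and the event names and save_upload() helper here are made up:

import uuid
import socketio

sio = socketio.Server()

def save_upload(image_bytes):
    # Hypothetical helper: persist the bytes and return an id for them.
    return uuid.uuid4().hex

@sio.event
def send_image(sid, image_bytes):
    upload_id = save_upload(image_bytes)
    # Broadcast a tiny placeholder immediately so the chat stays live.
    sio.emit("image_placeholder", {"id": upload_id}, skip_sid=sid)
    # Once the heavy payload is ready, clients swap it into the slot.
    sio.emit("image_ready", {"id": upload_id}, skip_sid=sid)

@sio.event
def chat_message(sid, text):
    # Text messages are never blocked behind image transfers.
    sio.emit("chat_message", {"text": text}, skip_sid=sid)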
Socket.IO uses a single WebSocket connection which only allows for sending one item at a time. You should consider sending that image out-of-band on a separate WebSocket, or via another method.
I have a similar situation where I must stream continuous binary data and signaling messages. For this, I use BinaryJS to set up logical streams which are mirrored on both ends. One stream is used for binary streaming, and the other is used for RPC. Unfortunately, Socket.IO cannot use arbitrary streams. The only RPC library that seems to work is rpc-stream. The RPC functionality isn't nearly as powerful as Socket.IO (in particular when dealing with callbacks), but it does work well.
I am writing a client/server-based chat. The server is the central component and handles all the incoming and outgoing messages. The clients are the chat users: they see the chat in a frame and can also write chat messages. These messages are sent to the server, and the server in turn updates all clients.
My problem is synchronization of the clients. Since the server is multi-threaded, messages can be received from clients while updates (in the form of messages) have to be sent out as well. Since each client is updated in its own thread, there is no guarantee that all clients will receive the same messages in the same order. We have a synchronization problem.
How do I solve it?
I have messed with timestamps and a buffer, but that is not a good solution either, because there is no guarantee that, after assigning a timestamp, the message will be put into the buffer immediately afterwards.
I should add that I do not know the clients. That is, I only have one open connection in each thread on the server; I do not have an array of clients or anything like that to keep track of them all.
I suggest that you implement a queue for each client proxy (that's the object that manages the communication with each client).
Each iteration of the server object's work (on its own thread):
1. It reads messages from the queues of all client proxies first.
2. It decides whether it needs to send out any messages, based on its internal logic and the incoming messages.
3. It prepares and puts any outgoing messages into the queues of all its client proxies.
The client proxy thread's work schedule is this:
1. Read from the communication.
2. Write to the queue from client proxy to server (if received any messages).
3. Read from the queue from server to client proxy.
4. Write to communication channel to client (if needed).
You may have to have a mutex on each queue.
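A minimal sketch of this scheme in Python; queue.Queue is internally locked, which covers the per-queue mutex, and conn is assumed to be a connection object whose recv() returns None when no data is waiting:

import queue

class ClientProxy:
    def __init__(self, conn):
        self.conn = conn                 # per-client connection object
        self.to_server = queue.Queue()   # proxy -> server
        self.to_client = queue.Queue()   # server -> proxy

    def step(self):
        msg = self.conn.recv()           # 1. read from the communication
        if msg is not None:
            self.to_server.put(msg)      # 2. write to the proxy->server queue
        try:
            out = self.to_client.get_nowait()  # 3. read the server->proxy queue
            self.conn.send(out)          # 4. write to the client
        except queue.Empty:
            pass

def server_step(proxies):
    incoming = []
    for p in proxies:                    # 1. read all proxy queues first
        try:
            while True:
                incoming.append(p.to_server.get_nowait())
        except queue.Empty:
            pass
    for msg in incoming:                 # 2./3. fan out in one global order
        for p in proxies:
            p.to_client.put(msg)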
Hope that helps.