Socket.IO can't send a new message until the first one is delivered - node.js

Situation:
A user sends an image and, right after it, a text message. Until the second user has received the picture, the message is not delivered.
How can I send messages normally, like in a normal chat?
I have found that there is an "async" module for node.js, but how do I use it with Socket.IO?

You could simply push every message into a queue, so each message waits for the previous one to be sent before the next goes out.
In your case, though, I don't think waiting for an image to be sent is wise - it will make your chat unresponsive.
Instead, send a lightweight text message announcing the image. When you receive it, put a placeholder in the chat (displaying a loader in the meantime) and load the image into it once it arrives. This lets the chat continue without being blocked while a long I/O operation finishes.
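A rough sketch of that placeholder approach, assuming Socket.IO on both ends; the event names ('image-announce', 'chat-message', 'image-data') and the UI helpers addPlaceholderToChat / fillPlaceholderWithImage are illustrative, not part of the original question:

// Sender: announce the image with a tiny message first, send the heavy payload last,
// so ordinary chat messages are not stuck behind it.
var imageId = Date.now() + '-' + Math.random().toString(36).slice(2);
socket.emit('image-announce', { id: imageId, from: 'alice' });
socket.emit('chat-message', { from: 'alice', text: 'check this out!' });
socket.emit('image-data', { id: imageId, data: imageBase64 });

// Receiver: render a placeholder (with a loader) right away, fill it when the data arrives.
socket.on('image-announce', function (msg) {
    addPlaceholderToChat(msg.id);               // hypothetical UI helper
});
socket.on('image-data', function (msg) {
    fillPlaceholderWithImage(msg.id, msg.data); // hypothetical UI helper
});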

Socket.IO uses a single WebSocket connection which only allows for sending one item at a time. You should consider sending that image out-of-band on a separate WebSocket, or via another method.
I have a similar situation where I must stream continuous binary data and signaling messages. For this, I use BinaryJS to set up logical streams which are mirrored on both ends. One stream is used for binary streaming, and the other is used for RPC. Unfortunately, Socket.IO cannot use arbitrary streams. The only RPC library that seems to work is rpc-stream. The RPC functionality isn't nearly as powerful as Socket.IO's (in particular when dealing with callbacks), but it does work well.
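A rough sketch of that two-stream setup, assuming the binaryjs and rpc-stream packages; the port, the meta.type labels, and the ping method are illustrative:

// Server: one BinaryJS connection carrying two logical streams -
// a raw binary stream and an RPC stream (wrapped with rpc-stream).
var BinaryServer = require('binaryjs').BinaryServer;
var rpc = require('rpc-stream');

var bs = BinaryServer({ port: 9000 });
bs.on('connection', function (client) {
    client.on('stream', function (stream, meta) {
        if (meta.type === 'binary') {
            stream.on('data', function (chunk) { /* consume binary data */ });
        } else if (meta.type === 'rpc') {
            var service = rpc({
                ping: function (cb) { cb(null, 'pong'); }
            });
            service.pipe(stream).pipe(service);
        }
    });
});

// Client (node): open both streams over the same connection and call the RPC method.
var BinaryClient = require('binaryjs').BinaryClient;
var client = BinaryClient('ws://localhost:9000');
client.on('open', function () {
    var binaryStream = client.createStream({ type: 'binary' });
    binaryStream.write(new Buffer([1, 2, 3]));

    var rpcClient = rpc();
    rpcClient.pipe(client.createStream({ type: 'rpc' })).pipe(rpcClient);
    rpcClient.wrap(['ping']).ping(function (err, reply) { console.log(reply); });
});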

Related

How do I send multiple requests to let the client know that upload and processing have finished?

I'm trying to figure out how I can send multiple requests to let the client know that uploading and processing have finished.
For example: let the client know that the upload has finished and processing has started. When processing has finished, send another request to notify the client.
I only want to know the proper functions to use, because it seems that two res.write()s won't be sent until I call res.end()...
You actually want to inform the client about updates that happen asynchronously on the server / in the backend.
Off the top of my head, there are a few ways to get asynchronous information to the client:
Open a stream and push updates through it to the client (for example, using server-sent events; a sketch of this option follows below)
Open a websocket and push your messages to the client (you need to manage the HTTP and websocket connections so you write to the correct one)
Create a new route to let clients subscribe to information about the status of a job (some example code)
I'd pick one of these depending on your current client design. If you already have something like websockets available, I'd use that - but it's quite something to set up if you don't. Streaming might not work in every browser, but is quite easy to build. For the third option, you probably need more housekeeping to avoid memory leaks when a client disconnects.
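For example, the server-sent-events option could look roughly like this (plain node http, no framework; the /progress route, the payloads, and the setTimeout standing in for real processing are illustrative):

var http = require('http');

http.createServer(function (req, res) {
    if (req.url === '/progress') {
        res.writeHead(200, {
            'Content-Type': 'text/event-stream',
            'Cache-Control': 'no-cache',
            'Connection': 'keep-alive'
        });
        // Each res.write() is flushed to the client right away; res.end() is only
        // called once the whole job is done.
        res.write('data: ' + JSON.stringify({ stage: 'upload finished' }) + '\n\n');
        setTimeout(function () {
            res.write('data: ' + JSON.stringify({ stage: 'processing finished' }) + '\n\n');
            res.end();
        }, 3000);
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(3000);

// Browser side:
//   var source = new EventSource('/progress');
//   source.onmessage = function (e) { console.log(JSON.parse(e.data)); };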

How to asynchronously send data with socketio to a web client?

The following situation:
Web client: Using JavaScript socketio to listen for incoming messages (= JavaScript).
Web server: Using flask-socketio with eventlet to send data (= Python).
Everything works if the client sends a message to the server. The server receives the messages. Example:
socketio = SocketIO(app, engineio_logger=True, async_mode="eventlet")

@socketio.on("mymsg")
def handle_event(message):
    print("received message: " + str(message))
Unfortunately the other way around does not work - or only to some extent. I have a thread producing live data about 5 to 10 times a second that the web frontend should display, so it needs to be sent to the client.
First: it does not work at all if the thread producing the data invokes socketio.emit() directly. The reason is not entirely clear to me, but it seems plausible, since flask-socketio with eventlet follows a different async model, as the documentation says.
Second: decoupling classic threads from the async model of flask/eventlet works to some extent. I attempt to use an eventlet queue for that. All status data my thread produces is put into the queue like this:
statusQueue.put(statusMsg)
This works fine. Debugging messages show that this is performed all the time, adding data after data to the queue.
As the Flask-SocketIO documentation advises, I use socketio.start_background_task() in order to get a running "thread" that is compatible with the async model socketio uses. So I am using this code:
def emitStatus():
    print("Beginning to emit ...")
    while True:
        msg = statusQueue.get()
        print("Sending status packet: " + str(msg))
        socketio.emit("status", msg, broadcast=True)
        statusQueue.task_done()
        print("Sending status packet done.")
    print("Terminated.")

socketio.start_background_task(emitStatus)
The strange thing where I'm asking you for help is this: The first call to statusQueue.get() blocks as expected as initially the queue is empty. The first message is taken from the queue and sent via socketio. Debug messages at the client show that the web client receives this message. Debug messages at the server show that the message is sent successfully. But: As soon as the next statusQueue.get() is invoked, the call blocks indefinitely, regardless of how many messages get put into the queue.
I'm not sure if this helps but some additional information: The socketio communication is perfectly intact. If the client sends data, everything works. Additionally I can see the ping-pongs both client and server play to keep the connections alive.
My question is: How can I properly implement a server that is capable of sending messages to the client asynchronously?
Have a look at https://github.com/jkpubsrc/experiment-python-flask-socketio for a minimalistic code example featuring the Python-Flask server process and a JQuery based JavaScript client.
(FYI: As these are status messages not necessarily every message needs to arrive. But I very much would like to receive at least some messages not only the very first message and then no other message.)
Thank you for your responses.
I left two solutions to make the code work as pull requests.
Basically, the answer is: you choose one technology and stick to it within a process:
Going async_mode=threading? Great, use the stdlib Queue. Don't import eventlet unless you have to.
Going async_mode=eventlet? Also great, use the eventlet Queue, and don't forget that stdlib time.sleep or socket I/O will block everything else; fix that with eventlet.monkey_patch() (a short sketch of this variant follows below).
If you must use both eventlet and threading, the best approach is to let them live in separate OS processes and communicate via a local socket. It's extra work, but it is very robust, and you know how it works and why it will not break.
With good knowledge of both eventlet and native threads you can carefully mix them into working code. As of 2018-09, mixing doesn't work in a friendly, obvious way, as you already found. Sorry. Patches are welcome.
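A minimal sketch of the eventlet-only variant, assuming the data producer can also run as a background task; the "status" event name and the 0.2 s interval are illustrative:

import eventlet
eventlet.monkey_patch()          # make stdlib time/socket calls cooperative

from flask import Flask
from flask_socketio import SocketIO
from eventlet.queue import Queue

app = Flask(__name__)
socketio = SocketIO(app, async_mode="eventlet")
statusQueue = Queue()

def produce_status():
    # stand-in for the real data producer, running as a green thread
    n = 0
    while True:
        statusQueue.put({"seq": n})
        n += 1
        eventlet.sleep(0.2)      # yields to other green threads

def emit_status():
    while True:
        msg = statusQueue.get()  # cooperative: does not block the event loop
        socketio.emit("status", msg)

socketio.start_background_task(produce_status)
socketio.start_background_task(emit_status)

if __name__ == "__main__":
    socketio.run(app)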

node.js net module pings and messages not happening

I have two node.js applications running side by side on my server, and I want to send server-side messages between them in a lightweight manner using the native node.js (v0.10.33) net module.
I intend the first application to send messages to the second, and I do see the console log listening...
In the first application:
var push = '';
var net = require('net');

var server = net.createServer(function (p) {
    p.on('error', function (err) { console.log(err); });
    push = p;
    setInterval(function () {
        push.write(JSON.stringify({ 'f': 'ping', 'data': 'stay alive' }));
    }, 1000);
});

server.listen(8008, function () { console.log('listening...'); });

// a real message might be sent later in the application (this example would need a setTimeout)
push.write(JSON.stringify({ 'f': 'msg', 'data': 'Hello World' }));
In the second application I see the console log open
var net = require('net');
var pull = new net.Socket();

pull.connect(8008, '127.0.0.1', function () {
    console.log('open');
    pull.on('data', function (_) {
        _ = JSON.parse(_);
        if (_.f === 'ping') { console.log('!!!ping!!!'); }
        else { console.log(_.data); }
    });
    pull.on('error', function (err) { console.log('pull: ' + err); });
});
I do not see any other activity though (no pings, and later, after the open event, no hello world) and no errors.
If I inspect with console.dir(pull) I don't see events for accepting data, i.e. ondata or onmessage.
What is wrong?
Unfortunately, I must point out that this messaging scheme is fundamentally broken. You're using TCP, which provides a stream of bytes, not messages.
Despite the fact that TCP sends its data over IP packets, TCP is not a packet protocol. A TCP socket is simply a stream of data. Thus, it is incorrect to view the data event as a logical message. In other words, one socket.write on one end does not equate to a single data event on the other. A single data event might contain multiple messages, a single message, or only part of a message.
The good news is this is a problem already solved many times over. I'd recommend either:
Using a library meant for passing JSON messages over TCP (the framing idea is sketched below).
Using something like redis as a pub-sub messaging solution (this option makes your app much easier to scale)
If you know that your two apps will always run on the same machine, you should use node's built-in IPC mechanism.
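To illustrate the framing problem, here is a minimal sketch of newline-delimited JSON over a raw TCP socket (one JSON object per line; the names and port are illustrative, and a real application should prefer one of the options above):

var net = require('net');

var server = net.createServer(function (socket) {
    var buffered = '';
    socket.on('data', function (chunk) {
        buffered += chunk.toString('utf8');
        var lines = buffered.split('\n');
        buffered = lines.pop();            // keep any trailing partial message
        lines.forEach(function (line) {
            if (!line) { return; }
            var msg = JSON.parse(line);    // one complete message per line
            console.log('got message:', msg);
        });
    });
});
server.listen(8008);

// Sender side: always terminate each JSON message with a newline.
var client = net.connect(8008, '127.0.0.1', function () {
    client.write(JSON.stringify({ f: 'msg', data: 'Hello World' }) + '\n');
});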

How can I make socketio wait before sending the next event?

So I have a socket.io server which works well. It's very simple: it kind of mimics screen sharing, broadcasting one client's position on the page to the other, who catches it and moves to that location, etc. All of this works fine, but because of the way I'm catching movement, it's possible (and quite common) for too many messages to be sent at once, making it impossible for the other client to keep up.
I was wondering if there is a way to make socket.io 'sleep' or 'wait' for a certain interval, ignore the messages sent during this interval without returning an error, and then begin listening again?
It is feasible to implement this in each client (and this may be the better option), but I just wanted to know if this is possible on the server side too.
Use volatile messages. If there are too many messages, the excess ones will simply be dropped so delivery keeps up with the real-time messages.
socket.volatile.emit('msg', data);
From the socket.io website:
Sending volatile messages.
Sometimes certain messages can be dropped. Let's say you have an app that shows realtime tweets for the keyword `bieber`.
If a certain client is not ready to receive messages (because of network slowness or other issues, or because he's connected through long polling and is in the middle of a request-response cycle), if he doesn't receive ALL the tweets related to bieber your application won't suffer.
In that case, you might want to send those messages as volatile messages.
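As an alternative (or complement) to volatile emits, the sender can also rate-limit itself. A small sketch, where the 50 ms interval and the 'position' event name are illustrative:

var THROTTLE_MS = 50;
var lastSent = 0;

function sendPosition(socket, pos) {
    var now = Date.now();
    if (now - lastSent < THROTTLE_MS) {
        return;                            // silently drop updates that arrive too fast
    }
    lastSent = now;
    socket.volatile.emit('position', pos);
}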

Creating a simple Linux API

I have a simple application on an OpenWRT-style router. It's currently written in C++. The router (embedded Linux) has very limited disk space and RAM; for example, there is not enough space to install Python.
So I want to control this daemon app via the network. I have read some tutorials on creating sockets and listening on a port for activity, but I haven't been able to integrate that flow into a C++ class, and I haven't been able to figure out how to decode the information received or how to send a response.
All the tutorials I've read are dead ends: they show you how to make a server that basically just blocks until it receives something and then returns a message when it gets something.
Is there something a little more high-level that can be used for this sort of thing?
Sounds like what you are asking is "how do I build a simple network service that will accept requests from clients and do something in response?" There are a bunch of parts to this -- how do you build a service framework, how do you encode and decode the requests, how do you process the requests and how do you tie it all together?
It sounds like you're having problems with the first and last parts. There are two basic ways of organizing a simple service like this -- the thread approach and the event approach.
In the thread approach, you create a thread for each incoming connection. That thread reads the messages (requests) from that connection (file descriptor), processes them, and writes back responses. When a connection goes away, the thread exits. You have a main 'listening' thread that accepts incoming connections and creates new threads to handle each one.
In the event approach, each incoming request becomes an event. You then have event handlers that process these events, sending back responses. It's important that the event handlers NOT block and that they complete promptly, otherwise the service may appear to lock up. Your program has a main event loop that waits for incoming events (generally blocking on a single poll or select call) and reads and dispatches each event as appropriate.
I installed the python-mini package with opkg, which has socket and thread support.
It works like a charm on a WRT160NL with backfire/10.03.1.
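A minimal sketch of the thread-per-connection approach described above, using only the socket and threading modules; the port and the newline-delimited text protocol are illustrative choices, not part of the original answer:

import socket
import threading

def handle_client(conn, addr):
    # One thread per connection: read newline-delimited commands, write replies.
    buf = b""
    while True:
        data = conn.recv(1024)
        if not data:
            break
        buf += data
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            cmd = line.decode("utf-8").strip()
            if cmd == "status":
                conn.sendall(b"ok\n")          # hook the daemon's real logic in here
            else:
                conn.sendall(b"unknown command\n")
    conn.close()

def serve(port=9100):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:                                # main 'listening' thread
        conn, addr = srv.accept()
        t = threading.Thread(target=handle_client, args=(conn, addr))
        t.daemon = True
        t.start()

if __name__ == "__main__":
    serve()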
