TCPSocket.send() doesn't send immediately? - firefox-os

I used TCPSocket.send() while developing an app on Firefox OS, but it doesn't work well. The server can't receive the data immediately; it only receives the data once I shut the simulator down. Is it a buffering problem?
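For reference, the Firefox OS TCP socket API is usually used along these lines (a minimal sketch, not an answer; the host, port and payload are placeholders, and the app manifest is assumed to request the "tcp-socket" permission):

```javascript
// Minimal sketch of typical mozTCPSocket usage (host/port are placeholders).
var socket = navigator.mozTCPSocket.open('192.168.1.10', 8080, {
  binaryType: 'string'
});

socket.onopen = function () {
  // send() returns false when the data had to be buffered internally;
  // in that case, wait for "ondrain" before writing more.
  var flushed = socket.send('hello server\n');
  if (!flushed) {
    socket.ondrain = function () {
      console.log('buffer drained, safe to send more');
    };
  }
};

socket.onerror = function (event) {
  console.error('socket error:', event.data);
};
```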

Related

How to modify electron page with server data in real time coming from nodejs

I have a Node.js app acting as a server that controls multiple industrial machines, and I want to build a dashboard with Electron that presents real-time data about the various machines' states (this information being stored on the server). How can I establish some sort of connection between my Node.js server and my Electron application/dashboard (and update its contents accordingly)?
I have written a similar Electron app: in my case, an app that periodically interrogates, over a raw socket, an application on the network that is connected to / controls an HF amateur radio.
From the Electron app's main.js, I start up a service that polls the radio control application over the socket. In your case, I'm assuming this would be an HTTP client.
When the response comes back, I use Electron's ipcRenderer to push the data from the main Electron process to the GUI app, in your case your dashboard.
The connection code is a bit complex due to the need to reconnect automatically if the connection is dropped (e.g. the radio is turned off and then turned back on), but for an example you can have a look at my repo.
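The repo itself isn't reproduced here, but the poll-then-push pattern the answer describes generally looks something like this (a rough sketch, not the answer's actual code; the channel name, polling URL and interval are made up, and a recent Electron with global fetch in the main process is assumed):

```javascript
// main.js -- rough sketch of "poll in the main process, push to the renderer".
const { app, BrowserWindow } = require('electron');

let win;

app.whenReady().then(() => {
  // nodeIntegration/contextIsolation set this way only for brevity;
  // a preload script with contextBridge is the safer pattern.
  win = new BrowserWindow({
    webPreferences: { nodeIntegration: true, contextIsolation: false }
  });
  win.loadFile('index.html');

  setInterval(async () => {
    try {
      const res = await fetch('http://localhost:3000/machine-states'); // placeholder URL
      const states = await res.json();
      // Push from the main process to the dashboard page.
      win.webContents.send('machine-states', states);
    } catch (err) {
      console.error('poll failed:', err);
    }
  }, 1000);
});
```

```javascript
// renderer.js -- the dashboard page receives the pushed data.
const { ipcRenderer } = require('electron');

ipcRenderer.on('machine-states', (event, states) => {
  // Update the dashboard DOM however you like.
  document.getElementById('status').textContent = JSON.stringify(states);
});
```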

How to detect network failure in sockets Node.js

I am trying to write an internal transport system.
Data should be transferred from the client to the server using net sockets.
It is working fine except for the handling of network issues.
If I place a firewall between the client and the server, I will not see any error on either side, so data will keep filling the kernel buffer on the client side.
And if I restart the app at that moment, I will lose all the data in the buffer.
Question:
Do we have any way to detect network issues?
Do we have any way to get data back from kernel buffers?
Node.js exposes the low-level socket API to you very directly. I'm assuming that you are using a TCP socket to send and receive data.
One way to ensure that there is an active connection between the client and server is to send heartbeat signals back and forth. If you fail to receive a heartbeat from the server while sending data, you can assume that the connection failed.
As for the second part of your question: There is no easy way to get data back from kernel buffers. If losing the data will be a problem, I would make sure to write it to disk.
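A heartbeat on top of a plain net socket might look roughly like this (a sketch; it assumes the server answers every "ping" line with a "pong", and the host, port and interval/timeout values are placeholders):

```javascript
// Client-side heartbeat over a plain TCP socket (sketch; host, port and
// timings are placeholders, and the server is assumed to reply "pong").
const net = require('net');

const socket = net.createConnection({ host: 'server.example', port: 9000 });
let lastPong = Date.now();

socket.on('data', (chunk) => {
  if (chunk.toString().includes('pong')) {
    lastPong = Date.now();
  }
});

// Send a heartbeat every 5 s; if no pong for 15 s, assume the link is dead.
const timer = setInterval(() => {
  socket.write('ping\n');
  if (Date.now() - lastPong > 15000) {
    clearInterval(timer);
    socket.destroy(new Error('heartbeat timed out'));
    // Re-send anything still held in your own application-level queue here;
    // data already handed to the kernel buffer may be lost.
  }
}, 5000);

socket.on('error', (err) => console.error('socket error:', err.message));
socket.on('close', () => clearInterval(timer));
```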

Sending data from RabbitMQ to Node.JS via Socket.IO

I am going to design a system where there is two-way communication between clients and a web application. The web application can receive data from the client so it can persist it to a DB and so forth, while it can also send instructions to the client. For this reason, I am going to use Node.JS and Socket.IO.
I also need to use RabbitMQ since I want that if the web application sends an instruction to a client, and the client is down (hence the socket has dropped), I want it to be queued so it can be sent whenever the client connects again and creates a new socket.
From the client to the web application it should be pretty straightforward, since the client uses the socket to send the data to the Node.JS app, which in turn sends it to the queue so it can ultimately be forwarded to the web application. From this direction, if the socket is down, there is no internet connection, and hence the data is not sent in the first place, or is cached on the client.
My concern lies with the other direction, and I would like an answer before I design it this way and actually implement it, so I can avoid hitting any brick walls. Let's say that the web application tries to send an instruction to the client. If the socket is available, the web app forwards the instruction to the queue, which in turn forwards it to the Node.JS app, which in turn uses the socket to forward it to the client. So far so good. If on the other hand, the internet connection from the client has dropped, and hence the socket is currently down, the web app will still send the instruction to the queue. My question is, when the queue forwards the instruction to Node.JS, and Node.JS figures out that the socket does not exist, and hence cannot send the instruction, will the queue receive a reply from Node.JS that it could not forward the data, and hence that it should remain in the queue? If that is the case, it would be perfect. When the client manages to connect to the internet, it will perform a handshake once again, the queue will once again try to send to Node.JS, only this time Node.JS manages to send the instruction to the client.
Is this the correct reasoning of how those components would interact together?
This won't work the way you want it to.
When the Node process receives the message from RabbitMQ and sees the socket is gone, you can easily nack the message back to the queue.
However, that message will be redelivered immediately. It won't sit there doing nothing; the Node process will just pick it up again. You'll end up with your Node process and RabbitMQ thrashing as the same message is nacked over and over and over, waiting for the socket to come back online.
If you have dozens or hundreds of messages for a client that isn't connected, you'll have dozens or hundreds of messages thrashing around in circles like this. It will destroy the performance of both your Node process and RabbitMQ.
My recommendation:
When the Node app receives the message from RabbitMQ and the socket is not available to the client, put the message in a database table and mark it as waiting for that client.
When the client re-connects, check the database for any pending messages and forward them all at that point.
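A sketch of that store-and-forward recommendation using amqplib and Socket.IO (the queue name, the client-id handshake, and the in-memory Map standing in for a real database table are all illustrative assumptions):

```javascript
// Sketch of the "store-and-forward" recommendation using amqplib + Socket.IO.
const amqp = require('amqplib');
const { Server } = require('socket.io');

const io = new Server(3000);
const clients = new Map();   // clientId -> connected socket
const pending = new Map();   // clientId -> [instructions]  (use a DB table in practice)

io.on('connection', (socket) => {
  const clientId = socket.handshake.query.clientId;
  clients.set(clientId, socket);

  // Flush anything queued while the client was offline.
  for (const msg of pending.get(clientId) || []) {
    socket.emit('instruction', msg);
  }
  pending.delete(clientId);

  socket.on('disconnect', () => clients.delete(clientId));
});

(async () => {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('instructions');

  ch.consume('instructions', (msg) => {
    const { clientId, payload } = JSON.parse(msg.content.toString());
    const socket = clients.get(clientId);

    if (socket) {
      socket.emit('instruction', payload);
    } else {
      // Don't nack: park the instruction until the client reconnects.
      const list = pending.get(clientId) || [];
      list.push(payload);
      pending.set(clientId, list);
    }
    ch.ack(msg);
  });
})();
```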

Where do messages that are not yet received go in Node.js?

For example, we have a basic Node.js server <-> client communication.
A basic Node.js server sends a message every 500 ms to the single client (or to every connected client) over its respective socket; the client is responding correctly to the heartbeat and receiving all the messages in time. But imagine the client has a temporary connection lag (without the socket closing), CPU overload, etc., and cannot process anything for 2 seconds or more.
In this situation, where do all the messages that have not yet been received by the client go?
Are they stored in Node, in some buffer or similar?
And vice versa? The client sends a message to the server every 500 ms (the server only listens, without responding), but the server has a temporary connection issue or CPU overload for 2 or 3 seconds.
Thanks in advance! Any information or clarification will be welcome.
Javier
Yes, they are stored in buffers, primarily in buffers provided by the OS kernel. The same thing happens on the receiving end for connections incoming to a Node server.
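You can observe the Node-side part of this buffering directly: socket.write() returns false once the data is being queued rather than flushed, and the 'drain' event fires when the queue has emptied again. A small sketch (host, port and interval are placeholders):

```javascript
// Sketch: what happens when the peer temporarily stops reading.
const net = require('net');

const socket = net.createConnection({ host: 'example.com', port: 9000 });

setInterval(() => {
  // write() returns false when the message could not be flushed and is
  // queued instead (in Node's internal buffer, on top of the kernel's
  // send buffer) until the peer starts reading again.
  const flushed = socket.write('heartbeat ' + Date.now() + '\n');
  if (!flushed) {
    console.log('queued in buffers, %d bytes pending', socket.bufferSize);
  }
}, 500);

socket.on('drain', () => console.log('buffers flushed, writes go through again'));
socket.on('error', (err) => console.error(err.message));
```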

Detecting if the connection to the browser client is broken

I have a web site which uses a Long Poll to wait for the server to finish processing some data. However, a timeout might occur or the user might close his browser, yet the server keeps processing its data.
I want the server to stop processing data as soon as the Long Poll connection is broken. There's no client who will receive the data so there's no use for this long process to continue running... How to do this?
The server is working on adding files to a ZIP archive, which takes some time since these are reasonably big files. Once it's done, it will send the final ZIP file and close the connection. But if the client disconnected before the task is finished, the server should stop its work and discard everything again...
You should consider using the SignalR framework. It offers very convenient events like OnConnected() and OnDisconnected(). Under the hood it works with
WebSockets
Server Sent Events
Forever Frame
Long polling
It uses whatever is available with the given environment, starting with WebSockets.
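SignalR is a .NET framework; purely for comparison with the Node-centric answers above, the same idea of stopping the work when the long-poll client disconnects can be sketched directly in Node's http server (not SignalR; buildZipInSteps is a hypothetical stand-in for the ZIP job, and a reasonably recent Node is assumed for res.writableEnded):

```javascript
// Sketch (Node.js, not SignalR): abort long-running work when the
// long-poll client disconnects before the response has been sent.
const http = require('http');

http.createServer((req, res) => {
  let cancelled = false;

  // 'close' fires when the underlying connection goes away; if the
  // response was never finished, the client disconnected early.
  res.on('close', () => {
    if (!res.writableEnded) {
      cancelled = true;
      console.log('client gone, stop zipping and discard partial work');
    }
  });

  buildZipInSteps(() => cancelled).then((zip) => {
    if (!cancelled) {
      res.writeHead(200, { 'Content-Type': 'application/zip' });
      res.end(zip);
    }
  });
}).listen(8080);

// Hypothetical long-running job that checks shouldStop() between steps.
async function buildZipInSteps(shouldStop) {
  const chunks = [];
  for (let i = 0; i < 100 && !shouldStop(); i++) {
    chunks.push(Buffer.from('...file chunk ' + i + '\n')); // placeholder work
    await new Promise((resolve) => setTimeout(resolve, 100));
  }
  return Buffer.concat(chunks);
}
```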
