WebSocket Close Code 1002 - node.js

I've written an application that uses web sockets to communicate between the server and client. I'd like to handle the case where the client is out of date (too old), and thus would misinterpret/not appropriately handle messages. My thinking is that if the client is too old, I close the connection and send the appropriate status code. I read the spec and it seems that 1002 may be the appropriate code:
1002 indicates that an endpoint is terminating the connection due
to a protocol error.
However, I don't actually know what that means (whether it actually refers to the WebSocket protocol itself, and is thus a lower-level error). Is 1002 appropriate for this, or should I be defining a custom (application) close code in the 4000-4999 range as defined here: https://developer.mozilla.org/en-US/docs/Web/API/CloseEvent

Close code 1002 is for low-level WebSocket protocol violations, e.g. a WebSocket text message whose payload contains invalid UTF-8. You should not use it for this kind of situation. A 1002 close is usually generated by the internals of a WebSocket implementation, not by an application using WebSockets.
Now, in your situation, you have two options:
1. If the client can already be identified as "too old" during the WebSocket opening handshake, you can fail the handshake with an HTTP 400 Bad Request response.
2. You can allow the handshake to complete, do some WebSocket message exchange to determine the client version, and then close the connection (with a proper WebSocket closing handshake) using a close code from the 4000-4999 range. Yes, that range would be appropriate. https://www.rfc-editor.org/rfc/rfc6455#section-7.4.2
The latter is more flexible and gives the client better feedback. In particular, JavaScript in the browser will only get access to the close code in option 2, not to the HTTP error in option 1.
Another notable aspect: a conforming WebSocket implementation will simply not allow an application to trigger a close with 1002. The only close codes allowed for application use are 1000 ("normal" close), 3000-3999 (application use, registered with IANA), and 4000-4999 (private application use, unregistered).
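As a hedged sketch, here is how option 2 might look in node.js with the ws package (an assumption; the close code 4000, the hello message shape, and the version threshold are all made up for illustration):

    const { WebSocketServer } = require("ws");

    const MIN_CLIENT_VERSION = 3; // hypothetical minimum supported version

    const wss = new WebSocketServer({ port: 8080 });

    wss.on("connection", (ws) => {
      ws.on("message", (data) => {
        // Assume the client announces its version in its first message.
        const msg = JSON.parse(data);
        if (msg.type === "hello" && msg.version < MIN_CLIENT_VERSION) {
          // 4000-4999 is the private application range; a conforming
          // implementation would reject an attempt to close with 1002.
          ws.close(4000, "client version too old");
        }
      });
    });

On the browser side, the close handler then sees event.code === 4000 and event.reason === "client version too old", so the client can display an "update required" message.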

Related

Frequent xhr request by socket.io

When I connect to the socket server from the client side (a React app), a repeated request is sent by the socket client every few seconds. Generally the requests are of GET type, and most of the time they stay in pending state. Sometimes the response to one of these requests is just 2.
What do you think causes these repeated requests after connecting or doing anything with the socket?
UPDATE
This problem occurs when I use a namespace. I tried all the solutions I could find, but the problem was not solved.
This is expected behavior when the option used for transport is polling (long-polling).
What happens is that, by default, the transports option is ["polling", "websocket"] (client and server), where the order of the elements matters. So the first connection attempt is made via polling (which is faster to start than websocket), and then (or in parallel; I don't know the working details) a websocket connection is attempted (it takes a little longer to establish but is faster for later communication).
If the websocket connection is successfully established, communication is carried out that way. But if an error occurs, or the connection takes too long to establish, or that transport option is not present in the instance's options, then communication continues to be carried out through polling; those are the various requests you see remaining pending. It is normal for them to remain pending: each one waits until there is an update and can then inform the requester immediately, without the need for several quick requests polling the application's status.
Check the options you set on the instance for this connection to find out whether the websocket transport is enabled. Be careful when using the socket server behind a reverse proxy, as the reverse proxy needs to be properly configured to accept websocket connections; otherwise it won't work.
You can check the websocket requests in the browser inspection, Network tab, by enabling the WS filter.
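For instance, a minimal sketch with the socket.io v4 client (the URL is a placeholder) that skips long-polling entirely, so you can see whether the websocket transport works at all:

    const { io } = require("socket.io-client");

    const socket = io("https://example.com", {
      // Default is ["polling", "websocket"]: polling first, then an upgrade.
      // Listing only "websocket" avoids the pending polling requests, at the
      // cost of losing the fallback when websockets are blocked (e.g. by a
      // misconfigured reverse proxy).
      transports: ["websocket"],
    });

    socket.on("connect_error", (err) => {
      console.log("websocket connection failed:", err.message);
    });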
Here are some additional links for you to read more about:
https://socket.io/docs/v4/how-it-works/
https://socket.io/docs/v4/using-multiple-nodes/
https://socket.io/docs/v4/reverse-proxy/
https://ably.com/blog/websockets-vs-long-polling

Does X11 have a lifesign or constant stream?

I have a fault-tolerant application where an X server requests (by some other mechanism) that an application be started on a remote client, and then receives and displays its X window. Fault tolerance means that the server needs to detect loss of the connection to the client, call a different back-up client, start the application there, and show its window.
My question is whether there exists a mechanism in the X11 protocol that allows an X11 server to reliably detect whether the connection has been broken or not.
Experiments show that when a cable connection is unplugged, it takes some TCP timeout to detect the connection loss at socket level. This is very OS-dependent; in our case it was about 30 minutes, after which the X server eventually closed the window.
So another assumption could be that the X11 stream constantly delivers some commands, and the server could implement logic like this: if the X11 stream does not deliver any traffic for a timeout y (e.g. 3 seconds), we assume the connection is lost, actively close the window, and establish the connection to the fall-back client.
Is the assumption true? I did not see any such statement in the X11-protocol about how to detect connection loss. Is there any explicit lifesign that is regularly transmitted? Or is the assumption valid that there is constant traffic? Or could there be longer periods of inactivity where nothing is transmitted at all while the connection is perfectly up and running?
There is a NoOperation command from the client that could be used for such purpose. But do clients usually implement something like that as a lifesign?
I have a fault tolerant application, where an X Server needs to start an Application...
I don't think an X server can "start an application". Maybe some setup allows something similar to that, but normally it is not so.
...whether there exists a mechanism in the X11 protocol that allows to reliably detect in an X11-Server whether the connection has been broken or not.
No, it does not exist. The X11 protocol runs on top of TCP/IP, which does not directly provide such a heartbeat. I think the assumption is that, if you click or otherwise stimulate an X11 window, the TCP layer will time out or raise another error if the client application is gone.
I did not see any statement in the X11-protocol about how to detect connection loss.
There is a NoOperation command from the client that could be used for such purpose. But do clients usually implement something like that as a lifesign?
Maybe some application uses that NoOperation, but the purpose would be different from what you need. I mean, from the point of view of an application the X11 server is like an extension; the application may have an interest in knowing whether the server is up and working, but the contrary is not true. And anyway, even if the server could detect that the application is gone, there is probably no way to tell the server to launch another application.
Probably a special proxy could be deployed; it could launch the application, monitor the connection (in both directions) and take the required steps in case the application goes away. But then again, who would monitor the proxy application?
First of all, X Protocol relies completely on TCP to send/receive information.
You cannot safely build a timeout-capable transaction on top of TCP to detect a dead peer. TCP is designed to retransmit only those segments that have already been sent but not acknowledged. The X protocol is completely asynchronous, in the sense that you send a request and can receive many replies or events unrelated to that request before you receive its reply. There is no heartbeat mechanism in the X protocol (the NoOperation request can be sent to synchronize operations with the server, and you receive a response for it, but you cannot overuse it, as that slows the X connection down severely; launch any client with the -synchronous option to see this, and see X(7)). You can even have TCP connections alive for years without exchanging a single packet. There is a mechanism, activated by the SO_KEEPALIVE socket option, that makes TCP employ such a heartbeat on a connection that has no data to transmit, but the X11 protocol normally does not make use of it.

You don't post any code, nor a description of how the system is configured. The standard X server never initiates a connection by itself, except when launched specifically to negotiate with an XDMCP server (and this is done over UDP) in order to serve as an X terminal.
From your words it seems you may not know that the roles of server and client are reversed in the X protocol (the client is the remote application that connects to the server to display its output, and the server is the program that controls your display, mouse and keyboard). There is no means for the server to create a new client, so you must be creating this connection by some other means (probably through SSH, though you don't describe it).
By the way, when you say:
Experiments show that when a cable connection is unplugged, it takes some TCP timeout to detect the connection loss at socket level. This is very OS-dependent; in our case it was about 30 minutes, after which the X server eventually closed the window.
That is not OS-dependent. It is precisely the standard behaviour: when there is no traffic to send, no packets are exchanged, so no loss is detected, except if your client (remember, this is the remote application program that wants to show its data on your local server) activates the SO_KEEPALIVE option, and even then several probe losses are required before a lost connection is declared. In your case the amount of time is variable because the timers don't start until some data is sent over the unplugged connection; this is what makes it variable (not OS-dependent).
On the other side, you cannot expect the server to turn on your monitor in case you leave the office and turn it off by mistake or by accident. What is the fault-tolerance specification in that case?
IMHO, as regards the presentation protocol, the application should be ready to show you as much information about the system as possible as soon as you activate the connection (but the connection must be something that is allowed to fail). What matters is the means you develop for the application to be fault-tolerant, even when you are not there to watch the display. Will somebody be notified that no one is looking at the screen? Are you going to detect the absence of operators in that case? Don't take this as a flame, but common sense should prevail here.
In case you need to ensure connectivity to the remote host, you need another means to check for it. I recommend a simple application that pings the remote host and alerts you when it doesn't get a positive result. Or you can open a connection to the server and close it as soon as you get a positive response (the first packet, for example). This leads us to the next step, which is to ensure that some human is looking at the (turned-on) screen of the display :)
For example, you can run a client in parallel to the one you are interested in and force a heartbeat by asking for some server atom name (or a root-window property value) in a loop with some delay. This will make the connection fail, or your client can raise an alert if it doesn't receive the answer within some configurable time.
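Since X11 itself gives you nothing here, the liveness check has to live outside the X connection. A minimal sketch in node.js of the ping-style probe suggested above (host, port, and intervals are placeholders):

    const net = require("net");

    function checkAlive(host, port, timeoutMs, onDead) {
      const sock = net.createConnection({ host, port });
      sock.setTimeout(timeoutMs);
      sock.on("connect", () => sock.end()); // reachable: close immediately
      sock.on("timeout", () => { sock.destroy(); onDead("timeout"); });
      sock.on("error", (err) => onDead(err.message));
    }

    // Probe the remote client every 5 seconds; on failure, switch over.
    setInterval(() => {
      checkAlive("remote-client.example", 6000, 3000, (reason) => {
        console.log("connection considered lost:", reason);
        // ...start the application on the back-up client here...
      });
    }, 5000);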

Communicating with an unsecure device: Security by Abstraction Vs HTTP HTTPS callback

I have a web server with an SSL certificate, and an unsecured device on a GSM/GPRS network (an Arduino MKR GSM 1400). The MKR GSM 1400 library does not provide an SSL server, only an SSL client. I would prefer to use a library if possible; I don't want to write an SSL server class myself. I am considering writing my own protocol, but I'm familiar with HTTP(S), and using it would make writing the interface on the webserver side easier.
The GSM Server only has an SSL Client
I am in control of both devices
Commands are delivered by a text string
Only the webserver has SSL
My C skills are decent at best
I need the SSL server to be able to send commands to the Arduino device, but I want these commands to be secured (the Arduino device opens and closes valves in a building).
The other option would maybe be some sort of PSK, but I wouldn't know where to start with that. Is there an easy function to encrypt and decrypt a "command string"? I also don't want "attackers" to be able to replay commands that I've sent before.
My Basic question is, does this method provide some reasonable level of security? Or is there some way to do this that I'm not thinking of.
While in a perfect world there would be a better approach, you are currently working within the limits of what your tiny system provides.
In this situation I find your approach reasonable: the server simply tells the client using an insecure transport that there is some message awaiting (i.e. sends some trigger message, actual payload does not matter) and the client then retrieves the message using a transport which both protects the message against sniffing and modification and also makes sure that the message actually came from the server (i.e. authentication).
Since the trigger message from the server contains no actual payload (the arrival of the message is itself the payload), an attacker could not modify or fake the message to create insecure behaviour in the client. The worst that could happen is that an attacker blocks the client from getting the trigger messages, or fakes trigger messages even though there is no actual command waiting on the server.
If the last case is seen as a problem, it could be dealt with by a rate limit: if the server did not return any command although the client received a trigger message, then the client waits some minimum time before contacting the server again, no matter whether further trigger messages are received or not. The first case, the attacker being able to block messages from the server, is harder to deal with, since such an attacker is likely able to block any further communication between client and server too; but this is a problem for any kind of communication between client and server.
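A hedged sketch of the client side of this trigger-then-fetch pattern, written in node.js for brevity (on the real device this would be Arduino C; the URL, the rate limit, and executeCommand are all made up):

    const https = require("https");

    let lastFetch = 0;
    const MIN_INTERVAL_MS = 5000; // rate limit against faked triggers

    function executeCommand(cmd) {
      console.log("executing:", cmd); // hypothetical valve-control handler
    }

    // Called whenever a trigger arrives over the insecure channel. The trigger
    // carries no payload; the real command is fetched over TLS, so an attacker
    // can neither read, modify, nor replay commands.
    function onTrigger() {
      const now = Date.now();
      if (now - lastFetch < MIN_INTERVAL_MS) return; // ignore bursts
      lastFetch = now;

      https.get("https://server.example/pending-command", (res) => {
        let body = "";
        res.on("data", (chunk) => (body += chunk));
        res.on("end", () => {
          if (body) executeCommand(body);
        });
      });
    }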

ZeroMQ IPC Unix Domain Socket accessed by nc

I want to connect to a Unix domain socket created by ZeroMQ (IPC transport) via the nc command. I can connect, but when I send some messages, my daemon, which is listening on this socket, does not receive any of them...
I'm using nc like:
nc -U /path/to/socket
Very well, here's a longer version.
ZeroMQ implements a message-queue transport system on top of stream connections like sockets, named pipes, etc. To do this it runs a protocol called ZMTP over the top of the stream, which provides all the message demarcation, communication patterns, and so forth. It also has to deal with protocol errors in order to give itself some resilience.
Comparison to a Web Browser
It's the same idea as a web browser and a web server communicating over a socket using HTTP. HTTP is used to transport HTML files. If you look at the data flowing over the socket, you see the HTML mixed up with the messages involved in running the HTTP protocol. And because HTTP is a text-based protocol, it looks kinda OK to the human eye.
Talking the Same Language
Thus when a program that uses the zmq libraries for communication connects to a socket / named pipe / etc., it expects to exchange data over that connection in the way defined by the ZMTP protocol (in the same way a web browser expects to talk to a server using HTTP). If the program at the other end also uses zmq, then they're both talking the same protocol and everything is good.
Incompatible Protocols
However, if you connect a program that does not itself speak ZMTP, such as a web browser that sends an HTTP request, that request is unlikely to mean anything. The zmq library routines will receive the bytes that make up the HTTP request, attempt to interpret them, fail to understand them, and ultimately reject them as garbage.
Similarly, if the program that uses the zmq library wants to send messages, nothing will happen unless the underlying ZMTP protocol driver is satisfied that it is communicating with something else that talks ZMTP. If anything at all emerges in netcat, it won't look anything like the message you were sending (it will be jumbled up with the bytes that ZMTP itself uses).
Human Equivalent
The equivalent is an Englishman called Bob picking up the phone and dialling the number for his English friend called Alice living in Paris. However, if a Frenchman called Charlie answers the phone by mistake (wrong number), it'll be very difficult for them to exchange information. Meanwhile Eve, who's tapped the phone line, is laughing her head off at the ineptitude of these two people's failed attempt to communicate. (I make sweeping and partly justifiable generalisations about us Englishmen's poor ability to speak any other language).
Way Forward
There's a ZMQ binding available for almost everything, possibly even bash. Whatever it is you're trying to accomplish, it's probably well worthwhile getting a decent ZMQ binding for the programming or scripting language you're using, and using that to provide a proper ZMQ endpoint.
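For example, a hedged sketch with the zeromq npm package (v6 API, assuming the daemon exposes a REP-style socket; the IPC path is the placeholder from the question):

    const zmq = require("zeromq");

    async function main() {
      const sock = new zmq.Request();
      sock.connect("ipc:///path/to/socket");

      await sock.send("hello daemon"); // framed as a proper ZMTP message
      const [reply] = await sock.receive();
      console.log("daemon replied:", reply.toString());
    }

    main();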

Identifying remote disconnection in socket client

How do I find out, from a socket client program, that the remote connection is down (e.g. the server is down)? When I do a recv and the server is down, it blocks if I do not set any timeout. However, in my case I cannot choose a reliable timeout value to get around this, since the recv would then also time out when the server is up but the response genuinely takes longer than the timeout I set.
Unfortunately, ZeroMQ just passes this on to the next layer. So the protocol you are implementing on top of ZeroMQ will have to handle this.
Heartbeats are recommended. Basically, just have one side send a message if the connection is otherwise idle. The other side can treat the absence of such messages as a failure condition and close the connection.
You may wish to modify your higher-level protocols to be more robust. For example, you can submit a command, query its status, and allow the other side to forget about the command. That way, if the connection is lost, you can reconnect and query any outstanding commands. Any commands it doesn't know about, you know didn't get through, and you can resubmit them. Once you get a reply with the result of a command, you can tell the other side that it may now forget the response.
This allows you to keep the connection active while a long-running command is ongoing. Every so often you ask, "is everything okay?", and the other side responds "yes". You can use long polling, where the other side delays its response for a second or so while the command is in progress; this lets it return the result immediately rather than waiting up to a second for your next query.
The specifics depend on your exact requirements, but you must design this correctly into your protocol.
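Since the question is about a plain socket client, here is a hedged heartbeat sketch over a raw TCP socket in node.js (the PING/PONG message format is made up; a real protocol would need proper framing):

    const net = require("net");

    function addHeartbeat(socket, intervalMs, graceMs) {
      let pongTimer = null;

      const ping = setInterval(() => {
        socket.write("PING\n");
        // If no PONG arrives within the grace period, give up.
        pongTimer = setTimeout(() => {
          console.log("peer silent, closing connection");
          socket.destroy();
        }, graceMs);
      }, intervalMs);

      socket.on("data", (chunk) => {
        if (chunk.toString().includes("PONG")) clearTimeout(pongTimer);
      });
      socket.on("close", () => clearInterval(ping));
    }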
If the remote host goes down without sending you a TCP FIN packet, then you have no chance of detecting that directly. You can test this behaviour by firewalling a port after a connection has been established on it: your program will "hang" forever.
However, the Linux kernel supports a mechanism called TCP keepalive, which is meant to close a TCP connection after a given idle timeout. If you can't specify any timeout for your application, then there isn't a reliable way to use it either. A last resort might be to use features of the application protocol (can you name it?); if that protocol offers no features for connection handling, you may have to invent something of your own on top of it.
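If your platform does let you enable it, TCP keepalive is a one-liner. A sketch in node.js (host and delay are placeholders; the probe count and interval remain OS-tunable):

    const net = require("net");

    const sock = net.createConnection({ host: "server.example", port: 4000 });
    sock.setKeepAlive(true, 10000); // enable, first probe after 10 s idle

    sock.on("error", (err) => {
      console.log("connection lost:", err.message); // e.g. ETIMEDOUT
    });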
