I have a problem detecting the loss of a socket connection in a Compact Framework app for a PDA device.
I have a static class with static methods for communication (Connect(), Write(), Disconnect()). It is static so that all forms can call the Write() method.
In the Connect() method I call socket.Connect(ipEndpoint);
But when the device has no Wi-Fi connection, the program halts at this line for about 20 seconds, which is too long. Also, if the user starts the Write() method (saving some data) and the Wi-Fi connection is lost, the user cannot interact with the form and thinks the application has frozen. Since there is no timeout option for a Compact Framework socket connection, what is the best way to control the socket's behavior?
My idea is to show some kind of "communication form" when the socket doesn't respond for 5 seconds, which will try to re-establish the connection. This form will have a graphical indicator (a rotating clock or something like that) to show the user that the program is trying to connect, and an exit button in case the user decides to quit the app. If socket.Connect() succeeds, I will show the last used form to the user.
I assume this has to be done with threads, but since I don't have experience with them, I need help managing this behavior.
You can call Socket.BeginConnect() to launch the connection attempt in the background. You can then specify the callback method that will get invoked when the socket has connected (or timed out). Additionally, to implement your progress bar counting down as it tries to connect, you can do:
IAsyncResult ar = moSocket.BeginConnect(...)
And then you can have your connection form use a timer to count down, checking the status of the connection with:
ar.IsCompleted
Polling is not very efficient, but in this case it works well with your described pop-up connection form.
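Putting those pieces together, here is a minimal sketch of the idea, assuming a static Comm class that owns the socket (the class name, the moEndPoint field and the 5-second limit are placeholders for whatever your app actually uses):

using System;
using System.Net;
using System.Net.Sockets;

static class Comm
{
    static Socket moSocket;
    static IPEndPoint moEndPoint;   // filled in by your existing configuration code

    // Starts the connect in the background and returns the IAsyncResult
    // so the "communication form" can poll ar.IsCompleted from its timer.
    public static IAsyncResult BeginConnect()
    {
        moSocket = new Socket(AddressFamily.InterNetwork,
                              SocketType.Stream, ProtocolType.Tcp);
        return moSocket.BeginConnect(moEndPoint, OnConnected, moSocket);
    }

    // Callback invoked on a worker thread when the connect finishes or fails.
    static void OnConnected(IAsyncResult ar)
    {
        Socket s = (Socket)ar.AsyncState;
        try
        {
            s.EndConnect(ar);   // throws if the connect failed
            // Connected: signal the UI (via Control.Invoke) to close the form.
        }
        catch (SocketException)
        {
            // Not connected: the form's timer can decide to retry or give up.
        }
    }
}

Your connection form's timer can check IsCompleted every few hundred milliseconds and, after roughly 5 seconds without success, switch to the retry/exit UI. Keep in mind that the callback runs on a thread-pool thread, so any UI update must be marshalled back with Control.Invoke.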
I have a fault-tolerant application in which an X server requests that an application be started on a remote client (by some other mechanism), then receives and displays its X window. Fault tolerance means that the server needs to detect the loss of the connection to the client, call a different back-up client, start the application there, and show its window.
My question is whether there is a mechanism in the X11 protocol that allows an X11 server to reliably detect whether the connection has been broken.
Experiments show that when a cable connection is unplugged, it takes some TCP timeout to detect the connection loss at the socket level. This is very OS-dependent. In our case it was about 30 minutes, after which the X server eventually closed the window.
So another assumption could be that the X11 stream constantly delivers commands, and the server could implement logic like this: if the X11 stream does not deliver any traffic for some timeout y (e.g. 3 seconds), we assume the connection is lost, actively close the window, and establish the connection to the fall-back client.
Is this assumption true? I did not see any statement in the X11 protocol about how to detect connection loss. Is there an explicit lifesign that is regularly transmitted? Is the assumption of constant traffic valid? Or could there be longer periods of inactivity in which nothing is transmitted at all while the connection is perfectly up and running?
There is a NoOperation request from the client that could be used for this purpose, but do clients usually implement something like that as a lifesign?
I have a fault-tolerant application, where an X server needs to start an application...
I don't think that an X server can "start an application". Maybe some setup allows something similar, but normally that is not the case.
...whether there is a mechanism in the X11 protocol that allows an X11 server to reliably detect whether the connection has been broken.
No, there is no such mechanism. The X11 protocol runs over TCP/IP, which does not directly provide this kind of "heartbeat". I think the assumption is that if you click or otherwise stimulate an X11 window, the TCP layer will time out or raise some other error if the client application is gone.
I did not see any statement in the X11 protocol about how to detect connection loss.
There is a NoOperation request from the client that could be used for this purpose, but do clients usually implement something like that as a lifesign?
Maybe some application uses NoOperation, but its purpose would be different from what you need. The X11 server is like an extension from the point of view of an application: the application may be interested in knowing whether the server is up and working, but not the other way around. And anyway, even if the server could detect that the application is gone, there is probably no way to tell the server to launch another application.
A special proxy could probably be deployed; it could launch the application, monitor the connection (in both directions), and take the required steps if the application goes away. But then again, who would monitor the proxy application?
First of all, the X protocol relies completely on TCP to send and receive information.
You cannot safely build a timeout-capable transaction to detect a timeout in TCP. TCP is designed to retransmit only those segments that have already been sent but not yet acknowledged. The X protocol is completely asynchronous, in the sense that you send a request and can receive many responses or events unrelated to that request before you receive its response. There is no heartbeat mechanism in the X protocol (the NoOperation request can be sent to synchronize operations with the server and you receive a response for it, but you cannot overuse it, as that severely slows down the X connection; just launch any client with the -synchronous option to see this, see X(7)). You can even have TCP connections that stay alive for years without exchanging a single packet. There is a mechanism, activated by the SO_KEEPALIVE option, that makes TCP employ such a heartbeat on a connection that has no data to transmit, but the X11 protocol normally doesn't make use of it.
You don't post any code, nor a description of how the system is configured. The standard X server never starts a connection by itself, except when launched specifically to negotiate with an XDMCP server (over UDP) in order to act as an X terminal.
From your words, you may not be aware that the roles of server and client are reversed in the X protocol: the client is the remote application that connects to the server to display its output, and the server is the application that controls your display, mouse and keyboard. There is no means for the server to create a new client, so you must be establishing this connection by some other means (probably through SSH, but you don't describe it).
By the way, when you say:
Experiments show that when a cable connection is unplugged, it takes some TCP timeout to detect the connection loss at the socket level. This is very OS-dependent. In our case it was about 30 minutes, after which the X server eventually closed the window.
That is not OS-dependent. It is precisely the standard behaviour: when there is no traffic to send, no packets are exchanged, so no detection is made (unless your client activates the SO_KEEPALIVE option; remember, the client here is the remote application program that wants to show its data on your local server. Even then, several lost probes are required before the connection is declared lost). In your case the amount of time varies because the timers don't start until some data is sent over the unplugged connection; that is what makes it variable, not the OS.
On the other hand, you cannot expect the server to turn your monitor back on if you leave the office and switch it off by mistake or accident. What is the fault-tolerance specification in that case?
IMHO, with regard to the presentation protocol, the application should be ready to show you as much information about the system as possible as soon as you activate the connection (but the connection must be something that is allowed to fail). What matters is the means you develop to make the application fault tolerant even when you are not there to see the display. Will somebody be notified that no one is looking at the screen? Are you going to detect the absence of operators in that case? Don't take this as a flame, but common sense should prevail here.
If you need to ensure that connectivity to the remote host is available, you need another means to check for it. I recommend having a simple application ping the remote host and raise an alert when you don't get a positive result. Alternatively, you can open a connection to the server and close it as soon as you get a positive response (the first packet, for example). That leads us to the next step, which is ensuring that some human is actually looking at the (turned-on) screen of the display :)
For example, you can run a client in parallel to the one you are interested in and force a heartbeat by asking for some server atom name (or a root-window property value) in a loop with some delay. This will make the connection fail, or your client can raise an alert if it doesn't receive the answer within some configurable time.
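To illustrate the "simple application pinging the remote host" idea, here is a rough sketch in C# (the host name and probe intervals are placeholders; any language with an ICMP/ping facility works the same way):

using System;
using System.Net.NetworkInformation;
using System.Threading;

class HostMonitor
{
    static void Main()
    {
        const string host = "remote-client.example";  // placeholder host name
        var ping = new Ping();

        while (true)
        {
            bool alive;
            try
            {
                // 3-second timeout per probe; adjust to taste.
                alive = ping.Send(host, 3000).Status == IPStatus.Success;
            }
            catch (PingException)
            {
                alive = false;
            }

            if (!alive)
            {
                Console.WriteLine("{0}: no reply from {1}, switch to the back-up client",
                                  DateTime.Now, host);
                // ...trigger your fail-over logic here...
            }

            Thread.Sleep(5000);  // probe every 5 seconds
        }
    }
}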
I'm using node.js, websockets/ws.
On my site, sometimes a random client loses its connection without losing connectivity to anything else on the internet. Less than a second later they connect back with a new socket. (There is code hooked to socket.onClose that tries to reconnect to the server.)
On the server side I can't see or log anything wrong. Everything looks like a normal disconnect, same as closing the browser tab.
I am guessing the cause is either socket-related or client-related, but I don't know where to begin debugging this problem.
I have ping/pong messages on a 60-second timer, so that isn't it. The user usually loses the connection while active.
How can I debug this problem and find the reason?
I keep all the session info and data within the socket, which is why I do not want people to lose their connection.
Thanks
I have written a service for handling SIP calls. I want to add a feature that restricts the call duration, either by configuring a fixed time or by passing the allowed call time as a parameter.
Once a SIP call is established, it is generally terminated by the end user's CANCEL or BYE, but is it possible to restrict it to some fixed time before that? For example, if I fix the time at 5 minutes, the call should terminate automatically after 5 minutes even if the end user does not want to end it.
I've looked at the Expires header, which doesn't seem to be helpful here.
One option is to do this at the client. The client can have a configurable timer: start it once the call starts, and terminate the call when it fires.
The other option is to do it on the server side, where the server does the same thing.
Yes, it's possible. It's usually better to do this on the server side, as it's usually tied to charging, for example terminating the call when the user runs out of money.
You need your call to go through a back-to-back user agent (B2BUA) application, which will handle that. The B2BUA starts a timer and, when it expires, is responsible for sending a BYE to both legs to terminate the call. You can do this with Restcomm SIP Servlets; there is an example of a B2BUA at https://github.com/RestComm/sip-servlets/tree/master/sip-servlets-examples/call-forwarding.
If you want to do this on the client, you will need control over the client code and must implement similar logic, i.e. start a timer that sends the BYE when it expires.
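As a rough sketch of that client-side timer (the ISipClient interface and its SendBye() method are hypothetical stand-ins for whatever SIP stack you are actually using):

using System;
using System.Threading;

class CallTimerExample
{
    // Hypothetical wrapper around your SIP stack; replace with its real API.
    public interface ISipClient
    {
        void SendBye();
    }

    // Start this when the call is established (e.g. on the 200 OK to the INVITE).
    public static Timer StartCallTimer(ISipClient client, TimeSpan maxDuration)
    {
        // Fires once after maxDuration; the callback tears the call down.
        return new Timer(_ =>
        {
            Console.WriteLine("Call time limit reached, sending BYE");
            client.SendBye();
        }, null, maxDuration, Timeout.InfiniteTimeSpan);
    }
}

For example, var t = StartCallTimer(sipClient, TimeSpan.FromMinutes(5)); and call t.Dispose() if the user hangs up first, so the timer never fires for a call that is already gone.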
I am using webrtc.io to create the socket connections for my audio/video chat application. I want to preserve all the socket connections so that I can send updates to all the end users when the node.js server is restarted.
I am using MongoDB as the database for this application. Is there any way to store the connections in the database and retrieve them when the server is restarted?
I'm going to give you a common life situation to explain this.
Suppose you have a mobile phone that you cannot make calls from and you can only receive calls.
Someone calls you and you can talk to them; messages pass back and forth over a constant connection. This is better than SMS, because with SMS you could only respond to a message that was sent to you, whereas now you have a constant connection to talk freely on.
In those statements I just described what WebSockets are and how they differ from HTTP. Next I'll apply this to what you are asking.
Now suppose that on this phone, where you can only keep talking on calls you receive from someone else, your battery runs out. You find a power source, plug in, and get your phone working again. Do you expect your phone to suddenly re-establish the call that dropped when your battery ran out?
You do not initiate the connection you are talking about, so you cannot "make the call back" or "re-establish the call". This is strictly a "the customer calls you" scenario.
The best you can do is maintain the session state so that the subsequent re-connection "picks up where you left off". But after a hang-up, the client has to call you back.
For better availability you need to proxy the connection and share it across multiple application server nodes, all with access to the same session state.
How do I find out from a socket client program that the remote connection is down (e.g. the server is down)? When I do a recv and the server is down, it blocks if I do not set a timeout. However, in my case I cannot pick a reliable timeout value to get around this, because the recv would then also time out when the server is up but the response genuinely takes longer than the timeout value I have set.
Unfortunately, ZeroMQ just passes this on to the next layer. So the protocol you are implementing on top of ZeroMQ will have to handle this.
Heartbeats are recommended. Basically, just have one side send a message if the connection is otherwise idle. The other side can treat the absence of such messages as a failure condition and close the connection.
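For illustration, a minimal watchdog sketch of that heartbeat idea (the send and failure callbacks are hypothetical hooks into whatever transport you run on top of ZeroMQ):

using System;
using System.Threading;

// One side sends a small message while idle; the other treats prolonged
// silence as a failure condition and closes/reopens the connection.
class HeartbeatWatchdog
{
    readonly TimeSpan interval = TimeSpan.FromSeconds(5);    // send every 5 s
    readonly TimeSpan deadAfter = TimeSpan.FromSeconds(15);  // 3 missed beats
    DateTime lastReceived = DateTime.UtcNow;

    readonly Action sendHeartbeat;  // hypothetical: sends a "ping" frame
    readonly Action onPeerDead;     // hypothetical: tears down the connection

    public HeartbeatWatchdog(Action sendHeartbeat, Action onPeerDead)
    {
        this.sendHeartbeat = sendHeartbeat;
        this.onPeerDead = onPeerDead;
    }

    // Call this from your receive loop for every message, heartbeat or not.
    public void MessageReceived() { lastReceived = DateTime.UtcNow; }

    public void Run(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            sendHeartbeat();
            if (DateTime.UtcNow - lastReceived > deadAfter)
            {
                onPeerDead();   // silence for too long: declare the peer gone
                return;
            }
            Thread.Sleep(interval);
        }
    }
}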
You may wish to modify your higher-level protocols to be more robust. For example, you can submit a command, query its status, and later allow the other side to forget about the command. That way, if the connection is lost, you can reconnect and query any outstanding commands; any the other side doesn't know about, you know didn't get through, and you can resubmit them. Once you get a reply with the result of a command, you can tell the other side that it may now forget the response.
This also lets you keep the connection active while a long-running command is in progress. Every so often you ask, "Is everything okay?", and the other side responds, "Yes." You can use long polling, where the other side delays its response for a second or so while the command is in progress; that allows it to return the result immediately rather than waiting for your next query.
The specifics depend on your exact requirements, but you must design this correctly into your protocol.
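A bare-bones sketch of that command-tracking idea (the send and acknowledge delegates are hypothetical hooks into your own transport):

using System;
using System.Collections.Generic;

// The sender remembers every command it has issued until the peer
// acknowledges the result, so lost commands can be resubmitted.
class CommandTracker
{
    readonly Dictionary<Guid, string> outstanding = new Dictionary<Guid, string>();

    public Guid Submit(string command, Action<Guid, string> send)
    {
        var id = Guid.NewGuid();
        outstanding[id] = command;
        send(id, command);          // hypothetical transport send
        return id;
    }

    // Call when the result for `id` arrives; tell the peer it may forget it.
    public void Complete(Guid id, Action<Guid> ackToPeer)
    {
        if (outstanding.Remove(id))
            ackToPeer(id);
    }

    // After a reconnect: ask the peer which ids it knows about, resend the rest.
    public void Resynchronize(ICollection<Guid> idsPeerKnows, Action<Guid, string> send)
    {
        foreach (var entry in outstanding)
            if (!idsPeerKnows.Contains(entry.Key))
                send(entry.Key, entry.Value);
    }
}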
If the remote host goes down without sending you a TCP FIN packet, you have no immediate way to detect that. You can test this behaviour by firewalling a port after a connection has been established on it: your program will "hang" forever.
However, the Linux kernel supports a mechanism called TCP keep-alives, which is meant to close a TCP connection after a given timeout. If you can't specify a timeout for your application, there is no reliable way to use it. A last resort might be to use features of the application protocol (can you name it?); if that protocol has no connection-handling features, you may have to invent something of your own on top of it.
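If your client code gives you access to the underlying socket, enabling keep-alives looks roughly like this (sketched in C#; with the recv call from the question the C equivalent is setsockopt with SO_KEEPALIVE):

using System.Net.Sockets;

class KeepAliveExample
{
    static void EnableKeepAlive(Socket socket)
    {
        // Ask the kernel to probe the peer while the connection is idle.
        socket.SetSocketOption(SocketOptionLevel.Socket,
                               SocketOptionName.KeepAlive, true);

        // Note: with default settings the first probe is only sent after
        // about two hours of idleness; the timing is tuned through OS
        // settings (e.g. net.ipv4.tcp_keepalive_time on Linux), not here.
    }
}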