I'm using Faye, which uses websockets.
The rate at which my server sends packets is very consistent. Its average deviation is less than a millisecond. If I run the client on the same machine, the client's average deviation is pretty close too. However, when I run the client on another machine, something bad happens when I up the frame rate.
When I emit at 5 frames per second, the other party receives them at about the right rate. When I increase that to 10fps, the client will receive one frame after 200ms and the next just 1ms later, as if every other packet were bundled with the previous one. When I increase it to 20fps, most of the packets appear bundled this way: I'll get one after 300ms followed by four or so more 1ms apart. It's as if all I can get is 5fps, and asking for more just sends things in bundles at 5fps.
Is it possible to prevent this bundling and get my packets at a consistent rate? Is more than 5fps too much to hope for? Is this a limitation of Faye, of WebSockets, or of TCP in general?
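For context, a minimal sketch of the kind of client-side measurement I'm describing (the Faye endpoint and channel name are hypothetical):

// Minimal sketch: log the gap between consecutive frames on the client.
// The endpoint URL and the /frames channel are made up for illustration.
const faye = require('faye');

const client = new faye.Client('http://example.com:8000/faye');
let last = Date.now();

client.subscribe('/frames', (message) => {
  const now = Date.now();
  console.log('gap since previous frame:', now - last, 'ms');
  last = now;
});

At 10fps the gaps should hover around 100ms, but instead they alternate between ~200ms and ~1ms.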
Related
I am making a tracking system and I would like to know: if I have 1000 cars (clients) transmitting via TCP sockets at an interval of 5 seconds, should each client open a socket, send, then close it, or should the client keep the socket open throughout as it transmits?
It depends on many things. For example, if there is a maximum number of sockets the server can handle at the same time, then you had better close them in case you are going to have lots of requests. On the other hand, if a live, fast connection really matters to you (one request per 5 seconds is moderate, neither too high nor too low in my opinion), then persistent socket connections are better. Note that they also give the server the power to broadcast messages to clients at any time, while with non-persistent connections you can only send messages as the response to each 5-second request.
The tags you used suggest you are trying to choose between WebSocket and HTTP. Ultimately, it really depends on your needs. With HTTP you can serve your logic to more clients, while with WebSocket you have to manage server load a little more carefully, but you gain the ability to push messages to clients, faster tracking, and a handshake that happens only once.
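As an illustration, here is a minimal sketch of the persistent-connection option using the `ws` package (the server URL, message shape, and car ID are assumptions; the 5-second interval comes from the question):

// Minimal sketch: each car keeps one WebSocket open and reports every 5 seconds.
const WebSocket = require('ws');

const socket = new WebSocket('ws://tracker.example.com:8080'); // hypothetical URL

socket.on('open', () => {
  setInterval(() => {
    socket.send(JSON.stringify({ carId: 42, lat: 52.52, lon: 13.40 }));
  }, 5000); // one report per 5 seconds over the same connection
});

// A persistent socket also lets the server push to the car at any time:
socket.on('message', (data) => console.log('server says:', data.toString()));

The alternative, opening and closing a connection per report, trades this push capability for a lower steady-state socket count on the server.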
I have connected two Linux machines over WLAN using netcat in a server-client design, and I am now able to send and receive messages between them. On the server I create a UDP socket:
$ nc -u -l 3333
and on the client side I connect using the destination IP and port number:
$ nc -u 192.168.178.160 3333
This gives a bi-directional connection between server and client. I can't measure it precisely, but I'd guess it is fairly close to real time.
Now I want to extend this functionality and establish a real-time speech connection between the two sides. Recording from a microphone is also feasible through arecord commands, which write the speech data to a .wav file. Transmitting the .wav file is possible, but only after it has been fully recorded, which is of no use here since what is desired is real-time communication. Of course, the received speech has to be played back instantly on the other end.
Does anyone have an idea how to make it real-time?
Fidelity means a large buffer count, to preserve sound continuity despite network latency and latency variation; low sound delay, approximating real time, means a small buffer count, to reduce overall latency. You cannot have both.
IME, you need to keep a maximum of ~250ms of sound buffered at both ends to maintain an illusion of 'real-time' speech. This queue of buffers needs to be emptied at the fixed rate required to reproduce the speech and kept topped up by the network protocol as necessary. If network latency or jitter is too high to keep buffer pools of that size topped up, the buffer pool has to be made larger, the queue longer, and the perceived real-time performance will suffer.
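A minimal sketch of that playout buffer in Node.js (the 16kHz mono S16_LE format, and therefore the byte math, are assumptions):

// Minimal sketch: hold playback until ~250ms of audio is queued, then let
// the player drain at its fixed rate while the network keeps topping it up.
const BYTES_PER_MS = 32;  // 16 kHz * 2 bytes per sample, mono (assumed format)
const TARGET_MS = 250;    // cushion against network latency variation

const backlog = [];
let queuedBytes = 0;
let started = false;

function onAudioChunk(chunk, player) {
  if (started) {
    player.stdin.write(chunk); // steady state: pass straight through
    return;
  }
  backlog.push(chunk);
  queuedBytes += chunk.length;
  if (queuedBytes >= TARGET_MS * BYTES_PER_MS) {
    started = true; // enough cushion built up: start playback
    for (const c of backlog) player.stdin.write(c);
    backlog.length = 0;
  }
}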
The TCP-vs-UDP issue is a red herring on most network connections.
Just be thankful that you are not streaming video:)
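As a concrete starting point, here is a minimal Node.js sketch of the pipeline (the peer address comes from the question; the port, raw S16_LE/16kHz format, and use of arecord/aplay are assumptions). The key change from the .wav approach is streaming raw PCM chunks as they are captured:

// Sender: capture raw PCM with arecord and send each chunk as a UDP datagram.
const { spawn } = require('child_process');
const dgram = require('dgram');

const PEER = '192.168.178.160';
const PORT = 3333;

const mic = spawn('arecord', ['-t', 'raw', '-f', 'S16_LE', '-r', '16000', '-c', '1']);
const out = dgram.createSocket('udp4');
mic.stdout.on('data', (chunk) => out.send(chunk, PORT, PEER));

// Receiver (run on the other machine): play chunks the moment they arrive.
// In practice you would feed them through a playout buffer like the one above.
const player = spawn('aplay', ['-t', 'raw', '-f', 'S16_LE', '-r', '16000', '-c', '1']);
const inSock = dgram.createSocket('udp4');
inSock.on('message', (msg) => player.stdin.write(msg));
inSock.bind(PORT);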
I am developing a node.js service that will handle a number of requests per second, say 1000. Let's imagine that the response data is fairly heavy and the connection to our clients is extremely slow, so it takes ~1s for a response to be sent back to the client.
Question #1 - I imagine that if there were no proxy buffering, it would take node.js 1000 seconds to send back all the responses, since sending is a blocking operation. Isn't it?
Question #2 - How do nginx buffers (and buffers in general) work? Would I be able to have all 1000 responses read into buffers (provided RAM is not a problem) and only then flushed to the clients? What are the limits of proxy_buffers? Can I set the number of buffers to 1000 at 1K each?
The goal is to flush all the responses out of node.js as soon as possible so as not to block it, and to have some other system deliver them.
Thanks!
Of course, sending the response is a non-blocking operation. Node simply hands a chunk to the network driver, leaving all the remaining work to your OS.
If sending the response were a blocking operation, a single PC with an artificially crippled network connection would be enough to DoS any Node-based service.
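As a rough illustration (the port and payload size are arbitrary), the handler returns immediately no matter how slowly the client drains the socket:

// Minimal sketch: res.end() queues the data and returns right away,
// so a slow client cannot stall the event loop.
const http = require('http');

http.createServer((req, res) => {
  const started = Date.now();
  res.end(Buffer.alloc(1024 * 1024)); // hand 1 MB to Node and the OS
  // Logs ~0ms even if the client takes seconds to read the response:
  console.log('handler returned after', Date.now() - started, 'ms');
}).listen(8080);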
As a hypothetical example, let's say that I wanted to make an application that displays people's Twitter networks. I would provide an API that would allow a client to query a single username. That user's top x tweets would be sent to the client. Then, each person mentioned by the initial user would be scanned. Their top x tweets would be sent to the client. This process would continue recursively, breadth-first, until a pre-defined depth was reached. The client would receive the data in real time, displaying statistics such as the number of users scanned, the number of known users remaining to scan, and a growing list of the tweet data. None of the processing is complicated (regexes over small amounts of text), but many, many network requests would be spawned from a single initial request.
I really want the fantastic realtime capabilities of node.js with socket.io, but I feel like this is an abuse of those technologies - they're not meant for heavy server-side lifting. Is there a more appropriate toolset for what I am trying to accomplish, or a particular way to use these tools to that end? Milewise is doing something similar-ish, but I think that my application would consume significantly more network resources than theirs.
Thanks.
The best network transport you can get on the web right now is WebSockets, which offer a persistent, bi-directional, real-time connection between server and client. Although not every browser supports them, socket.io gives you a couple of fallback solutions, which may however decrease network performance compared to WebSockets, as stated in this article:
During making connection with WebSocket, client and server exchange data per frame which is 2 bytes each, compared to 8 kilo bytes of http header when you do continuous polling. ... Reducing kilobytes of data to 2 bytes... and reducing latency from 150ms to 50ms is far more than marginal. In fact, these two factors alone are enough to make WebSocket seriously interesting to Google.
Apart from the network transport, other things may also be important, for example how you fetch, format, and process the data on the server side. In node.js, heavy CPU-bound computations may block the processing of other asynchronous operations, so this kind of work should be dispatched to separate threads or processes to prevent blocking.
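For instance, a minimal sketch using Node's worker_threads module (the tweet shape and the mention regex are hypothetical):

// Minimal sketch: run the regex scan in a worker thread so the event loop
// stays free to emit real-time progress events (e.g. via socket.io).
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  // Main thread: offload a batch of tweets and await the extracted mentions.
  function scanTweets(tweets) {
    return new Promise((resolve, reject) => {
      const worker = new Worker(__filename, { workerData: tweets });
      worker.on('message', resolve);
      worker.on('error', reject);
    });
  }
  scanTweets([{ text: 'hello @alice and @bob' }])
    .then((mentions) => console.log('mentions:', mentions));
} else {
  // Worker thread: the CPU-bound scan happens here without blocking the server.
  const mentions = workerData.flatMap((t) => t.text.match(/@\w+/g) || []);
  parentPort.postMessage(mentions);
}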
Small question: how can I calculate the ping of a WebSocket connection?
The server is set up using Node.js and node-websocket-server, if that matters at all.
There are a few ways. The one offered by Raynos is wrong, because client time and server time are different and you cannot compare them.
The solution of sending a timestamp is decent, but it has one issue: if the server logic makes decisions and calculations based on ping, then sending a timestamp carries the risk that the client software or a MITM will modify it, giving the server skewed results.
A much better way is to send the client a packet with a unique ID, not an incrementing number but a randomized one, and have the server expect a "PONG" message carrying the same ID from the client.
The ID should be a fixed size; I recommend 32 bits (an int).
That way, the server sends a "PING" with a unique ID and stores the timestamp of the moment the message was sent, then waits until it receives a "PONG" with the same ID from the client, and calculates the round-trip latency from the stored timestamp and the time the "PONG" arrived.
Don't forget to implement a timeout, so that a lost PING/PONG packet does not stall the latency check.
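A minimal sketch of this scheme (JSON framing over an already-established socket and the 5-second timeout are assumptions):

// Minimal sketch: PING with a random 32-bit ID, matched against the PONG.
const crypto = require('crypto');

const pending = new Map(); // id -> { sentAt, timer }

function sendPing(socket) {
  const id = crypto.randomBytes(4).readUInt32BE(0); // randomized 32-bit ID
  const timer = setTimeout(() => pending.delete(id), 5000); // drop lost pings
  pending.set(id, { sentAt: Date.now(), timer });
  socket.send(JSON.stringify({ type: 'PING', id }));
}

function onMessage(raw) {
  const msg = JSON.parse(raw);
  if (msg.type === 'PONG' && pending.has(msg.id)) {
    const { sentAt, timer } = pending.get(msg.id);
    clearTimeout(timer);
    pending.delete(msg.id);
    console.log('round-trip latency:', Date.now() - sentAt, 'ms');
  }
}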
WebSockets also have a special packet opcode called PING, but the example above does not use that feature. Read the official document describing this opcode; it may be helpful if you are implementing your own WebSocket protocol on the server side: https://www.rfc-editor.org/rfc/rfc6455#page-37
To calculate the latency you really should complete the round trip. Use a ping message that carries a timestamp. When one side receives a ping, it should change it to a pong (or gnip or whatever) but keep the original timestamp, and send it back to the sender. The original sender can then compare that timestamp to the current time to see what the round-trip latency is. If you need the one-way latency, divide by 2. The reason you need to do it this way is that, without some very sophisticated time-skew algorithms, the clock on one host is not comparable to the clock on another at time deltas this small.
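A minimal sketch of that round trip (the JSON message format is an assumption):

// Sender side: stamp the ping, then read the echoed timestamp back.
function sendPing(socket) {
  socket.send(JSON.stringify({ type: 'ping', ts: Date.now() }));
}

function onSenderMessage(raw) {
  const msg = JSON.parse(raw);
  if (msg.type === 'pong') {
    const roundTrip = Date.now() - msg.ts; // both timestamps from the same clock
    console.log('round trip:', roundTrip, 'ms; one-way ~', roundTrip / 2, 'ms');
  }
}

// Receiver side: echo the ping back unchanged except for the type.
function onReceiverMessage(socket, raw) {
  const msg = JSON.parse(raw);
  if (msg.type === 'ping') {
    socket.send(JSON.stringify({ type: 'pong', ts: msg.ts }));
  }
}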
WebSockets have a ping-type message which the server can respond to with a pong-type message. See this for more info about WebSockets.
You can send a request over the WebSocket with Date.now() as data and compare it to Date.now() on the server.
This gives you the time difference between sending the packet and receiving it plus any handling time on either end.
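A minimal sketch of that approach (the message shape is an assumption; note the caveat above that this compares two different clocks):

// Client: stamp the message with the local send time.
function sendStamped(socket) {
  socket.send(JSON.stringify({ sentAt: Date.now() }));
}

// Server: compare against its own clock on receipt. Only meaningful
// if the client and server clocks are synchronized.
function onStamped(raw) {
  const { sentAt } = JSON.parse(raw);
  console.log('transit + handling time:', Date.now() - sentAt, 'ms');
}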