I'm currently working on a file transfer system using socket.io and the HTML5 File API.
https://github.com/xblaster/Nodjawnloader (stable branch)
The main problem I have is with huge files. Socket.io sends me all the packets in one huge transfer chunk, and the Google Chrome JavaScript VM simply crashes when it receives around 70 MB of packets.
Can I limit the size of socket.io chunks for xhr-polling or JSONP transports?
There isn't a mechanism for limiting the packet size on the XHR or JSONP transports, but there is nothing stopping you from splitting the file up yourself and sending it in chunks. You can then reassemble it on the client side.
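A minimal sketch of that idea, assuming a Node.js sender and a browser receiver (the 'file-chunk' / 'file-end' event names and the 64 KiB chunk size are made up for the example, and it skips per-chunk acknowledgements):

    // Server (Node.js): stream the file in ~64 KiB pieces instead of one huge emit.
    const fs = require('fs');

    function sendFile(socket, path) {
      const stream = fs.createReadStream(path, { highWaterMark: 64 * 1024 });
      stream.on('data', (chunk) => socket.emit('file-chunk', chunk));
      stream.on('end', () => socket.emit('file-end'));
    }

    // Client (browser): collect the chunks and rebuild the file as a Blob.
    const parts = [];
    socket.on('file-chunk', (chunk) => parts.push(chunk));
    socket.on('file-end', () => {
      const blob = new Blob(parts);          // the reassembled file
      const url = URL.createObjectURL(blob); // e.g. point a download link at this
    });

Depending on your socket.io version you may need to base64-encode each chunk instead of sending a raw Buffer, and with per-chunk acknowledgements you could keep only a few chunks in flight at a time.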
I'm writing an audio streaming server, similar to Icecast, and I'm running into a problem with streaming audio files. Proxying audio works fine (an audio source connects and sends audio in real time, which is then transmitted to clients over HTTP), but when I try to stream an audio file it goes by too quickly: clients end up with the entire audio file in their local buffer. I want them to only have a few tens of seconds in their local buffer.
Essentially, how can I slow down the sending of an audio file over HTTP?
The files are all MP3. I've managed to get it pretty much working by experimenting with hardcoded thread delays and so on, but that's not a sustainable solution.
If you're sticking with HTTP, you could use chunked transfer encoding and delay sending the packets/chunks. This would indeed be similar to a hardcoded thread::sleep, but you could use an event loop to decide when to send the next chunk instead of pausing the thread.
You might run into timing issues, though; maybe your sleep logic is causing longer delays than the runtime of the song. YouTube has logic similar to what you're talking about: it breaks videos into multiple HTTP requests, and the frontend client requests a new chunk when its buffer runs low. Breaking the file into multiple HTTP responses and then reassembling them at the client might have the characteristics you're looking for.
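As a sketch of that event-loop approach, shown here in Node.js purely to illustrate the pacing (the 192 kbps constant bitrate, the chunk size, and the file name are assumptions):

    const fs = require('fs');
    const http = require('http');

    const BYTES_PER_SEC = 192000 / 8; // assume a 192 kbps constant-bitrate MP3
    const CHUNK_SECONDS = 2;          // send roughly 2 seconds of audio per tick

    http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'audio/mpeg' }); // chunked by default in Node
      const stream = fs.createReadStream('song.mp3', {
        highWaterMark: BYTES_PER_SEC * CHUNK_SECONDS,
      });

      stream.on('data', (chunk) => {
        res.write(chunk);
        stream.pause();                                          // stop reading ahead...
        setTimeout(() => stream.resume(), CHUNK_SECONDS * 1000); // ...until the next tick
      });
      stream.on('end', () => res.end());
    }).listen(8000);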
You could simply implement the HTTP Range header and allow the client to request only a specific range of the MP3 file: https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests
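A bare-bones sketch of serving byte ranges, again in Node.js for brevity (the file name is a placeholder and malformed ranges aren't validated):

    const fs = require('fs');
    const http = require('http');

    http.createServer((req, res) => {
      const path = 'song.mp3';               // placeholder file name
      const size = fs.statSync(path).size;
      const match = /bytes=(\d+)-(\d*)/.exec(req.headers.range || '');

      if (!match) {
        // No Range header: serve the whole file.
        res.writeHead(200, { 'Content-Type': 'audio/mpeg', 'Content-Length': size, 'Accept-Ranges': 'bytes' });
        fs.createReadStream(path).pipe(res);
        return;
      }

      const start = Number(match[1]);
      const end = match[2] ? Number(match[2]) : size - 1;
      res.writeHead(206, {
        'Content-Type': 'audio/mpeg',
        'Content-Range': `bytes ${start}-${end}/${size}`,
        'Content-Length': end - start + 1,
        'Accept-Ranges': 'bytes',
      });
      fs.createReadStream(path, { start, end }).pipe(res);
    }).listen(8000);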
The easiest method (by far) would be to have the client request chunks of the audio file on demand. std::net::TcpStream (which is what you said you're using) doesn't have a method to throttle the transfer rate, so you don't have many options for limiting the rate on the streaming backend short of using hard-coded thread delays.
As an example, you can have your client store a segment of audio, and when the user listening to the audio reaches a certain point before the end of the segment (or skips ahead), the client makes a request to the server to fetch the relevant segment.
This is similar to how real-world streaming services (like YouTube) work because, as you said, it would be a bad idea to store the entire file client-side.
I see this header present in both the request and the response:
Sec-WebSocket-Extensions: permessage-deflate
and yet the data sent over the socket is still the same (verified for both large and small payloads).
Please clarify the following for me:
The data sent in the frames of the socket should be compressed, right?
Will I be able to see the compressed data in Chrome's developer tools, under the Frames tab of the socket?
Yes, unless the original message size is below 1024 bytes (which is the default threshold that engine.io uses to decide whether or not a message should be compressed).
It doesn't look like it (it's a protocol-level extension, so I think Chrome performs the decompression transparently, just like it does for compressed HTTP responses).
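Regarding the 1024-byte threshold mentioned above: if you want smaller messages compressed as well, the threshold can be lowered when creating the server. A sketch (the exact option shape can differ between socket.io/engine.io versions, so treat it as an assumption to check against your version's docs):

    const http = require('http');
    const socketio = require('socket.io');

    const server = http.createServer().listen(3000);

    // perMessageDeflate is passed down to engine.io / ws; `threshold` is the
    // minimum payload size (in bytes) above which messages get compressed.
    const io = socketio(server, {
      perMessageDeflate: { threshold: 0 }, // compress even small messages
    });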
I'm trying to reduce socket.io bandwidth when using websockets. I switched to binary data, but looking in the browser developer console, the packets are sent as:
[ 'type of the packet (first argument of .emit)', associated data ]
I'm using only one packet type, so this causes unnecessary overhead: useless bytes are sent, and the whole thing is JSON-encoded for no reason.
How can I get rid of the packet type and just send raw data?
socket.io is an abstraction on top of WebSocket. In order to support the features it provides, it adds some overhead to the messages. The message name is one such piece of that overhead, since socket.io is a messaging system, not just a packet delivery system.
If you want to squeeze every last byte out of the transport, then you probably need to drop socket.io and use a plain WebSocket, where you control more of the contents of each packet (though you will have to reimplement some of the things socket.io does for you).
With socket.io in node.js, you can send binary by sending an ArrayBuffer or Buffer. In the browser, you can send binary by sending an ArrayBuffer or Blob.
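For example, a minimal sketch where the 'raw' event name is arbitrary and `io`/`socket` are assumed to be your existing server instance and browser-side client:

    // Server (Node.js): emitting a Buffer makes socket.io send a binary frame.
    io.on('connection', (socket) => {
      socket.emit('raw', Buffer.from([0x01, 0x02, 0x03]));
    });

    // Browser: send an ArrayBuffer (or a Blob) the same way; binary data
    // arrives as an ArrayBuffer in the handler.
    socket.emit('raw', new Uint8Array([4, 5, 6]).buffer);
    socket.on('raw', (data) => console.log(new Uint8Array(data)));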
I'm currently studying for a project involving a web server and some Raspberry Pis.
The challenge is basically to watch each Raspberry Pi's "status" on a web interface.
The Raspberry Pis are connected to the Internet through a GSM connection (mostly 3G).
I'm developing with Node.js on both the clients and the server, and I'd like to use WebSockets through socket.io in order to watch the Raspberry Pi connection status (actually, this is more like watching each Raspberry Pi's ability to upload data through my application), dealing with "connected" and "disconnected" events.
Would an always-alive WebSocket connection be reliable for such a use-case?
Are WebSockets designed for (or reliable for) staying open?
Since this is a hard-to-test situation, does anyone know of a data-consumption estimate for an always-alive WebSocket?
If I'm going about this the wrong way, has anyone worked on such a use-case via another reliable approach?
Would an always-alive WebSocket connection be reliable for such a use-case? Are WebSockets designed for (or reliable for) staying open?
Yes, WebSocket was designed to stay open, and yes, it's reliable for your use-case; a WebSocket connection is just a TCP connection that transmits data in frames.
Since this is a hard-to-test situation, does anyone know of a data-consumption estimate for an always-alive WebSocket?
As I wrote, data in WebSocket connections is transmitted in frames; each frame has a header and a payload. Data sent from the client to the server is always masked, which adds 4 bytes (the masking key) to each frame. The length of the header depends on the payload length:
2-byte header for payloads up to 125 bytes
4-byte header for payloads up to 65,535 bytes
10-byte header for payloads up to 2^64-1 bytes (rarely used)
Base Framing Protocol: https://www.rfc-editor.org/rfc/rfc6455#section-5.2
To keep the connection open, the server sends ping frames at a fixed interval (it depends on the implementation, usually ~30 seconds). Ping frames are 2-127 bytes long, usually 2 bytes (only the header, without payload), and the client responds with pong frames which are also 2-127 bytes long.
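As a very rough back-of-the-envelope estimate (every number here is an assumption: a 2-byte ping and a 2-byte pong every 30 seconds, each in its own TCP/IPv4 packet with ~40 bytes of TCP/IP headers, plus one bare ACK for each):

    const PING_INTERVAL_SEC = 30;
    const WS_FRAME_BYTES = 2;       // ping/pong frame: header only, no payload
    const TCP_IP_HEADER_BYTES = 40; // IPv4 (20) + TCP (20), no options

    // ping packet + pong packet + two bare ACKs
    const bytesPerExchange = 2 * (WS_FRAME_BYTES + TCP_IP_HEADER_BYTES)
                           + 2 * TCP_IP_HEADER_BYTES;           // = 164 bytes
    const exchangesPerDay = (24 * 60 * 60) / PING_INTERVAL_SEC; // = 2880
    console.log(`~${Math.round(bytesPerExchange * exchangesPerDay / 1024)} KiB/day`); // ≈ 461 KiB/day idle

So an idle connection costs on the order of half a megabyte per day before any GSM/PPP-level overhead and retransmissions; the status data you actually push will usually dominate.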
I am developing a Node.js service that will handle a number of requests per second, say 1000. Let's imagine that the response data weighs a bit, the connection to our clients is extremely slow, and it takes ~1 s for a response to be sent back to the client.
Question #1 - I imagine that if there were no proxy buffering, it would take Node.js 1000 seconds to send back all the responses, as this is a blocking operation, isn't it?
Question #2 - How do nginx buffers (and buffers in general) work? Would I be able to receive all 1000 responses into the buffer (provided RAM is not a problem) and only then flush them to the clients? What are the limits of proxy_buffers? Can I set the number of buffers to 1000 at 1K each?
The goal is to flush all the responses out of Node.js as soon as possible, in order not to block it, and have some other system deliver them.
Thanks!
Of course, sending the response is a non-blocking operation. Node simply hands a chunk to the network driver, leaving all the remaining work to your OS.
If sending the response were a blocking operation, it would take only a single PC with its network artificially crippled to DoS any Node-based service.
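A minimal sketch of what that looks like with the plain Node.js http module (chunk size and port are arbitrary): write() hands data to the socket buffer and returns immediately, and when a slow client's buffer fills up it returns false, so you wait for 'drain' instead of blocking the event loop.

    const http = require('http');

    const CHUNK = Buffer.alloc(64 * 1024, 'x'); // 64 KiB of dummy payload
    const TOTAL_CHUNKS = 16;                    // ~1 MiB per response

    http.createServer((req, res) => {
      let sent = 0;

      const writeMore = () => {
        while (sent < TOTAL_CHUNKS) {
          sent++;
          // write() never blocks: it either hands the chunk to the kernel
          // or queues it in user space and returns false (backpressure).
          if (!res.write(CHUNK)) {
            // This client's socket buffer is full; resume on 'drain' so the
            // other slow clients keep being served in the meantime.
            res.once('drain', writeMore);
            return;
          }
        }
        res.end();
      };

      writeMore();
    }).listen(3000);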