I see the headers present in both the request and the response
Sec-WebSocket-Extensions: permessage-deflate
and yet, the data sent over the socket is still the same (verified for both large and small payloads).
Please clarify the following for me:
The data sent in the socket's frames should be compressed, right?
Will I be able to see the compressed data in Chrome's developer tools, under the Frames tab of the socket?
Yes, unless the original message size is below 1024 bytes (which is the default threshold that engine.io uses to determine if a message should be compressed or not).
It doesn't look like it (it's a protocol option so I think that Chrome will perform the decompression transparently, just like compressed HTTP responses).
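If you want to confirm compression is happening for small test messages, lowering that threshold is one option. A minimal sketch, assuming a socket.io server version that forwards the perMessageDeflate option down to engine.io (httpServer is your existing HTTP server):

```js
const io = require('socket.io')(httpServer, {
  perMessageDeflate: {
    threshold: 0 // compress every message regardless of size (default: 1024)
  }
});
```

With the threshold at 0, even tiny payloads go through the deflate extension, which makes it easier to verify on the wire (e.g. with Wireshark) that compression is actually applied.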
Is there a way to log or view the actual byte stream being sent to the server when using either the grpc or @grpc/grpc-js clients in Node.js?
I'm working with an opaque gRPC server that accepts my bytes when I stream them, but doesn't do what it's supposed to do. I'd like to view the actual bytes being sent to the server, as we suspect a problem with how the gRPC libraries serialize 64-bit integers.
The GRPC_VERBOSITY=debug GRPC_TRACE=tcp,http,api,http2_stream_state env variables for the native grpc module haven't been helpful in this specific case -- they show part of one byte stream, but not the whole of it.
Even a "here's the place in the code where the serialization happens" would be useful.
The GRPC_VERBOSITY setting there is correct. If you are using TLS, you can see all of the data that is sent and received with GRPC_TRACE=secure_endpoint. If you are using plaintext connections, you can instead see it with GRPC_TRACE=tcp. In both cases, you will need to pick the data you are looking for out of the HTTP/2 framing, and it may show compressed messages, which would be essentially impossible to interpret.
Alternatively, if your setup allows it, you may want to try Wireshark. It should be able to handle the HTTP/2 framing for you, and I believe it has plugins to handle gRPC traffic specifically.
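For completeness, a sketch of running that with @grpc/grpc-js; setting the variables in the shell before launching Node is the safest route, and whether assigning process.env from inside the script works early enough is an assumption:

```js
// Equivalent to: GRPC_VERBOSITY=DEBUG GRPC_TRACE=tcp node client.js
// (use GRPC_TRACE=secure_endpoint instead when connecting over TLS)
process.env.GRPC_VERBOSITY = 'DEBUG';
process.env.GRPC_TRACE = 'tcp';

// Load the library only after the variables are set
const grpc = require('@grpc/grpc-js');
// ... build the client as usual; raw reads/writes now appear in the logs
```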
I'm writing an audio streaming server, similar to Icecast, and I'm running into a problem with streaming audio files. Proxying audio works fine (an audio source connects and sends audio in real time, which is then transmitted to clients over HTTP), but when I try to stream an audio file it goes by too quickly: clients end up with the entire audio file in their local buffer. I want them to have only a few tens of seconds in their local buffer.
Essentially, how can I slow down the sending of an audio file over HTTP?
The files are all MP3. I've managed to get it pretty much working by experimenting with hardcoded thread delays, but that's not a sustainable solution.
If you're sticking with HTTP, you could use chunked transfer encoding and delay sending the packets/chunks. This would indeed be similar to a hardcoded thread::sleep, but you could use an event loop to determine when to send the next chunk instead of pausing the thread.
You might run into timing issues, though; maybe your sleep logic causes longer delays than the runtime of the song. YouTube has similar logic to what you're talking about: it looks like they break videos into multiple HTTP requests, and the frontend client requests a new chunk when the buffer runs low. Breaking the file into multiple HTTP requests and then reassembling them at the client might have the characteristics you're looking for.
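To make the event-loop idea concrete, here's a minimal sketch (in Node.js for brevity; the same pacing applies in any event-loop runtime). It assumes a constant-bitrate 128 kbps file, and song.mp3, the port, and the chunk math are placeholders:

```js
const http = require('http');
const fs = require('fs');

// 128 kbps MP3 is roughly 16 KB of audio per second (constant bitrate assumed)
const BYTES_PER_SECOND = 16 * 1024;
const CHUNK_SECONDS = 2; // send about two seconds of audio per write

http.createServer((req, res) => {
  // An HTTP/1.1 response without Content-Length uses chunked transfer encoding
  res.writeHead(200, { 'Content-Type': 'audio/mpeg' });
  const stream = fs.createReadStream('song.mp3', {
    highWaterMark: BYTES_PER_SECOND * CHUNK_SECONDS
  });
  stream.on('data', (chunk) => {
    res.write(chunk);
    stream.pause();
    // schedule the next chunk on the event loop instead of sleeping a thread
    setTimeout(() => stream.resume(), CHUNK_SECONDS * 1000);
  });
  stream.on('end', () => res.end());
}).listen(8000);
```

The drift from setTimeout accumulates over a long file, so a more careful version would compute each delay from the bytes actually sent versus elapsed wall-clock time.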
You could simply implement the HTTP Range header and allow the client to request only a specific range of the MP3 file. https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests
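A rough sketch of what honoring Range requests could look like on the server side (Node.js here; the file name and port are placeholders):

```js
const http = require('http');
const fs = require('fs');

const FILE = 'song.mp3'; // placeholder

http.createServer((req, res) => {
  const size = fs.statSync(FILE).size;
  const match = /^bytes=(\d+)-(\d*)$/.exec(req.headers.range || '');
  if (!match) {
    // No Range header: advertise support and send the whole file
    res.writeHead(200, { 'Accept-Ranges': 'bytes', 'Content-Length': size });
    fs.createReadStream(FILE).pipe(res);
    return;
  }
  const start = Number(match[1]);
  const end = match[2] ? Number(match[2]) : size - 1; // end is inclusive
  res.writeHead(206, { // 206 Partial Content
    'Content-Type': 'audio/mpeg',
    'Content-Range': `bytes ${start}-${end}/${size}`,
    'Content-Length': end - start + 1
  });
  fs.createReadStream(FILE, { start, end }).pipe(res);
}).listen(8000);
```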
The easiest method (by far) would be to have the client request chunks of the audio file on demand. std::net::TcpStream (which is what you said you're using) doesn't have a method to throttle the transfer rate, so you don't have many options for limiting the streaming backend short of using hard-coded thread delays.
As an example, you can have your client store a segment of audio, and when the user listening to the audio reaches a certain point before the end of the segment (or skips ahead), the client makes a request to the server to fetch the relevant segment.
This is similar to how real-world streaming services (like YouTube) work, because, as you said, it would be a bad idea to store the entire file client-side.
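Sketching the client side of that idea, assuming the server honors Range requests as in the earlier snippet (the byte math again presumes a constant 128 kbps stream, and /song.mp3 is a placeholder):

```js
const BYTES_PER_SECOND = 16 * 1024; // ~128 kbps
let offset = 0;

// Call this whenever the local buffer drops below a few tens of seconds
async function fetchNextSegment(seconds = 10) {
  const end = offset + seconds * BYTES_PER_SECOND - 1;
  const res = await fetch('/song.mp3', {
    headers: { Range: `bytes=${offset}-${end}` }
  });
  offset = end + 1;
  return res.arrayBuffer(); // hand this to the playback/buffering code
}
```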
So WebRTC uses UDP, and it works great if you are doing some video streaming: if you lose a few frames, that's okay. But I wonder how that works when sending files like pictures.
The main problem is that UDP does not guarantee delivery the way TCP does, and just by missing a packet you could end up with a corrupted file.
So how can you send pictures reliably between browsers and ensure that the files arrive whole?
You can use a data channel to transfer files; it provides an abstraction with reliable, ordered delivery by default. See https://webrtc.github.io/samples/src/content/datachannel/filetransfer/ for a sample.
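For illustration, a minimal sender, assuming channel is an already-negotiated RTCDataChannel (reliable and ordered by default) and file comes from an <input type="file">; the 16 KB chunk size mirrors the linked sample:

```js
const CHUNK_SIZE = 16 * 1024;

function sendFile(channel, file) {
  let offset = 0;
  const reader = new FileReader();
  const readSlice = () =>
    reader.readAsArrayBuffer(file.slice(offset, offset + CHUNK_SIZE));
  reader.onload = () => {
    channel.send(reader.result); // chunks arrive intact and in order
    offset += reader.result.byteLength;
    if (offset < file.size) readSlice();
  };
  readSlice();
}
```

A more robust sender would also watch channel.bufferedAmount to avoid queueing data faster than the channel can drain it.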
I'm trying to reduce socket.io bandwidth when using WebSockets. I switched to binary data, but looking in the browser developer console, the packets are sent as:
[ 'type of the packet (first argument of .emit)', associated data ]
I'm using only one packet type, so this causes unnecessary overhead: useless bytes are sent, and the whole thing is JSON-encoded for no reason.
How can I get rid of the packet type and just send raw data?
socket.io is an abstraction on top of WebSocket. In order to support the features it provides, it adds some overhead to the messages. The message name is one such piece of that overhead, since socket.io is a messaging system, not just a packet delivery system.
If you want to squeeze every byte out of the transport, then you probably need to drop socket.io and use a plain WebSocket, where you control more of the contents of each packet (though you will have to reimplement some of the things socket.io does for you).
With socket.io in node.js, you can send binary by sending an ArrayBuffer or Buffer. In the browser, you can send binary by sending an ArrayBuffer or Blob.
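As a sketch of the plain-WebSocket route, using the ws package on the server (an assumption; any WebSocket library works), each frame carries exactly the bytes you write:

```js
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    // `data` is a raw Buffer: no event name, no JSON wrapper,
    // so any message typing or framing is entirely up to you
  });
  socket.send(Buffer.from([0x01, 0x02, 0x03])); // one 3-byte binary frame
});
```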
I'm actually working on a file transfer system using socket.io and the HTML5 File API.
https://github.com/xblaster/Nodjawnloader (stable branch)
The main problem I have is with huge files. Socket.io sends me all the packets in one huge transfer chunk, and the Google Chrome JavaScript VM just crashes when it receives around 70 MB of packets.
Can I limit the size of socket.io chunks for the xhr-polling or jsonp-polling transports?
There isn't a mechanism for limiting the packet size on XHR or JSONP, but there is nothing stopping you from splitting up the file yourself and sending chunks. Then you can reassemble on the client side.
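A hypothetical client-side chunker along those lines (the event names and the 512 KB chunk size are made up for illustration, and it assumes a socket.io version that accepts binary payloads):

```js
const CHUNK_SIZE = 512 * 1024; // keep each emit far below the size that crashed

function upload(socket, file) {
  let offset = 0;
  const reader = new FileReader();
  const readNext = () =>
    reader.readAsArrayBuffer(file.slice(offset, offset + CHUNK_SIZE));
  reader.onload = () => {
    socket.emit('file-chunk', { offset, total: file.size, data: reader.result });
    offset += CHUNK_SIZE;
    if (offset < file.size) readNext();
    else socket.emit('file-done', { name: file.name });
  };
  readNext();
}
```

On the receiving side you reassemble the chunks by offset once 'file-done' arrives.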