I am using Node.js to communicate over a socket in JSON. One of the operations sends a big file over the socket in base64 format. As a result, the message is split into packets and triggers multiple data events on the client, which I handle with a buffer on the client side. On the server:
var response = {
    file: fs.readFileSync("FileName", "base64")
};
socket.write(JSON.stringify(response));
On the client side, using a buffer
var _buffer = "";
cleartextStream.on("data", function (data) {
    console.log("Received data");
    try {
        data = JSON.parse(data);
        _buffer = "";
        events.trigger(data.event, data);
    }
    catch (e) {
        console.log("Incomplete message received. Building buffer");
        _buffer += data;
        if (/\}$/.test(data)) { // tests for the JSON boundary
            data = JSON.parse(_buffer);
            _buffer = "";
            events.trigger(data.event, data);
        }
    }
});
This works fine for me for now. The only problem I anticipate is if, while the file is being sent, some other event triggers a write on the socket:
t=0 Start sending file
t=5 Still sending file
t=6 Another event uses socket.write to start sending message
t=7 still sending file
t=8 Another event sending message
This will result in garbled messages. So does socket.write block while sending a single message, or will it allow other methods to use socket.write even before the transmission is completed?
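For illustration, one way to make the message boundaries explicit no matter how the socket chunks the data is to frame each JSON message with a newline delimiter. This is a minimal sketch of that idea (the delimiter framing is my own addition, not part of the original code):

// Sender: append a newline after every JSON message so the receiver can split on it
socket.write(JSON.stringify(response) + "\n");

// Receiver: accumulate chunks and parse every complete, newline-terminated message
var _buffer = "";
cleartextStream.on("data", function (chunk) {
    _buffer += chunk;
    var parts = _buffer.split("\n");
    _buffer = parts.pop(); // keep the trailing partial message, if any
    parts.forEach(function (part) {
        if (part.length === 0) return;
        var message = JSON.parse(part);
        events.trigger(message.event, message);
    });
});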
Related
badStream.pipe(res)
When badStream throws an error, the response is not terminating and the request in the browser is stuck in a pending state.
badStream.on('error', function() { this.end(); }).pipe(res)
I've tried the above to no avail. What's the proper way to handle the error in this case? Thanks for any help.
In Node.js, an error on a readable stream that is piped to the HTTP response stream just unpipes it from the response stream, but does not otherwise do anything to the response stream it was piped to. That leaves the response hanging as an open socket, with the browser still waiting for it to finish (as you observed). As such, you have to handle the error manually and do something to the target stream.
badStream.pipe(res);
badStream.on('error', err => {
    // log the error and prematurely end the response stream
    console.log(err);
    res.end();
});
Because this is an HTTP response and you are already in the middle of sending the response body, the HTTP status and headers have already been sent, so there aren't a lot of things you can do in the middle of sending the body.
Ultimately, you're going to have to call res.end() to terminate the response so the browser knows the request is done. If there's a content-length header on this response (the length was known ahead of time), then just terminating the response stream before it's done will cause the browser to see that it didn't get the whole response and thus know that something went wrong.
If there's no content-length on the response, then it really depends upon what type of data you're sending. If you're just sending text, then the browser probably won't know there's an error because the text response will just end. If it's human readable text, you could send "ERROR, ERROR, ERROR - response ended prematurely" or some visible text marker so perhaps a human might recognize that the response is incomplete.
If it's some particular format data such as JSON or XML or any multi-part response, then hanging up the socket prematurely will probably lead to a parsing error that the client will notice. Unfortunately, http just doesn't really make provisions for mid-response errors so it's left to the individual applications to detect and handle.
FYI, here's a pretty interesting article that covers a lot about error handling with streams. And note that using stream.pipeline() instead of .pipe() also does much more complete error handling, including giving you one single callback that will get called for an error in either stream, and it will automatically call .destroy() on all the streams. In many ways, stream.pipeline(src, dest) is meant to replace src.pipe(dest).
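As a minimal sketch of that approach (assuming badStream is the readable stream from the question and res is the HTTP response):

const { pipeline } = require('stream');

pipeline(badStream, res, (err) => {
    if (err) {
        // both streams have already been destroyed by pipeline() at this point
        console.log('pipeline failed:', err);
    } else {
        console.log('response sent successfully');
    }
});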
I am sending data from a Google Chrome extension to a native application developed in C#. However, this works only when I send short text messages; it does not work when the data is binary (JSON-encoded).
For example, to send a command, I am calling, from the extension:
port.postMessage({ command: 'Rates', info: request.setRates });
where port is the native messaging application connection and request.setRates is {setRates: "framesPerSecond=15&audioBitrate=22050"}
That is received perfectly through STDIN in the C# application.
However, if I call this instruction:
port.postMessage(request.binaryStream);
Where request.binaryStream is {"data":[26,69,223,163,163,66,134,129,1,66,247,129…122,235,208,2,56,64,163],"contentType":"x-media"}.
That is received corrupted in the native application. The data length is about 77 KB, and in that case I receive things like: 0,215,214,171,175,125,107,235,95,94,250,215,215,190,181,........, which of course is an invalid JSON string. It seems that some kind of buffer overrun is occurring.
How can this be done?
EDIT:
My latest attempt is to base64-encode the array:
mediaRecorder.ondataavailable = function (e) {
    e.data.arrayBuffer().then(buffer => {
        chrome.runtime.sendMessage(appId, { binaryStream: Uint8ToBase64(new Uint8Array(buffer)) }, function (response) {
            console.log(response);
        });
        stopStream();
    });
};
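(Uint8ToBase64 is a helper that isn't shown above; a typical implementation looks roughly like this, converting the bytes in chunks to stay under the argument limit of apply():)

function Uint8ToBase64(bytes) {
    var binary = '';
    var chunkSize = 0x8000; // 32 KB chunks
    for (var i = 0; i < bytes.length; i += chunkSize) {
        binary += String.fromCharCode.apply(null, bytes.subarray(i, i + chunkSize));
    }
    return btoa(binary);
}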
With that, this is sent to the native application (one immediately after the other):
{command: "Binary", data: "GkXfo6NChoEBQveBAULygQRC84EIQoKIbWF0cm9za2FCh4EEQo…ddddddddddddddddddddddddddddddddddddddddddddeow=="}
{command: "Binary", data: "QgaBAACA+4O2l3/8ZtVmH2JXfcZSfs+ulepr+aF2U5d+kW0SDu…fgBgDv16uSH4AY6Q9YA4dcvrl9cvrl9cvrl9coHrr0AI4QA=="}
And this is received in the native application:
^ {"command":"Binary","data":"QgaBAACA+4O2l3/8ZtVmH2JXfcZSfs+ulepr+aF2U5d+kW0SDuRqP9n9baILWx2vK/6vraUaEqNo9Tf7htznm8o72wjRTzgjZFyfSf+k4BZDp9luH6Un1JWAhbNem.........ddddddddddddddddddddddddddddddddddddddddddeow=="}
Notice that the received data is the second sent message, but the end of that received data is the end of the first sent message. So the buffer-overrun theory may be correct. Any solution to this? I had this same program working using TCP sockets, but now I need to use Native Messaging. Is the STDIN buffer very small?
For examples like this:
https://bravenewmethod.com/2011/02/21/node-js-tls-client-example/
Or in my own code:
client = tls.connect(port, host, tlsOptions, function () {
    // ...
});

client.on('end', function (data) {
    // ...
});
When do these lifecycle callbacks actually get called? In the documentation, https://nodejs.org/api/tls.html, I don't see anything about it.
You need to look in the doc for the net module, for the TCP socket that a TLS socket inherits from: tls.TLSSocket is a subclass of net.Socket. This is a common issue with documentation for a class hierarchy, where you don't realize that lots of things are documented in the base class documentation. In that doc, it says this for the end event:
Emitted when the other end of the socket sends a FIN packet, thus
ending the readable side of the socket.
By default (allowHalfOpen is false) the socket will send a FIN packet
back and destroy its file descriptor once it has written out its
pending write queue. However, if allowHalfOpen is set to true, the
socket will not automatically end() its writable side, allowing the
user to write arbitrary amounts of data. The user must call end()
explicitly to close the connection (i.e. sending a FIN packet back).
For the close event, that same doc says this:
Emitted once the socket is fully closed. The argument had_error is a
boolean which says if the socket was closed due to a transmission
error.
This means that the close event comes after the end event since the socket may be still at least partially open when the end event is received.
So, you will get end when the other side has told you it is no longer accepting data (receipt of FIN packet) and you will get close when the socket is now completely closed.
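As a minimal sketch of where those events land (assuming port, host, and tlsOptions are already defined):

const tls = require('tls');

const socket = tls.connect(port, host, tlsOptions, () => {
    console.log('secure connection established');
});

socket.on('end', () => {
    // the other side sent a FIN; the readable side of the socket is done
    console.log('remote end finished sending');
});

socket.on('close', (hadError) => {
    // the socket is now fully closed
    console.log('socket closed', hadError ? 'due to a transmission error' : 'cleanly');
});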
The tls.TLSSocket class is a subclass of the net.Socket class. You can find additional information about the events and methods it has in that documentation (https://nodejs.org/api/net.html#net_class_net_socket). Most likely it's net.Socket#end, if I had to guess.
I am trying to stream some audio to my server and then stream it to a service specified by the user. The user will provide me with someHostName, which can sometimes not support that type of request.
My problem is that when that happens, clientRequest.on('end', ...) is never fired. I think it's because clientRequest is being piped to someHostReq, which gets messed up when someHostName is "wrong".
My question is:
Is there any way that I can still have clientRequest.on('end', ...) fired even when the stream clientRequest pipes to has something wrong with it?
If not: how do I detect that something went wrong with someHostReq "immediately"? someHostReq.on('error') doesn't fire until after some time.
code:
someHostName = 'somexample.com'

function checkIfPaused(request) { // every 1 second check .isPaused
    console.log(request.isPaused() + '>>>>');
    setTimeout(function () { checkIfPaused(request); }, 1000);
}

router.post('/', function (clientRequest, clientResponse) {
    clientRequest.on('data', function (chunk) {
        console.log('pushing data');
    });

    clientRequest.on('end', function () { // when done streaming audio
        console.log('im at the end');
    }); // end clientRequest.on('end',)

    options = {
        hostname: someHostName, method: 'POST', headers: {'Transfer-Encoding': 'chunked'}
    };

    var someHostReq = http.request(options, function (res) {
        var data = '';
        someHostReq.on('data', function (chunk) { data += chunk; });
        someHostReq.on('end', function () {
            console.log('someHostReq.end is called');
        });
    });

    clientRequest.pipe(someHostReq);
    checkIfPaused(clientRequest);
});
output:
in the case of a correct hostname:
pushing data
.
.
pushing data
false>>>
pushing data
.
.
pushing data
pushing data
false>>>
pushing data
.
.
pushing data
im at the end
true>>>
//continues to be true, that's fine
in the case of a wrong host name:
pushing data
.
.
pushing data
false>>>>
pushing data
.
.
pushing data
pushing data
false>>>>
pushing data
.
.
pushing data
true>>>>
true>>>>
true>>>>
//it stays true and clientRequest.on('end') is never called
//even though the client is still streaming data, no more "pushing data" appears
if you think my question is a duplicate:
it's not the same as this: node.js http.request event flow - where did my END event go?, where the OP was just making a GET instead of a POST
it's not the same as this: My http.createserver in node.js doesn't work?, where the stream was in paused mode because none of the following happened:
You can switch to flowing mode by doing any of the following:
Adding a 'data' event handler to listen for data.
Calling the resume() method to explicitly open the flow.
Calling the pipe() method to send the data to a Writable.
source: https://nodejs.org/api/stream.html#stream_class_stream_readable
it's not the same as this: Node.js response from http request not calling 'end' event without including 'data' event, where the OP just forgot to add the .on('data', ...) handler
The behaviour in the case of a wrong host name looks like a buffering problem: if the destination stream's buffer is full (because someHost is not consuming the chunks of data being sent), the pipe will not continue to read the origin stream, because pipe manages the flow automatically. Since pipe is not reading the origin stream, you never reach the 'end' event.
Is there anyway that I can still have clientRequest.on('end',..) fired
even when the stream clientRequest pipes to has something wrong with
it?
The 'end' event will not fire unless the data is completely consumed. To get 'end' fired on a paused stream, you need to call resume() (unpiping from the wrong hostname first, or you will get stuck on the full buffer again) to put the stream back into flowing mode, or call read() until the end.
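A minimal sketch of that recovery, using clientRequest and someHostReq from the question:

// detach from the stuck destination, then drain the source so 'end' can fire
function drainClientRequest(clientRequest, someHostReq) {
    clientRequest.unpipe(someHostReq);
    clientRequest.resume();
}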
But how to detect when I should do any of the above?
someHostReq.on('error') is the natural place, but if it takes too long to fire:
First, try setting a low timeout on the request (lower than the time someHostReq.on('error') takes to trigger, which seems too long for you) with request.setTimeout(timeout[, callback]), and check that it doesn't fail with a correct hostname. If that works, just use the callback or the timeout event to detect when the server times out, and use one of the techniques above to reach the end.
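For example, a sketch using the drainClientRequest() helper sketched above and an arbitrary 2-second timeout:

someHostReq.setTimeout(2000, function () {
    // the target host never answered in time; stop piping and let 'end' fire
    drainClientRequest(clientRequest, someHostReq);
    someHostReq.abort();
});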
If the timeout solution fails or doesn't fit your requirements, you have to play with flags in clientRequest.on('data'), clientRequest.on('end') and/or clientRequest.isPaused() to guess when you are stuck on the buffer. When you think you are stuck, just apply one of the techniques above to reach the end of the stream. Luckily it takes less time to detect a stuck buffer than to wait for someHostReq.on('error') (maybe two request.isPaused() === true checks in a row without hitting a 'data' event is enough to determine that you are stuck).
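A sketch of that flag-based detection (the two-checks-in-a-row threshold is just the guess from above):

var sawData = false;
var pausedChecks = 0;

clientRequest.on('data', function () { sawData = true; });

var checker = setInterval(function () {
    if (clientRequest.isPaused() && !sawData) {
        pausedChecks++;
        if (pausedChecks >= 2) { // stuck: paused twice in a row with no new data
            drainClientRequest(clientRequest, someHostReq);
            clearInterval(checker);
        }
    } else {
        pausedChecks = 0;
    }
    sawData = false;
}, 1000);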
How do I detect that something wrong happened with someHostReq
"immediately"? someHostReq.on('error') doesn't fire up except after
some time.
Errors trigger when they trigger; you cannot detect them "immediately". Why not just send a probe request to check that the service is supported before piping the streams? Some kind of:
"Checking service specified by the user..." If OK -> pipe the user's request stream to the service, OR if it FAILS -> notify the user about the wrong service.
I'm trying to read a file by opening a read stream and sending chunks of the file through ZMQ to another process to consume them. The stream is working like it should; however, when I start the worker, it doesn't see the data that has been sent.
I tried sending data through the socket every 500 ms, not in a callback, and when I start the worker it collects all the previous chunks of data:
sender = zmq.socket('push')

setInterval(() ->
  console.log('sending work')
  sender.send('some work')
, 500)

receiver = zmq.socket("pull")

receiver.on "message", (msg) ->
  console.log('work is here: %s', msg.toString())
Outputs:
sending work
sending work
sending work
sending work
sending work
// here I start the worker
sending work
work is here: some work
work is here: some work
work is here: some work
work is here: some work
work is here: some work
work is here: some work
sending work
work is here: some work
sending work
work is here: some work
sending work
work is here: some work
So, when the worker starts, it begins by pulling all the previous data, and then it pulls every time something new comes in. This does not happen when I do this:
readStream = fs.createReadStream("./data/pg2701.txt", {'bufferSize': 100 * 1024})

readStream.on "data", (data) ->
  console.log('sending work')
  sender.send('some work') # I'd send 'data' if it worked..
In this scenario, the worker doesn't pull any data at all.
Are these kinds of sockets supposed to create a queue or not? What am I missing here?
Yes, a push socket queues messages; it blocks once the HWM is reached and there's nobody to send to.
Maybe the sender hasn't bound yet; try something like this:
sender.bind('address', function (err) {
    if (err) throw err;
    console.log('sender bound!');
    // the readStream code.
});
Also, a connect is missing from your code example; I bet it's there, but maybe you forgot to include it.
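Putting it together, a rough sketch (the tcp address and the file path are placeholders; the zmq API usage matches the question's code):

var zmq = require('zmq');
var fs = require('fs');

var sender = zmq.socket('push');

sender.bind('tcp://127.0.0.1:3000', function (err) {
    if (err) throw err;
    console.log('sender bound!');

    // only start streaming the file once the socket is bound
    var readStream = fs.createReadStream('./data/pg2701.txt');
    readStream.on('data', function (chunk) {
        sender.send(chunk);
    });
});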