ZeroMQ push/pull and nodejs read stream - node.js

I'm trying to read a file by opening a read stream and sending chunks of the file through ZMQ to another process that consumes them. The stream works as it should; however, when I start the worker, it doesn't see the data that has been sent.
I tried sending data through the socket every 500 ms, not in a callback, and when I start the worker it collects all the previous chunks of data:
sender = zmq.socket('push')

setInterval(() ->
  console.log('sending work')
  sender.send('some work')
, 500)

receiver = zmq.socket("pull")

receiver.on "message", (msg) ->
  console.log('work is here: %s', msg.toString())
Outputs:
sending work
sending work
sending work
sending work
sending work
// here I start the worker
sending work
work is here: some work
work is here: some work
work is here: some work
work is here: some work
work is here: some work
work is here: some work
sending work
work is here: some work
sending work
work is here: some work
sending work
work is here: some work
So, when the worker starts, it begins by pulling all the previous data, and then it pulls again every time something new comes in. This does not happen when I do this instead:
readStream = fs.createReadStream("./data/pg2701.txt", {'bufferSize': 100 * 1024})

readStream.on "data", (data) ->
  console.log('sending work')
  sender.send('some work') # I'd send 'data' if it worked..
In this scenario, the worker doesn't pull any data at all.
Are these kinds of sockets supposed to create a queue or not? What am I missing here?

Yes, a push socket blocks until the HWM (high-water mark) is reached when there's nobody to send to.
Maybe the sender hasn't bound yet; try something like this:
sender.bind('address', function(err) {
  if (err) throw err;
  console.log('sender bound!');
  // the readStream code.
});
Also, a connect() call is missing from your code example; I bet it's there in your real code, but maybe you forgot to include it.
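For reference, here is a minimal sketch of the whole flow in plain Node.js, assuming the legacy zmq binding from the question and a placeholder address; the key point from the answer above is that the readStream is only started inside the bind callback, so nothing is sent before the socket is bound:

// Sender process: bind first, then stream the file (a sketch, not the asker's exact code).
var zmq = require('zmq');
var fs = require('fs');

var sender = zmq.socket('push');

sender.bind('tcp://127.0.0.1:5555', function (err) { // placeholder address
  if (err) throw err;
  console.log('sender bound!');

  var readStream = fs.createReadStream('./data/pg2701.txt');
  readStream.on('data', function (chunk) {
    sender.send(chunk); // each file chunk becomes one message
  });
});

// Worker process: connect and pull.
// var receiver = zmq.socket('pull');
// receiver.connect('tcp://127.0.0.1:5555');
// receiver.on('message', function (msg) {
//   console.log('work is here: %s', msg.toString());
// });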

Related

typescript fetch response streaming

I am trying to stream a response, but I want to be able to read the response (and work with the data) while it is still being sent. I basically want to send multiple messages in one response.
It works internally in Node.js, but when I tried to do the same thing in TypeScript it doesn't work anymore.
My attempt was to do the request via fetch in TypeScript; the response comes from a Node.js server that writes parts of the response to the response stream.
fetch('...', {
  ...
}).then((response) => {
  const reader = response.body.getReader();
  reader.read().then(({done, value}) => {
    if (done) {
      return response;
    }
    console.log(String.fromCharCode.apply(null, value)); // just for testing purposes
  })
}).then(...)...
On the Node.js side it basically looks like this:
// doing stuff with the request
response.write(first_message)
// do some more stuff
response.write(second_message)
// do even more stuff
response.end(last_message)
In Node.js, like I said, I can just read every message once it's sent via res.on('data', ...), but reader.read in TypeScript only resolves once, and only after the whole response has been sent.
Is there a way to make it work like I want, or do I have to look for another way?
I hope it is somewhat understandable what I want to do; I noticed while writing this how much I struggled to explain it :D
I found the problem, and as usual it was sitting in front of the PC.
I forgot to write a header first, before writing the response.
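For anyone hitting the same thing, here is a minimal sketch of what that looks like; the port, header values, and the loop-based reader are illustrative, not the exact code from the question:

// Server side (Node.js): send the headers before the first write, then stream chunks.
const http = require('http');

http.createServer((request, response) => {
  response.writeHead(200, {
    'Content-Type': 'text/plain',
    'Transfer-Encoding': 'chunked'
  });
  response.write('first message\n');
  // ... do more work ...
  response.write('second message\n');
  response.end('last message\n');
}).listen(3000);

// Client side: call reader.read() in a loop so each chunk is handled as it arrives.
fetch('http://localhost:3000/').then(async (response) => {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log(decoder.decode(value, { stream: true }));
  }
});

Note that the network stack may still coalesce chunks; the loop just makes sure each delivered chunk is read as soon as it is available.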

Node.js: Race condition when receiving data on tcp socket

I'm using the net library of Node.js to connect to a server that is publishing data, so I'm listening for 'data' events on the client side. When a data event fires, I append the received data to my rx buffer and check whether I have a complete message by reading some bytes. If I have a valid message, I remove it from the buffer and process it. The source code looks like this:
rxBuffer = ''
client.on('data', (data) => {
  rxBuffer += data
  // for example... 10 stores the message length...
  while (rxBuffer.length > 10 && rxBuffer.length >= (10 + rxBuffer[10])) {
    const msg = rxBuffer.slice(0, 10 + rxBuffer[10])
    rxBuffer = rxBuffer.slice(msg.length) // remove message from buffer
    processMsg(msg) // process message..
  }
})
As far as I know, that's the typical way. But... what happens if the data event fires multiple times? Imagine I get a data event, and while I'm appending the data to my rx buffer I get the next data event. The "new" data event would also append its data to rxBuffer and start my while loop, so I'd have two handlers processing the same messages because they share the same rx buffer. Is this correct?
How can I handle this? In other languages I'd use something like a mutex to prevent concurrent access to the rx buffer... but what's the solution for JS? Or maybe I'm wrong and I never get multiple data events while one handler is still running? Any ideas?
JavaScript is single threaded. The second 'data' event will not run until the first handler either completes or gives up control by doing something asynchronous, the latter of which could presumably happen in your processMsg(). If that's the case, multiple executions of processMsg() could be interleaved. If they aren't changing any global data (rxBuffer included), then you shouldn't have a problem.
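If processMsg() really is asynchronous and touches shared state, one option (a sketch, not part of the original answer; client and processMsg are the objects from the question, and processMsg is assumed to return a promise) is to keep the buffer parsing synchronous and chain the async work on a promise, so each message is processed only after the previous one has finished:

let rxBuffer = '';
let processing = Promise.resolve();

client.on('data', (data) => {
  rxBuffer += data;
  // Length parsing simplified, as in the question.
  while (rxBuffer.length > 10 && rxBuffer.length >= 10 + Number(rxBuffer[10])) {
    const msg = rxBuffer.slice(0, 10 + Number(rxBuffer[10]));
    rxBuffer = rxBuffer.slice(msg.length); // remove the message from the buffer synchronously
    // Queue the async processing; the next message waits for the previous one.
    processing = processing.then(() => processMsg(msg));
  }
});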

How can I simulate latency in Socket.io?

Currently, I'm testing my Node.js, Socket.io server on localhost and on devices connected to my router.
For testing purposes, I would like to simulate a delay in sending messages, so I know what it'll be like for users around the world.
Is there any effective way of doing this?
If it's the messages you send from the server that you want to delay, you can override the .emit() method on each new connection with one that adds a short delay. Here's one way of doing that on the server:
io.on('connection', function(socket) {
  console.log("socket connected: ", socket.id);

  // override the .emit() method
  const emitFn = socket.emit;
  socket.emit = (...args) => setTimeout(() => {
    emitFn.apply(socket, args);
  }, 1000);

  // rest of your connection handler here
});
Note, there is one caveat with this. If you pass an object or an array as the data for socket.emit(), this code does not make a copy of that data, so the data is not actually read until it is sent (one second from now). If the sending code modifies that data before it is sent, that would likely create a problem. This could be fixed by making a copy of the incoming data, but I did not add that complexity here since it is not always needed; it depends upon how the caller's code works.
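If you do need that copy, a minimal sketch (using a JSON round-trip as a naive deep copy, which only works for plain JSON-serializable data and drops things like acknowledgement callbacks) could look like this:

io.on('connection', function(socket) {
  const emitFn = socket.emit;
  socket.emit = (event, ...args) => {
    // Naive deep copy so later mutations by the caller don't affect the delayed send.
    const copiedArgs = JSON.parse(JSON.stringify(args));
    setTimeout(() => {
      emitFn.apply(socket, [event, ...copiedArgs]);
    }, 1000);
  };
});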
An old but still popular question. :)
You can use either "iptables" or "tc" to simulate delays and dropped packets. See the man page for "iptables" and look for 'statistic'. I suggest you make sure to specify the port, or your SSH session will be affected.
Here are some good examples for "tc":
http://www.linuxfoundation.org/collaborate/workgroups/networking/netem

readable.on('end',...) is never fired

I am trying to stream some audio to my server and then stream it on to a service specified by the user. The user provides me with someHostName, which may not support that type of request.
My problem is that when that happens, clientRequest.on('end', ...) is never fired. I think it's because clientRequest is piped to someHostReq, which gets messed up when someHostName is "wrong".
My question is:
Is there any way I can still have clientRequest.on('end', ...) fired even when the stream clientRequest pipes to has something wrong with it?
If not: how do I detect that something went wrong with someHostReq "immediately"? someHostReq.on('error') doesn't fire until after some time.
code:
someHostName = 'somexample.com'

function checkIfPaused(request){ // every 1 second check .isPaused
  console.log(request.isPaused() + '>>>>');
  setTimeout(function(){ checkIfPaused(request) }, 1000);
}

router.post('/', function (clientRequest, clientResponse) {
  clientRequest.on('data', function (chunk) {
    console.log('pushing data');
  });

  clientRequest.on('end', function () { // when done streaming audio
    console.log('im at the end');
  }); // end clientRequest.on('end',)

  var options = {
    hostname: someHostName, method: 'POST', headers: {'Transfer-Encoding': 'chunked'}
  };

  var someHostReq = http.request(options, function(res){
    var data = '';
    res.on('data', function(chunk){ data += chunk; });
    res.on('end', function(){
      console.log('someHostReq.end is called');
    });
  });

  clientRequest.pipe(someHostReq);
  checkIfPaused(clientRequest);
});
output:
in the case of a correct hostname:
pushing data
.
.
pushing data
false>>>>
pushing data
.
.
pushing data
pushing data
false>>>>
pushing data
.
.
pushing data
im at the end
true>>>>
//continues to be true, that's fine
in the case of a wrong host name:
pushing data
.
.
pushing data
false>>>>
pushing data
.
.
pushing data
pushing data
false>>>>
pushing data
.
.
pushing data
true>>>>
true>>>>
true>>>>
// it stays true and clientRequest.on('end') is never called
// even though the client is still streaming data, no more "pushing data" appears
If you think my question is a duplicate:
It's not the same as this: node.js http.request event flow - where did my END event go?; the OP there was just making a GET instead of a POST.
It's not the same as this: My http.createserver in node.js doesn't work?; the stream there was in paused mode because none of the following happened:
You can switch to flowing mode by doing any of the following:
Adding a 'data' event handler to listen for data.
Calling the resume() method to explicitly open the flow.
Calling the pipe() method to send the data to a Writable.
source: https://nodejs.org/api/stream.html#stream_class_stream_readable
It's not the same as this: Node.js response from http request not calling 'end' event without including 'data' event; that poster just forgot to add .on('data', ...).
The behaviour in the case of a wrong host name looks like a buffering problem: if the destination stream's buffer is full (because someHost is not receiving the chunks of data being sent), the pipe will not continue to read the origin stream, because pipe automatically manages the flow. Since the pipe is not reading the origin stream, you never reach the 'end' event.
Is there any way I can still have clientRequest.on('end', ...) fired even when the stream clientRequest pipes to has something wrong with it?
The 'end' event will not fire unless the data is completely consumed. To get 'end' fired on a paused stream, you need to call resume() (unpiping from the wrong host first, or you will get stuck on the full buffer again) to put the stream back into flowing mode, or read() it to the end.
But how do I detect when I should do any of the above?
someHostReq.on('error') is the natural place, but if it takes too long to fire:
First, try setting a low request timeout (lower than the time someHostReq.on('error') takes to trigger, which seems too long for you) with request.setTimeout(timeout[, callback]), and check that it doesn't fail with a correct hostname. If that works, just use the callback or the 'timeout' event to detect when the server times out, and then use one of the techniques above to reach the end.
If the timeout solution fails or doesn't fit your requirements, you have to play with flags in clientRequest.on('data'), clientRequest.on('end') and/or clientRequest.isPaused() to guess when you are stuck on the buffer. When you think you are stuck, just apply one of the techniques above to reach the end of the stream. Luckily it takes less time to detect a stuck buffer than to wait for someHostReq.on('error') (maybe two request.isPaused() === true checks without reaching a 'data' event are enough to determine that you are stuck).
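For illustration, here is a rough sketch of the request.setTimeout() idea applied to the question's code; the 2-second value is arbitrary and this is not the answerer's exact code:

var someHostReq = http.request(options, function (res) {
  // ... handle the upstream response as before ...
});

// If the upstream host doesn't respond within 2 seconds (arbitrary value),
// stop piping to it and resume clientRequest so its 'end' event can fire.
someHostReq.setTimeout(2000, function () {
  clientRequest.unpipe(someHostReq);
  clientRequest.resume();
  someHostReq.abort(); // give up on the upstream request
});

clientRequest.pipe(someHostReq);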
How do I detect that something went wrong with someHostReq "immediately"? someHostReq.on('error') doesn't fire until after some time.
Errors trigger when they trigger; you cannot detect them "immediately". Why not just send a probe request to check support before piping the streams? Something like:
"Checking service specified by the user..." If OK -> pipe the user's request stream to the service; if it FAILS -> notify the user about the wrong service.

How to iterate on each record of a Model.stream waterline query?

I need to do something like:
Lineup.stream({foo:"bar"}).exec(function(err, lineup) {
  // Do something with each record
});
Lineup is a collection with over 18,000 records, so I think using find is not a good option. What's the correct way to do this? I can't figure it out from the docs.
The .stream() method returns a Node stream interface (a read stream) that emits events as data is read. Your options here are either to .pipe() it to something else that can take "stream" input, such as the response object of the server, or to attach an event listener to the events emitted from the stream, e.g.:
Piped to response
Lineup.stream({foo:"bar"}).pipe(res);
Setup event listeners
var stream = Lineup.stream({foo:"bar"});

stream.on("data", function(data) {
  stream.pause(); // stop emitting events for a moment
  /*
   * Do things
   */
  stream.resume(); // resume events
});

stream.on("error", function(err) {
  // handle any errors that will throw in reading here
});
The .pause() and .resume() calls are quite important, as otherwise the processing just keeps responding to emitted events before the previous handler's code has completed. While that is fine for small cases, it is not desirable for the larger "streams" the interface is meant to be used for.
Additionally, if you are calling any "asynchronous" actions inside the event handler like this, then you need to take care to call .resume() within the callback or promise resolution, thus waiting for that "async" action to complete itself.
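For example, a minimal sketch of that pattern with a hypothetical asynchronous operation (saveSomewhere is a placeholder that returns a promise, not part of the answer):

stream.on("data", function(data) {
  stream.pause(); // stop emitting events while we work
  saveSomewhere(data) // placeholder async operation returning a promise
    .then(function() {
      stream.resume(); // only resume once the async work is done
    })
    .catch(function(err) {
      // handle the failure here, then decide whether to resume or tear down
      stream.resume();
    });
});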
But look at the Node.js documentation for more in-depth information on streams.
P.S I believe the following syntax should also be supported if it suits your sensibilities better:
var stream = Lineup.find({foo:"bar"}).stream();
