When writing a Node.js multi-threading package, I ran into a problem: the main thread can send content through fd:3 and the threads receive the message, but the threads cannot send anything back through fd:3.
Is there something I am doing wrong? (threader.js lines 45-59 are where the problem shows itself.)
Package (only on GitHub for now, while I get the package working)
Start up code:
var Thread = require("threader");

var task = Thread.task(function () {
  // Do some calculation
}, function () {
  // When the calculation response has been sent
});

task('a', 2);
I just figured out the problem:
thread.js is like a socket server and threader.js is like a client.
The server has to respond within the context of the connection.
Since the write happens inside a setTimeout callback, which runs outside the context of the connection, threader is not able to listen for the data.
thread.js - old code
pipe.on('data', function (chunk) {
  console.log('RECEIVED CONTENT THROUGH fd:3 in thread');
  console.log(chunk.toString());
});

setTimeout(function () {
  pipe.write('I piped a thing');
}, 2000);
thread.js - new code
pipe.on('data', function (chunk) {
  console.log('RECEIVED CONTENT THROUGH fd:3 in thread');
  console.log(chunk.toString());
});

pipe.write('I piped a thing');
OR
thread.js - new code - best way
pipe.on('data', function (chunk) {
  console.log('RECEIVED CONTENT THROUGH fd:3 in thread');
  console.log(chunk.toString());
  // Do the real two seconds of work here, rather than deferring it with setTimeout
  pipe.write('I piped a thing');
});
I just rewrote the entire package, starting from a different angle, and now it works...
I think the problem had to do with the thread picking.
The fixes will be pushed to GitHub soon.
Related
A net.Socket object in NodeJS is a Readable Stream; however, one note in the docs got me concerned:
For the Net.Socket 'data' event, the docs say
Note that the data will be lost if there is no listener when a Socket emits a 'data' event.
That seems to imply a Socket is returned to the calling script in "flowing-mode" and already un-paused? However, for a generic Readable Stream, the documentation for the 'data' event says
If you attach a data event listener, then it will switch the stream into flowing mode, and data will be passed to your handler as soon as it is available.
That "If" seems to imply if you wait a bit to bind to the 'data' event, the stream will wait for you, and if you intentionally want to miss the 'data' events, the example in the resume() method seems to indicate you must call the resume() method to start the flow of data.
My concern is that when working with a net.Server, when you receive a net.Socket as part of a 'connection' event, is it imperative that you start handling the 'data' events right away since it's already opened? Meaning if I do:
var s = new net.Server();
s.on('connection', function (socket) {
  // Do some lengthy setup process here, blocking execution for a few seconds...
  socket.on('data', function (d) { console.log(d); });
});
s.listen(8080);
Meaning, if I don't bind to the 'data' event right away, could I lose data? And if so, is this a more robust way to handle incoming connections when each one requires a lengthy setup?
var s = new net.Server();
s.on('connection', function (socket) {
  socket.pause(); // Not ready for you yet!
  // Do some lengthy setup process here, blocking execution for a few seconds...
  socket.on('data', function (d) { console.log(d); });
  socket.resume(); // Okay, go!
});
s.listen(8080);
Anyone have experience working with listening on raw socket streams to know if this data loss is an issue?
I'm hoping this is an instance where the net.Socket documentation wasn't updated for v0.10, since the stream documentation has a section mentioning that 'data' events started emitting right away in versions prior to 0.10. Were TCP sockets properly updated to not start emitting 'data' packets right away, with the documentation simply not updated to match?
Yes, this is a flaw in the docs. Here is an example:
var net = require('net')

var server = net.createServer(onConnection)

function onConnection (socket) {
  console.log('onConnection')
  setTimeout(startReading, 1000)

  function startReading () {
    socket.on('data', read)
    socket.on('end', stopReading)
  }

  function stopReading () {
    socket.removeListener('data', read)
    socket.removeListener('end', stopReading)
  }
}

function read (data) {
  console.log('Received: ' + data.toString('utf8'))
}

server.listen(1234, onListening)

function onListening () {
  console.log('onListening')
  net.connect(1234, onConnect)
}

function onConnect () {
  console.log('onConnect')
  this.write('1')
  this.write('2')
  this.write('3')
  this.write('4')
  this.write('5')
  this.write('6')
}
All the data is received. If you explicitly resume() the socket without a 'data' listener attached, however, you will lose it.
Also, if you do your "lengthy" setup in a blocking manner (which you shouldn't), you can't lose any I/O: it has no chance to be processed, so no events will be emitted until the block finishes.
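For what it's worth, when the setup really is asynchronous, the pause()/resume() pattern from the question should be safe as long as the 'data' handler is attached before resume() is called. A minimal sketch, assuming a hypothetical async doAsyncSetup step:

var net = require('net');

var server = net.createServer(function (socket) {
  socket.pause(); // buffer incoming bytes while we get ready

  doAsyncSetup(function (err) { // hypothetical async setup step
    if (err) return socket.destroy();
    socket.on('data', function (d) { console.log(d.toString()); });
    socket.resume(); // the handler is attached, start the flow
  });
});

server.listen(8080);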
I have this code working in Node 0.10, but it prints nothing in 0.8
var http = require('http');

var req = http.request('http://www.google.com:80', function (res) {
  setTimeout(function () {
    res.pipe(process.stdout);
  }, 0);
});

req.end();
After some guessing, I found a workaround:
var http = require('http');

var req = http.request('http://www.google.com:80', function (res) {
  res.pause();
  setTimeout(function () {
    res.resume();
    res.pipe(process.stdout);
  }, 0);
});

req.end();
But the documentation says that pause is advisory, and this confuses me. Why should I pause a stream that isn't connected to anything yet?
0.10 revamped the Streams API and added the following change in behavior:
WARNING: If you never add a 'data' event handler, or call resume(), then it'll sit in a paused state forever and never emit 'end'.
So, in 0.10, the stream will wait for a valid listener, like a pipe, or a forced resume without an explicit pause.
Streams in 0.8 and older, on the other hand, start sending 'data' immediately unless instructed to pause. And in this case, that creates a race condition between the timeout and the stream -- the stream may run in part, or even to completion, before the timeout expires.
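A version-agnostic way to sidestep the race is simply to attach the pipe synchronously in the response callback, so no 'data' events can fire before a consumer exists. A minimal sketch:

var http = require('http');

var req = http.request('http://www.google.com:80', function (res) {
  // Piping in the same tick means even a 0.8-style flowing stream
  // has no chance to emit 'data' before we are listening.
  res.pipe(process.stdout);
});

req.end();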
I'm experimenting with the close event in Node.js. I'm very new to Node.js so I'm not sure if this is a decent question or a sad one.
Documentation for close event:
http://nodejs.org/api/http.html#http_event_close_2
I want to output a message to the console if the browser is closed before the end event is reached.
While the server was running and before it got to 15 seconds, I tried closing the browser and killing the process through Chrome Tools. No message is output to the console, and if I open more connections by visiting localhost:8080 in other windows, I quickly get a 'hello', indicating my Node server thinks there are at least two connections.
I'm either not understanding how to kill processes in Chrome or how the close event works.
Or, if end and close are the same - node.js https no response 'end' event, 'close' instead? - why isn't my "They got impatient!" message output in the console?
How can you output to the console if the process was ended before the end event was reached?
var http = require('http'),
    userRequests = 0;

http.createServer(function (request, response) {
  userRequests++;
  response.writeHead(200, {'Content-Type': 'text/plain'});

  if (userRequests == 1) {
    response.write('1st \n');
    setTimeout(function () {
      response.write('2nd Thanks for waiting. \n');
      response.on('close', function () {
        console.log("They got impatient!");
      });
      response.end();
    }, 15000);
    response.write('3rd \n');
  } else {
    // Quick response for all connections after first user
    response.write('Hello');
    response.end();
  }
}).listen(8080, function () {
  console.log('Server start');
});
Thank you.
First, move the event handler for the close event outside the timeout function - you're not hooking up the close handler until after your timeout expires, so you're probably missing the event.
Second, you never decrement userRequests anywhere; shouldn't there be a userRequests--; line somewhere? This will throw off your logic, since it will always look like there's more than one request.
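Putting both suggestions together, the handler might be rearranged along these lines (a sketch, assuming the pre-v0.12 semantics where 'close' fires only when the connection terminates before end() is called):

var http = require('http'),
    userRequests = 0;

http.createServer(function (request, response) {
  userRequests++;
  response.writeHead(200, {'Content-Type': 'text/plain'});

  // Hook up 'close' immediately, before any timer runs
  response.on('close', function () {
    userRequests--;
    console.log('They got impatient!');
  });

  if (userRequests == 1) {
    response.write('1st \n');
    setTimeout(function () {
      response.write('2nd Thanks for waiting. \n');
      response.end();
      userRequests--; // normal completion
    }, 15000);
    response.write('3rd \n');
  } else {
    response.write('Hello');
    response.end();
    userRequests--;
  }
}).listen(8080, function () {
  console.log('Server start');
});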
I am using ZeroMQ in Node.js. It seems that when sending data from the producer to the worker, it does not arrive unless I wrap the send in a setInterval. My example code is as follows:
producer.js
===========
var zmq = require('zmq')
  , sock = zmq.socket('push');

sock.bindSync('tcp://127.0.0.1:3000');
console.log('Producer bound to port 3000');

//sock.send("hello");
var i = 0;
//1. var timer = setInterval(function() {
var str = "hello";
console.log('sending work', str, i++);
sock.send(str);
//2. clearTimeout(timer);
//3. }, 150);

sock.on('message', function (msg) {
  console.log("Got A message, [%s]", msg);
});
So in the code above, if I add back the lines commented as 1, 2 and 3, the worker does receive the message; otherwise it does not.
Can anyone shed light on why the message has to be sent inside a setInterval? Or am I going about this the wrong way?
The problem is hidden in the zmq bindings for Node.js. I've just spent some time digging into it, and it basically does this on send():
Enqueue the message
Flush buffers
Now the problem is in the flushing part, because it does:
Check if the output socket is ready, otherwise return
Flush the enqueued messages
In your code, because you call bind and immediately send, there is no worker connected at the moment of the call - the workers simply haven't had enough time to notice. So the message is enqueued, and we wait for some workers to appear. Now the interesting part: where do we check for new workers? In the send function itself! So unless we call send() later, when some workers are actually connected, our messages are never flushed; they stay enqueued forever. And that is why setInterval works: workers have enough time to notice and connect, and you periodically check whether any are there.
You can find the interesting part at https://github.com/JustinTulloss/zeromq.node/blob/master/lib/index.js#L277 .
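Given that behavior, one pragmatic sketch is to call send() again after giving workers a moment to connect; the later call both enqueues the new message and flushes anything queued earlier (the 500 ms delay here is an arbitrary assumption):

var zmq = require('zmq'),
    sock = zmq.socket('push');

sock.bindSync('tcp://127.0.0.1:3000');

sock.send('queued before any worker connects'); // enqueued, not flushed yet

setTimeout(function () {
  // By now a worker has (hopefully) connected; this send() both
  // enqueues the new message and flushes the one queued above.
  sock.send('sent after workers had time to connect');
}, 500);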
Cheers ;-)
I am trying to test my web server using nodeunit:
test.js
// Assuming nodeunit's testCase and the socket.io client; PORT, URL
// and WrappedServer are defined elsewhere in the test setup.
var testCase = require('nodeunit').testCase;
var ioClient = require('socket.io-client');

exports.basic = testCase({
  setUp: function (callback) {
    this.ws = new WrappedServer();
    this.ws.run(PORT);
    callback();
  },
  tearDown: function (callback) {
    delete this.ws;
    callback();
  },
  testFoo: function (test) {
    var socket = ioClient.connect(URL);
    console.log('before client emit');
    socket.emit('INIT', 1, 1);
    console.log('after client emit');
  }
});
and this is my very simple nodejs server:
WrappedServer.prototype.run = function (port) {
  this.server = io.listen(port, {'log level': 2});
  this.attachCallbacks();
};

WrappedServer.prototype.attachCallbacks = function () {
  var ws = this;
  ws.server.sockets.on('connection', function (socket) {
    ws.attachDebugToSocket(socket);
    console.log('socket attaching INIT');
    socket.on('INIT', function (userId, roomId) {
      // do something here
    });
    console.log('socket finished attaching INIT');
  });
};
Basically I am getting this error:
[...cts/lolol/nodejs/testing](testingServer)$ nodeunit ws.js
info - socket.io started
before client emit
after client emit
info - handshake authorized 1013616781193777373
The "sys" module is now called "util". It should have a similar interface.
socket before attaching INIT
socket finished attaching INIT
info - transport end
Somehow, the socket emits INIT BEFORE the server attaches callbacks for sockets.
Why is this happening? In addition, what's the right way to do this?
I'm assuming you were expecting the order to be this?
socket before attaching INIT
socket finished attaching INIT
before client emit
after client emit
From the small amount of code given, the issue is probably two things.
First, and probably the main issue: ioClient.connect will not connect immediately. You need to pass some kind of callback (or listen for the 'connect' event), emit INIT once it has actually connected, and only then finish the test.
Second, you should probably do the same thing with your run command. listen will not start listening immediately, so you will occasionally get inconsistent results if it hasn't started listening by the time your test executes. You should also pass the setUp callback through to io.listen.
Update
To be clear about listen: just like most things in Node, the socket.io server's listen method is asynchronous. Calling the method tells it to start listening, but the server spends some time in the background setting up the networking before it actually is listening. Just like Node's core listen (http://nodejs.org/docs/latest/api/net.html#server.listen), socket.io's version takes a callback argument that is called once the server is up and listening.
io.listen(port, {'log level': 2}, callback);
Unless socket.io starts giving you errors about failing to connect, this is probably not an issue, but it is something to keep in mind. Treating asynchronous actions as if they were instantaneous is an easy way to create bugs that only surface occasionally. Since your run wraps listen, passing a callback to run would be a very good idea in general, not just for testing.
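A sketch of what that callback-driven flow could look like, reusing the names from the question (WrappedServer, PORT, URL and ioClient are all assumed from your code):

WrappedServer.prototype.run = function (port, callback) {
  // Forward the callback so callers know when we are actually listening
  this.server = io.listen(port, {'log level': 2}, callback);
  this.attachCallbacks();
};

exports.basic = testCase({
  setUp: function (callback) {
    this.ws = new WrappedServer();
    this.ws.run(PORT, callback); // setUp completes only once listening
  },
  testFoo: function (test) {
    var socket = ioClient.connect(URL);
    socket.on('connect', function () {
      socket.emit('INIT', 1, 1); // emit only after the connection exists
      test.done();
    });
  }
});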