What do I want?
Whenever a REQ socket connects to that host:port with a requester.connect() call, the ROUTER should detect this event and do something, i.e. in this case call cluster.fork();
I tried:
// Start evented child Node processes, along with a REQ socket each
router.on("accept", function funcCB(fileDesc, endPt) {
    // fire up a cluster fork for handling requests
    console.log("starting a new FORK process");
    cluster.fork();
});
Which router.on(<event>) should actually detect a requester.connect()?
As per the docs, these are the monitor events available for node zmq, together with the ZeroMQ events that trigger them:
connect - ZMQ_EVENT_CONNECTED
connect_delay - ZMQ_EVENT_CONNECT_DELAYED
connect_retry - ZMQ_EVENT_CONNECT_RETRIED
listen - ZMQ_EVENT_LISTENING
bind_error - ZMQ_EVENT_BIND_FAILED
accept - ZMQ_EVENT_ACCEPTED
accept_error - ZMQ_EVENT_ACCEPT_FAILED
close - ZMQ_EVENT_CLOSED
close_error - ZMQ_EVENT_CLOSE_FAILED
disconnect - ZMQ_EVENT_DISCONNECTED
How does it work?
The beauty of ZeroMQ is in its architecture. This means the abstract scalable archetype primitives (PUB/SUB, PAIR, XREQ) do exactly what they have been defined to do.
The clean architecture separates the I/O thread(s) from the socket's entry-gate behaviour and keeps all the dirty stuff down there.
That said, there is no use saying "I want ROUTER to also detect and handle this and that" if it was not defined to do so right in the ZeroMQ architecture.
How to do it?
The simplest approach to this and similar needs is to design one's own composite element; for simplicity, sketch it as [[ROUTER]+[SUB]], where the node has both the [ROUTER] traffic-oriented behaviour and also keeps a [SUB] signalling-receiving behaviour, exposed to the outer world via SUB.bind() at another host:portSIG.
This way, remote processes REQ.connect( host:port ) and PUB.connect( host:portSIG ), and operate on both the transport plane and the signalling plane as your design needs and implements.
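A minimal sketch of such a composite node, assuming the legacy zmq npm package (the port numbers here are arbitrary):

var zmq = require('zmq');

// transport-plane: ordinary ROUTER behaviour at host:port
var router = zmq.socket('router');
router.bindSync('tcp://*:5555');

// signalling-plane: a SUB socket exposed to the outer world via .bind() at host:portSIG
var sub = zmq.socket('sub');
sub.bindSync('tcp://*:5556');
sub.subscribe('');   // receive all signals

sub.on('message', function (msg) {
    console.log('signal received:', msg.toString());
    // react here: fork a worker, rebalance, shut down, ...
});

// a remote process would then use both planes:
//   REQ.connect('tcp://host:5555')   -- transport
//   PUB.connect('tcp://host:5556')   -- signalling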
ZeroMQ is a lovely can-do LEGO-toolbox.
Enjoy these powers.
So the answer was this small addition, as posted by @Jason in the SO comments section:
//start the socket ROUTER monitor
router.monitor(500, 0);
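Putting it together, a hedged sketch of the working setup (again assuming the legacy zmq package; without the monitor() call, the 'accept' event never fires):

var zmq = require('zmq');
var cluster = require('cluster');

var router = zmq.socket('router');
router.bindSync('tcp://*:5555');

// start the socket ROUTER monitor: poll every 500 ms, read all pending events (0 = no limit)
router.monitor(500, 0);

// 'accept' now fires whenever a remote requester.connect() is accepted
router.on('accept', function (fileDesc, endPt) {
    console.log('starting a new FORK process for', endPt);
    cluster.fork();
});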
Related
Currently, I'm testing my Node.js, Socket.io server on localhost and on devices connected to my router.
For testing purposes, I would like to simulate a delay in sending messages, so I know what it'll be like for users around the world.
Is there any effective way of doing this?
If it's the messages you send from the server that you want to delay, you can override the .emit() method on each new connection with one that adds a short delay. Here's one way of doing that on the server:
io.on('connection', function(socket) {
    console.log("socket connected: ", socket.id);

    // override the .emit() method
    const emitFn = socket.emit;
    socket.emit = (...args) => setTimeout(() => {
        emitFn.apply(socket, args);
    }, 1000);

    // rest of your connection handler here
});
Note, there is one caveat with this. If you pass an object or an array as the data for socket.emit(), this code does not make a copy of it, so the data is not actually read until it is sent (one second from now). If the code doing the sending modifies that data in the meantime, that would likely create a problem. This could be fixed by making a copy of the incoming data, but I did not add that complexity here since it is not always needed; it depends on how the caller's code works.
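If you do need that copy, a hedged variant (it deep-copies via JSON, which assumes the payload is JSON-serializable):

io.on('connection', function(socket) {
    const emitFn = socket.emit;
    socket.emit = (event, ...args) => {
        // snapshot the payload now, so later mutations don't leak into the delayed send
        const copies = args.map(a => JSON.parse(JSON.stringify(a)));
        setTimeout(() => emitFn.apply(socket, [event, ...copies]), 1000);
    };
});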
An old but still popular question. :)
You can use either "iptables" or "tc" to simulate delays/dropped-packets. See the man page for "iptables" and look for 'statistic'. I suggest you make sure to specify the port or your ssh session will get affected.
Here are some good examples for "tc":
http://www.linuxfoundation.org/collaborate/workgroups/networking/netem
I have already written a request method in Java that sends a request to a simple server. I have written this simple server myself, and the connection is based on sockets. When the server has the answer for the request, it will send it automatically to the client. Now I want to write a new method that behaves as follows:
if the server does not answer within a fixed period of time, I send a new request to the server using my request method
My problem is implementing this idea. I am thinking of launching a thread whenever the request method is executed. If this thread does not hear anything for a fixed period of time, the request method should be executed again. But how can I listen on the same socket used between that client and server?
I am also asking if there is a simpler method that does not use threads.
Currently I am working on this idea:
1) send a request using my request method
2) launch a thread listening on the socket
3) if (no answer) { go to (1) } else { exit }
I have some difficulties in step 3: how can I go to (1)?
You may be able to accomplish this with a single thread using a SocketChannel and a Selector; see also these tutorials on SocketChannel and Selector. The gist of it is that you use long-polling on the Selector to let you know when your SocketChannel(s) are ready to read/write/etc., via Selector#select(long timeout). (Note that a channel must be put into non-blocking mode before it can be registered with a Selector.)
SocketChannel socketChannel = SocketChannel.open();
socketChannel.connect(new InetSocketAddress("jenkov.com", 80));
// a channel must be in non-blocking mode to be registered with a Selector
socketChannel.configureBlocking(false);

Selector selector = Selector.open();
SelectionKey key = socketChannel.register(selector, SelectionKey.OP_READ);

// returns the number of channels ready after 5000 ms; if you have
// multiple channels attached to the selector then you may prefer
// to iterate through the SelectionKeys
if (selector.select(5000) > 0) {
    SocketChannel keyedChannel = (SocketChannel) key.channel();
    // read/write the SocketChannel
} else {
    // no answer within the timeout: resend the request (your step (1)).
    // I think your best bet here is to close and reopen the Socket
    // or to reinstantiate a new socket - depends on your Request method
}
(Using Sails.js)
I am testing webworker-threads ( https://www.npmjs.com/package/webworker-threads ) for long-running processes on Node, and the following example looks good:
var Worker = require('webworker-threads').Worker;
var fibo = new Worker(function() {
    function fibo(n) {
        return n > 1 ? fibo(n - 1) + fibo(n - 2) : 1;
    }
    this.onmessage = function (event) {
        try {
            postMessage(fibo(event.data));
        } catch (e) {
            console.log(e);
        }
    };
});
fibo.onmessage = function (event) {
    //my return callback
};
fibo.postMessage(40);
But as soon as I add any code to query Mongodb, it throws an exception:
(not using the Sails model in the query, just to make sure the code could run on its own -- the db has no password)
var Worker = require('webworker-threads').Worker;
var fibo = new Worker(function() {
    function fibo(n) {
        return n > 1 ? fibo(n - 1) + fibo(n - 2) : 1;
    }
    // MY DB TEST -- THIS WORKS FINE OUTSIDE THE WORKER
    function callDb(event) {
        var db = require('monk')('localhost/mydb');
        var users = db.get('users');
        users.find({ "firstName": "John" }, function (err, docs) {
            console.log(("serviceSuccess"));
            return fibo(event.data);
        });
    }
    this.onmessage = function (event) {
        try {
            postMessage(callDb(event.data)); // calling db function now
        } catch (e) {
            console.log(e);
        }
    };
});
fibo.onmessage = function (event) {
    //my return callback
};
fibo.postMessage(40);
Since the DB code works perfectly fine outside the Worker, I think it has something to do with the require. I've tried something that also works outside the Worker, like
var moment = require("moment");
var deadline = moment().add(30, "s");
And the code also throws an exception. Unfortunately, console.log only shows this for all types of errors:
{Object}
{/Object}
So, the questions are: is there any restriction or guideline for using require inside a Worker? What could I be doing wrong here?
UPDATE
It seems Threads will not allow external modules:
https://github.com/xk/node-threads-a-gogo/issues/22
TL;DR I think that if you need to require, you should use node's cluster or child process. If you want to offload some CPU-busy work, you should use tagg and the load function to grab any helpers you need.
Upon reading this thread, I see that this question is similar to this one:
Load Nodejs Module into A Web Worker
To which Audreyt, the webworker-threads author, answered:
author of webworker-threads here. Thank you for using the module!
There is a default native_fs_ object with the readFileSync you can use
to read files.
Beyond that, I've mostly relied on onejs to compile all required
modules in package.json into a single JS file for importScripts to
use, just like one would do when deploying to a client-side web worker
environment. (There are also many alternatives to onejs -- browserify,
etc.)
Hope this helps!
So it seems importScripts is the way to go. But at this point, it might be too hacky for what I want to do, so KUE is probably a more mature solution.
I'm a collaborator on the node-webworker-threads project.
You can't require in node-webworker-threads
You are correct in your update: node-webworker-threads does not (currently) support requiring external modules.
It has limited support for some of the built-ins, including file system calls and a version of console.log. As you've found, the version of console.log implemented in node-webworker-threads is not identical to the built-in console.log in Node.js; it does not, for example, automatically make nice string representations of the components of an Object.
In some cases you can use external modules, as outlined by audreyt in her response. Clearly this is not ideal, and I view the incomplete require as the primary "dealbreaker" of node-webworker-threads. I'm hoping to work on it this summer.
When to use node-webworker-threads
node-webworker-threads allows you to code against the WebWorker API and run the same code in the client (browser) and the server (Node.js). This is why you would use node-webworker-threads over node-threads-a-gogo.
node-webworker-threads is great if you want the most lightweight possible JavaScript-based workers, to do something CPU-bound. Examples: prime numbers, Fibonacci, a Monte Carlo simulation, offloading built-in but potentially-expensive operations like regular expression matching.
When not to use node-webworker-threads
node-webworker-threads emphasizes portability over convenience. For a Node.js-only solution, this means that node-webworker-threads is not the way to go.
If you're willing to compromise on full-stack portability, there are two ways to go: speed and convenience.
For speed, try a C++ add-on. Use NaN. I recommend Scott Frees's C++ and Node.js Integration book to learn how to do this; it'll save you a lot of time. You'll pay for it by needing to brush up on your C++ skills, and if you want to work with MongoDB then this probably isn't a good idea.
For convenience, use a Child Process-based worker pool like fork-pool. In this case, each worker is a full-fledged Node.js instance. You can then require to your heart's content. You'll pay for it in a larger application footprint and in higher communication costs compared to node-webworker-threads or a C++ add-on.
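For illustration, a minimal hand-rolled version of that approach (the file names and message shape are made up here; fork-pool manages a pool of such children for you):

// parent.js -- offload the DB query to a child process
var fork = require('child_process').fork;

var worker = fork(__dirname + '/db-worker.js');
worker.on('message', function (result) {
    console.log('worker replied:', result);
});
worker.send({ firstName: 'John' });

// db-worker.js -- a full Node.js instance, so require works normally
var db = require('monk')('localhost/mydb');

process.on('message', function (query) {
    db.get('users').find(query, function (err, docs) {
        process.send(err ? { error: String(err) } : { count: docs.length });
    });
});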
I'm about to start coding a chat bot. However, I plan on running more than one, using a wrapper to communicate and restart them. I have done this in the past with child_process.fork(), but it was incredibly inefficient. I've looked into spawn and cluster as well, but they all seem to focus on running the same thing, not unique bots. As for plugins, I've looked into fleet, forkfriend, and workerfarm, but none seem to fit my needs.
Is there any plugin or way I'm not seeing to help me do this? Or am I just going to have to wing it again?
You can have as many chat bots as you wish in a single process. The rule of thumb in Node.js is to use one process per processor core, since Node has a slightly different multithreading model than you might be used to.
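For instance, a minimal sketch of the one-process-per-core rule using the built-in cluster module:

var cluster = require('cluster');
var numCores = require('os').cpus().length;

if (cluster.isMaster) {
    // one worker per core; each worker can host many bots
    for (var i = 0; i < numCores; i++) {
        cluster.fork();
    }
} else {
    // worker process: start as many chat bots here as you like
}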
Assuming you still need some multithreading on top of this, here are a couple of node modules you might find fitting your needs:
node-webworker-threads, dnode.
UPDATE:
Now I see what you need. There is a nice example in the Node.js docs, which I saw recently. I'll just copy & paste it here:
var normal = require('child_process').fork('child.js', ['normal']);
var special = require('child_process').fork('child.js', ['special']);

// Open up the server and send sockets to child
var server = require('net').createServer();
server.on('connection', function (socket) {
    // if this is a VIP
    if (socket.remoteAddress === '74.125.127.100') {
        special.send('socket', socket);
        return;
    }
    // just the usual dudes
    normal.send('socket', socket);
});
server.listen(1337);
child.js looks like this:
process.on('message', function(m, socket) {
    if (m === 'socket') {
        socket.end('You were handled as a ' + process.argv[2] + ' person');
    }
});
I believe it's pretty much what you need. Launch several processes with different configs (if the number of configs is relatively low) and pass the socket to a particular one from the master process.
I'm writing a Node.js application using a global event emitter. In other words, my application is built entirely around events. I find this kind of architecture working extremely well for me, with the exception of one side case which I will describe here.
Note that I do not think knowledge of Node.js is required to answer this question. Therefore I will try to keep it abstract.
Imagine the following situation:
A global event emitter (called mediator) allows individual modules to listen for application-wide events.
An HTTP server is created, accepting incoming requests.
For each incoming request, an event emitter is created to deal with events specific to that request.
An example (purely to illustrate this question) of an incoming request:
mediator.on('http.request', function(request, response, emitter) {
    //deal with the new request here, e.g.:
    response.send("Hello World.");
});
So far, so good. One can now extend this application by identifying the requested URL and emitting appropriate events:
mediator.on('http.request', function(request, response, emitter) {
    //identify the requested URL
    if (request.url === '/') {
        emitter.emit('root');
    }
    else {
        emitter.emit('404');
    }
});
Following this, one can write a module that deals with a root request.
mediator.on('http.request', function(request, response, emitter) {
    //when root is requested
    emitter.once('root', function() {
        response.send('Welcome to the frontpage.');
    });
});
Seems fine, right? Actually, it is potentially broken code. The reason is that the line emitter.emit('root') may be executed before the line emitter.once('root', ...). The result is that the listener never gets executed.
One could deal with this specific situation by delaying the emission of the root event to the end of the event loop:
mediator.on('http.request', function(request, response, emitter) {
    //identify the requested URL
    if (request.url === '/') {
        process.nextTick(function() {
            emitter.emit('root');
        });
    }
    else {
        process.nextTick(function() {
            emitter.emit('404');
        });
    }
});
The reason this works is that the emission is now delayed until the currently executing code has finished, by which point all listeners have been registered.
However, there are many issues with this approach:
one of the advantages of such an event-based architecture is that emitting modules do not need to know who is listening to their events. Therefore the emitter should not have to decide whether an emission needs to be delayed, because it cannot know what will listen for the event and whether the listener needs the emission to be delayed or not.
it significantly clutters and complicates the code (compare the two examples)
it probably worsens performance
As a consequence, my question is: how does one avoid the need to delay event emission to the next tick of the event loop, such as in the described situation?
Update 19-01-2013
An example illustrating why this behavior is useful: allowing an HTTP request to be handled in parallel.
mediator.on('http.request', function(req, res) {
    req.onceall('json.parsed', 'validated', 'methodoverridden', 'authenticated', function() {
        //the request has now been parsed as JSON, validated, its HTTP method has been overridden when requested, and it has been authenticated
    });
});
If each event like json.parsed were emitted with the original request on the global mediator, the above would not be possible, because each event could relate to a different request, and you could not listen for a combination of actions executed in parallel for one specific request.
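(onceall is not a standard EventEmitter method; a rough sketch of what such a helper might look like, built on top of once:)

var EventEmitter = require('events').EventEmitter;

// hypothetical helper: run the callback once every listed event has fired
EventEmitter.prototype.onceall = function (/* names..., done */) {
    var args = Array.prototype.slice.call(arguments);
    var done = args.pop();
    var remaining = args.length;
    var self = this;
    args.forEach(function (name) {
        self.once(name, function () {
            if (--remaining === 0) done();
        });
    });
    return this;
};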
Having both a mediator which listens for events and an emitter which also listens for and triggers events seems overly complicated. I'm sure there is a legitimate reason, but my suggestion is to simplify. We use a global eventBus in our Node.js service that does something similar. For this situation, I would emit a new event.
bus.on('http:request', function(req, res) {
    if (req.url === '/')
        bus.emit('ns:root', req, res);
    else
        bus.emit('404');
});

// note the use of a namespace here to target a specific subsystem
bus.once('ns:root', function(req, res) {
    res.send('Welcome to the frontpage.');
});
It sounds like you're starting to run into some of the disadvantages of the observer pattern (as mentioned in many books/articles that describe this pattern). My solution is not ideal – assuming an ideal one exists – but:
If you can make a simplifying assumption that the event is emitted only 1 time per emitter (i.e. emitter.emit('root'); is called only once for any emitter instance), then perhaps you can write something that works like jQuery's $.ready() event.
In that case, subscribing to emitter.once('root', function() { ... }) will check whether 'root' was emitted already, and if so, will invoke the handler anyway. And if 'root' was not emitted yet, it'll defer to the normal, existing functionality.
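A hedged sketch of that idea (the StickyEmitter name is made up; it assumes each event fires at most once per emitter):

var EventEmitter = require('events').EventEmitter;
var util = require('util');

function StickyEmitter() {
    EventEmitter.call(this);
    this._fired = {};  // event name -> the args it was emitted with
}
util.inherits(StickyEmitter, EventEmitter);

// remember that the event fired, then emit as usual
StickyEmitter.prototype.emit = function (name) {
    this._fired[name] = Array.prototype.slice.call(arguments, 1);
    return EventEmitter.prototype.emit.apply(this, arguments);
};

// a late subscriber is invoked immediately, like $.ready()
StickyEmitter.prototype.once = function (name, fn) {
    if (this._fired.hasOwnProperty(name)) {
        fn.apply(this, this._fired[name]);
        return this;
    }
    return EventEmitter.prototype.once.call(this, name, fn);
};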
That's all I got.
I think this architecture is in trouble, as you're doing sequential work (I/O) that requires a definite order of actions, but you still plan to build the app on components that naturally allow a non-deterministic order of execution.
What you can do
Include a context selector in the mediator.on function, e.g. in this way:
mediator.on('http.request > root', function( .. ) { } )
Or define it as a submediator:
var submediator = mediator.yield('http.request > root');
submediator.on(function( ... ) {
    emitter.once('root', ... );
});
This would trigger the callback only if root was emitted from http.request handler.
Another, trickier way is to do background ordering, but it is not feasible with your current one-mediator-rules-them-all interface. Implement the code so that each .emit call does not actually send the event but puts the produced event in a list, and each .once puts a consume-event record in the same list. When all mediator.on callbacks have been executed, walk through the list and sort it by dependency order (e.g. if the list has consume 'root' before produce 'root', swap them). Then execute the consume handlers in order. If you run out of events, stop executing.
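A rough sketch of that list-based matching (all names here are made up):

var produced = {};   // event name -> the args it was produced with
var consumers = {};  // event name -> pending consume callbacks

function produce(name /*, args... */) {
    produced[name] = Array.prototype.slice.call(arguments, 1);
}

function consume(name, fn) {
    (consumers[name] = consumers[name] || []).push(fn);
}

// called once all mediator.on callbacks have run: registration order no
// longer matters, since every produce record is matched against every
// consume record for the same event name
function flush() {
    Object.keys(produced).forEach(function (name) {
        (consumers[name] || []).forEach(function (fn) {
            fn.apply(null, produced[name]);
        });
        delete consumers[name];
    });
}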
Oi, this seems like a very broken architecture for a few reasons:
How do you pass around request and response? It looks like you've got global references to them.
If I answer your question, you will turn your server into a purely synchronous function, and you'd lose the power of async Node.js. (Requests would effectively be queued, and could only start executing once the previous request is 100% finished.)
To fix this:
Pass the request & response to the emit() call as parameters (see the sketch after these points). Now you don't need to force everything to run synchronously anymore, because when the next component handles the event, it will have a reference to the right request & response objects.
Learn about other common solutions that don't need a global mediator. Look at the pattern that Connect was based on many Internet-years ago: http://howtonode.org/connect-it <- describes middleware/onion routing
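A minimal sketch of the first fix, reusing the question's mediator/emitter setup:

// the URL-dispatching module passes the objects along with the event
mediator.on('http.request', function(request, response, emitter) {
    if (request.url === '/') {
        emitter.emit('root', request, response);
    }
});

// the frontpage module receives the right objects for each request,
// without relying on shared or global state
mediator.on('http.request', function(request, response, emitter) {
    emitter.once('root', function(req, res) {
        res.send('Welcome to the frontpage.');
    });
});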