How can I listen for changes in the network connectivity?
Do you know of any implementation or module that accomplishes this? I'm wondering if something like this exists:
reachability.on('change', function(){...});
reachability.on('connect', function(){...});
reachability.on('disconnect', function(){...});
I've googled it and couldn't find anything about it.
Any help is appreciated. Thank you
There is no such functionality built into Node. You might be able to hook into the OS to listen for the network interface going up or down, or even an Ethernet cable being unplugged, but any other type of connectivity loss is going to be difficult to detect instantly.
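As a rough illustration of that OS-level idea, you could poll os.networkInterfaces() and emit an event when the set of interfaces changes. This is a minimal sketch, not a true OS hook: it polls, and it only sees local interface changes, not upstream outages:
var os = require('os');
var EventEmitter = require('events').EventEmitter;

var nics = new EventEmitter();
var last = '';

// Poll once a second; a change in the set of interface names suggests
// an interface came up or went down.
setInterval(function() {
  var names = Object.keys(os.networkInterfaces()).sort().join(',');
  if (last && names !== last) {
    nics.emit('change', names);
  }
  last = names;
}, 1000);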
The easiest way to detect dead connections is to use an application-level ping/heartbeat mechanism and/or a timeout of some kind.
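For instance, a minimal heartbeat sketch over a plain TCP socket might look like this (the host, port, and 'ping' wire format are assumptions; the peer is expected to answer each ping with some data):
var net = require('net');

var HEARTBEAT_INTERVAL = 5000; // ms between pings
var HEARTBEAT_TIMEOUT = 2000;  // ms to wait for a reply

var socket = net.connect(8080, 'example.com');

var heartbeat = setInterval(function() {
  var timer = setTimeout(function() {
    // No reply in time: treat the connection as dead.
    socket.destroy();
  }, HEARTBEAT_TIMEOUT);

  socket.once('data', function() {
    clearTimeout(timer); // the peer answered, so the connection is alive
  });

  socket.write('ping\n');
}, HEARTBEAT_INTERVAL);

socket.on('close', function() {
  clearInterval(heartbeat); // stop pinging once the socket is gone
});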
If the detection is not specific to a particular network request, you could test connectivity globally by continually pinging some well-connected system that responds to pings. Example:
var EventEmitter = require('events').EventEmitter,
    spawn = require('child_process').spawn,
    rl = require('readline');

var RE_SUCCESS = /bytes from/i, // a reply line from ping
    INTERVAL = 2,               // in seconds
    IP = '8.8.8.8';

var proc = spawn('ping', ['-v', '-n', '-i', String(INTERVAL), IP]),
    rli = rl.createInterface(proc.stdout, proc.stdin),
    network = new EventEmitter();

network.online = false;

rli.on('line', function(str) {
  if (RE_SUCCESS.test(str)) {
    if (!network.online) {
      network.online = true;
      network.emit('online');
    }
  } else if (network.online) {
    network.online = false;
    network.emit('offline');
  }
});
// then just listen for the `online` and `offline` events ...
network.on('online', function() {
  console.log('online!');
}).on('offline', function() {
  console.log('offline!');
});
Related
I'm using the code below to try to send OSC messages to a computer on the network, using a package called osc.
I'm unable to send messages to the machine running the OSC server, and receive the error below when attempting to send:
Error: Uncaught, unspecified "error" event. (Can't send packets on a closed osc.Port object. Please open (or reopen) this Port by calling open().)
Code
let osc = require('osc');

let oscUDP = new osc.UDPPort({
  remoteAddress: "192.168.1.5",
  remotePort: 8004
});

oscUDP.send({
  address: "/carrier/frequency",
  args: 440
});

oscUDP.open();
If I put oscUDP.open() before the send call I get a different error:
Error: send EINVAL 192.168.1.5:8004
at Object.exports._errnoException (util.js:1007:11)
at exports._exceptionWithHostPort (util.js:1030:20)
at SendWrap.afterSend [as oncomplete] (dgram.js:402:11)
I am running OSCulator on OSX as the server. The code above lives on a different machine. When I run nmap on the IP address the port is open:
nmap 192.168.1.5 -p 8004
Starting Nmap 6.40 ( http://nmap.org ) at 2016-08-30 08:22 BST
Nmap scan report for 192.168.1.5
Host is up (0.13s latency).
PORT     STATE SERVICE
8004/tcp open  unknown
If I use osc-cli the messages are received on the machine running the OSC server:
osc --host 192.168.1.5:8004 /test 1 2 3
So it would seem the problem isn't with closed ports at all as the messages are sent and received when using osc-cli.
Any ideas?
I know I'm coming to this quite late, and it looks like you found a different library that works for you, but I thought a response might be helpful for others who are facing this issue. I'm the developer of osc.js, the original library you were trying to use.
First off, as background information, osc.js is factored into two different layers:
The low-level API that provides functions for reading and writing OSC messages and bundles to/from Typed Arrays.
The higher-level, event-based Port API, which provides a collection of platform-specific transport objects that offer an easy way to do bidirectional communication over protocols like UDP, Web Sockets, etc.
In the case of your example code, you were trying to send an OSC message on your UDPPort object prior to it being ready. When you open() a Port, it may need to perform asynchronous operations such as opening up a socket, etc. As a result, it fires an event (aptly called ready) when the Port is all set to be used. Until ready fires, you won't be able to send or receive OSC packets.
So in the case of your original code, it looks like you were assuming that this line was synchronous and that you could call send() immediately afterwards:
oscUDP.open();
Instead, you just needed to listen for the ready event prior to attempting to send a message on the Port. Like this:
oscUDP.on("ready", function () {
oscUDP.send({
address: "/carrier/frequency",
args: 440
});
});
The osc.js Node.js example illustrates this pattern. But when I saw your question, I realized that the sample code in the osc.js README was a bit ambiguous in this regard, so I have improved the event documentation and the inline README sample code to be clearer. Sorry for the confusion.
There are cases, perhaps such as yours, where the higher-level API isn't quite what you need. osc.js also provides functions for easily encoding an OSC packet as a Uint8Array, which can be converted into a Node.js Buffer. So you could have done something similar to your solution just by using osc.js' osc.writeMessage() function, which has always been quite well documented. Here's your example, modified to use osc.js' low-level API:
const dgram = require('dgram');
const client = dgram.createSocket('udp4');
const osc = require('osc');

const HOST = '192.168.1.5';
const PORT = 8004;

process.on('SIGINT', function() {
  client.close();
});

let oscNoteMessage = function(note, value) {
  var message = osc.writeMessage({
    address: '/note/' + note,
    args: [
      {
        type: 'i',
        value: value
      }
    ]
  });

  return Buffer.from(message);
};

let noteOn = function(note) {
  return oscNoteMessage(note, 1);
};

let noteOff = function(note) {
  return oscNoteMessage(note, 0);
};

let send = function(message) {
  client.send(message, PORT, HOST, function(err, bytes) {
    if (err) throw new Error(err);
  });
};

send(noteOn('c'));

setTimeout(function() {
  send(noteOff('c'));
}, 1000);
Anyway, I'm glad you were able to come up with a solution that works for your project, and I hope this response helps other users who may encounter similar issues. And of course, feel free to ask questions or file issues on the osc.js issue tracker.
Best regards, and apologies for the trouble you experienced using the library!
I figured out it's actually pretty easy to send OSC data over UDP without any packages except a2r-osc, which is used for encoding the OSC data.
I'm posting the solution in case anyone else is interested:
const dgram = require('dgram');
const client = dgram.createSocket('udp4');
const osc = require('a2r-osc');

const HOST = '192.168.1.5';
const PORT = 8004;

process.on('SIGINT', function() {
  client.close();
});

let noteOn = function(note) {
  return new osc.Message('/note/' + note, 'i', 1).toBuffer();
};

let noteOff = function(note) {
  return new osc.Message('/note/' + note, 'i', 0).toBuffer();
};

let send = function(message) {
  client.send(message, PORT, HOST, function(err, bytes) {
    if (err) throw new Error(err);
  });
};

send(noteOn('c'));

setTimeout(function() {
  send(noteOff('c'));
}, 1000);
Ok, I have an express-powered API where I also have socket.io running to receive/send realtime events; all works just dandy. I need to cluster my app, so I set everything up based on the code below. I spin up workers, they get connections, and everything works, except that now I can't "blast" to all socket.io connections. Here is the setup (taken from this):
var express = require('express'),
    cluster = require('cluster'),
    net = require('net'),
    sio = require('socket.io'),
    sio_redis = require('socket.io-redis');

var port = 3000,
    num_processes = require('os').cpus().length;

if (cluster.isMaster) {
  // This stores our workers. We need to keep them to be able to reference
  // them based on source IP address. It's also useful for auto-restart,
  // for example.
  var workers = [];

  // Helper function for spawning worker at index 'i'.
  var spawn = function(i) {
    workers[i] = cluster.fork();

    // Optional: Restart worker on exit
    workers[i].on('exit', function(code, signal) {
      console.log('respawning worker', i);
      spawn(i);
    });
  };

  // Spawn workers.
  for (var i = 0; i < num_processes; i++) {
    spawn(i);
  }

  // Helper function for getting a worker index based on IP address.
  // This is a hot path so it should be really fast. The way it works
  // is by converting the IP address to a number by removing the dots,
  // then compressing it to the number of slots we have.
  //
  // Compared against "real" hashing (from the sticky-session code) and
  // "real" IP number conversion, this function is on par in terms of
  // worker index distribution only much faster.
  var workerIndex = function(ip, len) {
    var _ip = ip.split(/['.'|':']/),
        arr = [];

    for (var el in _ip) {
      if (_ip[el] == '') {
        arr.push(0);
      } else {
        arr.push(parseInt(_ip[el], 16));
      }
    }

    return Number(arr.join('')) % len;
  };

  // Create the outside facing server listening on our port.
  var server = net.createServer({ pauseOnConnect: true }, function(connection) {
    // We received a connection and need to pass it to the appropriate
    // worker. Get the worker for this connection's source IP and pass
    // it the connection.
    var worker = workers[workerIndex(connection.remoteAddress, num_processes)];
    worker.send('sticky-session:connection', connection);
  }).listen(port);
} else {
  // Note we don't use a port here because the master listens on it for us.
  var app = new express();

  // Here you might use middleware, attach routes, etc.

  // Don't expose our internal server to the outside.
  var server = app.listen(0, 'localhost'),
      io = sio(server);

  // Tell Socket.IO to use the redis adapter. By default, the redis
  // server is assumed to be on localhost:6379. You don't have to
  // specify them explicitly unless you want to change them.
  io.adapter(sio_redis({ host: 'localhost', port: 6379 }));

  // Here you might use Socket.IO middleware for authorization etc.

  // Listen to messages sent from the master. Ignore everything else.
  process.on('message', function(message, connection) {
    if (message !== 'sticky-session:connection') {
      return;
    }

    // Emulate a connection event on the server by emitting the
    // event with the connection the master sent us.
    server.emit('connection', connection);
    connection.resume();
  });
}
So I connect from various machines to test concurrency, the workers do their thing, and all is good, but when I get an IO connection I log the TOTAL "connected" count and it's always 1 per instance. I need a way to say
allClusterForks.emit(stuff)
I get the connection on the correct worker pid, but "ALL CONNECTIONS" always returns 1.
io.on('connection', function(socket) {
  console.log('Connected to worker %s', process.pid);
  console.log("Adapter ROOMS %s ", io.sockets.adapter.rooms);
  console.log("Adapter SIDS %s ", io.sockets.adapter.sids);
  console.log("SOCKETS CONNECTED %s ", Object.keys(io.sockets.connected).length);
});
I can see the subscribe/unsubscribe coming in using Redis MONITOR
1454701383.188231 [0 127.0.0.1:63150] "subscribe" "socket.io#/#gXJscUUuVQGzsYJfAAAA#"
1454701419.130100 [0 127.0.0.1:63167] "subscribe" "socket.io#/#geYSvYSd5zASi7egAAAA#"
1454701433.842727 [0 127.0.0.1:63167] "unsubscribe" "socket.io#/#geYSvYSd5zASi7egAAAA#"
1454701444.630427 [0 127.0.0.1:63150] "unsubscribe" "socket.io#/#gXJscUUuVQGzsYJfAAAA#"
These are connections from 2 different machines. I would expect, using the socket.io redis adapter, that these subscriptions would come in on the same redis connection, but they are different.
Am I just totally missing something? There's a surprising lack of documentation/articles out there for this that aren't either completely outdated/wrong/ambiguous.
EDIT:
Node v5.3.0
Redis v3.0.6
Socket.io v1.3.7
So if anyone comes across this: I figured out that "looking" at the counts of connected sockets across processes is not supported, but broadcasting or emitting to them is. So I've basically just been "testing" for no reason; it all works as expected. I WILL be rewriting the socket.io-redis adapter to allow checking counts across processes.
There was a pull request a few years ago to implement support for what I was trying to do. https://github.com/socketio/socket.io-redis/pull/15 and I might try cleaning that up and re-submitting.
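For anyone verifying this themselves, a minimal sketch (assuming the redis adapter is attached as in the question) is to broadcast from whichever worker received the connection; the adapter relays the packet so clients on every worker receive it:
io.on('connection', function(socket) {
  // io.emit() goes through the redis adapter, so this reaches sockets
  // owned by every worker process, not just the local one.
  io.emit('announcement', 'broadcast from worker ' + process.pid);
});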
I need to make 3 TCP connections, and it's hard to tell which connection succeeded in the callback for the 'connect' event.
var clients = [];
var ports = [81, 82, 83];

for (i = 0; i < 3; i++) {
  clients[i] = net.createConnection(ports[i], '127.0.0.1');

  clients[i].on('connect', function(conn) {
    console.log("connect is setup");
    console.log(conn); // it's always undefined, why???
    // need to set different data to the different connections
  });
}
The argument provided to the connect event is a potential connection error, so what you're logging is the (absent) error. If I recall correctly, console.log(this); has information about the socket. Your main reference to the socket is also clients[i].
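To illustrate, one way to tie each callback to its connection is to capture the index and port in a per-iteration scope; this is a sketch assuming the same three local ports:
var net = require('net');

var ports = [81, 82, 83];
var clients = [];

ports.forEach(function(port, i) {
  clients[i] = net.createConnection(port, '127.0.0.1');

  clients[i].on('connect', function() {
    // `this` is the socket that just connected, and `i`/`port` are
    // captured per iteration by forEach, so each callback knows which
    // connection it belongs to.
    console.log('client %d connected on port %d', i, port);
    this.write('hello from client ' + i);
  });
});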
I have a server that uses socket.io and I need a way of throttling a client that is sending the server data too quickly. The server exposes both a TCP interface and a socket.io interface; with the TCP server (from the net module) I can use socket.pause() and socket.resume(), and this effectively throttles the client. But with socket.io's socket class there are no pause() and resume() methods.
What would be the easiest way of getting feedback to a client that it is overwhelming the server and needs to slow down? I liked socket.pause() and socket.resume() because they didn't require any additional code on the client side: back up the TCP socket and things naturally slow down. Any equivalent for socket.io?
Update: I provide an API to interact with the server (there is currently a Python version which runs over TCP and a JavaScript version which uses socket.io), so I don't have any real control over what the client does. That's why using socket.pause() and socket.resume() is so great: backing up the TCP stream slows the Python client down no matter what it tries to do. I'm looking for an equivalent for a JavaScript client.
With enough digging I found this:
this.manager.transports[this.id].socket.pause();
and
this.manager.transports[this.id].socket.resume();
Granted, this probably won't work if the socket.io connection isn't a WebSocket connection, and it may break in a future update, but for now I'm going to go with it. When I get some time in the future I'll probably change it to the QUOTA_EXCEEDED solution that Pascal proposed.
Here is a dirty way to achieve throttling. Although this is an old post, some people may benefit from it.
First, register a middleware:
io.on("connection", function (socket) {
socket.use(function (packet, next) {
if (throttler.canBeServed(socket, packet)) {
next();
}
});
//You other code ..
});
canBeServed is a simple throttler as seen below:
function canBeServed(socket, packet) {
  if (socket.markedForDisconnect) {
    return false;
  }

  var previous = socket.lastAccess;
  var now = Date.now();

  if (previous) {
    var diff = now - previous;

    // Check diff and disconnect if needed.
    if (diff < 50) {
      socket.markedForDisconnect = true;
      setTimeout(function () {
        socket.disconnect(true);
      }, 1000);
      return false;
    }
  }

  socket.lastAccess = now;
  return true;
}
You can use process.hrtime() instead of Date.now().
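For reference, a monotonic version of the same time check might look like this (a sketch reusing the 50 ms threshold from above; process.hrtime() is immune to system clock adjustments):
var previous = process.hrtime();

// ... later, when the next packet arrives:
var diff = process.hrtime(previous);           // [seconds, nanoseconds]
var elapsedMs = diff[0] * 1e3 + diff[1] / 1e6; // elapsed milliseconds

if (elapsedMs < 50) {
  // too fast: throttle, as in canBeServed() above
}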
If you have a callback on your server somewhere which normally sends back the response to your client, you could try changing it like this:
before:
var respond = function (res, callback) {
  res.send(data);
};
after:
var respond = function (res, callback) {
  setTimeout(function() {
    res.send(data);
  }, 500); // or whatever delay you want.
};
Looks like you should slow down your clients. If one client can send too fast for your server to keep up, this is not going to go very well with 100s of clients.
One way to do this would be to have the client wait for the reply to each emit before emitting anything else. This way the server can control how fast the client can send, for example by only answering when ready, or only answering after a set time.
If this is not enough, when a client exceeds x requests per second, start replying with something like a QUOTA_EXCEEDED error, and ignore the data they send in. This will force external developers to make their apps behave as you want them to.
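A rough sketch of that quota idea follows; the 'data' event name, the quota value, and the ack payload shape are all assumptions for illustration, not part of any socket.io API:
var MAX_PER_SECOND = 20; // assumed quota

io.on('connection', function(socket) {
  var count = 0;
  var timer = setInterval(function() { count = 0; }, 1000);

  socket.on('disconnect', function() { clearInterval(timer); });

  socket.on('data', function(msg, ack) {
    if (++count > MAX_PER_SECOND) {
      // Over quota: answer with an error and drop the payload.
      if (typeof ack === 'function') ack({ error: 'QUOTA_EXCEEDED' });
      return;
    }
    // ... normal processing, then ack({ ok: true }) when done ...
  });
});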
As another suggestion, I would propose a solution like this:
It is common for MySQL to receive requests faster than it can apply them. The server can record incoming requests in a table in the db, assuming this write is fast enough to keep up with the incoming rate, and then process the queue at a rate the server can sustain. This buffering system lets the server run slowly but still process all the requests; see the sketch after this answer.
But if you want something sequential, then the request callback should be verified before the client can send another request. In this case, there should be a server-ready flag. If the client sends a request while the flag is still red, there can be a message telling the client to slow down.
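A bare-bones version of the buffering idea might look like this, where processRequest is a hypothetical handler standing in for the real database work:
var queue = [];

io.on('connection', function(socket) {
  socket.on('message', function(msg) {
    queue.push(msg); // recording the request is cheap
  });
});

// Drain the queue at a pace the server can sustain.
setInterval(function() {
  var job = queue.shift();
  if (job) {
    processRequest(job); // hypothetical: apply the request to the db
  }
}, 50);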
Simply wrap your client emitter in a function like this:
let emit_live_users = throttle(function() {
  socket.emit("event", "some_data");
}, 2000);
using a throttle function like the one below:
function throttle(fn, threshold) {
  threshold = threshold || 250;
  var last, deferTimer;

  return function() {
    var context = this, now = +new Date, args = arguments;

    if (last && now < last + threshold) {
      // Called too soon: reschedule a trailing call instead.
      clearTimeout(deferTimer);
      deferTimer = setTimeout(function() {
        last = now;
        fn.apply(context, args);
      }, threshold);
    } else {
      last = now;
      fn.apply(context, args);
    }
  };
}
I've written a small Socket.IO server which works fine: I can connect to it and I can send/receive messages, so everything is working ok. Just the relevant part of the code is presented here:
var RedisStore = require('socket.io/lib/stores/redis');
const pub = redis.createClient('127.0.0.1', 6379);
const sub = redis.createClient('127.0.0.1', 6379);
const store = redis.createClient('127.0.0.1', 6379);

io.configure(function() {
  io.set('store', new RedisStore({
    redisPub: pub,
    redisSub: sub,
    redisClient: store
  }));
});

io.sockets.on('connection', function(socket) {
  socket.on('message', function(msg) {
    pub.publish("lobby", msg);
  });

  /*
   * Subscribe to the lobby and receive messages.
   */
  var sub = redis.createClient('127.0.0.1', 6379);
  sub.subscribe("lobby");

  sub.on('message', function(channel, msg) {
    socket.send(msg);
  });
});
I've also written the script presented below, which opens 1000 connections to the server and then, in a setInterval, sends a message on every socket each 10 milliseconds, so it generates quite a lot of traffic.
#!/usr/bin/env node
var io = require('socket.io-client');

var reconn = {'force new connection': true};
var sockets = [];
var num = 1000;

function startSocket(i) {
  sockets[i] = io.connect("http://127.0.0.1:8080", reconn);

  sockets[i].on('connect', function() {
    console.log("Socket[" + i + "] connected.");
  });

  sockets[i].on('message', function(msg) {
    console.log("Socket[" + i + "] Message received: " + msg);
  });
}

/*
 * Start number of sockets.
 */
for (var i = 0; i < num; i++) {
  startSocket(i);
}

/*
 * Send messages forever.
 */
setInterval(function() {
  for (var i = 0; i < num; i++) {
    sockets[i].send("Hello from socket " + i + ".");
  }
}, 10);
This script is a benchmark tool spawning 1000 connections to the server, but when running for several minutes, the server dies with the following error message:
node.js:0 // Copyright Joyent, Inc. and other Node contributors. ^
RangeError: Maximum call stack size exceeded
I know that there's not enough stack space available, so the exception occurs and the process is terminated, but even if I enlarge the stack with the --stack-size option this doesn't actually solve the problem, because I can always spawn more connections, which will eventually kill the server.
My question is: how can I prevent this? It's an effective DoS scenario, where anybody can hack together this little script and force the node server to terminate, but I would like to prevent that from happening. I would like the Node server to never terminate, just process messages slowly.
Any ideas how this can be prevented? I'm not sure that I want to block IPs, since I would also like mobile phones to log in to the system; many of them share the same IP, so the node server could mistakenly conclude a DoS is under way from one mobile network operator and block its IP.
Thank you
If you would like your node server to run forever, no matter what, use https://github.com/nodejitsu/forever
As for the exception: my hunch is that var sub = redis.createClient('127.0.0.1', 6379); may be allocating a new Redis client, and its associated state, each time a connection is established.
I would first try to put var subs = [] in the global scope and
subs[socket.id] = redis.createClient('127.0.0.1', 6379);
Or something like socket.sub = redis.createClient('127.0.0.1', 6379); to piggyback on the existing, hopefully heap-based, socket.io data structures.
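A sketch of that piggybacking idea, with a cleanup step so each client releases its Redis connection (the disconnect handler is my addition, not part of the original suggestion):
io.sockets.on('connection', function(socket) {
  // One subscriber per socket, stored on the socket itself.
  socket.sub = redis.createClient('127.0.0.1', 6379);
  socket.sub.subscribe('lobby');

  socket.sub.on('message', function(channel, msg) {
    socket.send(msg);
  });

  // Release the Redis connection when the client goes away.
  socket.on('disconnect', function() {
    socket.sub.quit();
  });
});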
If that doesn't work, try to isolate the problem by removing the use of Redis...