Node-Red: Create server and share input - node.js

I'm trying to create a new node for Node-Red. Basically it is a UDP listening socket that shall be established via a config node and which shall pass all incoming messages to dedicated nodes for processing.
This is the basic structure of what I have:
var dgram = require('dgram'); // needed for the UDP socket

function udpServer(n) {
    RED.nodes.createNode(this, n);
    this.addr = n.host;
    this.port = n.port;
    var node = this;

    var socket = dgram.createSocket('udp4');

    socket.on('listening', function () {
        var address = socket.address();
        logInfo('UDP Server listening on ' + address.address + ":" + address.port);
    });

    socket.on('message', function (message, remote) {
        var bb = new ByteBuffer.fromBinary(message, 1, 0);
        var CoEdata = decodeCoE(bb);
        if (CoEdata.type == 'digital') { // handle digital output
            // pass to digital handling node
        }
        else if (CoEdata.type == 'analogue') { // handle analogue output
            // pass to analogue handling node
        }
    });

    socket.on("error", function (err) {
        logError("Socket error: " + err);
        socket.close();
    });

    socket.bind({
        address: node.addr,
        port: node.port,
        exclusive: true
    });

    node.on("close", function (done) {
        socket.close();
        done();
    });
}
RED.nodes.registerType("myServernode", udpServer);
For the processing node:
function ProcessAnalog(n) {
    RED.nodes.createNode(this, n);
    var node = this;
    this.serverConfig = RED.nodes.getNode(n.server); // reference to the shared config node
    this.channel = n.channel;
    // how do I get the server's message here?
}
RED.nodes.registerType("process-analogue-in", ProcessAnalog);
I can't figure out how to pass the messages that the socket receives to a variable number of processing nodes, i.e. multiple processing nodes shall share one server instance.
==== EDIT for more clarity ====
I want to develop a new set of nodes:
One server node:
- uses a config node to create a UDP listening socket
- manages the socket connection (close events, errors, etc.)
- receives data packages with one to many channels of different data
One to many processing nodes:
- the processing nodes shall share the connection that the server node has established
- the processing nodes shall handle the messages that the server emits
- possibly the Node-RED flow would use as many processing nodes as there are channels in the server's data package
To quote the Node-Red documentation on config-nodes:
A common use of config nodes is to represent a shared connection to a
remote system. In that instance, the config node may also be
responsible for creating the connection and making it available to the
nodes that use the config node. In such cases, the config node should
also handle the close event to disconnect when the node is stopped.
As far as I understand this, I make the connection available via this.serverConfig = RED.nodes.getNode(n.server); but I cannot figure out how to pass data that is received on this connection to the nodes that use the config node.

A node has no knowledge of what nodes it is connected to downstream.
The best you can do from the first node is to have 2 outputs and to send digital to one and analogue to the other.
You would do this by passing an array to the node.send() function.
E.g.
// this sends a message to just the first output
node.send([msg, null]);
// this sends a message to just the second output
node.send([null, msg]);
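For the two outputs to exist in the first place, the server node's editor definition (its .html file) has to declare them. A minimal sketch, assuming the server node is registered as a regular flow node rather than a config node; the category, defaults, and label here are made up for illustration:
// In the server node's .html file (editor definition) -- illustrative values only
RED.nodes.registerType('myServernode', {
    category: 'network',
    defaults: {
        host: { value: '' },
        port: { value: 0 }
    },
    inputs: 0,
    outputs: 2,   // output 1: digital, output 2: analogue
    label: function () { return 'udp server'; }
});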
Nodes that receive messages need to add a listener for the input event,
e.g.
node.on('input', function(msg) {
...
});
All of this is well documented on the Node-RED page
The other option: if the udpServer node is a config node, then you need to implement your own listeners. The best bet is to look at something like the MQTT nodes in core for examples of pooling connections.
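A minimal sketch of that pattern, assuming the config node exposes a registerInputNode/deregisterInputNode pair (the names are made up for illustration; the core MQTT nodes use a similar register/deregister approach):
// In the config node (the UDP server) -- a sketch, not the exact MQTT-node code
function udpServer(n) {
    RED.nodes.createNode(this, n);
    var node = this;
    node.inputNodes = [];   // processing nodes that want the incoming messages

    node.registerInputNode = function (handler) {
        node.inputNodes.push(handler);
    };
    node.deregisterInputNode = function (handler) {
        node.inputNodes = node.inputNodes.filter(function (h) { return h !== handler; });
    };

    var dgram = require('dgram');
    var socket = dgram.createSocket('udp4');
    socket.on('message', function (message, remote) {
        // decode as before, then hand the result to every registered node
        node.inputNodes.forEach(function (handler) {
            handler.processMessage(message, remote);
        });
    });
    socket.bind({ address: n.host, port: n.port, exclusive: true });

    node.on('close', function (done) {
        socket.close();
        done();
    });
}

// In the processing node -- registers itself with the shared config node
function ProcessAnalog(n) {
    RED.nodes.createNode(this, n);
    var node = this;
    node.serverConfig = RED.nodes.getNode(n.server);
    node.channel = n.channel;

    if (node.serverConfig) {
        node.serverConfig.registerInputNode(node);
    }
    // called by the config node for every incoming UDP packet
    node.processMessage = function (message, remote) {
        node.send({ payload: message, channel: node.channel });
    };
    node.on('close', function () {
        if (node.serverConfig) {
            node.serverConfig.deregisterInputNode(node);
        }
    });
}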

Related

Unable to send OSC messages with node's osc package. Port closed error, even though the port on the machine is open

I'm using the code below to try to send OSC messages to a computer on the network. I'm using a package called osc.
I'm unable to send messages to the machine running the OSC server and receive the error below when attempting to send OSC messages:
Error: Uncaught, unspecified "error" event. (Can't send packets on a closed osc.Port object. Please open (or reopen) this Port by calling open().)
Code
let osc = require('osc');

let oscUDP = new osc.UDPPort({
    remoteAddress: "192.168.1.5",
    remotePort: 8004
});

oscUDP.send({
    address: "/carrier/frequency",
    args: 440
});

oscUDP.open();
If I put oscUDP.open() before the send call I get a different error:
Error: send EINVAL 192.168.1.5:8004
at Object.exports._errnoException (util.js:1007:11)
at exports._exceptionWithHostPort (util.js:1030:20)
at SendWrap.afterSend [as oncomplete] (dgram.js:402:11)
I am running OSCulator on OSX as the server. The code above lives on a different machine. When I run nmap on the IP address the port is open:
nmap 192.168.1.5 -p 8004
Starting Nmap 6.40 ( http://nmap.org ) at 2016-08-30 08:22 BST
Nmap scan report for 192.168.1.5
Host is up (0.13s latency).
PORT STATE SERVICE
8004/tcp open unknown
If I use osc-cli the messages are received on the machine running the OSC server:
osc --host 192.168.1.5:8004 /test 1 2 3
So it would seem the problem isn't with closed ports at all as the messages are sent and received when using osc-cli.
Any ideas?
I know I'm coming to this quite late, and it looks like you found a different library that works for you, but I thought a response might be helpful for others who are facing this issue. I'm the developer of osc.js, the original library you were trying to use.
First off, as background information, osc.js is factored into two different layers:
The low-level API that provides functions for reading and writing OSC messages and bundles to/from Typed Arrays.
The higher-level, event-based Port API, which provides a collection of platform-specific transport objects, which offer an easy way to do bidirectional communication over protocols like UDP, Web Sockets, etc.
In the case of your example code, you were trying to send an OSC message on your UDPPort object prior to it being ready. When you open() a Port, it may need to perform asynchronous operations such as opening up a socket, etc. As a result, it fires an event (aptly called ready) when the Port is all set to be used. Until ready fires, you won't be able to send or receive OSC packets.
So in the case of your original code, it looks like you were assuming that this line was synchronous and that you could call send() immediately afterwards:
oscUDP.open();
Instead, you just needed to listen for the ready event prior to attempting to send a message on the Port. Like this:
oscUDP.on("ready", function () {
oscUDP.send({
address: "/carrier/frequency",
args: 440
});
});
The osc.js Node.js example illustrates this pattern. But when I saw your question, I realized that the sample code in the osc.js README was a bit ambiguous on this point. I have improved the event documentation and the inline README sample code to make it clearer. Sorry for the confusion.
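Put together with the UDPPort from the question, the whole sequence looks roughly like this (a sketch using the same address and port as above):
let osc = require('osc');

let oscUDP = new osc.UDPPort({
    remoteAddress: "192.168.1.5",
    remotePort: 8004
});

// send() only works after the Port has finished opening its socket
oscUDP.on("ready", function () {
    oscUDP.send({
        address: "/carrier/frequency",
        args: 440
    });
});

oscUDP.open();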
There are cases, perhaps such as yours, where the higher-level API isn't quite what you need. osc.js also provides functions for easily encoding an OSC packet as a Uint8Array, which can be converted into a Node.js Buffer. So you could have done something similar to your solution just by using osc.js' osc.writeMessage() function. It has always been quite well documented, fortunately. Here's your example, modified to use osc.js' low-level API:
const dgram = require('dgram');
const client = dgram.createSocket('udp4');
const osc = require('osc');

const HOST = '192.168.1.5';
const PORT = 8004;

process.on('SIGINT', function() {
    client.close();
});

let oscNoteMessage = function(note, value) {
    var message = osc.writeMessage({
        address: '/note/' + note,
        args: [
            {
                type: 'i',
                value: value
            }
        ]
    });
    return Buffer.from(message);
}

let noteOn = function(note) {
    return oscNoteMessage(note, 1);
}

let noteOff = function(note) {
    return oscNoteMessage(note, 0);
}

let send = function(message) {
    client.send(message, PORT, HOST, function(err, bytes) {
        if (err) throw new Error(err);
    })
}

send(noteOn('c'));

setTimeout(function() {
    send(noteOff('c'));
}, 1000);
Anyway, I'm glad you were able to come up with a solution that works for your project, and I hope this response helps other users who may encounter similar issues. And of course, feel free to ask questions or file issues on the osc.js issue tracker.
Best regards, and apologies for the trouble you experienced using the library!
I figured it's actually pretty easy to send OSC data over UDP without the need for any packages except a2r-osc, which is used for encoding OSC data.
I'm posting the solution in case anyone else is interested:
const dgram = require('dgram');
const client = dgram.createSocket('udp4');
const osc = require('a2r-osc');

const HOST = '192.168.1.5';
const PORT = 8004;

process.on('SIGINT', function() {
    client.close();
});

let noteOn = function(note) {
    return new osc.Message('/note/' + note, 'i', 1).toBuffer();
}

let noteOff = function(note) {
    return new osc.Message('/note/' + note, 'i', 0).toBuffer();
}

let send = function(message) {
    client.send(message, PORT, HOST, function(err, bytes) {
        if (err) throw new Error(err);
    })
}

send(noteOn('c'));

setTimeout(function() {
    send(noteOff('c'));
}, 1000);

Node Cluster issue using Socket.io and Redis

Ok, I have an express-powered API where I also have socket.io running to receive/send realtime events... all works just dandy. I need to cluster my app. I set everything up based on the code below. I spin up workers, they get connections and everything works, except that now I can't "blast" to all socket.io connections. Here is the setup (taken from this):
var express = require('express'),
    cluster = require('cluster'),
    net = require('net'),
    sio = require('socket.io'),
    sio_redis = require('socket.io-redis');

var port = 3000,
    num_processes = require('os').cpus().length;

if (cluster.isMaster) {
    // This stores our workers. We need to keep them to be able to reference
    // them based on source IP address. It's also useful for auto-restart,
    // for example.
    var workers = [];

    // Helper function for spawning worker at index 'i'.
    var spawn = function(i) {
        workers[i] = cluster.fork();

        // Optional: Restart worker on exit
        workers[i].on('exit', function(worker, code, signal) {
            console.log('respawning worker', i);
            spawn(i);
        });
    };

    // Spawn workers.
    for (var i = 0; i < num_processes; i++) {
        spawn(i);
    }

    // Helper function for getting a worker index based on IP address.
    // This is a hot path so it should be really fast. The way it works
    // is by converting the IP address to a number by removing the dots,
    // then compressing it to the number of slots we have.
    //
    // Compared against "real" hashing (from the sticky-session code) and
    // "real" IP number conversion, this function is on par in terms of
    // worker index distribution only much faster.
    var workerIndex = function(ip, len) {
        var _ip = ip.split(/['.'|':']/),
            arr = [];

        for (el in _ip) {
            if (_ip[el] == '') {
                arr.push(0);
            }
            else {
                arr.push(parseInt(_ip[el], 16));
            }
        }

        return Number(arr.join('')) % len;
    }

    // Create the outside facing server listening on our port.
    var server = net.createServer({ pauseOnConnect: true }, function(connection) {
        // We received a connection and need to pass it to the appropriate
        // worker. Get the worker for this connection's source IP and pass
        // it the connection.
        var worker = workers[workerIndex(connection.remoteAddress, num_processes)];
        worker.send('sticky-session:connection', connection);
    }).listen(port);
} else {
    // Note we don't use a port here because the master listens on it for us.
    var app = new express();

    // Here you might use middleware, attach routes, etc.

    // Don't expose our internal server to the outside.
    var server = app.listen(0, 'localhost'),
        io = sio(server);

    // Tell Socket.IO to use the redis adapter. By default, the redis
    // server is assumed to be on localhost:6379. You don't have to
    // specify them explicitly unless you want to change them.
    io.adapter(sio_redis({ host: 'localhost', port: 6379 }));

    // Here you might use Socket.IO middleware for authorization etc.

    // Listen to messages sent from the master. Ignore everything else.
    process.on('message', function(message, connection) {
        if (message !== 'sticky-session:connection') {
            return;
        }

        // Emulate a connection event on the server by emitting the
        // event with the connection the master sent us.
        server.emit('connection', connection);

        connection.resume();
    });
}
So I connect from various machines to test concurrency, workers do their thing and all is good, but when I get an IO connection, I'm logging the TOTAL "connected" count and it's always 1 per instance. I need a way to say
allClusterForks.emit(stuff)
I get the connection on the correct worker pid, but "ALL CONNECTIONS" always returns 1.
io.on('connection', function(socket) {
    console.log('Connected to worker %s', process.pid);
    console.log("Adapter ROOMS %s ", io.sockets.adapter.rooms);
    console.log("Adapter SIDS %s ", io.sockets.adapter.sids);
    console.log("SOCKETS CONNECTED %s ", Object.keys(io.sockets.connected).length);
});
I can see the subscribe/unsubscribe coming in using Redis MONITOR
1454701383.188231 [0 127.0.0.1:63150] "subscribe" "socket.io#/#gXJscUUuVQGzsYJfAAAA#"
1454701419.130100 [0 127.0.0.1:63167] "subscribe" "socket.io#/#geYSvYSd5zASi7egAAAA#"
1454701433.842727 [0 127.0.0.1:63167] "unsubscribe" "socket.io#/#geYSvYSd5zASi7egAAAA#"
1454701444.630427 [0 127.0.0.1:63150] "unsubscribe" "socket.io#/#gXJscUUuVQGzsYJfAAAA#"
These are connections from 2 different machines, I would expect by using the socket io redis adapter that these subscriptions would be coming in on the same redis connection, but they are different.
Am I just totally missing something? There's a surprising lack of documentation/articles out there for this that aren't either completely outdated/wrong/ambiguous.
EDIT:
Node v5.3.0
Redis v3.0.6
Socket.io v1.3.7
So if anyone comes across this: I figured out that "looking" at the counts of connected sockets across processes is not supported, but broadcasting or emitting to them is. So I've basically just been "testing" for no reason. All works as expected. I WILL be rewriting the socket.io-redis adapter to allow checking counts across processes.
There was a pull request a few years ago to implement support for what I was trying to do (https://github.com/socketio/socket.io-redis/pull/15), and I might try cleaning that up and re-submitting.
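For reference, broadcasting to every connected client across all workers needs nothing special once the redis adapter is in place; a plain emit on whichever worker handles the triggering event is fanned out through Redis. A sketch, reusing the worker setup from the question (the 'chat' event name is just an example):
// Inside any worker that has io.adapter(sio_redis(...)) configured
io.on('connection', function(socket) {
    socket.on('chat', function(msg) {
        // io.emit() goes through the redis adapter, so clients connected
        // to *other* worker processes receive it as well
        io.emit('chat', msg);

        // the same applies to rooms/namespaces, e.g.
        // io.to('some-room').emit('chat', msg);
    });
});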

How to pass an active WebSocket to a clustered thread in Node.js?

In Node.js they expose a handy way to pass net.Sockets to child processes (cluster.Worker) via:
var cluster = require("cluster");
var socket; // some instance of net.Socket

var worker = cluster.fork();
worker.on("online", function() {
    worker.send("socket", socket);
});
Which is super cool and works handily. But how would I do this with a WebSocket connection? I'm open to try any module.
Currently I've tried using various modules like ws. Most of them store the net.Socket from the initial HTTP request and then upgrade it, but none seem simple enough to pass to the child process as a net.Socket, because as far as I can tell they carry a lot of handshake state required by the WebSocket spec.
I know there are hackish solutions, like opening a WebSocket server on the child process on a unique port, then telling the WebSocket connection to reconnect on that port, but then I need an open port for every child thread. Or, piping all data to the WebSocket connection through process.send so the main thread does all the I/O, but that defeats some of the performance benefits of running stuff on multiple threads.
So does anyone have any ideas?
Welp, I figured it out. ws may have been too much for my intended purposes. Instead I found a pretty obscure WebSocket library, lark-websocket, which exposes a function that, given a net.Socket, can wrap it up in their Client class and work with it as a WebSocket. The only issue was that both the parent and child threads would then try to ping the connection on the other end, so I had to fork it and add a way for the parent thread to pause pinging.
Here's some example code for anyone interested:
var cluster = require("cluster");
var ws = require('lark-websocket');
if(cluster.isMaster) { // make a child process and pipe all ws connections to it
var worker = cluster.fork();
worker.once("online", function() {
console.log("worker online with pid", worker.process.pid);
})
ws.createServer(function(client, request){
worker.send("socket", client._socket); // send all websocket clients to the worker thread
}).listen(27015);
}
else { // we are a worker, so we handle the ws connections
process.on("message", function(message, handler) {
if(message === "socket") { // Note: Node js can only send sockets via handler if message === "socket", because passing sockets between threads is sketchy as fuck
var client = ws.createClient(handler);
client.on('message',function(msg){
console.log("worker " + process.pid + " got:", msg);
client.send("I got your: " + msg);
});
}
});
}

Node.js cluster module appears to break Socket.io handshake

I have the following simple WebSocket server built around the Socket.io library:
var PROCESSES = 1,
    cluster = require('cluster'),
    i;

if (cluster.isMaster) {
    for (i = 0; i < PROCESSES; i++) {
        console.log('Forking worker', i);
        cluster.fork();
    }
} else {
    (function () {
        var server = require('http').Server(),
            io = require('socket.io')(server);

        io.on('connection', function (socket) {
            socket.on('message', function (message) {
                socket.emit('message', message + ' too!');
            });
        });

        server.listen(8080);
    })();
}
When started, it creates a single server process which listens for WebSocket connections and echoes a variation of the message back to the client:
$ iocat --socketio ws://localhost:8080
> i am hungry
i am hungry too!
> i like you
i like you too!
>
Now, when I change the PROCESSES variable to a number larger than 1, the client can no longer connect.
var PROCESSES = 2,
...
...results in...
$ iocat --socketio ws://localhost:8080
> client.on error
$ iocat -v --socketio ws://localhost:8080
> SIOClient> SIOClient: url-> ws://localhost:8080
SIOClient> onError { [Error: xhr poll error] description: 400 }
client.on error
My gut feeling is that the cluster module, when given more than one worker process, inappropriately switches from one process to another mid-handshake. But I would have thought that the entire connection, from the client initiating the handshake to the closing of the socket at the very end, occurred over one persistent, keep-alive'd connection.
So what exactly is going on here? And how could it be worked around? I'm familiar with the idea of using a Redis store to share state between server processes on different machines, but that feels like too much infrastructure for my use case (collecting a stream of events from the client and replying with an acknowledgement).
Versions: socket.io#1.3.3, node#0.10.36, seen on OS X 10.10 and CentOS 6.6
socket.io is not a simple wrapper over WebSockets, it does much more. The opening handshake is an http request to decide on a protocol (WebSocket, polling, flash sockets, etc.) followed by, in your case, probably a WebSocket request. If those hit different processes, the handshake will fail.
socket.io requires that you use sticky sessions, to ensure that a given client hits the same process each time. They suggest using the sticky-session module if you want to use cluster.
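A minimal sketch of the sticky-session approach applied to the echo server from the question (assuming the sticky-session npm module; details are simplified):
var sticky = require('sticky-session');
var server = require('http').Server();

// sticky.listen() forks the workers and routes each client IP to the same
// worker, so the polling handshake and the WebSocket upgrade hit one process.
if (!sticky.listen(server, 8080)) {
    // master process
    server.once('listening', function () {
        console.log('server started on port 8080');
    });
} else {
    // worker process: attach socket.io here
    var io = require('socket.io')(server);
    io.on('connection', function (socket) {
        socket.on('message', function (message) {
            socket.emit('message', message + ' too!');
        });
    });
}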

Handle new TCP connections synchronously

I know Node.js is asynchronous by nature and it is preferable to use it that way, but I have a use case where we need to handle incoming TCP connections in a synchronous way. Once a new connection is received, we need to connect to some other TCP server, perform some bookkeeping etc., and only then handle the next connection. Since the number of connections is limited, it is fine to handle this in a synchronous way.
Looking for an elegant way to handle this scenario.
var net = require('net');

net.createServer(function(connection) {
    console.log('Received a connection - ');
    var testvar = null;
    var sock = new net.Socket();    // outgoing connection to the other TCP server
    sock.connect(PORT, HOST, function() {
        console.log('Connected to server - ');
    });
    // Other listeners
});
In the above code, if two connections are received simultaneously the output may be (due to the asynchronous nature):
Received a connection
Received a connection
Connected to server
Connected to server
But the expectation is:
Received a connection
Connected to server
Received a connection
Connected to server
What is the proper way of doing this?
One solution is to implement a queue-like mechanism, emitting 'done' or 'complete' events to handle the next connection.
For this we may have to take the connection callback out of the createServer call. How do we handle scoping of the connection and other variables (testvar) in this case?
And what happens to data/messages received on connections that are still in the queue, not yet processed, and for which no 'data' listener has been registered yet?
Any other better solutions will be helpful.
I think it is important to separate the concepts of synchronous code vs serial code. You want to process each request serially, but that can still be accomplished while handling each request asynchronously. For your case, the easiest way would probably be to have a queue of requests to handle instead.
var net = require('net');

var inProgress = false;
var queue = [];

net.createServer(function(sock) {
    queue.push(sock);
    processQueue();
});

function processQueue() {
    if (inProgress || queue.length === 0) return;
    inProgress = true;
    handleSockSerial(queue.shift(), function() {
        inProgress = false;
        processQueue();
    });
}

function handleSockSerial(sock, callback) {
    // Do all your stuff and then call 'callback' when you are done.
}
Note, as long as you are using node >= 0.10, the data coming in from the socket will be buffered until you read the data.
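As a rough sketch of what handleSockSerial could look like, reusing the PORT and HOST of the upstream server from the question (error handling kept minimal):
function handleSockSerial(sock, callback) {
    var upstream = new net.Socket();

    upstream.connect(PORT, HOST, function() {
        console.log('Connected to server - ');
        // ... do the bookkeeping for 'sock' here ...

        upstream.end();
        callback();           // let processQueue() pick up the next connection
    });

    upstream.on('error', function(err) {
        console.error('Upstream error:', err.message);
        callback();           // don't stall the queue on errors
    });
}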
