Node.js and LÖVE2D socket communication

I'm trying to get a node.js server and a LÖVE2D client to communicate via sockets (just a simple "hello world" test). Both node.js and LÖVE2D are running on the same PC.
I managed to send a message from LÖVE2D to node.js, but I can't read the server's answer.
My node.js server code looks like this:
var net = require('net');
var mySocket;

var server = net.createServer(function(socket) {
    mySocket = socket;
    mySocket.on("connect", onConnect);
    mySocket.on("data", onData);
});

function onConnect() {
    console.log("Connected to LOVE2D");
}

function onData(d) {
    if (d == "exit\0") {
        console.log("exit");
        mySocket.end();
        server.close();
    }
    else {
        console.log("Message from LOVE2D: " + d);
        mySocket.write("Message received!", 'utf8');
    }
}

server.listen(50000, "localhost");
And client code in LÖVE2D looks like this:
local host, port = "localhost", 50000
local socket = require("socket")
local tcp = assert(socket.tcp())
tcp:connect(host, port)
tcp:send("hello there")
tcp:close()
function love.draw()
    love.graphics.print("can't read server answer!", 400, 300)
end
Well, the previous code just sends a message. What syntax should I use to read an answer from the node.js server? For example, this just gives me an error:
local host, port = "localhost", 50000
local socket = require("socket")
local tcp = assert(socket.tcp())
tcp:connect(host, port)
local answer = tcp:send("hello there")
tcp:close()
function love.draw()
    love.graphics.print(answer, 400, 300)
end
Here is some documentation about networking in LÖVE2D & LuaSocket, but the documentation did not help me with this:
http://love2d.org/wiki/Tutorial:Networking_with_UDP
http://w3.impa.br/~diego/software/luasocket/
(Sorry for "noob" question, I'm really new with HTTP protocols and stuff.)

You need to use a receive call as well:
tcp:connect(host, port)
tcp:send("hello there\n")
local answer = tcp:receive()
tcp:close()
function love.draw()
    love.graphics.print(answer, 400, 300)
end
Be careful with newlines in your messages; the default "pattern" for receive is to read one line (terminated by LF, optionally preceded by CR), so if the end-of-line characters are not present, the receive operation will block waiting for them. The alternative would be to read a certain number of characters, but since you don't know the length of the message, you'd need to come up with some sort of header (for example, send two bytes first that encode the length of the message that follows).
It's also possible to use a combination: send one line first and include the number of bytes in the payload that will follow (if any). For example "200 OK 135" or "500 ERROR", and then use that length (135 in the OK message) to read: tcp:receive(135).
If you end up using a TCP-based protocol, you'll probably need to make it non-blocking; otherwise any network delay will block your game. See this SO answer for some pointers.
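To make the line-based approach concrete on the Node side, here is a minimal, untested sketch (not the original server; the port and reply text are just placeholders taken from the question). Every reply is terminated with "\n", so the client's default tcp:receive() returns as soon as the line arrives.
var net = require('net');

var server = net.createServer(function(socket) {
    socket.setEncoding('utf8');
    socket.on('data', function(d) {
        if (d.indexOf('exit') === 0) {
            socket.end();
            server.close();
            return;
        }
        // Terminate the reply with LF so the LuaSocket default "*l" receive
        // pattern on the client side returns immediately.
        socket.write('Message received!\n');
    });
});

server.listen(50000, 'localhost');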

Related

Unable to send OSC messages with node's osc package. Port closed error, even though the port on the machine is open

I'm using the code below to try to send OSC messages to a computer on the network. I'm using a package called osc.
I'm unable to send messages to the machine running the OSC server and receive the error below when attempting to send OSC messages:
Error: Uncaught, unspecified "error" event. (Can't send packets on a closed osc.Port object. Please open (or reopen) this Port by calling open().)
Code
let osc = require('osc');

let oscUDP = new osc.UDPPort({
    remoteAddress: "192.168.1.5",
    remotePort: 8004
});

oscUDP.send({
    address: "/carrier/frequency",
    args: 440
});

oscUDP.open();
If I put oscUDP.open() before the send call I get a different error:
Error: send EINVAL 192.168.1.5:8004
at Object.exports._errnoException (util.js:1007:11)
at exports._exceptionWithHostPort (util.js:1030:20)
at SendWrap.afterSend [as oncomplete] (dgram.js:402:11)
I am running OSCulator on OSX as the server. The code above lives on a different machine. When I run nmap on the IP address the port is open:
nmap 192.168.1.5 -p 8004
Starting Nmap 6.40 ( http://nmap.org ) at 2016-08-30 08:22 BST
Nmap scan report for 192.168.1.5
Host is up (0.13s latency).
PORT STATE SERVICE
8004/tcp open unknown
If I use osc-cli the messages are received on the machine running the OSC server:
osc --host 192.168.1.5:8004 /test 1 2 3
So it would seem the problem isn't with closed ports at all as the messages are sent and received when using osc-cli.
Any ideas?
I know I'm coming to this quite late, and it looks like you found a different library that works for you, but I thought a response might be helpful for others who are facing this issue. I'm the developer of osc.js, the original library you were trying to use.
First off, as background information, osc.js is factored into two different layers:
The low-level API that provides functions for reading and writing OSC messages and bundles to/from Typed Arrays.
The higher-level, event-based Port API, which provides a collection of platform-specific transport objects, which offer an easy way to do bidirectional communication over protocols like UDP, Web Sockets, etc.
In the case of your example code, you were trying to send an OSC message on your UDPPort object prior to it being ready. When you open() a Port, it may need to perform asynchronous operations such as opening up a socket, etc. As a result, it fires an event (aptly called ready) when the Port is all set to be used. Until ready fires, you won't be able to send or receive OSC packets.
So in the case of your original code, it looks like you were assuming that this line was synchronous and that you could call send() immediately afterwards:
oscUDP.open();
Instead, you just needed to listen for the ready event prior to attempting to send a message on the Port. Like this:
oscUDP.on("ready", function () {
oscUDP.send({
address: "/carrier/frequency",
args: 440
});
});
The osc.js Node.js example illustrates this pattern. But when I saw your question, I realized that the sample code in the osc.js README was a bit ambiguous in this regard. I have improved the event documentation and the inline README sample code to make this clearer. Sorry for the confusion.
There are cases, perhaps such as yours, where the higher-level API isn't quite what you need. osc.js also provides functions for easily encoding an OSC packet as a Uint8Array, which can be converted into a Node.js Buffer. So you could have done something similar to your solution just by using osc.js' osc.writeMessage() function. It has always been quite well documented, fortunately. Here's your example, modified to use osc.js' low-level API:
const dgram = require('dgram');
const client = dgram.createSocket('udp4');
const osc = require('osc');

const HOST = '192.168.1.5';
const PORT = 8004;

process.on('SIGINT', function() {
    client.close();
});

let oscNoteMessage = function(note, value) {
    var message = osc.writeMessage({
        address: '/note/' + note,
        args: [
            {
                type: 'i',
                value: value
            }
        ]
    });

    return Buffer.from(message);
};

let noteOn = function(note) {
    return oscNoteMessage(note, 1);
};

let noteOff = function(note) {
    return oscNoteMessage(note, 0);
};

let send = function(message) {
    client.send(message, PORT, HOST, function(err, bytes) {
        if (err) throw new Error(err);
    });
};

send(noteOn('c'));

setTimeout(function() {
    send(noteOff('c'));
}, 1000);
Anyway, I'm glad you were able to come up with a solution that works for your project, and I hope this response helps other users who may encounter similar issues. And of course, feel free to ask questions or file issues on the osc.js issue tracker.
Best regards, and apologies for the trouble you experienced using the library!
I figured it's actually pretty easy to send OSC data over UDP without the need for any packages except a2r-osc, which is used for encoding the OSC data.
I'm posting the solution in case anyone else is interested:
const dgram = require('dgram');
const client = dgram.createSocket('udp4');
const osc = require('a2r-osc');

const HOST = '192.168.1.5';
const PORT = 8004;

process.on('SIGINT', function() {
    client.close();
});

let noteOn = function(note) {
    return new osc.Message('/note/' + note, 'i', 1).toBuffer();
};

let noteOff = function(note) {
    return new osc.Message('/note/' + note, 'i', 0).toBuffer();
};

let send = function(message) {
    client.send(message, PORT, HOST, function(err, bytes) {
        if (err) throw new Error(err);
    });
};

send(noteOn('c'));

setTimeout(function() {
    send(noteOff('c'));
}, 1000);

Node-Red: Create server and share input

I'm trying to create a new node for Node-Red. Basically it is a UDP listening socket that shall be established via a config node and shall pass all incoming messages to dedicated nodes for processing.
This is the basics of what I have:
function udpServer(n) {
    RED.nodes.createNode(this, n);
    this.addr = n.host;
    this.port = n.port;
    var node = this;

    var socket = dgram.createSocket('udp4');

    socket.on('listening', function () {
        var address = socket.address();
        logInfo('UDP Server listening on ' + address.address + ":" + address.port);
    });

    socket.on('message', function (message, remote) {
        var bb = new ByteBuffer.fromBinary(message, 1, 0);
        var CoEdata = decodeCoE(bb);
        if (CoEdata.type == 'digital') {        // handle digital output
            // pass to digital handling node
        }
        else if (CoEdata.type == 'analogue') {  // handle analogue output
            // pass to analogue handling node
        }
    });

    socket.on("error", function (err) {
        logError("Socket error: " + err);
        socket.close();
    });

    socket.bind({
        address: node.addr,
        port: node.port,
        exclusive: true
    });

    node.on("close", function(done) {
        socket.close();
    });
}

RED.nodes.registerType("myServernode", udpServer);
For the processing node:
function ProcessAnalog(n) {
    RED.nodes.createNode(this, n);
    var node = this;
    this.serverConfig = RED.nodes.getNode(this.server);
    this.channel = n.channel;
    // how do I get the server's message here?
}

RED.nodes.registerType("process-analogue-in", ProcessAnalog);
I can't figure out how to pass the messages that the socket receives to a variable number of processing nodes, i.e. multiple processing nodes shall share one server instance.
==== EDIT for more clarity =====
I want to develop a new set of nodes:
One Server Node:
Uses a config-node to create an UDP listening socket
Managing the socket connection (close events, error etc)
Receives data packets with one to many channels of different data
One to many processing nodes
The processing nodes shall share the same connection that the Server Node has established
The processing nodes shall handle the messages that the server is emitting
Possibly the Node-Red flow would use as many processing nodes as there are channels in the server's data packet
To quote the Node-Red documentation on config-nodes:
A common use of config nodes is to represent a shared connection to a
remote system. In that instance, the config node may also be
responsible for creating the connection and making it available to the
nodes that use the config node. In such cases, the config node should
also handle the close event to disconnect when the node is stopped.
As far as I understood this, I make the connection available via this.serverConfig = RED.nodes.getNode(this.server); but I cannot figure out how to pass data received on this connection to the nodes that use it.
A node has no knowledge of what nodes it is connected to downstream.
The best you can do from the first node is to have 2 outputs and to send digital to one and analogue to the other.
You would do this by passing an array to the node.send() function.
E.g.
// this sends output to just the first output
node.send([msg, null]);
// this sends output to just the second output
node.send([null, msg]);
Nodes that receive messages need to add a listener for the input event,
e.g.
node.on('input', function(msg) {
    ...
});
All of this is well documented on the Node-RED page.
The other option, if the udpServer node is a config node, is to implement your own listeners; the best bet is to look at something like the MQTT nodes in core for examples of pooling connections.
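To make that last suggestion concrete, here is a rough, untested sketch of the listener approach: Node-RED nodes are EventEmitters, so the config node can emit an event for every decoded datagram and each processing node can subscribe to it. The event name coe-data is made up for illustration, and decodeCoE, RED and dgram are assumed to be in scope exactly as in the question's code.
// In the config/server node: emit each decoded packet to whoever listens.
function udpServerConfig(n) {
    RED.nodes.createNode(this, n);
    var node = this;
    var socket = dgram.createSocket('udp4');

    socket.on('message', function (message, remote) {
        var CoEdata = decodeCoE(message);   // decoding helper from the question
        node.emit('coe-data', CoEdata);     // fan out to all subscribed nodes
    });

    socket.bind({ address: n.host, port: n.port, exclusive: true });
    node.on('close', function () { socket.close(); });
}
RED.nodes.registerType("udp-server-config", udpServerConfig);

// In a processing node: subscribe to the shared config node's events.
function ProcessAnalogIn(n) {
    RED.nodes.createNode(this, n);
    var node = this;
    node.serverConfig = RED.nodes.getNode(n.server);
    node.channel = n.channel;

    function onData(CoEdata) {
        if (CoEdata.type === 'analogue') {
            node.send({ payload: CoEdata });
        }
    }
    node.serverConfig.on('coe-data', onData);

    // Unsubscribe when the node is redeployed or stopped.
    node.on('close', function () {
        node.serverConfig.removeListener('coe-data', onData);
    });
}
RED.nodes.registerType("process-analogue-in", ProcessAnalogIn);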

NodeJs: Never emits "end" when reading a TCP Socket

I am pretty new to Node.js and I'm using TCP sockets to communicate with a client. Since the received data is fragmented, I noticed that it prints "ondata" to the console more than once. I need to be able to read all the data and concatenate it in order to implement the other functions. I read the following http://blog.nodejs.org/2012/12/20/streams2/ and thought I could use socket.on('end', ...) for this purpose. But it never prints "end" to the console.
Here is my code:
Client.prototype.send = function send(req, cb) {
    var self = this;
    var buffer = protocol.encodeRequest(req);
    var header = new Buffer(16);
    var packet = Buffer.concat([ header, buffer ], 16 + buffer.length);

    function cleanup() {
        self.socket.removeListener('data', ondata);
        self.socket.removeListener('error', onerror);
    }

    var body = '';
    function ondata() {
        var chunk = this.read() || '';
        body += chunk;
        console.log('ondata');
    }
    self.socket.on('readable', ondata);

    self.socket.on('end', function() {
        console.log('end');
    });

    function onerror(err) {
        cleanup();
        cb(err);
    }
    self.socket.on('error', onerror);

    self.socket.write(packet);
};
The end event is emitted for the FIN packet of the TCP protocol (in other words, it handles the close packet).
Event: 'end'
Emitted when the other end of the socket sends a FIN packet.
By default (allowHalfOpen == false) the socket will destroy its file descriptor once it has written out its pending write queue. However, by setting allowHalfOpen == true the socket will not automatically end() its side allowing the user to write arbitrary amounts of data, with the caveat that the user is required to end() their side now.
About the FIN packet: https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Connection_termination
The solution
I understand your problem: the network communication has some data transfer gaps, so your message gets split into several packets, and you just want to read the full content.
To solve this problem I recommend you create a small protocol: first send a number with the size of your message, then keep concatenating the incoming chunks while the length of your concatenated message is less than the total message size. :)
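As a rough, untested sketch of that idea (the host, port and the 4-byte big-endian length header are placeholder choices for illustration, not part of the original code):
var net = require('net');

var socket = net.connect({ host: 'localhost', port: 50000 });
var pending = Buffer.alloc(0);

socket.on('data', function (chunk) {
    pending = Buffer.concat([pending, chunk]);

    // Keep extracting messages while a full "length + body" is buffered.
    while (pending.length >= 4) {
        var size = pending.readUInt32BE(0);
        if (pending.length < 4 + size) break;   // body not complete yet
        var message = pending.slice(4, 4 + size);
        pending = pending.slice(4 + size);
        console.log('full message:', message.toString());
    }
});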
I created a lib yesterday to simplify that issue: https://www.npmjs.com/package/node-easysocket
I hope it helps :)

How to process a net.Stream using node.js?

I am trying to learn about streams in node.js!
server.js
var net = require("net");
var server = net.createServer(function(conn) {
conn.write("welcome!");
# echo the user input!
conn.pipe(conn);
});
server.listen("1111", function() {
console.log("port 1111 opened");
});
telnet test
The server currently echos the user's input
$ telnet localhost 1111
welcome!
hello
hello
desired output
To demonstrate where/how I should process the stream on the server side, I would like to wrap the user's input in {} before echoing it back
$ telnet localhost 1111
welcome!
hello
{hello}
This will basically accomplish the exact output you've requested:
var net = require('net');

var server = net.createServer(function(c) {
    c.setEncoding('utf8');
    c.on('data', function(d) {
        c.write('{' + d.trim() + '}\n');
    });
});

server.listen(9871);
First let me call your attention to c.setEncoding('utf8'). This will set a flag on the connection that will automatically convert the incoming Buffer to a String in the utf8 space. This works well for your example, but just note that for improved performance between Sockets it would be better to perform Buffer manipulations.
Simulating the entirety of .pipe() will take a bit more code.
.pipe() is a method of the Stream prototype, which can be found in lib/stream.js. If you take a look at the file you'll see quite a bit more code than what I've shown above. For demonstration, here's an excerpt:
function ondata(chunk) {
    if (dest.writable) {
        if (false === dest.write(chunk) && source.pause) {
            source.pause();
        }
    }
}
source.on('data', ondata);
First a check is made whether the destination is writable. If not, then there is no reason to attempt writing the data. Next comes the check whether dest.write() returned false. From the documentation:
[.write] returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory.
Since Streams live in kernel space, outside of the v8 memory space, it is possible to crash your machine by filling up memory (instead of just crashing the node app). So checking if the message has drained is a safety prevention mechanism. If it hasn't finished draining, then the source will be paused until the drain event is emitted. Here is the drain event:
function ondrain() {
    if (source.readable && source.resume) {
        source.resume();
    }
}
dest.on('drain', ondrain);
Now there is a lot more we could cover with how .pipe() handles errors, cleans up its own event emitters, etc. but I think we've covered the basics.
Note: When sending a large string, it is possible that it will be sent in multiple packets. For this reason it may be necessary to do something like the following:
var net = require('net');

var server = net.createServer(function(c) {
    var tmp = '';
    c.setEncoding('utf8');
    c.on('data', function(d) {
        if (d.charCodeAt(d.length - 1) !== 10) {
            tmp += d;
        } else {
            c.write('{' + tmp + d.trim() + '}\n');
            tmp = '';
        }
    });
});

server.listen(9871);
Here we use the assumption that the string is terminated by the newline character (\n, ASCII character code 10). We check the end of the message to see if this is the case. If not, then we temporarily store the message from the connection until the newline character is received.
This may not be a problem for your application, but thought it would be worth noting.
You can do something like:
conn.on('data', function(d) {
    conn.write('{' + d + '}');
});
the .pipe method is basically just attaching the data event of the input stream to write to the output stream
I'm not sure about net() actually, but I imagine it's quite similar to http:
var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/event-stream'});
    // `options` (the host/path of the upstream request) is assumed to be defined elsewhere
    http.get(options, function(resp) {
        resp.on('data', function(chunk) {
            res.write("event: meetup\n");
            res.write("data: " + chunk.toString() + "\n\n");
        });
    }).on("error", function(e) {
        console.log("Got error: " + e.message);
    });
});
https://github.com/chovy/nodejs-stream

Socket.io: Connect from one server to another

I'm trying to make a Node.js (socket.io) server communicate with another one.
So the client emits an event to the 'hub' server and this server emits an event to some second server for processing the action.
I tried to do:
var io_client = require( 'socket.io-client' );
and then,
io_client.connect( "second_server_host" );
It seems to work for the connection, but you can't do anything with it:
debug - set close timeout for client 15988842591410188424
info - socket error Error: write ECONNABORTED
at errnoException (net.js:642:11)
at Socket._write (net.js:459:18)
at Socket.write (net.js:446:15)
I guess I'm doing it wrong and missing something obvious.
Any suggestions?
Just came across this question, and another just like it with a much better answer.
https://stackoverflow.com/a/14118102/1068746
You can do server to server. The "client" code remains the same as if it were in the browser. Amazing, isn't it?
I just tried it myself, and it works fine.
I ran 2 servers using the exact same code: one on port 3000 as the server, and another on port 3001 as the client. The code looks like this:
, io = require('socket.io')
, ioClient = require('socket.io-client')
....

if (app.get('port') == 3000) {
    io.listen(server).sockets.on('connection', function (socket) {
        socket.on('my other event', function (data) {
            console.log(data);
        });
    });
} else {
    function emitMessage(socket) {
        socket.emit('my other event', { my: 'data' });
        setTimeout(function() { emitMessage(socket); }, 1000);
    }
    var socket = ioClient.connect("http://localhost:3000");
    emitMessage(socket);
}
And if you see the "{ my: 'data' }" message printed on the server side every second, everything works great. Just make sure to run the client (port 3001) after the server (port 3000).
For anyone searching for a short working example, see below. This example works with socket.io#0.9.16 and socket.io-client#0.9.16.
var port = 3011;

var server = require('http').createServer().listen(port, function () {
    console.log("Express server listening on port " + port);
});

var io = require('socket.io').listen(server).set("log level", 0);

io.sockets.on("connection", function (socket) {
    console.log('Server: Incoming connection.');
    socket.on("echo", function (msg, callback) {
        callback(msg);
    });
});

var ioc = require('socket.io-client');
var client = ioc.connect("http://localhost:" + port);

client.once("connect", function () {
    console.log('Client: Connected to port ' + port);
    client.emit("echo", "Hello World", function (message) {
        console.log('Echo received: ', message);
        client.disconnect();
        server.close();
    });
});
For server-to-server or app-to-app communication, I think you should look into Redis Pub/Sub. It's capable of very good speeds, and can handle the entire message queuing architecture of a big app.
Here is a slightly complex but quite understandable example for using Redis Pub Sub:
Redis Pub Sub Example
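For a feel of what that looks like in Node, here is a rough, untested sketch using the classic callback-style node_redis client (the channel name and payload are made up for illustration; a local Redis on the default port is assumed):
var redis = require('redis');

// One connection publishes, a separate connection subscribes;
// a subscribed connection cannot issue other commands.
var publisher = redis.createClient();
var subscriber = redis.createClient();

subscriber.on('message', function (channel, message) {
    console.log('Received on ' + channel + ': ' + message);
});
subscriber.subscribe('game-events');

// Any other process connected to the same Redis can publish here.
publisher.publish('game-events', JSON.stringify({ my: 'data' }));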
For anyone looking to do this on a MeteorJS app, I created a new Meteor package joncursi:socket-io-client to solve this problem. Please see https://atmospherejs.com/joncursi/socket-io-client for more detail and example usage. Since I've bundled the NPM binaries into a package for you, you don't have to worry about installing NPM packages, declaring NPM.require() dependencies, etc. And best of all, you can deploy to .meteor.com without a hitch.
ECONNABORTED means that the connection has been closed by "the other side".
For example, let's say we have two programs, A and B. Program A connects to program B, and they start to send data back and forth. Program B closes the connection for some reason. After the connection was closed, program A tries to write to program B, but since the connection is closed, program A will get the error ECONNABORTED.
One of your programs has closed the connection, and the other doesn't know about it and tries to write to the socket, resulting in an error.
The native Node TCP module is probably what you want. I wanted to do what you're trying to do, but ran into the fact that WebSockets are strictly browser-to-server (many browsers to one server, or one browser to many servers).
You can weave a TCP strategy into your WebSocket logic.
Using TCP:
var net = require('net');

var tcp = net.connect({port: 3000, host: 'localhost'});

tcp.on('connect', function() {
    var buffer = new Buffer(16).fill(0);
    buffer.write('some stuff');
    tcp.write(buffer);
});

tcp.on('data', function(data) { console.log('data is:', data); });
// cb stands for your own end/error handlers
tcp.on('end', cb);
tcp.on('error', cb);
I would use a bridge pattern with this:
https://www.google.com/search?q=javascript+bridge+pattern&aq=f&oq=javascript+bridge+pattern&aqs=chrome.0.57.6617&sourceid=chrome&ie=UTF-8
OR, use the Node Module https://npmjs.org/package/ws-tcp-bridge
I also heard that using Redis can be quite helpful; Socket.io uses this as a fallback.
Hope this helps...
Cheers
