I'm a total beginner with Node-RED and Node.js. I'm trying to write Node.js code in a "daemon node" to handle my payload before sending it over MQTT to my Node.js server. My problem is that I can't get my payload from stdin.
I've tried everything I found online about reading from stdin, but I haven't found a solution.
"use strict"
let mqtt = require("mqtt");
let client = mqtt.connect("mqtt://192.168.178.36");
let obj = process.stdin;
console.log(obj);
client.on('connect', () => {
console.log("Sending...")
client.publish("test/reader01", "Reader01: " + (new Date()).toString() + "\n" + obj);
client.end();
});
The program you see here sends the current date to the server and prints a string including a net.Socket object to the console, but I can't get the payload from my stream.
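For what it's worth, here is a minimal sketch of how the payload could be collected first, assuming the daemon node writes it to stdin and then closes the stream (the broker address and topic are taken from the snippet above):

"use strict";
const mqtt = require("mqtt");

// process.stdin is a Readable stream, not the data itself,
// so collect the chunks and publish once the stream ends.
const chunks = [];
process.stdin.on("data", (chunk) => chunks.push(chunk));
process.stdin.on("end", () => {
    const payload = Buffer.concat(chunks).toString("utf8");
    const client = mqtt.connect("mqtt://192.168.178.36");
    client.on("connect", () => {
        console.log("Sending...");
        client.publish("test/reader01", "Reader01: " + new Date() + "\n" + payload, () => client.end());
    });
});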
Related
I'm writing an application using sockets, but can't seem to get the initial handshake to work. I'm using WebSockets + React on my front end, running on port 8080, and a Node.js socket server on the backend running on port 5000.
The front-end handshake is done through my component like so:
componentDidMount() {
    this.socket = new WebSocket('ws://localhost:5000', ['json']);
    this.socket.onerror = err => {
        console.log(err);
    };
    this.socket.onmessage = e => {
        let res = JSON.parse(e.data);
        console.log(e, res);
        let copyArr = [...this.state.message];
        copyArr.push(res);
        this.setState({
            message: copyArr
        });
    };
}
On my Node server, I do:
const http = require('http');
const crypto = require('crypto');

const server = http.createServer();
server.on('upgrade', (req, socket) => {
    if (req.headers['upgrade'] !== "websocket") {
        socket.end('HTTP/1.1 400 Bad Request');
        return;
    }
    const acceptKey = req.headers['sec-websocket-key'];
    const acceptHash = generateValue(acceptKey);
    console.log('acceptKey', acceptKey, 'hash', acceptHash);
    const resHeaders = [
        'HTTP/1.1 101 Web Socket Protocol Handshake',
        'Upgrade: WebSocket',
        'Connection: Upgrade',
        `Sec-WebSocket-Accept: ${acceptHash}`
    ];
    console.log(resHeaders);
    let protocols = req.headers['sec-websocket-protocol'];
    protocols = !protocols ? [] : protocols.split(',').map(name => name.trim());
    if (protocols.includes('json')) {
        console.log('json here');
        resHeaders.push(`Sec-WebSocket-Protocol: json`);
    }
    socket.write(resHeaders.join('\r\n') + '\r\n\r\n');
});
server.listen(5000);
function generateValue(key){
return crypto
.createHash('sha1')
.update(key + '258EAFA5-E914–47DA-95CA-C5AB0DC85B11', 'binary')
.digest('base64');
}
When my React component mounts, it tries to establish the initial handshake but fails with the error: WebSocket connection to 'ws://localhost:5000/' failed: Error during WebSocket handshake: Incorrect 'Sec-WebSocket-Accept' header value.
Comparing the handshake headers in the Chrome developer tools with the request accept-key header and response headers my backend logs, the values don't match.
So, unless I'm mistaken about these headers, it seems that the request and response accept-key header somehow changes when making its way from the client to the server, and vice versa. How is this happening? Or have I misunderstood what's going on? Why exactly is the initial handshake not working?
There is an en dash (–) instead of a hyphen (-) in 258EAFA5-E914–47DA-95CA-C5AB0DC85B11, right after E914.
Replace it with a hyphen (-).
Reference: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Sec-WebSocket-Accept
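With that change, the helper looks like this (the GUID below is the RFC 6455 constant, using only plain ASCII hyphens):

const crypto = require('crypto');

function generateValue(key) {
    return crypto
        .createHash('sha1')
        .update(key + '258EAFA5-E914-47DA-95CA-C5AB0DC85B11')
        .digest('base64');
}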
I believe the generateValue function is wrong: you pass 'binary' as the input-data encoding, which is an alias for latin1 according to the docs. But I believe the input is a UTF-8 string, not latin1, so the resulting hash is wrong. Try update(key + '258EAFA5-E914-47DA-95CA-C5AB0DC85B11', 'utf8'), or even without the second 'utf8' argument, since it is the default.
I'm developing a web application to send images, videos, etc. to two monitors from an admin interface. I'm using ws in Node.js on the server side. I've implemented selecting images available on the server (and external URLs) and sending them to the clients, but I also want to be able to send images selected from the device with a file input. I managed to do it using base64, but I think it's pretty inefficient.
Currently I send a stringified JSON object containing the client the resource has to be sent to, the kind of resource, and the resource itself; the server parses it and forwards it to the appropriate client. I know I can set the WebSocket binaryType to blob and just send the File object, but then I'd have no way to tell the server which client it has to be sent to. I tried using typeson and BSON to accomplish this, but it didn't work.
Are there any other ways to do it?
You can send raw binary data through the WebSocket.
It's quite easy to manage.
One option is to prepend a "magic byte" (an identifier that marks the message as non-JSON). For example, prepend binary messages with the B character.
All the server has to do is test the first character before collecting the binary data (if the magic byte isn't there, it's probably a normal JSON message).
A more serious implementation will attach a header after the magic byte (i.e., file name, total length, position of the data being sent, etc.).
This allows the upload to be resumed after disconnections (send just the parts that weren't acknowledged as received).
Your server will need to split the data into magic byte, header, and binary data before processing, but it's easy enough to accomplish.
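A rough sketch of that scheme, assuming the ws package on the server (the port and handler bodies are placeholders):

const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
    socket.on('message', (data) => {
        const buf = Buffer.from(data);
        if (buf[0] === 0x42) { // ASCII 'B' marks a binary payload
            const fileData = buf.slice(1); // everything after the magic byte
            // ...write fileData to disk or forward it to the target client
        } else {
            const msg = JSON.parse(buf.toString()); // a normal JSON control message
            // ...handle msg as before
        }
    });
});

On the browser side, the magic byte can be prepended without copying the file: socket.send(new Blob(['B', file])).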
Hope this helps someone.
According to the socket.io documentation, you can send either a string, a Buffer, or a mix of both.
On the client side:
function uploadFile(e, socket, to) {
    let file = e.target.files[0];
    if (!file) {
        return;
    }
    if (file.size > 10000000) {
        alert('File should be smaller than 10MB');
        return;
    }
    var reader = new FileReader();
    reader.onload = function (e) {
        var rawData = e.target.result;
        socket.emit("send_message", {
            type: 'attachment',
            data: rawData
        }, (result) => {
            // The ack callback fires once the server confirms receipt.
            alert("Server has received file!");
        });
    };
    reader.readAsArrayBuffer(file);
}
On the server side:
socket.on('send_message', async (data, cb) => {
    if (data.type == 'attachment') {
        console.log('Found binary data');
        cb("Received file successfully.");
        return;
    }
    // Process other business...
});
I am using a pure WebSocket without socket.io, where you cannot mix content: a message is either a String or binary. My working solution looks like this:
CLIENT:
import { serialize } from 'bson';
import { Buffer } from 'buffer';

const reader = new FileReader();
let rawData = new ArrayBuffer();
const ws = new WebSocket(...);

reader.onload = (e) => {
    rawData = e.target.result;
    const bufferData = Buffer.from(rawData);
    const bsonData = serialize({ // whatever JS object you need
        file: bufferData,
        route: 'TRANSFER',
        action: 'FILE_UPLOAD',
    });
    ws.send(bsonData);
};
Then on the Node server side, the message is caught and parsed like this:
const { deserialize } = require('bson');
const fs = require('fs');
const path = require('path');

const dataFromClient = deserialize(wsMessage, { promoteBuffers: true }); // edited
fs.writeFile(
    path.join('../server', 'yourfiles', 'yourfile.txt'),
    dataFromClient.file, // edited
    'binary',
    (err) => {
        console.log('ERROR!!!!', err);
    }
);
The killer is the promoteBuffers option in the deserialize function.
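To illustrate what that option does, a small sketch against the bson npm package (the sample document is made up):

const { serialize, deserialize } = require('bson');

const doc = serialize({ file: Buffer.from('hello') });
const plain = deserialize(doc);                              // file comes back as a BSON Binary wrapper
const promoted = deserialize(doc, { promoteBuffers: true }); // file comes back as a Node Buffer
console.log(Buffer.isBuffer(plain.file), Buffer.isBuffer(promoted.file)); // false true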
I'm trying to work around a problem to do with REST streaming between the Nest API and a service (ST) that does not support streaming.
To get around this, I have built a service on Sails which takes a POST request from ST containing the Nest token, and then sets up an EventSource event listener that sends the data back to ST.
It is heavily based on the Nest rest-streaming example here:
https://github.com/nestlabs/rest-streaming, and my code is as follows:
startStream: function(req, res) {
    var nestToken = req.body.nestToken,
        stToken = req.body.stToken,
        endpointURL = req.body.endpointURL,
        source = new EventSource(sails.config.nest.nest_api_url + '?auth=' + nestToken);

    source.addEventListener('put', function(e) {
        var d = JSON.parse(e.data);
        var data = { devices: d.data.devices, structures: d.data.structures },
            config = { headers: { 'Authorization': 'Bearer ' + stToken } };
        sendData(endpointURL, data, config);
    });

    source.addEventListener('open', function(e) {
        console.log("Connection opened");
    });

    source.addEventListener('auth_revoked', function(e) {
        console.log("Auth token revoked");
    });

    source.addEventListener('error', function(e) {
        if (e.readyState == EventSource.CLOSED) {
            console.error('Connection was closed! ', e);
        } else {
            console.error('An unknown error occurred: ', e);
        }
    }, false);
}
};
The problem I foresee, though, is that once a request is received by the Node server, it starts the event listener, and I cannot for the life of me figure out how to kill that listener.
If I cannot figure out a way to stop it, every EventSource will run indefinitely, which is obviously not suitable.
Has anyone got any suggestions on how to overcome the issue?
Each SSE client connection is a dedicated socket.
If a particular client doesn't want event streaming, don't make the connection. If they start event streaming but want to turn it off, call source.close(); source = null;
If you want to stop sending the messages from the server side, close the socket.
You didn't show the server-side code, but if it is running a dedicated process per SSE client, you just exit the process. If you are maintaining a list of sockets, one per connected client, close the socket. On Node.js you might be running a function on setInterval; to close the connection, call clearInterval() and response.end().
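Applied to the Sails controller in the question, a hedged sketch: keep each EventSource in a map keyed by token, and add a second action to tear it down (the streams map, the stopStream action, and the res.ok() responses are assumptions, not part of the original code):

var streams = {};

startStream: function(req, res) {
    var source = new EventSource(sails.config.nest.nest_api_url + '?auth=' + req.body.nestToken);
    streams[req.body.nestToken] = source;
    // ...attach the 'put', 'open', and 'error' listeners as before
    res.ok();
},

stopStream: function(req, res) {
    var source = streams[req.body.nestToken];
    if (source) {
        source.close(); // stops the EventSource and prevents reconnects
        delete streams[req.body.nestToken];
    }
    res.ok();
}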
I have a NodeJS API web server (let's call it WS1) that receives RESTful HTTP requests from clients, and to respond needs to first query another local server (let's call it WS2).
The flow is pretty much like this:
WS1 receives an HTTP request from a client and parses it.
WS1 sends a request to WS2 for some information.
When WS1 receives the response from WS2, it finishes processing the original request and sends a response back to the client.
Until now all communication between WS1 and WS2 has been done through HTTP requests, since the two machines are on the same local network.
To speed things up, though, I'm considering switching to zmq. I've looked at the patterns they show in the docs, but I still haven't figured out a concurrency problem.
WS1 can send many requests per second to WS2, and there's no guarantee that WS2 replies in the same order as it receives the requests, since some async operations can internally take longer than others.
So, using zmq with NodeJS, how do I make sure that when WS1 receives a message from WS2, it knows which original client request it belongs to? Is there a built-in mechanism to take care of this?
Thanks!
0MQ is an interesting tool set that helps abstract socket communication. There are mechanisms (should you choose the correct socket types) that allow the server to respond to the right client, and this is handled within the confines of 0MQ.
The basic API types are:
PUSH-PULL
PUB-SUB
REQUEST-REPLY
If you want one machine to be able to respond to the originator, then I believe you want the REQ-REP API type.
Then you need to consider the multiplexing on each side to get the connections correct, but keep it one-to-one for simplicity's sake at first.
Sample client (from http://zguide.zeromq.org/js:rrclient):
// Hello World client in Node.js
// Connects REQ socket to tcp://localhost:5559
// Sends "Hello" to server, expects "World" back
var zmq = require('zmq')
  , requester = zmq.socket('req');

requester.connect('tcp://localhost:5559');

var replyNbr = 0;
requester.on('message', function(msg) {
    console.log('got reply', replyNbr, msg.toString());
    replyNbr += 1;
});

for (var i = 0; i < 10; ++i) {
    requester.send("Hello");
}
Sample server (from http://zguide.zeromq.org/js:rrserver):
// Hello World server in Node.js
// Connects REP socket to tcp://localhost:5560
// Expects "Hello" from client, replies with "World"
var zmq = require('zmq')
  , responder = zmq.socket('rep');

responder.connect('tcp://localhost:5560');

responder.on('message', function(msg) {
    console.log('received request:', msg.toString());
    setTimeout(function() {
        responder.send("World");
    }, 1000);
});
The routing of the reply back to the client is handled automatically by 0MQ; it is part of the message (although I don't remember if you see the address buffer in these examples; it may be abstracted away). The request envelope carries the client's identity as its first frame, followed by an empty delimiter frame and then the payload.
That first frame is what allows 0MQ to reply to the correct client.
Once that is running, you can then consider 1..*, *..1, and *..* topologies. All that really requires is changing the socket types to DEALER and ROUTER where appropriate.
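If you do move to ROUTER, a hedged sketch of what that identity frame looks like with the same legacy zmq package (the port and logging are arbitrary):

var zmq = require('zmq');
var router = zmq.socket('router');
router.bindSync('tcp://*:5559');

// The legacy binding delivers each frame of the envelope as a separate
// argument: [identity][empty delimiter][payload].
router.on('message', function(identity, delimiter, payload) {
    console.log('request from', identity.toString('hex'), ':', payload.toString());
    router.send([identity, delimiter, 'World']); // echoing the identity routes the reply
});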
I ended up implementing some sort of "middleware" to support this functionality with zmq.
In the example below, for simplicity, I've used Express with Node >= 4.0.0 (which supports native JS promises), but you can obviously substitute any HTTP server you like (these days I prefer Koa) and any promise library you prefer. This is the code for the two servers.
WS1 (requester)
var zmq = require('zmq');
var mem = {};
var requester = zmq.socket('req');

requester.on("message", function(reply) {
    reply = reply.toString().split('*');
    mem[reply.pop()](reply); // the id is the last chunk; look up the pending handler
});

requester.connect("tcp://localhost:5555");

var app = require('express')();

app.get('/', function (req, res) {
    var id = Date.now() + Math.random();
    var message = 'whatever needs to be forwarded to WS2';
    new Promise(function (resolve, reject) {
        mem[id] = function (reply) {
            delete mem[id]; // free the handler once the reply has arrived
            reply[0] === 'success' ? resolve(reply[1]) : reject(reply[1]);
        };
    })
    .then(function (data) {
        res.send(data);
    })
    .catch(function (err) {
        console.log(err);
        res.sendStatus(500);
    });
    requester.send(id + '*' + message);
});

var server = app.listen(3000);
WS2 (responder)
var zmq = require('zmq');
var responder = zmq.socket('rep');

responder.on('message', function(message) {
    message = message.toString().split('*'); // zmq delivers a Buffer, so convert it first
    var reqId = message[0];
    // Do whatever async stuff you need with message[1]
    // Then at the end of your callbacks you'll have something like this
    if (err) {
        responder.send('err' + '*' + JSON.stringify(err) + '*' + reqId);
    } else {
        responder.send('success' + '*' + JSON.stringify(yourData) + '*' + reqId);
    }
});

responder.bind('tcp://*:5555');
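One caveat with this framing: '*' can legitimately appear inside JSON.stringify output, which would corrupt the split. A hedged alternative is to make the whole message a JSON envelope, for example:

// Requester side: wrap the id and payload in one JSON envelope.
requester.send(JSON.stringify({ id: id, message: message }));

// Responder side: parse instead of splitting on a delimiter.
responder.on('message', function(raw) {
    var req = JSON.parse(raw.toString());
    // ...do the async work with req.message, then:
    responder.send(JSON.stringify({ id: req.id, status: 'success', data: yourData }));
});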
Good Evening,
I'm currently testing AS3 + NodeJS communication to delve into multiplayer games. I'm very experienced in Flash, but pretty new to NodeJS.
The problem I have is that the data Node sends is different from what Flash receives.
Take the following working NodeJS code (I'm not asking if it is right or wrong, or for "best practice"; I'm testing different things out). Look specifically at the clients object and the "data" event handler:
var net = require('net');
var mySocket;

var clients = {
    '0': 'myTest'
};

var server = net.createServer(function(socket) {
    mySocket = socket;
    mySocket.on("data", function(data) {
        var myData = data + " -- " + clients[0];
        console.log("Data=" + myData);
        mySocket.write(myData);
    });
});

server.listen(3000, "127.0.0.1");
When "hello world" data is sent to the server the expected output is this:
Console:
"Data=hello world--myTest"
Flash:
"Data Received: [hello world--myTest]"
The console outputs the information that I am expecting; however, Flash outputs:
"Data Received: [ -- myTesthello world]"
A few snippets from my AS3 connection class are below:
public function createConnection():void {
    this.currStatus = "Connecting..";
    this.mySocket = new XMLSocket("localhost", 3000);
    this.mySocket.addEventListener(DataEvent.DATA, onReceiveData);
}
For handling data that is received, we just trace it for now:
private function onReceiveData(evt:DataEvent):void {
    // We have received some data from the server. Act upon it;
    // not sure yet what it will do with the data, just trace for now.
    trace("Data Received: [" + evt.data + "]");
}
If anyone can point out why the data is in a different order when it is received in Flash, it would be a good learning point. As I said, I'm very new to NodeJS, so there may be something I am missing. (I am aware there is no .on("connect") handler; I took it out to test without it.)
Thanks in advance.
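A likely explanation: XMLSocket frames each message with a trailing null byte, so the data Node receives is actually "hello world\0". Concatenating after that null means Flash first dispatches "hello world", and the leftover " -- myTest" sits in its buffer until the next null arrives, which would produce exactly the " -- myTesthello world" ordering described above. A sketch of stripping and re-adding the terminator in the data handler (the regex-based strip is an assumption about how the input should be sanitized):

mySocket.on("data", function(data) {
    // XMLSocket terminates every message with a null byte; strip it before
    // concatenating, then append one so Flash sees a single complete message.
    var text = data.toString().replace(/\0+$/, "");
    var myData = text + " -- " + clients[0];
    console.log("Data=" + myData);
    mySocket.write(myData + "\0");
});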