I'm receiving the same message on every line event, even though one event fires per message. Does anyone know why? It seems straightforward.
var rli = require("readline").createInterface({
  input: socket,
  output: socket,
  terminal: false
});

rli.on("line", handleServerMessage);

function handleServerMessage(msg) {
  console.log("Received msg from server. msg=", msg);
}
Is it possible to send an event to node js without any data?
Here's what I am trying to do:
Client:
socket.emit('logged out');
Server:
socket.on('logged out', function() {
  console.log('User is logged out');
  delUser();
  sendUsersOnline();
});
The client side is definitely being run, but the server-side handler never fires, and I'm not sure why. Could it be because I'm not sending any data?
EDIT:
function delUser()
{
  for (var i in usersOnline) {
    var user = usersOnline[i];
    if (user.socket_id == socket.id) {
      delete usersOnline[i];
      usersTyping.splice(usersTyping.indexOf(myUser.id), 1);
      console.log('Disconnected: ' + myUser.display_name + '(' + myUser.id + ')');
    }
  }
}
function sendUsersOnline()
{
  io.emit('users online', usersOnline);
}
According to the Socket.IO 2.0.3 docs:
socket.emit(eventName[, ...args][, ack])
socket.emit('hello', 'world');
socket.emit('with-binary', 1, '2', { 3:'4', 5: new Buffer(6) });
Emits an event to the socket identified by the string name. Any other
parameters can be included. All serializable datastructures are
supported, including Buffer.
The ack argument is optional and will be called with the server
answer.
It says that any other parameters can be included, but they are not required. So you can definitely send/emit an event without any data from the server or client, like this:
socket.emit('logged out');
And you can receive an event without data on the server or client like this:
socket.on('logged out', function() {
  // Code to execute in response to this event
});
I recently used this kind of event to send signals peer-to-peer without any accompanying data, and it works perfectly. Basically, one peer (client) sends a signal to the server, and the server then emits that signal to the other peers (clients) using broadcast.
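The relay described above can be sketched as a small server-side handler. The event name 'signal' and the wiring function are hypothetical; on a real server you would call something like relaySignal(socket) inside io.on('connection', ...):

```javascript
// Hypothetical sketch of a data-free signal relay between peers.
function relaySignal(socket) {
  // When this peer emits a data-free 'signal' event...
  socket.on('signal', function () {
    // ...rebroadcast it, still with no payload, to every other peer.
    socket.broadcast.emit('signal');
  });
}
```

Because no payload travels with the event, the event name itself carries all the meaning.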
Here is the ws library:
https://github.com/websockets/ws/blob/master/lib/WebSocket.js
How can I use the send method so that I can detect when sending a message fails? I have tried a callback and try/catch; maybe I am missing something. Currently I am doing this, and it can send the message:
BulletinSenderHelper.prototype.sendMessage = function(bulletin, device) {
  var message = JSON.stringify({
    action: 'bulletin:add',
    data: bulletin.data
  });
  if (device.is_active) {
    logger.debug('Sending message to %s', device.id);
    // device.conn == ws. Even though I check that the device is active,
    // sending sometimes fails, and I need a way to detect that.
    device.conn.send(message);
  } else {
    logger.debug('Client %s is inactive, queuing bulletin', device.id);
    this.queueBulletin(device, bulletin);
  }
};
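One way to detect failures, sketched below under the assumption that device.conn is a ws WebSocket: ws's send() accepts an optional callback that receives an error when the write fails, and a readyState check guards against sockets that are closing. The helper name and the onFail hook are made up for illustration; in the code above, the failure path could queue the bulletin just like the inactive branch does.

```javascript
var OPEN = 1; // WebSocket.OPEN

// Hypothetical helper: try to send, and report any failure via onFail.
function sendOrFail(conn, message, onFail) {
  if (conn.readyState !== OPEN) {
    // Socket is connecting, closing, or closed: treat as an immediate failure.
    return onFail(new Error('socket not open'));
  }
  conn.send(message, function (err) {
    // ws invokes the callback with an error if the write failed.
    if (err) onFail(err);
  });
}
```

Note that even a successful callback only means the data was handed to the OS; for end-to-end delivery guarantees you still need an application-level acknowledgement.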
I am pretty new to Node.js, and I'm using TCP sockets to communicate with a client. Since the received data is fragmented, I noticed that "ondata" is printed to the console more than once. I need to read all the data and concatenate it in order to implement the other functions. I read http://blog.nodejs.org/2012/12/20/streams2/ and thought I could use socket.on('end', ...) for this purpose, but it never prints "end" to the console.
Here is my code:
Client.prototype.send = function send(req, cb) {
  var self = this;
  var buffer = protocol.encodeRequest(req);
  var header = new Buffer(16);
  var packet = Buffer.concat([ header, buffer ], 16 + buffer.length);

  function cleanup() {
    self.socket.removeListener('readable', ondata);
    self.socket.removeListener('error', onerror);
  }

  var body = '';

  function ondata() {
    var chunk = this.read() || '';
    body += chunk;
    console.log('ondata');
  }

  self.socket.on('readable', ondata);

  self.socket.on('end', function() {
    console.log('end');
  });

  function onerror(err) {
    cleanup();
    cb(err);
  }

  self.socket.on('error', onerror);
  self.socket.write(packet);
};
The end event fires on the FIN packet of the TCP protocol (in other words, it handles the close packet).
Event: 'end'
Emitted when the other end of the socket sends a FIN packet.
By default (allowHalfOpen == false) the socket will destroy its file descriptor once it has written out its pending write queue. However, by setting allowHalfOpen == true the socket will not automatically end() its side allowing the user to write arbitrary amounts of data, with the caveat that the user is required to end() their side now.
About the FIN packet: https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Connection_termination
The solution
I understand your problem: the network splits your message into several packets, and you just want to read the full content.
To solve this, I recommend you define a simple protocol. Send a number with the size of your message first, and then keep concatenating incoming chunks until the accumulated length reaches that size :)
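That length-prefix idea can be sketched as a pair of framing helpers (the names encodeFrame and FrameDecoder are made up for illustration). Each message goes on the wire as a 4-byte big-endian length followed by the payload; the decoder is fed raw socket chunks and emits complete messages once enough bytes have accumulated:

```javascript
// Encode a message as: 4-byte big-endian length, then the payload bytes.
function encodeFrame(payload) {
  var body = Buffer.from(payload);
  var frame = Buffer.alloc(4 + body.length);
  frame.writeUInt32BE(body.length, 0); // size prefix
  body.copy(frame, 4);
  return frame;
}

// Returns a feed(chunk) function to call from socket.on('data', ...).
function FrameDecoder(onMessage) {
  var pending = Buffer.alloc(0);
  return function feed(chunk) {
    pending = Buffer.concat([pending, chunk]);
    // Keep extracting messages while a full frame is buffered.
    while (pending.length >= 4) {
      var size = pending.readUInt32BE(0);
      if (pending.length < 4 + size) break; // wait for more data
      onMessage(pending.slice(4, 4 + size).toString());
      pending = pending.slice(4 + size);
    }
  };
}
```

On the sending side you would write socket.write(encodeFrame(msg)); on the receiving side, socket.on('data', FrameDecoder(handleMessage)). This works regardless of how TCP fragments or coalesces the chunks.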
I created a lib yesterday to simplify this issue: https://www.npmjs.com/package/node-easysocket
I hope it helps :)
I am using the amqplib node module and following the hello world send/receive tutorial.
https://github.com/squaremo/amqp.node/tree/master/examples/tutorials
My receivers/workers take each message and perform a CPU-intensive task in the background, so I can only process about 5 messages at once.
What is the best way to control the number of messages the receiver accepts?
Code sample:
var amqp = require('amqplib');

amqp.connect('amqp://localhost').then(function(conn) {
  process.once('SIGINT', function() { conn.close(); });
  return conn.createChannel().then(function(ch) {
    var ok = ch.assertQueue('hello', {durable: false});
    ok = ok.then(function(_qok) {
      return ch.consume('hello', function(msg) {
        console.log(" [x] Received '%s'", msg.content.toString());
      }, {noAck: true});
    });
    return ok.then(function(_consumeOk) {
      console.log(' [*] Waiting for messages. To exit press CTRL+C');
    });
  });
}).then(null, console.warn);
You need to set the quality of service (QoS) on the model. Here is how you would do that in C#:
var _model = rabbitConnection.CreateModel();
// Configure the quality of service for the model. Here is what each setting means:
// BasicQos(0 = "don't send me a new message until I've finished",
//          _fetchSize = "send me N messages at a time",
//          false = "apply to this model only")
_model.BasicQos(0, _fetchSize, false);
The QoS works with the ack process: until you ack a message, the broker will only send you N (_fetchSize) at a time. I think you'll have to set noAck: false in your code to get this working.
Good luck!
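In amqplib, the C# BasicQos call above maps to ch.prefetch(). A sketch of what the consumer side might look like, assuming manual acks so the broker stops delivering at `limit` unacked messages (consumeWithLimit and handleMessage are placeholder names; in the question's code, this replaces the ch.consume(...) call that used noAck: true):

```javascript
// Consume from `queue`, never holding more than `limit` unacked messages.
function consumeWithLimit(ch, queue, limit, handleMessage) {
  ch.prefetch(limit); // broker delivers at most `limit` unacked messages
  return ch.consume(queue, function (msg) {
    handleMessage(msg); // the CPU-intensive work goes here
    ch.ack(msg);        // ack only once the work is done
  }, { noAck: false }); // manual acks are required for prefetch to apply
}
```

With limit set to 5, the worker receives a sixth message only after acking one of the first five, which is exactly the throttling the question asks for.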
I was using ZeroMQ in Node.js, but it seems that when sending data from the producer to the worker, unless I put the send inside setInterval, the data never reaches the worker. My example code is as follows:
producer.js
===========
var zmq = require('zmq')
  , sock = zmq.socket('push');

sock.bindSync('tcp://127.0.0.1:3000');
console.log('Producer bound to port 3000');

//sock.send("hello");
var i = 0;
//1. var timer = setInterval(function() {
var str = "hello";
console.log('sending work', str, i++);
sock.send(str);
//2. clearTimeout(timer);
//3. }, 150);

sock.on('message', function(msg) {
  console.log("Got a message: [%s]", msg);
});
So in the above code, if I add back the lines commented as 1, 2, and 3, the message does reach the worker side; otherwise it does not.
Can anyone shed light on why I need to put the send inside setInterval? Or am I doing something the wrong way?
The problem is hidden in the zmq bindings for Node.js. I've just spent some time digging into it, and send() basically does this:
Enqueue the message
Flush buffers
Now the problem is in the flushing part, because it does
Check if the output socket is ready, otherwise return
Flush the enqueued messages
In your code, because you call bind and immediately send, there is no worker connected at the moment of the call, because they simply didn't have enough time to notice. So the message is enqueued and we are waiting for some workers to appear. Now the interesting part - where do we check for new workers? In the send function itself! So unless we call send() later, when there are actually some workers connected, our messages are never flushed and they are enqueued forever. And that is why setInterval works, because workers have enough time to notice and connect and you periodically check if there are any.
You can find the interesting part at https://github.com/JustinTulloss/zeromq.node/blob/master/lib/index.js#L277 .
Cheers ;-)
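The enqueue-then-flush behaviour described above can be modelled in a few lines. This is a simplified stand-in for the binding, not the real zmq API, but it shows why a lone send() before any worker connects leaves the message queued forever, while a later send() (which is what the setInterval supplies) flushes everything:

```javascript
// Toy model of the binding: send() enqueues, then flushes only if the
// socket is ready, and readiness is only re-checked inside send() itself.
function PushModel() {
  this.queue = [];    // messages waiting to be flushed
  this.wire = [];     // messages actually delivered to workers
  this.ready = false; // becomes true once a worker connects
}
PushModel.prototype.send = function (msg) {
  this.queue.push(msg); // 1. enqueue the message
  this.flush();         // 2. flush buffers
};
PushModel.prototype.flush = function () {
  if (!this.ready) return; // output socket not ready: give up silently
  while (this.queue.length) this.wire.push(this.queue.shift());
};
```

Note that a worker connecting (ready flipping to true) does not flush anything by itself; only the next send() does, which is exactly the behaviour the answer describes.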