How do I handle Control+C input in a Node.js TCP server?
var net = require('net');

var server = net.createServer(function(c) {
  c.on('end', function() {
    console.log('Client disconnected');
  });
  c.on('data', function(data) {
    if (data == "CONTROL+C") { // Here is the check
      c.destroy();
    }
  });
}).listen(8124);
Control-C is a single byte, 0x03 (using an ASCII chart is kinda helpful).
However, whenever you're dealing with a socket connection you have to remember that you're going to receive data in a "chunked" fashion and the chunking does not necessarily correspond to the way the data was sent; you cannot assume that one send call on the client side corresponds to a single chunk on the server side. Therefore you can't assume that if the client sends a Control-C, it will be the only thing you receive in your data event. Some other data might come before it, and some other data might come after it, all in the same event. You will have to look for it inside your data.
From ebohlman's answer. It works:
c.on('data', function(data) {
  if (data.toString().charCodeAt(0) === 3) { // 3 = 0x03, the Ctrl-C (ETX) byte
    c.destroy();
  }
});
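To honor the chunking caveat above (the 0x03 byte might not be the first byte of a chunk), here is a minimal sketch that scans the whole chunk instead, assuming data is still a raw Buffer (no encoding set on the socket):
c.on('data', function(data) {
  // data is a Buffer here; 0x03 (ETX) is the byte that Ctrl-C sends
  if (data.indexOf(3) !== -1) {
    c.destroy();
  }
});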
Is it possible to send an event to Node.js without any data?
Here's what I am trying to do:
Client:
socket.emit('logged out');
Server:
socket.on('logged out', function() {
  console.log('User is logged out');
  delUser();
  sendUsersOnline();
});
The client side is definitely being run, but the server-side handler never fires. I'm not sure why. Could it be because I'm not sending any data?
EDIT:
function delUser() {
  for (i in usersOnline) {
    var user = usersOnline[i];
    if (user.socket_id == socket.id) {
      delete usersOnline[i];
      usersTyping.splice(usersTyping.indexOf(myUser.id), 1);
      console.log('Disconnected: ' + myUser.display_name + '(' + myUser.id + ')');
    }
  }
}
function sendUsersOnline() {
  io.emit('users online', usersOnline);
}
According to the Socket.IO 2.0.3 docs:
socket.emit(eventName[, ...args][, ack])
socket.emit('hello', 'world');
socket.emit('with-binary', 1, '2', { 3:'4', 5: new Buffer(6) });
Emits an event to the socket identified by the string name. Any other parameters can be included. All serializable datastructures are supported, including Buffer.
The ack argument is optional and will be called with the server answer.
It says that any other parameters can be included, but they are not required. So you can definitely send/emit an event without any data from the server/client, like this:
socket.emit('logged out');
And you can receive an event without data on the server/client like this:
socket.on('logged out', function() {
  // Code to execute in response to this event
});
I recently used this kind of data-less event to send signals from peer to peer without any payload, and it works perfectly. Basically, one peer (client) sends a signal to the server, and the server then emits that signal to the other peer (client) using broadcast.
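For example, here is a minimal sketch of that relay; the 'signal' event name and the io/socket variables are illustrative, not taken from the question:
// Peer A (client): emit the event with no payload
socket.emit('signal');

// Server: relay the data-less event to every other connected client
io.on('connection', function(socket) {
  socket.on('signal', function() {
    socket.broadcast.emit('signal');
  });
});

// Peer B (client): react to the data-less event
socket.on('signal', function() {
  console.log('Got the signal');
});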
I am pretty new to Node.js and I'm using TCP sockets to communicate with a client. Since the received data is fragmented, I noticed that it prints "ondata" to the console more than once. I need to be able to read all the data and concatenate it in order to implement the other functions. I read the following http://blog.nodejs.org/2012/12/20/streams2/ and thought I could use socket.on('end', ...) for this purpose. But it never prints "end" to the console.
Here is my code:
Client.prototype.send = function send(req, cb) {
  var self = this;
  var buffer = protocol.encodeRequest(req);
  var header = new Buffer(16);
  var packet = Buffer.concat([ header, buffer ], 16 + buffer.length);

  function cleanup() {
    self.socket.removeListener('data', ondata);
    self.socket.removeListener('error', onerror);
  }

  var body = '';
  function ondata() {
    var chunk = this.read() || '';
    body += chunk;
    console.log('ondata');
  }
  self.socket.on('readable', ondata);

  self.socket.on('end', function() {
    console.log('end');
  });

  function onerror(err) {
    cleanup();
    cb(err);
  }
  self.socket.on('error', onerror);

  self.socket.write(packet);
};
The 'end' event handles the FIN packet of the TCP protocol (in other words, it handles the close packet).
Event: 'end'
Emitted when the other end of the socket sends a FIN packet.
By default (allowHalfOpen == false) the socket will destroy its file descriptor once it has written out its pending write queue. However, by setting allowHalfOpen == true the socket will not automatically end() its side allowing the user to write arbitrary amounts of data, with the caveat that the user is required to end() their side now.
About the FIN packet: https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Connection_termination
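In other words, 'end' only fires after the other side actually closes (half-closes) the connection. A self-contained sketch, with port and variable names that are purely illustrative:
var net = require('net');

// Toy server: replies, then ends its side of the connection (sends FIN)
var srv = net.createServer(function(sock) {
  sock.write('reply');
  sock.end(); // without this, the client's 'end' event never fires
});

srv.listen(0, function() {
  var client = net.connect(srv.address().port, function() {
    client.write('request');
  });
  client.on('data', function(chunk) {
    console.log('Received: ' + chunk);
  });
  client.on('end', function() {
    console.log('end'); // fires because the server called end()
    srv.close();
  });
});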
The solution
I understand your problem: the network splits your message across several packets, and you just want to read the full content.
To solve this, I recommend creating a small protocol. Send a number with the size of your message first, and then keep concatenating incoming chunks while the length of your concatenated message is less than the announced total size :)
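For example, a minimal sketch of that length-prefix framing, assuming a 4-byte big-endian size header and an already-connected socket (all names here are illustrative):
// Sender: prefix every message with its byte length
function sendMessage(socket, str) {
  var body = Buffer.from(str, 'utf8');
  var header = Buffer.alloc(4);
  header.writeUInt32BE(body.length, 0);
  socket.write(Buffer.concat([header, body]));
}

// Receiver: keep concatenating until the announced size has arrived
var pending = Buffer.alloc(0);
socket.on('data', function(chunk) {
  pending = Buffer.concat([pending, chunk]);
  while (pending.length >= 4) {
    var size = pending.readUInt32BE(0);
    if (pending.length < 4 + size) break; // wait for more chunks
    var message = pending.slice(4, 4 + size).toString('utf8');
    pending = pending.slice(4 + size);
    console.log('full message:', message);
  }
});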
I created a lib yesterday to simplify this: https://www.npmjs.com/package/node-easysocket
I hope it helps :)
A net.Socket object in Node.js is a Readable Stream; however, one note in the docs got me concerned:
For the net.Socket 'data' event, the docs say:
Note that the data will be lost if there is no listener when a Socket emits a 'data' event.
That seems to imply a Socket is returned to the calling script in "flowing-mode" and already un-paused? However, for a generic Readable Stream, the documentation for the 'data' event says
If you attach a data event listener, then it will switch the stream into flowing mode, and data will be passed to your handler as soon as it is available.
That "If" seems to imply if you wait a bit to bind to the 'data' event, the stream will wait for you, and if you intentionally want to miss the 'data' events, the example in the resume() method seems to indicate you must call the resume() method to start the flow of data.
My concern is: when working with a net.Server and you receive a net.Socket as part of a 'connection' event, is it imperative that you start handling its 'data' events right away, since the socket is already open? Meaning if I do:
var s = new net.Server();
s.on('connection', function(socket) {
  // Do some lengthy setup process here, blocking execution for a few seconds...
  socket.on('data', function(d) { console.log(d); });
});
s.listen(8080);
Meaning if I don't bind to the 'data' event right away, could I lose data? So is this a more robust way to handle incoming connections if you have a lengthy setup required for each one?
var s = new net.Server();
s.on('connection', function(socket) {
  socket.pause(); // Not ready for you yet!
  // Do some lengthy setup process here, blocking execution for a few seconds...
  socket.on('data', function(d) { console.log(d); });
  socket.resume(); // Okay, go!
});
s.listen(8080);
Anyone have experience working with listening on raw socket streams to know if this data loss is an issue?
I'm hoping this is an instance where the Net.Socket documentation wasn't updated since v0.10, since the stream documentation has a section that mentions 'data' events started emitting right away in versions prior to 0.10. Were TCP sockets properly updated to not start emitting 'data' packets right away, and the documentation not updated appropriately?
Yes, this is a flaw in the docs. Here is an example:
var net = require('net')

var server = net.createServer(onConnection)

function onConnection (socket) {
  console.log('onConnection')
  setTimeout(startReading, 1000)
  function startReading () {
    socket.on('data', read)
    socket.on('end', stopReading)
  }
  function stopReading () {
    socket.removeListener('data', read)
    socket.removeListener('end', stopReading)
  }
}

function read (data) {
  console.log('Received: ' + data.toString('utf8'))
}

server.listen(1234, onListening)

function onListening () {
  console.log('onListening')
  net.connect(1234, onConnect)
}

function onConnect () {
  console.log('onConnect')
  this.write('1')
  this.write('2')
  this.write('3')
  this.write('4')
  this.write('5')
  this.write('6')
}
All the data is received. If you explicitly resume() the socket before attaching a 'data' listener, you will lose it.
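To see that second claim, here is a minimal variation of the onConnection function above, assuming resume() is called before any 'data' listener exists:
function onConnection (socket) {
  socket.resume() // flowing mode with no 'data' listener: chunks are dropped
  setTimeout(startReading, 1000)
  function startReading () {
    // by now the '1'..'6' writes have already been read and discarded
    socket.on('data', read)
  }
}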
Also, if you do your "lengthy" setup in a blocking manner (which you shouldn't), you can't lose any I/O, since it has no chance to be processed, so no events will be emitted.
I have a Node server which uses Express as the web app.
This server creates a TCP socket connection to another TCP server.
I'm trying to pipe the TCP data to the user's HTTP response.
It works fine for a while, but the LAST TCP packet is NOT piped to the HTTP response.
So the browser's download status stalls at 99.9% downloaded.
My source code is below.
Can anyone help me solve this problem?
Thanks in advance.
app.get('/download/*', function(req, res) {
  var tcpClient = new net.Socket();
  tcpClient.connect(port, ip, function() {
    // some logic
  });
  tcpClient.on('data', function(data) {
    /* skip ... */
    tcpClient.pipe(res); // This method is called once in the 'data' event loop
    /* skip ... */
  });
  tcpClient.on('close', function() {
    clog.debug('Connection closed.');
  });
  tcpClient.on('end', function() {
    clog.debug('Connection Ended.');
  });
  tcpClient.on('error', function(err) {
    clog.err(err.stack);
  });
});
That's not how you are supposed to use .pipe().
When you pipe a stream into another, you don't have to handle the data events yourself: everything is taken care of by the pipe. Moreover, the data event is emitted for every chunk of data, which means you may be calling pipe() on the streams multiple times.
You only need to create and initialize the Socket, and then pipe it to your response stream:
tcpClient.connect(port, ip, function () {
  // some logic
  this.pipe(res);
});
Edit: As you clarified in the comments, the first chunk contains metadata, and you only want to pipe from the second chunk onward. Here's a possible solution:
tcpClient.connect(port, ip, function () {
  // some logic

  // Only call the handler once, i.e. on the first chunk
  this.once('data', function (data) {
    // Some logic to process the first chunk
    // ...

    // Now that the custom logic is done, we can pipe the tcp stream to the response
    this.pipe(res);
  });
});
As a side note, if you want to add custom logic to the data that comes from the tcpClient before writing it to the response object, check out the Transform stream. You will then have to:
create a transform stream with your custom transforming logic
pipe all streams together: tcpClient.pipe(transformStream).pipe(res), as sketched below.
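Here is a minimal sketch of that setup, using the simplified Transform constructor; the transform body is a placeholder for your own logic:
var stream = require('stream');

// Custom transform: apply your own logic to each chunk before it reaches res
var transformStream = new stream.Transform({
  transform: function(chunk, encoding, callback) {
    var modified = chunk; // replace with your custom transforming logic
    callback(null, modified);
  }
});

tcpClient.pipe(transformStream).pipe(res);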
I am trying to learn about streams in Node.js!
server.js
var net = require("net");
var server = net.createServer(function(conn) {
conn.write("welcome!");
# echo the user input!
conn.pipe(conn);
});
server.listen("1111", function() {
console.log("port 1111 opened");
});
telnet test
The server currently echoes the user's input
$ telnet localhost 1111
welcome!
hello
hello
desired output
To demonstrate where/how I should process the stream on the server side, I would like to wrap the user's input in {} before echoing it back
$ telnet localhost 1111
welcome!
hello
{hello}
This will basically accomplish the exact output you've requested:
var net = require('net');

var server = net.createServer(function(c) {
  c.setEncoding('utf8');
  c.on('data', function(d) {
    c.write('{' + d.trim() + '}\n');
  });
});

server.listen(9871);
First let me call your attention to c.setEncoding('utf8'). This will set a flag on the connection that will automatically convert the incoming Buffer to a String in the utf8 space. This works well for your example, but just note that for improved performance between Sockets it would be better to perform Buffer manipulations.
Simulating the entirety of .pipe() will take a bit more code.
.pipe() is a method of the Stream prototype, which can be found in lib/stream.js. If you take a look at the file you'll see quite a bit more code than what I've shown above. For demonstration, here's an excerpt:
function ondata(chunk) {
  if (dest.writable) {
    if (false === dest.write(chunk) && source.pause) {
      source.pause();
    }
  }
}

source.on('data', ondata);
First a check is made whether the destination is writable. If not, there is no reason to attempt writing the data. Next comes the check whether dest.write(chunk) returned false. From the documentation:
[.write] returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory.
Since Streams live in kernel space, outside of the v8 memory space, it is possible to crash your machine by filling up memory (instead of just crashing the node app). So checking whether the write has drained is a safety mechanism. If it hasn't finished draining, the source will be paused until the drain event is emitted. Here is the drain handler:
function ondrain() {
  if (source.readable && source.resume) {
    source.resume();
  }
}

dest.on('drain', ondrain);
Now there is a lot more we could cover with how .pipe() handles errors, cleans up its own event emitters, etc. but I think we've covered the basics.
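Putting the two excerpts together, here is a minimal sketch of the core idea behind .pipe() (no error handling or cleanup, purely illustrative):
function miniPipe(source, dest) {
  source.on('data', function(chunk) {
    if (dest.writable && dest.write(chunk) === false && source.pause) {
      source.pause(); // back off until the destination drains
    }
  });
  dest.on('drain', function() {
    if (source.readable && source.resume) {
      source.resume(); // destination caught up; keep the data flowing
    }
  });
  return dest;
}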
Note: When sending a large string, it is possible that it will be sent in multiple packets. For this reason it may be necessary to do something like the following:
var net = require('net');

var server = net.createServer(function(c) {
  var tmp = '';
  c.setEncoding('utf8');
  c.on('data', function(d) {
    if (d.charCodeAt(d.length - 1) !== 10) {
      tmp += d;
    } else {
      c.write('{' + tmp + d.trim() + '}\n');
      tmp = '';
    }
  });
});

server.listen(9871);
Here we assume that the string ends with the newline character (\n, ASCII character code 10). We check the end of the message to see if this is the case. If not, we temporarily store the data from the connection until the newline character is received.
This may not be a problem for your application, but I thought it was worth noting.
You can do something like this (CoffeeScript):
conn.on 'data', (d) ->
  conn.write "{#{d}}"
The .pipe method basically just attaches a listener to the input stream's data event that writes each chunk to the output stream.
I'm not sure about net actually, but I imagine it's quite similar to http:
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/event-stream'});
  http.get(options, function(resp) {
    resp.on('data', function(chunk) {
      res.write("event: meetup\n");
      res.write("data: " + chunk.toString() + "\n\n");
    });
  }).on("error", function(e) {
    console.log("Got error: " + e.message);
  });
});
https://github.com/chovy/nodejs-stream