I'm trying to make a server based on the net module. What I don't understand is which event I'm supposed to put the response code in:
on('data', function()) could still be in the middle of receiving more data from the stream (so it might be too early to reply),
and on('end', function()) fires after the connection is closed.
Thank you for your help.
The socket's 'data' event calls the callback every time an incoming buffer of data is ready for reading, and it passes that buffer to the callback,
so use this:
socket.on('data', function (data) {
  // `data` is the chunk that just arrived; inspect it here and decide
  // whether a complete message has been received before replying.
});
This can help for Node v0.6.5: http://nodejs.org/docs/v0.6.5/api/net.html#event_data_
and this for a clearer understanding of Readable streams:
http://nodejs.org/docs/v0.6.5/api/streams.html#readable_Stream
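In practice the 'data' event by itself can't tell you when a full request has arrived, so you have to frame the messages yourself. A minimal sketch, assuming the client terminates each message with a newline (the delimiter, reply text, and port are made up for illustration):

var net = require('net');

var server = net.createServer(function (socket) {
  var buffered = '';

  socket.on('data', function (chunk) {
    // Accumulate chunks until at least one complete, newline-terminated
    // message is available, then reply for each complete message.
    buffered += chunk.toString();
    var newlineIndex;
    while ((newlineIndex = buffered.indexOf('\n')) !== -1) {
      var message = buffered.slice(0, newlineIndex);
      buffered = buffered.slice(newlineIndex + 1);
      socket.write('received: ' + message + '\n');
    }
  });

  socket.on('end', function () {
    // The client signalled it has finished sending; nothing more will arrive.
  });
});

server.listen(8124);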
I see many examples online using 'data', 'open', or 'readable'. All seem to accomplish the same goal of streaming/chunking the input data. Why the variation, and what are the exact differences between these events? When should each one be used for reading data?
Simple code examples:
readStream.on('open', function () {
  // This just pipes the read stream to the response object (which goes to the client)
  readStream.pipe(res);
});

readerStream.on('data', function (chunk) {
  data += chunk;
});
From the Node.js documentation:
Event: 'data'
The data event is emitted whenever the stream is relinquishing ownership of a chunk of data to a consumer. This may occur whenever the stream is switched in flowing mode by calling readable.pipe(), readable.resume(), or by attaching a listener callback to the data event. The 'data' event will also be emitted whenever the readable.read() method is called and a chunk of data is available to be returned.
Attaching a data event listener to a stream that has not been explicitly paused will switch the stream into flowing mode. Data will then be passed as soon as it is available.
Read more at: https://nodejs.org/api/stream.html#class-streamreadable
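For comparison, here is a rough sketch (not from the docs above) of the three events side by side on fs read streams; the file name is a placeholder. 'open' is specific to file streams and fires once when the file descriptor is ready, 'data' switches the stream into flowing mode and pushes chunks at you, and 'readable' leaves it in paused mode so you pull chunks with read():

var fs = require('fs');

// 'open' fires once when the underlying file descriptor is ready - it carries no data.
var rs1 = fs.createReadStream('input.txt');
rs1.on('open', function (fd) {
  console.log('file descriptor ready:', fd);
});

// 'data' puts the stream into flowing mode: chunks are pushed to the callback as they arrive.
var rs2 = fs.createReadStream('input.txt');
rs2.on('data', function (chunk) {
  console.log('pushed chunk of', chunk.length, 'bytes');
});

// 'readable' keeps the stream in paused mode: you pull chunks with read() when you want them.
var rs3 = fs.createReadStream('input.txt');
rs3.on('readable', function () {
  var chunk;
  while ((chunk = rs3.read()) !== null) {
    console.log('pulled chunk of', chunk.length, 'bytes');
  }
});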
I'm trying to convert Node's sockets into streams using RxJS. The goal is to have each socket create its own stream and have all the streams merge into one. As new sockets connect, a stream is created with socketStream = Rx.Observable.fromEvent(socket, 'message').
Then the stream is merged into a master stream with something like
mainStream = mainStream.merge(socketStream)
This appears to work fine, the problem is that after 200-250 client connections, the server throws RangeError: Maximum call stack size exceeded.
I have sample server and client code that demonstrates this behavior on a gist here:
Sample Server and Client
I suspect that as clients connect/disconnect, the main stream doesn't get cleaned up properly.
The problem is that you are merging your Observables recursively. Every time you do
cmdStream = cmdStream.merge(socketStream);
you are creating a new MergeObservable/MergeObserver pair.
Taking a look at the source, you can see that what you are basically doing with each subscription is subscribing to each of your previous streams in sequence, so it shouldn't be hard to see that at around 250 connections your call stack is probably at least 1000 calls deep.
A better way to approach this would be to use the flatMap operator and think of your connections as creating an Observable of Observables.
//Turn the connections themselves into an Observable
var connections = Rx.Observable.fromEvent(server, 'connection',
  socket => new JsonSocket(socket));

connections
  //flatten the messages into their own Observable
  .flatMap(socket => {
    return Rx.Observable.fromEvent(socket.__socket, 'message')
      //Handle the socket closing as well
      .takeUntil(Rx.Observable.fromEvent(socket.__socket, 'close'));
  }, (socket, msg) => {
    //Transform each message to include the socket as well.
    return { socket : socket.__socket, data : msg };
  })
  .subscribe(processData, handleError);
I haven't tested the above, but it should fix your stack overflow error.
I would probably also question the overall design. What exactly are you gaining by merging all the Observables together? You are still differentiating them by passing the socket object along with each message, so it would seem these could all remain distinct streams.
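If the per-socket streams really are independent, a sketch of that alternative might look like the following (untested, reusing JsonSocket, processData and handleError from the code above):

var connections = Rx.Observable.fromEvent(server, 'connection',
  socket => new JsonSocket(socket));

// Subscribe to each connection's messages separately instead of merging
// everything into a single stream.
connections.subscribe(socket => {
  Rx.Observable.fromEvent(socket.__socket, 'message')
    .takeUntil(Rx.Observable.fromEvent(socket.__socket, 'close'))
    .subscribe(
      msg => processData({ socket: socket.__socket, data: msg }),
      handleError);
});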
I'm using Node and Socket.IO to stream a Twitter feed to the browser, but the stream is too fast. In order to slow it down, I'm attempting to use setInterval, but it either only delays the start of the stream (without putting evenly spaced intervals between the tweets) or complains that I can't use callbacks when broadcasting. Server-side code below:
function start() {
  stream.on('tweet', function (tweet) {
    if (tweet.coordinates && tweet.coordinates != null) {
      io.sockets.emit('stream', tweet);
    }
  });
}

io.sockets.on("connection", function (socket) {
  console.log('connected');
  setInterval(start, 4000);
});
I think you're misunderstanding how .on() works for streams. It's an event handler. Once it is installed, it's there and the stream can call you at any time. Your interval is actually just making things worse because it's installing multiple .on() handlers.
It's unclear what you mean by "data coming too fast". Too fast for what? If it's just faster than you want to display it, then you can just store the tweets in an array and then use timers to decide when to display things from the array.
If data from a stream is coming too quickly to even store and this is a flowing nodejs stream, then you can pause the stream with the .pause() method and then, when you're able to go again, you can call .resume(). See http://nodejs.org/api/stream.html#stream_readable_pause for more info.
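A minimal sketch of the array-plus-timer approach described above, reusing the stream and io objects from the question (note the 'tweet' handler is registered once, not once per connection, and the 4-second interval is arbitrary):

var queue = [];

stream.on('tweet', function (tweet) {
  if (tweet.coordinates) {
    queue.push(tweet);           // just buffer it; don't emit yet
  }
});

setInterval(function () {
  var tweet = queue.shift();     // take the oldest buffered tweet, if any
  if (tweet) {
    io.sockets.emit('stream', tweet);
  }
}, 4000);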
Is there a way to control each step of a TCP socket write, so I can know the server-side progress of a large image data transfer?
At worst, how could I alter the main Node bin folder to add such an event?
Finally, can someone explain to me why the max length of Node.js HTTP TCP sockets is 1460?
From the documentation:
socket.write(data, [encoding], [callback])
Sends data on the socket. The second parameter specifies the encoding in the case of a string--it defaults to UTF8 encoding.
Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is again free.
The optional callback parameter will be executed when the data is finally written out - this may not be immediately.
Event: 'drain'
Emitted when the write buffer becomes empty. Can be used to throttle uploads.
See also: the return values of socket.write()
So, you can either specify a callback to be called once the data is flushed, or you can register a 'drain' event handler on the socket. This doesn't actually tell you about the progress on the other side, but when the remote side's receive queue fills up, TCP flow control stops accepting more data, and that backpressure is what ultimately triggers these events on the Node.js side.
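For example, a rough sketch of throttling writes with write()'s return value and the 'drain' event; socket and chunks are placeholders here, and this only reports what the local kernel has accepted, not what the remote side has received:

var sent = 0;

function sendNext() {
  while (sent < chunks.length) {
    var ok = socket.write(chunks[sent]);
    sent++;
    console.log('queued chunk', sent, 'of', chunks.length);
    if (!ok) {
      // The kernel buffer is full; wait for 'drain' before writing more.
      socket.once('drain', sendNext);
      return;
    }
  }
  socket.end(); // all chunks handed off to the kernel
}

sendNext();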
I'm writing a small node.js application that receives a multipart POST from an HTML form and pipes the incoming data to Amazon S3. The formidable module provides the multipart parsing, exposing each part as a node Stream. The knox module handles the PUT to s3.
var form = new formidable.IncomingForm()
  , s3 = knox.createClient(conf);

form.onPart = function (part) {
  var put = s3.putStream(part, filename, headers, handleResponse);
  put.on('progress', handleProgress);
};

form.parse(req);
I'm reporting the upload progress to the browser client via socket.io, but am having difficulty getting these numbers to reflect the real progress of the node to s3 upload.
When the browser-to-node upload happens near instantaneously, as it does when the node process is running on the local network, the progress indicator reaches 100% immediately. If the file is large, e.g. 300MB, the progress indicator rises slowly, but still faster than our upstream bandwidth would allow. After hitting 100% progress, the client then hangs, presumably waiting for the S3 upload to finish.
I know putStream uses Node's stream.pipe method internally, but I don't understand the detail of how this really works. My assumption is that node gobbles up the incoming data as fast as it can, throwing it into memory. If the write stream can take the data fast enough, little data is kept in memory at once, since it can be written and discarded. If the write stream is slow though, as it is here, we presumably have to keep all that incoming data in memory until it can be written. Since we're listening for data events on the read stream in order to emit progress, we end up reporting the upload as going faster than it really is.
Is my understanding of this problem anywhere close to the mark? How might I go about fixing it? Do I need to get down and dirty with write, drain and pause?
Your problem is that stream.pause isn't implemented on the part, which is a very simple readstream of the output from the multipart form parser.
Knox instructs the s3 request to emit "progress" events whenever the part emits "data". However since the part stream ignores pause, the progress events are emitted as fast as the form data is uploaded and parsed.
The formidable form, however, does know how to both pause and resume (it proxies the calls to the request it's parsing).
Something like this should fix your problem:
form.onPart = function (part) {
  // once pause is implemented, the part will be able to throttle the speed
  // of the incoming request
  part.pause = function () {
    form.pause();
  };

  // resume is the counterpart to pause, and will fire after the `put` emits
  // "drain", letting us know that it's ok to start emitting "data" again
  part.resume = function () {
    form.resume();
  };

  var put = s3.putStream(part, filename, headers, handleResponse);
  put.on('progress', handleProgress);
};