I have a Node.js application which opens a file, scans each line, and makes a REST call that involves Couchbase for each line. The average number of lines in a file is about 12 to 13 million. Currently, without any special settings, my app can completely process ~1 million records in ~24 minutes. I went through a lot of questions, articles, and Node docs but couldn't find any information about the following:
Where's the setting that says Node can open X number of HTTP connections/sockets concurrently, and can I change it?
I had to regulate the file processing because the file reading is much faster than the REST calls, so after a while there are too many open REST requests, which clogs the system and it runs out of memory. So now I read 1000 lines, wait for the REST calls for those to finish, and then resume (I am doing it using the pause and resume methods on the stream). Is there a better alternative to this?
What other optimizations can I perform to make it faster than this? I know about the GC-related configuration that prevents frequent halts in the app.
Is using "cluster" module recommended? Does it work seamlessly?
Background: We have an existing Java application that does exactly the same thing by spawning 100 threads, and it is able to achieve slightly better throughput than the current Node counterpart. But I want to try Node since the two operations in question (reading a file and making a REST call for each line) seem like a perfect fit for a Node app, as they can both be async in Node, whereas the Java app makes blocking calls for them...
Any help would be greatly appreciated...
Generally, you should break your questions on Stack Overflow into separate pieces. Since your questions are all getting at the same thing, I will answer them together. Let me start with the one at the bottom:
We have an existing Java application that does exactly the same thing by spawning 100 threads ... But I want to try Node since the two operations in question ... seem like a perfect fit for a Node app, as they can both be async in Node, whereas the Java app makes blocking calls for them.
Asynchronous calls and blocking calls are just tools to help you control flow and workload. Your Java app is using 100 threads, and therefore has the potential of doing 100 things at a time. Your Node.js app may have the potential of doing 1,000 things at a time, but some operations will be done in JavaScript on a single thread while other I/O work will pull from a thread pool. In any case, none of this matters if the backend system you're calling can only handle 20 things at a time. If your system is 100% utilized, changing the way you do your work certainly won't speed it up.
In short, making something asynchronous is not a tool for speed, it is a tool for managing the workload.
Where's the setting that says Node can open X number of HTTP connections/sockets concurrently, and can I change it?
Node.js's HTTP client automatically uses an agent, which lets you take advantage of keep-alive connections. It also means that you won't flood a single host unless you write code to do so. http.globalAgent.maxSockets = 1000 is what you want, as mentioned in the documentation: http://nodejs.org/api/http.html#http_agent_maxsockets
I had to regulate the file processing because the file reading is much faster than the REST calls, so after a while there are too many open REST requests, which clogs the system and it runs out of memory. So now I read 1000 lines, wait for the REST calls for those to finish, and then resume (I am doing it using the pause and resume methods on the stream). Is there a better alternative to this?
Don't use .on('data') for your stream; use .on('readable') and only read from the stream when you're ready to take on more work. I also suggest using a transform stream to split the input into lines.
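On a reasonably recent Node, a minimal sketch of that approach might look like the following. The file name, the 200-request cap, and processLine() are all placeholders; processLine stands in for the REST call from the question.

var fs = require('fs');
var Transform = require('stream').Transform;

// Hypothetical line splitter: buffers the trailing partial line between chunks.
var lineSplitter = new Transform({
  readableObjectMode: true,
  transform: function (chunk, enc, cb) {
    var lines = ((this._tail || '') + chunk.toString()).split('\n');
    this._tail = lines.pop();                 // keep the incomplete last line
    for (var i = 0; i < lines.length; i++) this.push(lines[i]);
    cb();
  },
  flush: function (cb) {
    if (this._tail) this.push(this._tail);
    cb();
  }
});

var stream = fs.createReadStream('lines.txt', 'utf8').pipe(lineSplitter);

var IN_FLIGHT_LIMIT = 200;                    // assumed cap on simultaneous REST calls
var inFlight = 0;

stream.on('readable', pump);

function pump() {
  var line;
  // Pull lines only while there is capacity for more REST calls.
  while (inFlight < IN_FLIGHT_LIMIT && (line = stream.read()) !== null) {
    inFlight++;
    processLine(line, function () {
      inFlight--;
      pump();                                 // capacity freed up, read more
    });
  }
}

function processLine(line, done) {
  // placeholder for the real REST call; invoke done() when it finishes
  setImmediate(done);
}

This gets you the same throttling as the pause/resume approach, but the stream itself only hands out data when you ask for it.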
What other optimizations can I perform to make it faster than this? I know about the GC-related configuration that prevents frequent halts in the app.
This is impossible to answer without detailed analysis of your code. Read more about Node.js and how its internals work. If you spend some time on this, the optimizations that are right for you will become clear.
Is using "cluster" module recommended? Does it work seamlessly?
This is only needed if you are unable to fully utilize your hardware. It isn't clear what you mean by "seamlessly", but each process is its own process as far as the OS is concerned, so it isn't something I would call "seamless".
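For completeness, a minimal sketch of what using the cluster module looks like: one worker per CPU core, each worker its own OS process, with the port number here being an arbitrary example.

var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU core; each worker is a separate OS process.
  os.cpus().forEach(function () {
    cluster.fork();
  });
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' died, starting a new one');
    cluster.fork();
  });
} else {
  // Each worker runs its own server; the master distributes incoming connections.
  http.createServer(function (req, res) {
    res.end('handled by ' + process.pid + '\n');
  }).listen(8000);
}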
By default, Node uses a socket pool for all HTTP requests, and the default global limit is 5 concurrent connections per host (these are, however, re-used for keep-alive connections). There are a few ways around this limit:
Create your own http.Agent and specify it in your http requests:
var agent = new http.Agent({ maxSockets: 1000 });

http.request({
  // ...
  agent: agent
}, function(res) { });
Change the global/default http.Agent limit:
http.globalAgent.maxSockets = 1000;
Disable pooling/connection re-use entirely for a request:
http.request({
  // ...
  agent: false
}, function(res) { });
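Putting the first option together, here is a minimal end-to-end sketch; the host and path are placeholders, and the details of the actual REST call from the question are omitted.

var http = require('http');

// A dedicated agent with a higher socket ceiling for this host.
var agent = new http.Agent({ maxSockets: 1000 });

var req = http.request({
  host: 'example.com',        // placeholder host
  path: '/items/42',          // placeholder path
  method: 'GET',
  agent: agent
}, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    console.log(res.statusCode, body.length + ' bytes');
  });
});

req.on('error', function (err) {
  console.error('request failed:', err.message);
});

req.end();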
I was asked in an interview: are there any cases that may force you to use blocking code in a Node.js server?
My answer was: I've never needed that in any project, but I think it may be useful for tasks that need a lot of CPU processing, like image processing or video generation.
So, experts, can you correct me on that: is there any case where blocking code would be a must?
First off, you have to distinguish between different types of programs. A server that you expect to be responsive to many different incoming requests has very different needs than a single-user program you write to do some file management or to fetch some content and insert it into a database.
So, if you're not a multi-user server, you may be able to use synchronous I/O everywhere it's offered (most specifically for file access). For example, I have several scripts that do file management on my hard disk. These scripts don't have any server component and are run automatically in the middle of the night to trim backups, trim log files, etc... It's perfectly OK for these scripts to use synchronous I/O for pretty much anything.
If, on the other hand, you are a multi-user server and you need to be responsive to incoming requests that can arrive at any time, then the only two times you can/should use blocking I/O or blocking crypto are at startup time or in some sort of shutdown scenario. For all other code in service of incoming requests, you have to use non-blocking, asynchronous I/O to avoid locking up your server during a request and making it non-responsive to new incoming requests.
If you have time-consuming, CPU-intensive operations such as image processing or video generation, then you will want to offload that processing to another thread or process so that your main server thread is not blocked doing that processing. A typical way of handling that would be to create a worker pool of N processes/threads that can be sent jobs to crunch on. Then, you keep your most CPU-intensive work out of the main node.js thread, allowing it to stay responsive to incoming requests.
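A minimal sketch of that worker-pool idea using child processes; the worker.js filename and the job/result message shapes are assumptions for illustration.

var fork = require('child_process').fork;
var os = require('os');

var workers = [];   // { proc, busy } entries
var queue = [];     // jobs waiting for a free worker

// One worker process per CPU core, each running a hypothetical worker.js
// that listens for job messages and replies with a result message.
os.cpus().forEach(function () {
  workers.push({ proc: fork(__dirname + '/worker.js'), busy: false });
});

function dispatch() {
  var idle = workers.filter(function (w) { return !w.busy; })[0];
  if (!idle || queue.length === 0) return;

  var task = queue.shift();
  idle.busy = true;
  idle.proc.once('message', function (result) {
    idle.busy = false;
    task.done(result);
    dispatch();                 // pick up the next queued job, if any
  });
  idle.proc.send(task.job);
}

// Public entry point: queue a CPU-heavy job and receive the result in a callback.
function runJob(job, done) {
  queue.push({ job: job, done: done });
  dispatch();
}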
So, experts, can you correct me on that: is there any case where blocking code would be a must?
Synchronous (blocking) I/O vastly simplifies server startup, as you can do things like read configuration synchronously. You could write that code asynchronously, but then your module interface often ends up having to return promises that indicate when it's actually ready and done with its initialization, which complicates using the module.
For example, require() is synchronous and this really, really helps make initialization a lot simpler.
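For example, a synchronous configuration read at startup keeps the module interface simple, since nothing is being served yet; the config.json filename here is just a placeholder.

var fs = require('fs');

// Blocking here is fine: the server has not started accepting requests yet.
var config = JSON.parse(fs.readFileSync(__dirname + '/config.json', 'utf8'));

module.exports = config;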
The only place I know of in a server where blocking code might be required is if you're trying to write something to disk right before your program exits, when it's already in the process of exiting. You get notified of an exit event, and if you try to use asynchronous file I/O, your program will exit before the I/O finishes. In that case, you may need to use synchronous file I/O (which is not a problem in that circumstance).
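A small sketch of that exit-time case, with a made-up log filename: asynchronous I/O started inside an 'exit' handler would never complete, so synchronous I/O is the only option there.

var fs = require('fs');

process.on('exit', function (code) {
  // The process exits as soon as the 'exit' handlers return, so asynchronous
  // writes would be cut off; a synchronous write is acceptable here.
  fs.writeFileSync('shutdown.log', 'exited with code ' + code + ' at ' + new Date().toISOString() + '\n');
});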
I'm building out an API using Hapi.js. Some of my code is pushing small amounts of data to the API. The issue seems to be that the pusher code is swamping the API and I'm getting ECONNRESET errors, which means messages are getting lost. I'm planning on installing a rate limiter in the pusher code, probably node-rate-limiter (link).
The question is, what should I set that limit to? I want to max out performance for this app, so I could easily be attempting to send thousands of messages per hour. The data just gets dumped into Redis, so I doubt the code in the API will be an issue, but I still need to get an idea of what kind of message rate Hapi is comfortable with. Do I need to just start with something reasonable and see how it goes? Maybe 1 message per 10 milliseconds?
var Hapi = require('hapi');

var server = new Hapi.Server();
server.connection({
  port: config.port,
  routes: {
    cors: {
      origin: ['*']
    }
  }
});

server.route({ method: 'POST', path: '/update/{id}', ... });
There is no generic answer to how many requests per second you can process. It depends upon many things in your configuration and code such as:
Type and performance of server hardware
The amount of CPU time an average request uses
Whether your requests are CPU or disk bound. If disk bound, then it depends a lot on your database and disk performance.
Whether you implement clustering to use multiple cores (if CPU bound)
Whether you're on shared infrastructure or not
The max number of incoming connections your server is configured for
So, there is no absolute answer here that works for everyone. If you don't have some sort of design problem that is artificially limiting your concurrency, then the best way to discover what your server can actually handle is to build a test engine and test it. Find where and how it fails and either fix those issues to extend the scalability further or implement protections to avoid hitting that limit.
Note: When a public API makes rate limiting choices, it is typically done on a per-client basis, and the limit is set to a value a little above what a reasonable client would be doing. This is more to allow fair use of the server by many clients, so that one single client does not consume too much of the overall resources. If issuing thousands of small requests from a single client is not considered "good practice" in using your API, then you can just pick a number that is much smaller than that for a per-client limit.
Note: You may also want to make it easier for clients by having your API let them upload multiple messages in one API request rather than lots of API requests.
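A rough sketch of what such a batch route could look like in the question's Hapi setup; the route path, the payload shape, and the Redis list name are all made up, and redisClient is assumed to already exist.

server.route({
  method: 'POST',
  path: '/update/batch',
  handler: function (request, reply) {
    // Expect something like { messages: [ {...}, {...} ] } in the payload.
    var messages = (request.payload && request.payload.messages) || [];

    messages.forEach(function (msg) {
      // Same work as the single-message route, just done N times per request.
      redisClient.rpush('updates', JSON.stringify(msg));
    });

    reply({ accepted: messages.length });
  }
});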
I am new to performance optimization, and while I recognize Node.js may not be the most beginner-friendly place to start, it's the task at hand.
The observation: simple JSON API requests take on the order of hundreds of milliseconds on a staging server with no load and <10 users in the database. In particular, the call to /api/get_user takes ~300ms
to execute this code:
exports.get_user = function(req, res) {
  return res.json(req.user);
};
(Note: we store our sessions in Redis)
The stack:
Nodejs
Express
Redis
Mongo
Where do I start?
While it might be overkill for this small scenario, you might want to consider profiling. I found the nodetime.com service quite useful.
Passing the --nouse_idle_notification flag will tell V8 to ignore idle notification calls from Node, which are requests to V8 asking it to run GC immediately because the Node process is currently idle. Because Node is aggressive with these calls (efficiency breeds clean slates), an excess of GC may slow down your application. Note that using this flag does not disable GC; GC simply runs less often. In the right circumstances this technique can increase performance.
So I'm starting to use node.js for a project I'm doing.
When a client makes a request, my Node.js server fetches a JSON payload from another server and then reformats it into a new JSON payload that gets served to the client. However, the JSON that the Node server gets from the other server can potentially be pretty big, so that "massaging" of data is pretty CPU intensive.
I've been reading for the past few hours about how Node.js isn't great for CPU-bound tasks, and the main response I've seen is to spawn a child process (basically a .js file running in a different instance of Node) that deals with any CPU-intensive tasks that might block the main event loop.
So let's say I have 20,000 concurrent users; that would mean spawning 20,000 OS-level processes to run these child processes.
Does this sound like a good idea? (A different web server would just create 20,000 threads on the same process.)
I'm not sure I should be running a child process, but I do need to make a CPU-intensive task non-blocking. Any ideas on what I should do?
The people who say that don't know how to architect solutions.
Node.js is exactly what it says it is: a node, and it should be treated as such.
In your example, your Node instance connects to an external API and grabs JSON to process and send back.
i.e.
1. GET //server.com/getJSON
2. Process the JSON
3. POST //server.com/postJSON
So what do you do?
Ask yourself: is the latency of a single request the issue? If so, then Node isn't the solution.
However, if you are more interested in overall throughput, so that instead of 1 request finishing in 4 seconds you want 200 requests finishing in 10 seconds (with each individual one taking about the full 10 seconds), then Node fits.
Measure how long massaging your JSON actually takes. If it is less than 1 second, just run 4 Node instances instead of 1.
However, if it's more complex than that, break the JSON into segments to process, and use asynchronous callbacks to process each segment:
process.nextTick(function () {
  doProcess(segment1);
  process.nextTick(function () { doProcess(segment2); /* ... */ });
});
Each doProcess call schedules the next one.
Node.js will then share its time between this work and other requests.
Now take that solution and scale it to 4 Node instances per server, and 2-5 servers,
and suddenly you have an extremely scalable and cost-effective solution.
I am writing a socket.io-based server, and I'm trying to avoid the pyramid of doom and to keep memory usage low.
I wrote this client - http://jsfiddle.net/QUDXU/1/ - which I run with node client-cluster 1000, so 1000 connections that are making continuous requests.
For the server side I tried and tested 3 different solutions. The results, in terms of RAM used by the server after letting everything run for an hour, are:
Simple callbacks - http://jsfiddle.net/DcWmJ/ - 112MB
Q module - http://jsfiddle.net/hhsja/1/ - 850MB and increasing
Async module - http://jsfiddle.net/SgemT/ - 1.2GB and increasing
The server and clients are on different machines. (Softlayer cloud instances). Node 0.10.12 and Socket.io 0.9.16
Why is this happening? How can I keep memory usage low while still using some kind of library that keeps the code readable?
Option 1. You can use the cluster module and gracefully kill your workers from time to time (make sure you disconnect() them first). You can check process.memoryUsage().rss and kill workers when they exceed 130 MB (130000000 bytes), for example; a sketch follows these options. :)
Option 2. Node.js has the habit of using memory and rarely doing rigorous cleanups. As V8 approaches the maximum memory limit, GC calls become more aggressive. So you could lower the maximum heap a Node process can take up by running node --max-old-space-size=<MB>. I do this when running Node on embedded devices (often with less than 64 MB of RAM available).
Option 3. If you really want to keep the memory low, use weak references where possible (anywhere except in long-running calls): https://github.com/TooTallNate/node-weak . This way, the objects will get garbage collected sooner. Extensive tests to make sure everything works are needed, though. Good luck if you use this one :)
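A minimal sketch of option 1, in which each worker reports its own RSS to the master and gets replaced once it grows too large; the 130 MB threshold and the one-minute interval are just examples.

var cluster = require('cluster');

if (cluster.isMaster) {
  function spawn() {
    var worker = cluster.fork();
    worker.on('message', function (rss) {
      if (rss > 130 * 1024 * 1024) {
        worker.disconnect();   // let in-flight work finish, then exit
        spawn();               // start a fresh replacement worker
      }
    });
  }
  spawn();
} else {
  // Worker: run the socket.io server here, and report memory once a minute.
  setInterval(function () {
    if (process.connected) process.send(process.memoryUsage().rss);
  }, 60 * 1000).unref();
}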
It seems like the problem was in the client script, not the server one. I ran 1000 processes, each of them emitting messages to the server every second. I think the server was getting very busy resolving all of those requests and thus using all of that memory. I rewrote the client side, spawning a number of processes proportional to the number of processors, each of them connecting multiple times like this:
client = io.connect(selectedEnvironment, { 'force new connection': true, 'reconnect': false });
Notice the 'force new connection' flag, which allows multiple clients to connect using the same instance of socket.io-client.
The part that actually solved my problem was how the requests were made: each client makes its next request one second after receiving the acknowledgement of the previous request, rather than on a fixed one-second timer.
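A condensed sketch of that client loop; the event name and payload are placeholders, and selectedEnvironment is the same variable as in the snippet above. The next message is scheduled one second after the server's acknowledgement arrives, not on a fixed timer.

var io = require('socket.io-client');

var client = io.connect(selectedEnvironment, { 'force new connection': true, 'reconnect': false });

function sendNext() {
  // The last argument is socket.io's acknowledgement callback.
  client.emit('update', { ts: Date.now() }, function () {
    setTimeout(sendNext, 1000);   // one second after the ack, not every second
  });
}

client.on('connect', sendNext);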
Connecting 1000 clients now has my server using ~100 MB RSS. I also used async on the server script, which seems very elegant and easier to understand than Q.
The bad part is that after running the server for about 2-3 days, the memory has risen to 250 MB RSS, and I don't know why.