How do I get Quasar working optimally with JSF? I created a JSF project with Comsat/Quasar integration along the lines of FiberHttpServlet but do not see any improvement in the numbers.
My project is here : https://github.com/sanketsw/Quasar_JSF_Demo
Unfortunately, the fibers were not found to have any positive impact when tested from JMeter (server capped at 50 threads, JMeter test running 3000 users). The response times were exactly the same as with javax.faces.webapp.FacesServlet up to 500 users. At 3000 users, the request failure rate was higher and the response time for successful requests was also significantly worse than with the normal FacesServlet.
If you happen to work on this further and get better results, let me know. Or please point out any mistake I am making in the configuration.
The benefit of using fibers rather than threads to serve HTTP requests is that you can have many more active fibers than threads with far fewer resources, so you're likely to see improvements in increased concurrency rather than in average response time at low concurrency. If you run only 50 concurrent requests (that's what I understand from your description; I couldn't find the benchmark scripts in the GitHub project), you can just use regular thread-based servlets with a thread pool size > 50 and you'll probably even get better latency, as you're not deferring request completion, not using all the machinery needed to track that inside your servlet container, and not incurring potential thread handovers with their cache impacts, etc.
The benefit of fibers (as with async servlets or other async frameworks, which are much more cumbersome to program, though) is that you're able to support many more concurrent requests than the threads your box can support (say, more than 15k), which usually means you are being hit with an extremely high request rate and/or your requests require substantial processing time. Please also note that such high rates usually require tuning the OS network stack to allow more open file descriptors, faster connection turnaround, etc.
Also note that the recently released Comsat 0.6.0 improves servlet and web actor performance. In any case, Comsat servlets rely on the async servlet functionality of the container you run them on, so if that implementation is buggy or inefficient, this carries over to Comsat servlets too. I got good results with Undertow (just wait for the first request to complete before making the service available, or you're going to get errors).
Note: I'm part of the Parallel Universe team.
Related
I have an Express backend application that listens for HTTP requests from a web application. This backend application is running on AWS ECS Fargate.
So my question is: does it make sense to use multithreading (worker threads in Node.js) in this backend application? There are both CPU-intensive and non-intensive functions in the endpoints. For instance, should I just distribute any incoming request right away, regardless of the intensity of the call, to other threads so that the main thread is never blocked? Or should I only use multithreading for intensive jobs?
Any suggestions, pros and cons on this topic are very much appreciated.
Questions like this ultimately only really get answered by profiling the system under representative load. That's because our guesstimates about what takes CPU, and how much, are rarely very good: this stuff has to be measured to really know.
That said, there are some general design guidelines you could consider:
If you have specific requests that you believe in advance are likely to be CPU-heavy, then you may want to consider putting those into a queue that is served by a pool of threads (see the sketch after these guidelines).
If you have specific requests that you believe in advance are really just doing non-blocking I/O, then those don't need to be complicated by threading at all. The main thread can likely handle them just fine.
If only a relatively small fraction of your requests are CPU-heavy, the simplest design may be to use the nodejs clustering module and just let it spread your requests out among CPUs, figuring that this by itself (without any other major design changes) will get any CPU-bound requests off the main loop. If a high percentage of your requests are CPU-bound and you want to prioritize the non-CPU requests so they are always quick, then you'd probably be better off with a thread pool that CPU-heavy requests are passed to, so the non-CPU-heavy requests aren't held up behind them.
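To make the first option concrete, here is a minimal sketch using Node's built-in worker_threads module. It spawns one worker per CPU-heavy request for brevity; a real setup would keep a fixed pool of workers fed from a queue. The /heavy/:n route and the fibonacci job are made-up stand-ins for whatever your CPU-intensive endpoints actually do.

    // worker.js -- runs the CPU-heavy job off the main thread
    const { parentPort } = require('worker_threads');

    // naive fibonacci as a stand-in for a CPU-heavy task
    function fib(n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

    parentPort.on('message', (n) => {
      parentPort.postMessage(fib(n));
    });

    // server.js -- CPU-heavy requests go to a worker, light ones stay on the main thread
    const express = require('express');
    const { Worker } = require('worker_threads');

    const app = express();

    app.get('/heavy/:n', (req, res) => {
      const worker = new Worker('./worker.js'); // a pool would reuse these
      worker.once('message', (result) => {
        res.json({ result });
        worker.terminate();
      });
      worker.postMessage(Number(req.params.n));
    });

    // non-blocking I/O endpoints don't need any of this
    app.get('/light', (req, res) => {
      res.json({ ok: true });
    });

    app.listen(3000);

With this split, /light stays fast even while several /heavy requests are crunching away in workers.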
So my question is: does it make sense to use multithreading (worker threads in Node.js) in this backend application?
Yes, in some circumstances. But, when to add this extra layer of complication really depends upon the specific metrics of CPU usage in your requests.
For instance, should I just distribute any incoming request right away, regardless of the intensity of the call, to other threads so that the main thread is never blocked? Or should I only use multithreading for intensive jobs?
This depends upon how you want to prioritize different requests. If you want requests prioritized in FIFO order, where each one gets to start based on when it arrived regardless of its type, then yes, you can just distribute all requests to threads. In fact, it's probably easier to just use the clustering module for this, because that's what it does. If, on the other hand, you want non-CPU-heavy requests to always run quickly and not wait behind CPU-heavy requests, then you may want to push only the CPU-heavy requests into a queue that is processed by a thread pool. This then allows the main thread to work on the non-CPU-heavy requests immediately, regardless of how many CPU-heavy requests are currently crunching away in the thread pool.
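Here's a rough sketch of the clustering option with the built-in cluster module (the port and handler are placeholders):

    // fork one worker per core; the master distributes incoming
    // connections among them (round-robin on most platforms)
    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isMaster) {
      for (let i = 0; i < os.cpus().length; i++) cluster.fork();
      cluster.on('exit', () => cluster.fork()); // replace a crashed worker
    } else {
      http.createServer((req, res) => {
        // each worker has its own event loop, so a CPU-heavy request
        // only stalls the worker it happened to land on
        res.end('handled by pid ' + process.pid + '\n');
      }).listen(3000);
    }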
I have a very simple nodejs application that accepts JSON data (approx. 1 KB) via a POST request body. The response is sent back to the client immediately, and the JSON is posted asynchronously to an Apache Kafka queue. The number of simultaneous requests can go as high as 10000 per second, which we are simulating using Apache JMeter running on three different machines. The target is to achieve an average response time of less than one second with no failed requests.
On a 4-core machine, the app handles up to 4015 requests per second without any failures. However, since the target is 10000 requests per second, we deployed the node app in a clustered environment.
Both clustering on the same machine and clustering between two different machines (as described here) were implemented. Nginx was used as a load balancer to round-robin the incoming requests between the two node instances. We expected a significant improvement in throughput (like documented here), but the results were the contrary.
The number of successful requests dropped to around 3100 requests per second.
My questions are:
What could have gone wrong in the clustered approach?
Is this even the right way to increase the throughput of Node application?
We also did a similar exercise with a Java web application in a Tomcat container, and it performed as expected: 4000 requests with a single instance and around 5000 successful requests in a cluster with two instances. This contradicts our belief that nodejs performs better than Tomcat. Is Tomcat generally better because of its thread-per-request model?
Thanks a lot in advance.
Per your request, I'll put my comments into an answer:
Clustering is generally the right approach, but whether or not it helps depends upon where your bottleneck is. You will need to do some measuring and some experiments to determine that. If you are CPU-bound and running on a multi-core computer, then clustering should help significantly. I wonder if your bottleneck is something besides CPU, such as networking, other shared I/O, or even Nginx? If that's the case, you need to fix that before you will see the benefits of clustering.
Is tomcat generally better because of its thread per request model?
No, that's not a good generalization. If you are CPU-bound, then threading can help (and so can clustering with nodejs). But if you are I/O-bound, then threads are often more expensive than async I/O like nodejs because of the resource overhead of the threads themselves and the overhead of context switching between them. Many apps are I/O-bound, which is one of the reasons node.js can be a very good choice for server design.
I forgot to mention that for HTTP we are using Express instead of the native http module provided by node. I hope it does not introduce overhead to the request handling?
Express is very efficient and should not be the source of any of your issues.
As jfriend said, you need to find the bottlenecks.
One thing you can try is to reduce the per-request overhead by using sockets to pass the JSON, in particular this library: https://github.com/uNetworking/uWebSockets.
The main reason is that an HTTP request is significantly heavier than a socket connection.
Good example: https://webcheerz.com/one-million-requests-per-second-node-js/
Lastly, you can also compress the JSON via HTTP gzip or a third-party module.
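As a quick sketch of the gzip idea with node's built-in zlib (the payload is a made-up example):

    const zlib = require('zlib');

    const payload = Buffer.from(JSON.stringify({ id: 1, values: [1, 2, 3] }));

    // compress before sending; reply with "Content-Encoding: gzip" so the
    // client knows to decompress
    zlib.gzip(payload, (err, compressed) => {
      if (err) throw err;
      console.log(payload.length + ' bytes raw, ' + compressed.length + ' gzipped');
    });

With express you can get the same effect by dropping in the compression middleware. Keep in mind that for ~1 KB payloads the CPU cost of compressing can outweigh the bandwidth saved, so measure before adopting it.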
In short: work on the weight of each request ^^
Hope it helps!
I have a simple nodejs webserver running. It:
Accepts requests
Spawns a separate thread to perform background processing
Background thread returns results
App responds to client
Using ApacheBench ("ab -r -n 100 -c 10"), performing 100 requests, 10 at a time.
Average response time of 5.6 seconds.
My logic for using nodejs is that it is typically quite resource-efficient, especially when the bulk of the work is being done by another process. It seems like the most lightweight webserver option for this scenario.
The Problem
With 10 concurrent requests my CPU was maxed out, which is no surprise since there is CPU-intensive work going on in the background.
Scaling horizontally is an easy thing to do, although I want to make the most out of each server for obvious reasons.
So, with nodejs, either raw or with some framework, how can one keep that under control so as not to overload the CPU?
Potential Approach?
Could I accept the request, store it in a db or some persistent storage, and have a separate process that uses an async library to process x jobs at a time?
In your potential approach, you're basically describing a queue. You can store incoming messages (jobs) there and have each process get one job at a time, only getting the next one when processing of the previous job has finished. You could spawn a number of processes working in parallel, for example as many as the number of cores in your system. Spawning more won't help performance, because multiple processes sharing a core will just run slower. Keeping one core free might be preferable, to keep the system responsive for administrative tasks.
Many different queues exist. A node-based one that uses redis for persistence and seems to be well supported is Kue (I have no personal experience using it). I found a tutorial for building an implementation with Kue here. Depending on the software your environment is running, though, another choice might make more sense.
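To show the shape of it, here's an untested sketch based on Kue's documented API (the job type and doCpuHeavyWork are made-up placeholders):

    const kue = require('kue');
    const queue = kue.createQueue(); // talks to redis on localhost by default

    // web process: accept the request, enqueue the job, respond right away
    queue.create('process-data', { payload: 'some-id' }).save();

    // worker process (run roughly one per core): pull one job at a time
    queue.process('process-data', 1, (job, done) => {
      doCpuHeavyWork(job.data.payload, done); // call done(err) when finished
    });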
Good luck and have fun!
We host about 150 websites (possibly scaling to 300+) that we are considering migrating to node.js. Most of the sites are fairly low-traffic, with <1 million pageviews per month.
Should each website be its own node.js process, or should we serve all websites using the same node.js process (or a small set of load-balanced processes)? Is there a technical limit or a reasonable limit to the number of node processes per server?
Process per site: feels inefficient, but I don't know whether it actually is. It would ensure one buggy site doesn't affect other sites.
Process per core/small set of processes: likely higher performance, but what happens when I need to update a site's codebase? Won't it take down other sites? Also, code failures in one site would affect other sites.
Ideally, I would prefer one process per site so that we could host all sites from each worker server. That way, when load increases, we can just spin up another identical worker server and load-balance between the two without having to arbitrarily say SiteA goes to ServerA and SiteB goes to ServerB. Any node.js gurus available to offer some wisdom?
All static file requests will likely be handled by Nginx or something like Varnish.
There are a lot of issues at play here. The big-picture answer is, it depends... as it always does when you bring in the whole "performance" discussion. That being said, the simplest way to get a solid Node setup is to note the following basic facts about NodeJS; I will also comment on their implications as they pertain to your questions.
The concurrency you get with Node works really well in certain situations, namely IO-heavy operations. What we're really talking about here is minimizing the amount of downtime waiting for the next request. Because of this, Node works really well in an environment where there is one process per core on a machine. Node does really well at maximizing the amount of CPU available to serve requests under heavy load. That said, if you have literally ZERO other work going on in your event loop, you can see minor performance increases (in terms of max requests/second/processor core) by having multiple node processes per core. But I've never seen any benefit from increasing this number past 3, even under circumstances where the entire event loop was literally just a file server.
On the process-per-site comment: this is a bad idea for many reasons. For one, a well-put-together node server can process thousands of requests per second. Our (company name omitted) servers, hosted through Amazon EC2 on medium clusters (lots of RAM, mid CPU clock, 4 cores), typically fail around 3000 requests per second per cluster. Our servers do a fair bit of CPU work; for simple file servers I'm sure you can do much better. Strictly speaking, sure, per site you will be able to serve more requests by launching each site in its own process/core (things escalate quickly here!), but it's not necessary from a cost perspective, and it overcomplicates your architecture. What I WOULD recommend is investing in a setup with a lot of RAM. The ability of your server to cache often-requested files will affect your performance infinitely more than launching an abundance of processes on a given machine.
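To illustrate the caching point, the crudest version looks something like this (no eviction or size limit, so strictly a sketch):

    const fs = require('fs');
    const cache = new Map(); // path -> Buffer, lives in RAM

    function serveFile(path, res) {
      if (cache.has(path)) {
        return res.end(cache.get(path)); // hot path: no disk I/O at all
      }
      fs.readFile(path, (err, data) => {
        if (err) { res.statusCode = 404; return res.end(); }
        cache.set(path, data); // the next request for this file hits RAM
        res.end(data);
      });
    }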
On the whole RAM thing: the number of processes you want to launch for a given core depends on two things. One is how much synchronous work is done in your event loop. The more synchronous work, the more time between a given request coming in and the event loop being ready to address the next one. If you have a busy event loop, you will be in a situation where you require more processes per CPU core. The other thing that can affect this, particularly relevant for file servers, is the amount of RAM. Node runs much better in a high-RAM environment, but you can say that about ANY file server, really... What this has to do with is the number of active asynchronous operations. One downside of the way node works is that under heavy loads you can get a large number of event handlers active at once. This is great for concurrency/simplicity; however, if your server is busy waiting around for a lot of async disk I/O to happen, it will slow down and crash much sooner than if you had plenty of RAM. If you don't have enough RAM to handle all of these event handlers, you will want to keep to the one-process-per-core arrangement. Otherwise, it is easier for Node to spin up many event handlers simultaneously, and again cause you to crash sooner than you would otherwise.
I don't really have enough information to tell you what you SHOULD do; that depends entirely too much on the architecture of your specific server, sites, size of your sites, amount of data, etc. But these three pieces of knowledge are the basic things that help you get the most out of your Node server. To be honest, your idea about load balancing, combined with the considerations above, should serve you nicely. Surely, micro-optimizations are possible, but if you do these things, you should easily see requests per second in the thousands before you start experiencing crashes because of DDOS-type conditions.
No, don't do it. Keep it simple! And check out http://12factor.net/.
A few hundred processes is nothing compared to the simplicity you otherwise lose. It would be a terrible decision, on so many levels, to have more than one site (or, "logical application unit") served by a single Node process.
If you're asking this question, you may want to explore Node more before you "migrate" to Node. Error handling and separation of concerns are more complicated in Node than in other situations. Specifically, neither the domain nor cluster APIs are mature. But really it's the philosophy of clean and simple application deployment that you'd be violating. I could go on and on.
I am trying to learn Node.js, and here are some of the points that I understand:
Node.js doesn't create a separate process for each request; instead, it is just one process that handles all requests.
It is asynchronous, which means you can attach a callback to a long-lasting operation and continue with the rest of your work without waiting for it to finish.
What I really don't understand is the author's point in Understanding node.js: "Everything runs in parallel except your code". I have understood the analogy and the code that explains it, but I still don't get what the distinction is between "everything" and "your code". I have often heard this said about node.js.
Also, people praise node.js for its efficiency, since the memory overhead of one concurrent connection may be as low as 8KB, but what about CPU load? Does node.js make it much lower compared to PHP+Apache?
Node.js uses a single thread any time it is running the JavaScript in your application. Tasks that are asynchronous (network, filesystem, etc.) are all handled on separate threads automatically for you. This means that you get much of the usefulness of a multithreaded application without having to worry about all of the trouble that comes with locking resources and whatnot.
Node is not a tool for every job. It is ideal for applications that are IO-bound. For example, if your application required a ton of work to process templates and whatnot, Node probably isn't for you. If instead you're just shuffling data around, Node can be very effective.
The reason Node is often quoted as being faster than servers like Apache is that it doesn't create a thread, along with all of its resources, to handle each request. In Apache, most of the time a thread handling a request is waiting on network or filesystem data, and while it waits, it is wasting resources. With Node, only one thread (in your application) processes all requests. Again, this is great for some things, but if you have a lot of processing to do, Node would not be effective, as it can really only handle a single request at a time in those situations.
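A tiny example of what that means in practice:

    const fs = require('fs');

    // the read itself happens off the main thread (libuv handles it);
    // only the callback runs back on your single JavaScript thread
    fs.readFile('/etc/hosts', 'utf8', (err, data) => {
      console.log('file ready');
    });

    console.log('logged first: the read did not block the event loop');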
This video does a pretty good job of explaining: http://www.youtube.com/watch?v=F6k8lTrAE2g&feature=youtube_gdata
Everything runs in parallel except your code.
It means if you do
while(true){}
anywhere in your code, the entire node application will stop. While the code you write is executing, nothing else runs: requests will not be handled, responses won't be returned, nothing. You have to be extremely careful not to hog the CPU in node.
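One common way to avoid hogging it, as a sketch: split a long computation into chunks and yield back to the event loop between them with setImmediate.

    // sums 0..n-1 in chunks of one million so other events can run in between
    function sumRange(n, done) {
      let total = 0;
      let i = 0;
      (function chunk() {
        const end = Math.min(i + 1e6, n);
        for (; i < end; i++) total += i;
        if (i < n) setImmediate(chunk); // let pending requests run first
        else done(total);
      })();
    }

    sumRange(1e9, (total) => console.log(total));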
but what about CPU load?
That completely depends on the nature of your application and the load. If your app is busy, it'll use more CPU.
Imagine a busy intersection with a traffic cop in the middle. When the cop is doing his job properly, hundreds of cars can pass through the intersection in a very fast and efficient way.
If the cop starts receiving and answering SMS messages on his cell while doing traffic, then things might go out of hand really quickly.
The traffic cop is your node.js app, and the time he spends doing SMS is what the author refers to as "your code".
In other words: node.js performance will shine the more you use it as a traffic cop. The more you use it to do things other than pulling and pushing data (e.g., sorting a list of numbers, rendering an HTML template), the more your capacity to accept and process new connections quickly will suffer.
"Everything" refers to everything else besides your code. For example, the stuff that handles HTTP. Another way to say the same thing is "your code doesn't wait for node.js to do stuff, like send data over TCP, because that's done asynchronously."
To answer your second question: I don't know which has less CPU load; I'm guessing they're similar. Node.js's touted advantage is that the CPU is better utilized due to the aforementioned asynchronicity.