Does a Node.js app with thousands of concurrent users really need to use a connection pooling mechanism?
EDITED:
The app could be an e-commerce app that requires a high volume of database reads and writes.
Not necessarily. It depends on the situation. You should be able to handle thousands of concurrent connections, but of course it all depends on what you do in those connection handlers. That is the only answer that can really be given with so few details in the question.
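If you do use a pool, the basic shape is just a shared pool object that handlers borrow connections from. A minimal sketch, assuming a PostgreSQL database and the 'pg' package (the connection settings, pool size, and the orders table are placeholders, not recommendations):

```
// Minimal sketch with the 'pg' package; connection settings, pool size,
// and the 'orders' table are assumptions for illustration only.
const { Pool } = require('pg');

const pool = new Pool({
  host: 'localhost',
  database: 'shop',
  user: 'app',
  password: 'secret',
  max: 10, // upper bound on open connections shared by all requests
});

// Each request borrows a connection from the pool instead of opening its own.
async function getOrder(id) {
  const { rows } = await pool.query('SELECT * FROM orders WHERE id = $1', [id]);
  return rows[0];
}
```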
Related
I have deployed my Node.js backend on Heroku Hobby dynos. There are 1500+ active users, and the API response time is sometimes very slow. Please help me figure out which dyno type is better for the backend deployment.
It always depends on your application. What type of operations and workload is your API handling? Do you have any synchronous/blocking operations? Is there a lot of I/O involved? More information about what you are trying to achieve would help in giving a better recommendation.
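To make the blocking question concrete: a synchronous call inside a handler stalls every other request served by that process. An illustrative Express-style sketch (the routes and file path are made up):

```
const express = require('express'); // assuming an Express app
const fs = require('fs');

const app = express();

// Blocking: the event loop is stuck until the whole file is read,
// so every other request on this process waits.
app.get('/report', (req, res) => {
  const data = fs.readFileSync('/tmp/big-report.json', 'utf8');
  res.send(data);
});

// Non-blocking alternative: other requests keep being served while the file loads.
app.get('/report-async', async (req, res) => {
  const data = await fs.promises.readFile('/tmp/big-report.json', 'utf8');
  res.send(data);
});

app.listen(3000);
```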
One best practice for Node.js is to scale horizontally, meaning you have multiple small servers handling the traffic instead of one big server (vertical scaling). So a good recommendation is to scale using multiple dynos: try scaling to 2 and measure again to see if that meets your performance needs.
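Within a single dyno or machine you can apply the same idea at the process level with Node's built-in cluster module; scaling to more dynos is the same pattern one level up. A minimal sketch (the port number is a placeholder):

```
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // One worker per CPU core; the master only forks and replaces workers.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  cluster.on('exit', () => cluster.fork());
} else {
  http.createServer((req, res) => {
    res.end(`handled by pid ${process.pid}\n`);
  }).listen(3000);
}
```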
Some recommended readings:
Good practices for high-performance and scalable Node.js applications
Optimizing Node.js Application Concurrency
I use Redis because it allows me to scale my applications horizontally (multiple servers). By using its pub/sub features, all my servers can communicate with each other without needing to share memory.
So far, cool! We can add more Node.js servers, BUT all these servers subscribe to one single Redis server. So we end up with many Node.js servers communicating with just one Redis server: we can serve more clients, but we still have only one Redis.
From my tests the Redis server uses fewer resources and can handle more load, but I still think this design has a single point of failure. What do you think?
What are the best ways to design a scalable system? I know about Redis master/slave replication, but I am still not convinced it is the best solution.
Yes, Redis is a single point of failure in what you describe. Not only in the sense that when it is down your app is down, but also in the sense that if one of your processes removes or corrupts the data, it is lost forever.
What you can do is use multiple Redis servers and have a good backup strategy.
See this tutorial for clustering:
https://redis.io/topics/cluster-tutorial
See these tutorials for backups:
http://zdk.blinkenshell.org/redis-backup-and-restore/
https://redis.io/topics/persistence
https://www.digitalocean.com/community/tutorials/how-to-back-up-and-restore-your-redis-data-on-ubuntu-14-04
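For reference, the cross-server fan-out described in the question is only a few lines with Redis pub/sub. A minimal sketch using the 'ioredis' package (the channel name, message shape, and the deliverLocally helper are made up for illustration):

```
const Redis = require('ioredis');

const pub = new Redis(); // connection used only for publishing
const sub = new Redis(); // a subscribed connection cannot issue other commands

// Placeholder for whatever delivers an event to sockets on *this* instance.
function deliverLocally(event) {
  console.log('deliver to local sockets:', event);
}

sub.subscribe('chat-events');
sub.on('message', (channel, raw) => {
  deliverLocally(JSON.parse(raw));
});

// Any server instance can broadcast; every instance (including itself) receives it.
function broadcast(event) {
  pub.publish('chat-events', JSON.stringify(event));
}

broadcast({ room: 'lobby', text: 'hello from one of the servers' });
```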
I am looking to implement some sort of chat server, and I want it to scale. This seems like a big question, so I expect the answers to be direction pointers, sort of exploratory.
The end-user clients are web or phone clients. I think some sort of WebSocket implementation, such as Socket.IO, would be nice.
On the server side I wish to use Node.js. I want the architecture to be scalable so that the number of users is not limited (well, within reason; I don't expect a big hit, and if it happens, it is reasonable to expect smarter, more experienced people to work on it instead of just me coding). The number of users per chat room should hopefully not be limited, or at least capped at some large fixed number. That means I need to scale horizontally across several servers written in Node.
Suppose some load balancer (hopefully not a single point of failure in the future, though I don't know how I would achieve that, or maybe I'd just move to AWS) dispatches the Socket.IO connections from the end clients to the chat servers. Users connected to different servers may be in the same room, so messages need to be sent across servers.
How would I feasibly implement something like this? Hopefully not too complex.
Questions:
(1) If all servers need to handle all messages, since users can be logged on via any of the servers, does this scale?
(2) Do I need some sort of message queue for the servers to talk among themselves? Is RabbitMQ's pub/sub usable for this? Or, if ZeroMQ, how would I scale with pub/sub? The ZeroMQ guide has explanations for scaling to more than one server with REQ/REP-type applications, but not for pub/sub.
(3) Or should I start with XMPP?
I am hoping to keep this as simple as possible.
There's a rather good explanation at the Socket.io site. Have a look at
http://socket.io/docs/using-multiple-nodes/
It suggests using Nginx as an HTTP load balancer, Node.js clustering (with sticky sessions), and Redis as the message backend.
I think your goals should be achievable with little to no coding involved, only using the given modules and configuration mechanisms.
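For completeness, the Redis part of that setup looks roughly like this in each server process (a sketch using the socket.io-redis adapter; the exact package name and API vary between Socket.IO versions, and the sticky-session handling lives in the load balancer, so it is not shown):

```
// One of several identical Socket.IO processes, wired to Redis so that
// broadcasts reach clients connected to the other processes as well.
const http = require('http');
const socketio = require('socket.io');
const redisAdapter = require('socket.io-redis');

const server = http.createServer();
const io = socketio(server);

// All processes point at the same Redis instance; the adapter relays
// emitted events between them over pub/sub.
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('chat message', (msg) => {
    // reaches every connected client on every process, not just this one
    io.emit('chat message', msg);
  });
});

server.listen(3000);
```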
I am thinking of creating a real-time app where users can collaborate, and I found Node.js + Socket.IO to be one of the solutions for this type of problem.
I hear from other developers that there will be a bottleneck in the number of sockets my server can give out to users. So if I have hundreds of users collaborating at the same time, the number of open sockets will run out and users will not be able to connect. Is this a valid concern?
Update: on a somewhat related note, I'm looking to use SockJS instead of Socket.IO. There is a thread that explains the pros and cons of these libraries. Also, this is a good read.
For hundreds of users I don't think it is a concern.
Sockets, as you know, keep a persistent connection between the client and the server, and both parties can start sending data at any time. Keeping them open is not a problem so much as handling the load in terms of messages sent per second.
Socket.IO can easily handle 1000 concurrent connections, but it will fail if it is sending more than 8-10k messages per second. You will hit that load barrier before your sockets are exhausted, and in most cases handling more concurrent users translates to higher load. So don't worry about running low on sockets. Scaling beyond that barrier is what requires more server resources.
Helpful links:
Socket.IO - are the open connections a concern?
http://www.quora.com/How-do-I-scale-socket-io-servers-2
There are already solutions using this approach, like Cloud9, and they work well. There will be a point where you will need to scale out, so if you are planning something big I would think about it.
Here are some tests on Socket.IO with 10,000 concurrent connections. It looks like a good solution, but not an easy one because of the fallback mechanism.
I want to test how many concurrent connections a Node.js application can handle. I have the application ready; what is the best way to test its capacity?
If your application is using HTTP, I recommend Siege.
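A typical invocation might look like this (the URL, concurrency, and duration are placeholders, not recommendations):

```
# 255 simulated concurrent users hitting the endpoint for one minute
siege -c 255 -t 1M http://localhost:3000/
```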