I am using pm2 in my app (Ubuntu 14.04, 2 CPUs, 4 GB RAM).
I want to load-test or stress-test the load balancing between clusters (I hope I'm saying that right) to see how effective it is. What is the best way to do so?
I am using the latest Node.js (0.12.7) and the latest pm2 version, and I have 2 cluster instances running, one per CPU.
Now I need to check response times when my server is at its limits, and even see when it crashes and why (it's a staging server, so I don't mind).
I know 'siege' and have used it a bit, but it's not what I want; I want something that can push the server to its limits...
Any suggestions?
You can try http://loadme.socialtalents.com
If the free tier isn't enough, you can request a quota increase.
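If you want a local command-line tool rather than a hosted service, load generators such as wrk or ApacheBench can push a two-core box well past its limits. A sketch; the URL, duration, and connection counts below are placeholders you'd tune for your staging server:

```shell
# wrk: 4 client threads, 200 open connections, 60 seconds of sustained load.
wrk -t4 -c200 -d60s http://your-staging-host:3000/

# ApacheBench alternative: 10000 total requests, 200 concurrent.
ab -n 10000 -c 200 http://your-staging-host:3000/
```

Both report latency distribution and requests/second, so you can watch the response times degrade as you raise concurrency until the process falls over.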
We have a production chat app built with Socket.IO/Node.js.
We use Express.
Node.js is a bit old: 10.21.0.
Socket.IO is at 3.1.1.
Our machine is a VM with 4 vCPUs and 16 GB RAM.
We use pm2 to start the Node app with environment variables.
We are facing an issue when there are about 500 users in the chat writing messages. Bandwidth usage is around 250 Mbps upload (we have 10G, so that's not the issue). This is where the problem begins: our logs fill up with connections/disconnections, and pm2 restarts the app.
Checking in more detail with "pm2 monit", we can see that only one processor is used, and it sits above 100% most of the time.
We read some documentation about clustering (cluster + fork). It seems interesting, but in our case, when we tested it, it was as if we had several separate chat apps: users in the same "chat room" ended up in different workers, which is not OK.
Do you have an idea how we can fix that and use all processors/cores?
We are already thinking of starting by upgrading Node.js.
Thanks
Niko
Since Node.js is single-threaded (aside from worker threads), upgrading Node won't get you far on its own (although newer Node releases ship newer V8 engines, which may be faster).
"it's like we had few chat apps so for the same 'chat room', users are in different workers so it's not OK."
This sounds like your app keeps shared state, such as room membership, in global variables or other in-process storage. If you want to use cluster or PM2's cluster mode, that state will need to live somewhere else, for example in a Redis server or a separate Node application.
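For Socket.IO 3.x specifically, the socket.io-redis adapter routes broadcasts through Redis pub/sub, so a room spans every pm2 worker instead of living in one process's memory. A minimal wiring sketch; it assumes a Redis server reachable on localhost:6379 and the socket.io and socket.io-redis packages installed:

```javascript
const { createServer } = require("http");
const { Server } = require("socket.io");
const redisAdapter = require("socket.io-redis");

const httpServer = createServer();
const io = new Server(httpServer);

// All room membership and broadcast traffic goes through Redis,
// so io.to(room) reaches sockets connected to any worker.
io.adapter(redisAdapter({ host: "localhost", port: 6379 }));

io.on("connection", (socket) => {
  socket.on("join", (room) => socket.join(room));
  socket.on("chat", (room, msg) => io.to(room).emit("chat", msg));
});

httpServer.listen(3000);
```

With the adapter in place you can run as many workers as you have cores (pm2 cluster mode), and users in the same room will see each other regardless of which worker holds their connection.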
My company has its own Apache Tomcat server, running MySQL and PHP websites on it.
Can I install Node.js and MongoDB in there without affecting other projects?
Yes, you can install them. But before doing that, you have to check a few things:
How much physical memory do you have?
How much RAM is free on average?
How many CPU cores are free to use?
How much CPU utilization is currently happening?
Then you can work out how many CPU cores you need for Node.js. If you have a simple app, 1 core is enough. Check the same for MongoDB, including how much data you need to store.
No one can tell without knowing these parameters. You have to measure them and judge whether you are good to go.
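On a typical Linux box, the stock tools are enough to gather those numbers; a quick sketch:

```shell
free -m               # total, used, and free RAM in MB
nproc                 # number of CPU cores
top -bn1 | head -n 5  # one-shot snapshot of current CPU utilization
df -h /               # disk space available (relevant for MongoDB data)
```

Run these during your busiest hours, not at night, so the "free" numbers reflect what Node.js and MongoDB would actually have left to work with.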
I would personally recommend that your company start using virtualization technology, e.g. Hyper-V or VMware ESXi. Then you can have your primary production server, with all of your clients' websites and/or applications, on one virtual machine. Make a copy of that server onto a secondary VM, and you can safely test new software and projects there. VMs also make full system backups easy, so any major system failure becomes easily recoverable.
I am running a node app on a Digital Ocean cloud server, and the app merely services API requests. All client-side assets are served by a CDN, and the DB is accessed remotely, rather than stored on the server instance itself.
I have the choice of a greater number of vCPUs or more RAM. I have no idea what that means in practice, so any feedback is a great help.
A single node.js server runs your JavaScript on only one CPU, so more CPUs won't make your JavaScript run any faster unless you cluster your app and run multiple node.js processes sharing the load, or unless there are other processes on the same server that your server depends on.
Having more RAM (memory) will only improve things if you actually need more RAM. That depends entirely upon the memory usage profile of your app and how much RAM you already have available. You would probably already know if you were running out of RAM, because you'd either see a drastic slow-down when the OS starts page swapping, or your process would crash when out of memory.
So, in order to know which would benefit you more, you really need more data on how your existing app is performing (whether it ever bogs down with CPU-intensive operations, and how much RAM it uses compared to how much is available). It is quite possible that neither will actually matter to you - it totally depends upon the usage profile of your server process.
If you have no more data than this and have to make a choice, choose the vCPUs because there are some circumstances where it might help you (and gives you the option to go to clustering in the future if needed) whereas adding more RAM when you aren't even using what you already have won't help you at all.
I have just learned about Heroku and was pretty excited to test it out. I quickly assembled their demos with the Node.js language and stumbled across a problem. When running the application locally, ApacheBench prints roughly 3500 requests/s, but on the cloud that drops to 10 requests/s and does not change with network latency. I cannot believe that this is the performance they are asking 5 cents/hour for, and I strongly suspect my application is not multi-threaded.
This is my code.js: http://pastebin.com/hyM47Ue7
What configuration do I need to apply in order to get it running faster on Heroku? Or what other web servers for node.js could I use?
I am thankful for every answer on this topic.
Your little example is not multi-threaded (not even on your own machine). But you don't need to pay for more dynos immediately: you can make use of multiple cores on a dyno; see this answer: Running Node.js App with cluster module is meaningless in Heroku?
To repeat that answer: a Node solution for using multiple processes, which should increase your throughput, is the (built-in) cluster module.
I would guess that you can easily get more than 10 req/s from a Heroku dyno; see this benchmark, for example:
http://openhood.com/ruby/node/heroku/sinatra/mongo_mapper/unicorn/express/mongoose/cluster/2011/06/14/benchmark-ruby-versus-node-js/
What do you use to benchmark?
You're right, the web server is not multi-threaded until you pay for more web dynos. I've found Heroku handy for prototyping; depending on the monetary value of your time, you may or may not want to use it for a scalable server instead of going to EC2 directly.
OK, so I have an idea I want to pursue, but before I do I need to understand a few things fully.
Firstly, the way I think I'm going to go ahead with this system is to have 3 servers, described below:
The first server will be my web front end; this is the server that will be listening for connections and responding to clients. This server will have 8 cores and 16 GB RAM.
The second server will be the database server - pretty self-explanatory really: connect to the host and set/get data.
The third server will be my storage server; this is where downloadable files are stored.
My first question is:
On my front-end server I have 8 cores; what's the best way to scale Node so that the load is distributed across the cores?
My second question is:
Is there a system out there I can drop into my application framework that will allow me to talk to the other cores and pass messages around to save I/O?
And my final question:
Is there any system I can use to help move content from my storage server to the requester on the front-end server with as little overhead as possible? Speed is a concern here, as we would have 500+ clients downloading and uploading concurrently at peak times.
I have finally convinced my employer that node.js is extremely fast and the latest in programming technology, and that we should invest in a platform for our intranet system, but he has requested detailed documentation on how this could be scaled across the hardware we currently have available.
"On my front end server, I have 8 cores, what's the best way to scale node so that the load is distributed across the cores?"
Take a look at the node.js cluster module, which is a multi-core server manager.
Firstly, I wouldn't describe the setup you propose as 'scaling'; it's more like 'spreading'. You only have one app server serving requests. If you add more app servers in the future, you will have a scaling problem then.
I understand that node.js is single-threaded, which implies it can only use a single core. How (or whether) you can scale it is not my area of expertise, so I'll leave that part to someone else.
I would suggest NFS mounting a directory on the storage server to the app server. NFS has relatively low overhead. Then you can access the files as if they were local.
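A sketch of that setup; the hostname and paths are placeholders, and it assumes the storage server already exposes an NFS export:

```shell
# On the app server: mount the storage server's export locally.
sudo mount -t nfs storage.internal:/srv/files /mnt/files

# Node then reads those files as if they were on local disk, e.g.
#   fs.createReadStream('/mnt/files/some-download.zip').pipe(res)
```

Streaming with fs.createReadStream keeps per-download memory constant, which matters with 500+ concurrent transfers.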
Concerning your first question: use cluster (we already use it in a production system; it works like a charm).
When it comes to worker messaging, I cannot really help you out much, but your best bet is cluster too. Maybe there will be functionality that provides "inter-core" messaging across all cluster workers in the future (I don't know cluster's roadmap, but it seems like a natural idea).
For your third requirement, I'd use a low-overhead protocol like NFS or (if you can go really crazy with infrastructure) a high-speed SAN backend.
One more piece of advice: use MongoDB as your database backend. You can start with low-end hardware and scale your database instance with ease using MongoDB's sharding and replica set features (if that is some kind of requirement).