Okay, so I'm trying to build a web service for Twitter automation. I have built bots before that work only for a single account.
The service I'm looking to build will let users register and then set up some preferences (like "follow users that tweet #ILoveProgramming").
My main question is: right now, if I need to run the bot, I just type "node bot.js" and the bot runs. How do I run multiple processes for all the hundreds of users that'll be using my web service?
Node.js provides a cluster module, which spins up multiple processes of your program.
It basically involves creating a master process and worker processes.
The docs for it are available here.
I would also suggest using a process manager like pm2, which supports clustering and manages the processes very well. You can read about it here.
With pm2, running something like
pm2 start bot.js -i max
would start one instance of the bot per CPU core on your machine.
Running a process for every user will not be a scalable way to achieve this. You can instead use something like async.each to run the same function for every user in parallel within a single process.
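The fan-out can be sketched without extra dependencies using Promise.all (async.each from the async library does the same job with callbacks). The user list and runBotForUser here are placeholders for your database query and real bot logic:

```javascript
// Placeholder user list; in the real service this would come from your database.
const users = [
  { id: 1, hashtag: 'ILoveProgramming' },
  { id: 2, hashtag: 'nodejs' }
];

// Placeholder for the real per-user bot logic.
async function runBotForUser(user) {
  return `bot ran for user ${user.id} (#${user.hashtag})`;
}

// Run the bot for every user in parallel within a single process.
function runAllBots(users) {
  return Promise.all(users.map(runBotForUser));
}

runAllBots(users).then((results) => results.forEach((r) => console.log(r)));
```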
You can specify cluster information in a pm2 configuration file like this:
File: process.yml
apps:
  - script: <your script name>
    name: <name of your app>
    instances: <number of instances you want to run>
    exec_mode: cluster
and then start pm2 like this:
pm2 start process.yml
Related
In my project I have several servers which run Node.js applications using PM2; those were not created by me, and I am not that familiar with PM2. Now I need to start a new server, which is simply a CRON process that queries an ElasticSearch instance.
There are no routes or anything in it, just a CRON with some logging.
Here is my dilemma. I have played with PM2 and become somewhat familiar with what it is and what it does. But the question is: how should I run it?
The previous projects do have a PM2 config.json with many parameters, and they are started in cluster mode (handled with Nginx); when I start them I see all the processes become daemons. But in my case I don't need that. I just need it to run as a single service.
In other words, if I use the configuration file to run PM2, I see it spawned in cluster mode, which creates chaos as my CRON is fired many times. I don't need that. If I start it in fork mode, it also spawns instances, but all of them die except one (because they are using the same port). I also don't need that.
I just need single service.
I managed to run my CRON app.js with a single line, as simple as:
pm2 start app.js. It runs in a single thread, and I can see its info with pm2 status. All fine.
If I run it with a single line (as in my case), is that considered OK? Based on my knowledge, if I use config.json it will always run in fork or cluster mode.
Is it OK to run it with a single line, or do I still need to use a config.json file?
If you only need one process to be run, as is the case here, you're doing the right thing.
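If you do ever want a config file for this single cron process (for restart policies, logs, and so on), you can pin it to one fork-mode instance so PM2 never clusters it. A sketch (the app name "es-cron" is an assumption):

```json
{
  "apps": [{
    "name": "es-cron",
    "script": "app.js",
    "instances": 1,
    "exec_mode": "fork"
  }]
}
```

With instances set to 1 and exec_mode set to fork, pm2 start config.json behaves like pm2 start app.js.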
We want the ability to run a demo server independently from the testing environment. For this purpose we use two Node.js instances run with different options (ports).
I am trying to implement two different CI jobs for managing these two servers independently, but I see no way to stop the Node.js instances selectively.
Is there a way to make something like named instances, to stop/start/restart them separately?
Or is the only way to use something like forever to watch file changes (I want to avoid this because of the time limit)?
You can use forever.js to name each application you run:
forever --uid app start app.js
forever --uid app1 start app.js
Then you can restart individual apps:
forever restart app1
I am currently running an Express server using the vanilla Node.js cluster setup, as demonstrated here:
http://rowanmanning.com/posts/node-cluster-and-express/
I'd like to move the server over to Sails.js, and I am wondering if anyone knows how to configure Sails to support the Node cluster (no proxies, just a simple cluster).
TX,
Sean.
First thing: if you want to use sessions, you need to use a session store. Otherwise sessions will not be shared between instances of your app.
Then, the easiest way is to use something like PM2, which can be found here: https://github.com/Unitech/pm2
You don't need to make changes to your app.js files; the app should be written as a standard non-clustered Sails app. PM2 will do the job.
Just start the app with pm2 start app.js -i x, where x is the number of instances, or use pm2 start app.js -i max, which will start as many instances as you have processor cores (or threads).
PM2 is great and very stable, and it has many features for running smoothly in production; however, it has some flaws in development. If you ever have a problem with "port already in use" after stopping or even deleting the app that was using it, you will have to use pm2 kill, which will kill ALL your apps.
Other than that, it's just great, with some additional monitoring tools.
You can use the PM2 library to create different instances, like a cluster.
To do it, you have to use your app.js file, like:
pm2 start app.js
If you want to run the maximum number of instances available:
pm2 start app.js -i max
Check the documentation for more: https://github.com/Unitech/pm2
I'm working through StrongLoop's getting-started instructions and created my sample app. While the instructions tell me to use:
slc run .
to start my application, I noticed I can equally run my application with:
node app.js
and get the same result. Obviously, by using the second approach I can integrate my StrongLoop application with tools such as forever.
So my question is: what extra benefits does slc run offer? Does it have functionality such as auto-restart?
You can do more with slc than with node app.js.
slc is a command-line tool for StrongLoop, which has more features. If you just want to run the app, it doesn't matter much, but if you want to do more, you can.
Here's the documentation: http://docs.strongloop.com/display/SLC/StrongLoop+Controller
It doesn't have many features for development (such as auto-restart), but it will help with managing servers and the like.
My favorite feature is scaling a node app using slc.
You can do "slc run . size 2". This will spin up 1 master and 1 worker process as part of a single cluster. Now, if my workload increases and resources are low (which I know from StrongOps monitoring, slc strongops) and I want to scale the app without having to stop it and re-engineer, I can just do the following:
"slc clusterctl size 4". This will spin up 2 more worker processes and automatically attach them to the same application cluster at run time. The master will now auto-distribute the workload to the new processes.
This is built on top of the Node cluster module, but there is much more to it. It also uses cluster-store to store shared cluster-state objects.
Another feature is "slc debug". It launches Node Inspector, brings the application code into the runtime context, and helps me debug, load source maps, and iterate through test runs.
Based on the latest release at the moment (v2.1.1), the main immediate benefit of running slc run instead of node app.js is that you get a REPL at the same time (lib/run-reple.js#L150L24). It looks like all you have to do is have main set properly in package.json, since it uses Module._load().
If you run slc run app.js you get no benefit as far as I can tell: lib/commands/run.js#30.
Yay open source! https://github.com/strongloop/strong-cli
One of my favorite features is slc debug app.js, which brings up node-inspector for debugging. It's nice CLI sugar, but of course you can totally run node and configure this manually.
I created a Linux init.d daemon script which you can use to run your app with slc as a service:
https://gist.github.com/gurdotan/23311a236fc65dc212da
Might be useful to some of you.
slc run can only be used for a StrongLoop application, while node . or node [fileName] can be used to execute any Node.js file.
I want to deploy a Node.js app which depends on Redis. Both processes will run on the same VPS. There are plenty of examples of how to daemonize and monitor Node, and I've also found some uncommented configuration for Redis. How do I put it together? Can I just combine these two snippets in one monitrc file?
You could use Supervisord to orchestrate the launch of Redis and your Node.js apps (use the priority parameter to start Redis before your apps). Supervisord will automatically restart your Node.js apps if they crash.
You can then set up monit on top of it to be alerted when something goes wrong and to restart your Node.js processes if they use too much memory/CPU or are no longer reachable on a specific port.
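Putting it together, a supervisord.conf sketch might look like this (all paths, program names, and log locations are assumptions; adjust them to your setup):

```ini
[program:redis]
command=/usr/bin/redis-server /etc/redis/redis.conf
priority=1                ; lower priority value starts first
autorestart=true

[program:node-app]
command=/usr/bin/node /srv/myapp/app.js
priority=2                ; started after redis
autorestart=true
stdout_logfile=/var/log/node-app.log
```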