I was looking for a module that can start a process, in my case a binary file on a Unix system. I need to be able to start, stop, restart and get the process status, and if possible detect in real time when the process state changes (starting up or shutting down). I found some solutions but they were not suitable, for example PM2 (they have a programmatic API) but I could not launch the binary file; I believe it only works for applications written in Node.js. If someone knows a module, or a simple and fast way to achieve this, I would be thankful.
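For what it's worth, here is a minimal sketch of such a wrapper using Node's built-in child_process module; the binary path, flag and class name are made up for illustration:

const { spawn } = require('child_process');

class ProcessWrapper {
  constructor(command, args = []) {
    this.command = command;
    this.args = args;
    this.child = null;
  }

  start() {
    if (this.child) return;                          // already running
    this.child = spawn(this.command, this.args, { stdio: 'inherit' });
    console.log('starting, pid:', this.child.pid);
    this.child.on('exit', (code, signal) => {        // real-time "shut down" notification
      console.log('stopped', { code, signal });
      this.child = null;
    });
    this.child.on('error', (err) => {                // e.g. binary not found
      console.error('failed to start:', err.message);
      this.child = null;
    });
  }

  stop(signal = 'SIGTERM') {
    if (this.child) this.child.kill(signal);
  }

  restart() {
    if (!this.child) return this.start();
    this.child.once('exit', () => this.start());     // start again once the old one has exited
    this.stop();
  }

  status() {
    return this.child ? 'running' : 'stopped';
  }
}

// usage: wraps any executable, not just Node scripts
const proc = new ProcessWrapper('./my-binary', ['--some-flag']);
proc.start();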
I'm trying to create a simple Twitter bot to learn some Node.js skills.
It works fine on my local computer. I start the script with node bot.js and then close it with Ctrl + C.
I've uploaded the files to a server (Krystal hosting). I've ssh'd into the server and then used $ source /home/[username]/nodevenv/twitterbot/10/bin/activate, which I think puts me into a Node environment (I'm not really clear what is happening here).
From here I can run node bot.js. My Twitter bot runs fine and I can leave the terminal. What I've realised now is that I don't know how to stop this script.
Can someone explain how I should be doing this? Is there a command I can enter to stop the original bot.js process? Since looking into this, it seems perhaps I should have used something like the pm2 process manager. Is this correct?
Any pointers would be much appreciated.
Thanks,
B
You can kill it externally by nuking the process from an OS command line or in an OS GUI (the exact procedure varies by OS). Ctrl-C from the shell is one version of this, but it can also be done without the shell the process was started in, by killing the process directly.
Or, you can add a control port (a simple little http server running on a port only accessible locally) that accepts commands that let you do all sorts of things with the server such as extract statistics, shut it down, change the configuration, tell it to clear caches so content updates take effect immediately, etc... Shutting down the server this way allows for a more orderly shut-down from code within the server. You can even stop accepting incoming connections, but wait for existing http connections to complete before shutting down, close databases normally, etc...
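A bare-bones version of that control port could look something like this (the port number, URLs and module path are arbitrary, and mainServer stands in for whatever http.Server your application actually creates):

const http = require('http');
const mainServer = require('./server');   // hypothetical module exporting your real http.Server

const control = http.createServer((req, res) => {
  if (req.url === '/shutdown') {
    res.end('shutting down\n');
    // stop accepting new connections, let in-flight requests finish, then exit
    mainServer.close(() => process.exit(0));
  } else if (req.url === '/stats') {
    res.end(JSON.stringify({ uptime: process.uptime(), memory: process.memoryUsage() }));
  } else {
    res.statusCode = 404;
    res.end('unknown command\n');
  }
});

// bind to 127.0.0.1 so the control port is only reachable locally
control.listen(8081, '127.0.0.1');

You could then shut the bot down from the same machine with something like curl http://127.0.0.1:8081/shutdown.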
Or, you can use a monitoring program such as PM2 or forever which, in addition to restarting the server automatically if it should ever crash, also offers commands for shutting it down (these just send it certain signals, much like Ctrl-C does).
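For example, with PM2 installed globally (npm install -g pm2), managing the bot from the shell looks roughly like this (the process name is arbitrary):
pm2 start bot.js --name twitterbot
pm2 stop twitterbot
pm2 restart twitterbot
pm2 logs twitterbot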
To give context, I'm building an IoT project that requires me to monitor some sensor inputs. These include Temperature, Fluid Flow and Momentary Button switching. The program has to monitor, report and control other output functions based on those inputs but is also managed by a web-based front-end. What I have been trying to do is have a program that runs in the background but can be controlled via shell commands.
My goal is to be able to do the following on a command line (bash):
pi#localhost> monitor start
sensors are now being monitored!
pi#localhost> monitor status
Temp: 43C
Flow: 12L/min
My current solution has been to create two separate programs, one that sits in the background, and the other is just a light-weight CLI. The background process listens on a bi-directional Linux socket file which the CLI uses to send it commands. It then sends responses back through said socket file for the CLI to process/display. This has given me many headaches but seemed a better option than using network sockets or mapped memory. I just have occasional problems with access to the socket file when my program is improperly terminated, which then requires me to "clean" the directory by manually deleting the socket file.
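If the monitor happens to be written in Node, the daemon side of that socket can be sketched roughly like this; the socket path and command names are placeholders, and a stale socket left behind by an unclean exit is removed on startup:

const net = require('net');
const fs = require('fs');

const SOCKET_PATH = '/tmp/monitor.sock';   // placeholder path

// remove a stale socket left behind by an improperly terminated run
if (fs.existsSync(SOCKET_PATH)) fs.unlinkSync(SOCKET_PATH);

const server = net.createServer((conn) => {
  conn.on('data', (data) => {
    const cmd = data.toString().trim();
    if (cmd === 'status') conn.end('Temp: 43C\nFlow: 12L/min\n');   // fake readings for illustration
    else conn.end('unknown command\n');
  });
});

server.listen(SOCKET_PATH);

// remove the socket file on a clean shutdown so the next start is not blocked
process.on('SIGINT', () => server.close(() => {
  try { fs.unlinkSync(SOCKET_PATH); } catch (e) { /* already gone */ }
  process.exit(0);
}));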
I'm also hoping to have the program insure there is only ever one instance of the monitor program running at any given time. I currently achieve this by capturing my pid and saving it to a file which I can look for when my program is starting. If the file exists, I self terminate with error. I really don't like this approach as it just feels too hacky for me.
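A slightly less fragile variant of that pid-file check (again sketched in Node, with a placeholder path) also verifies that the recorded pid is still alive before refusing to start:

const fs = require('fs');
const PID_FILE = '/tmp/monitor.pid';   // placeholder path

function alreadyRunning() {
  try {
    const pid = parseInt(fs.readFileSync(PID_FILE, 'utf8'), 10);
    process.kill(pid, 0);              // signal 0 only checks whether the pid exists
    return true;                       // it does, so another instance is running
  } catch (e) {
    return false;                      // no pid file, or the recorded pid is dead
  }
}

if (alreadyRunning()) {
  console.error('monitor is already running');
  process.exit(1);
}
fs.writeFileSync(PID_FILE, String(process.pid));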
So my question: Is there a better way to build a background process that can be easily controlled via command line? or is my current solution likely the best available?
Thanks in advance for any suggestions!
I am developing a Linux application using Python 3. The application synchronizes the user's files with the cloud. The files are in a specific folder. I want a process or daemon to run in the background and, whenever there is a change in that folder, start the synchronization process.
I have written modules in Python 3 for the synchronization, but I don't know how to run a process in the background that automatically detects changes in that folder. This process should always run in the background and should start automatically after boot.
You have actually asked two distinct questions. Both have simple answers and plenty of good resources online, so I'm assuming you simply did not know what to look for.
Running a process in the background is called "daemonization". Search for "writing a daemon in python". This is a standard technique for all POSIX-based systems.
Monitoring a directory for changes is done through an API set called inotify. This is Linux specific, as each OS has its own solution.
My application is built with three distinct servers: each one of them serves a different purpose and they must stay separated (at least in order to use more than one core). As an example (this is not the real thing), you could think of this setup as one server managing user authentication, another serving as the game engine, and another as a pub/sub server. Logically the "application" is only one, and clients connect to one server or another depending on their specific need.
Now I'm trying to figure out the best way to run a setup like this in a production environment.
The simplest way could be to have a bash script that runs each server in the background one after the other. One problem with this approach is that if I need to restart the "application", I would have to save each server's pid and kill each one.
Another way would be to use a node process that runs each server as its own child (using child_process.spawn). Node spawning nodes. Is that stupid for some reason? This way I'd have a single process to kill when I need to stop/restart the whole application.
What do you think?
If you're on Linux or another *nix OS, you might try writing an init script that starts/stops/restarts your application; here's an example.
Use specific tools for process monitoring. Monit, for example, can monitor your processes by their pid and restart them whenever they die, and you can manually restart each process with the monit CLI or its web GUI.
So in your example you would create 3 independent processes and tell monit to monitor each of them.
I ended up creating a wrapper/supervisor script in Node that uses child_process.spawn to execute all three processes.
it pipes each child's stdout/stderr to its own stdout/stderr
it intercepts errors from each child, logs them, then exits (as if the error were its own)
It then forks and daemonizes itself
I can stop the whole thing using the start/stop paradigm.
Now that I have a robust daemon, I can create a unix script to start/stop it on boot/shutdown as usual (as @Levi says)
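Roughly, the spawning part of such a wrapper can be as small as the sketch below; the script names are placeholders and the daemonization step is left out:

const { spawn } = require('child_process');

const scripts = ['auth.js', 'engine.js', 'pubsub.js'];   // placeholder server scripts

const children = scripts.map((script) => {
  // stdio: 'inherit' makes each child write to the wrapper's stdout/stderr
  const child = spawn(process.execPath, [script], { stdio: 'inherit' });
  child.on('exit', (code, signal) => {
    console.error(`${script} exited (code ${code}, signal ${signal}); stopping everything`);
    children.forEach((c) => { if (c !== child) c.kill(); });   // take the whole application down
    process.exit(code === 0 ? 0 : 1);
  });
  return child;
});

// forward a stop request to all children so one kill stops the whole application
process.on('SIGTERM', () => children.forEach((c) => c.kill('SIGTERM')));
process.on('SIGINT', () => children.forEach((c) => c.kill('SIGINT')));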
See also my other (related) Q: NodeJS: will this code run multi-core or not?
I am trying to set up a development environment for node.js. I assumed at first that it requires something similar to the traditional "localhost" server approach, but I found myself at a loss. I managed to start a node.js hello world app from the terminal, which didn't look like a big deal - having to start an app from the console isn't that hard. But after some tweaking, I found out that changes aren't shown in the browser immediately - you need to run "node [appName here]" again.
So, my question is:
Is there a software package or a tutorial on how to create a more "traditional" development server on your local machine? Along with port listening setup, various configurations, root directories etc. (things that are standard in stacks like XAMPP, BitNami or even a prepackaged Ubuntu LAMP). Since I'm new at node.js, I can't really be sure I'm even searching for the right things on google.
Thanks.
Take a look at:
https://github.com/remy/nodemon
It'll allow you to run nodemon app.js, and the server will restart automatically whenever you change a file (after a crash it waits for the next file change before starting again).
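The app itself stays an ordinary Node script that listens on a port of your choosing, for example (port 3000 is arbitrary):

// app.js - a minimal server; nodemon restarts it whenever this file changes
const http = require('http');

http.createServer((req, res) => {
  res.end('hello world\n');
}).listen(3000);

console.log('listening on http://localhost:3000');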
To do this I built a relatively small tool in NodeJS that allows me to start/stop/restart a NodeJS child process (which contains the actual server) and see/change configuration options and builds/versions of the application, with admin options available on a different tcp port. It also monitors said child process to automatically respawn it if there was an error (and after x failed attempts it stops trying and contacts me).
While I'm prohibited from sharing source code, this requires the (built-in) child_process module, which has a spawn method that returns a child process object. That object contains a pid (process id) which you can use with the kill method to kill said child process. Instead of killing it outright, you could also work with SIGINT and catch it within your child application to first clean up some stuff and then exit. It's relatively easy to do.
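In code, the spawn/kill/SIGINT dance described above looks roughly like this (file names and the timeout are placeholders):

// parent.js - supervises the actual server
const { spawn } = require('child_process');

const child = spawn(process.execPath, ['server.js'], { stdio: 'inherit' });
console.log('started child with pid', child.pid);

child.on('exit', (code) => console.log('child exited with code', code));

// later, ask the child to shut down gracefully
setTimeout(() => child.kill('SIGINT'), 10000);

// server.js (the child) - catch SIGINT, clean up, then exit
process.on('SIGINT', () => {
  // close connections, flush logs, etc.
  process.exit(0);
});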
Some nice reading material regarding this.
http://nodejs.org/docs/v0.5.5/api/child_processes.html