Make a process run in the background in Linux

I am developing a Linux application using Python 3. The application synchronizes the user's files with the cloud; the files live in a specific folder. I want a process or daemon to run in the background and, whenever there is a change in that folder, start the synchronization process.
I have written the Python 3 modules for synchronization, but I don't know how to run a background process that automatically detects changes in that folder. This process should always run in the background and should be started automatically after boot.

You have actually asked two distinct questions. Both have simple answers and plenty of good resources online, so I'm assuming you simply did not know what to look for.
Running a process in the background is called "daemonization". Search for "writing a daemon in python". This is a standard technique on all POSIX-based systems.
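A minimal sketch of that technique, the classic POSIX double fork (error handling and logging omitted for brevity):

```python
import os
import sys

def daemonize():
    """Classic POSIX double-fork: detach from the shell and the terminal."""
    if os.fork() > 0:
        sys.exit(0)          # first parent returns control to the shell
    os.setsid()              # start a new session: no controlling terminal
    if os.fork() > 0:
        sys.exit(0)          # session leader exits; we can never get a tty back
    os.chdir("/")            # don't keep any mounted filesystem busy
    os.umask(0)
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):     # detach stdin/stdout/stderr
        os.dup2(devnull, fd)
```

After `daemonize()` returns, the process is a child of init with no terminal, so it survives the user logging out.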
Monitoring a directory for changes is done through an API called inotify. This is Linux-specific; each OS has its own solution.
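From Python you would normally use a third-party binding such as pyinotify or watchdog; for illustration, here is a dependency-free sketch that calls the raw inotify functions in libc through ctypes (the mask constants are copied from `<sys/inotify.h>`):

```python
import ctypes
import os
import struct

# Event masks from <sys/inotify.h> (Linux-specific values).
IN_MODIFY = 0x00000002
IN_CREATE = 0x00000100
IN_DELETE = 0x00000200

_libc = ctypes.CDLL("libc.so.6", use_errno=True)

def inotify_watch(path, mask=IN_MODIFY | IN_CREATE | IN_DELETE):
    """Return a file descriptor that delivers inotify events for `path`."""
    fd = _libc.inotify_init()
    if fd < 0:
        raise OSError(ctypes.get_errno(), "inotify_init failed")
    if _libc.inotify_add_watch(fd, path.encode(), mask) < 0:
        raise OSError(ctypes.get_errno(), "inotify_add_watch failed")
    return fd

def read_events(fd):
    """Block until events arrive, then return a list of (mask, name) pairs."""
    buf = os.read(fd, 4096)
    header = struct.Struct("iIII")   # wd, mask, cookie, length of name
    events, offset = [], 0
    while offset < len(buf):
        wd, mask, cookie, name_len = header.unpack_from(buf, offset)
        offset += header.size
        name = buf[offset:offset + name_len].rstrip(b"\0").decode()
        events.append((mask, name))
        offset += name_len
    return events
```

A sync daemon would loop on `read_events()` and kick off the synchronization whenever an event arrives for the watched folder.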

Related

Generate background process with CLI to control it

To give context, I'm building an IoT project that requires me to monitor some sensor inputs. These include Temperature, Fluid Flow and Momentary Button switching. The program has to monitor, report and control other output functions based on those inputs but is also managed by a web-based front-end. What I have been trying to do is have a program that runs in the background but can be controlled via shell commands.
My goal is to be able to do the following on a command line (bash):
pi@localhost> monitor start
sensors are now being monitored!
pi@localhost> monitor status
Temp: 43C
Flow: 12L/min
My current solution has been to create two separate programs: one that sits in the background, and a lightweight CLI. The background process listens on a bidirectional Unix domain socket file, which the CLI uses to send it commands; it then sends responses back through the same socket file for the CLI to process and display. This has given me many headaches, but it seemed the better option compared to using network sockets or mapped memory. I just have occasional problems with socket-file access when my program is terminated improperly, which then requires me to "clean" the directory by manually deleting the stale socket file.
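A rough sketch of that control channel, assuming Python (the socket path and the sensor responses are placeholders, not the real program):

```python
import os
import socket

SOCKET_PATH = "/tmp/monitor.sock"  # hypothetical path

def serve_commands(sock_path=SOCKET_PATH):
    """Accept one command per connection and answer it with placeholder data."""
    try:
        os.unlink(sock_path)       # clear a stale socket left by a crash
    except FileNotFoundError:
        pass
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(sock_path)
    server.listen(1)
    try:
        while True:
            conn, _ = server.accept()
            with conn:
                cmd = conn.recv(1024).decode().strip()
                if cmd == "status":
                    conn.sendall(b"Temp: 43C\nFlow: 12L/min\n")
                elif cmd == "stop":
                    conn.sendall(b"stopping\n")
                    return
                else:
                    conn.sendall(b"unknown command\n")
    finally:
        server.close()
        os.unlink(sock_path)
```

Unlinking the path before `bind()` is what removes the "manually delete the socket file" step after an unclean shutdown.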
I'm also hoping to have the program ensure there is only ever one instance of the monitor running at any given time. I currently achieve this by writing my PID to a file, which I check for when my program starts; if the file exists, I terminate with an error. I really don't like this approach, as it feels too hacky.
So my question: Is there a better way to build a background process that can be easily controlled via command line? or is my current solution likely the best available?
Thanks in advance for any suggestions!
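One common alternative to the pidfile-existence check is to hold an `flock` on the pidfile; the kernel releases the lock when the process dies, so a stale file can never block a restart. A sketch (the path is hypothetical):

```python
import fcntl
import os
import sys

def single_instance(pidfile="/tmp/monitor.pid"):  # hypothetical path
    """Take an exclusive, non-blocking flock on the pidfile; exit if another
    instance already holds it. The lock dies with the process, so leftover
    files from a crash are harmless."""
    fd = os.open(pidfile, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit("monitor is already running")
    os.ftruncate(fd, 0)
    os.write(fd, str(os.getpid()).encode())
    return fd  # keep this fd open for the lifetime of the process
```

The returned descriptor must stay open; closing it releases the lock.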

Tracing an application's file usage at runtime

I'm working on a piece of software to which I must add some new features. It's on Linux and runs in the terminal.
What I want to know is how I can find out all the files that are being used when I run a certain command.
Note that the application is large and has a lot of files for different daemons. I'm concerned with one daemon, and I want to know which files from the app are responsible for running it.
Anyone have a clue about this?
Thanks

Run the application at startup in Linux

I am making a Linux application that synchronizes the client's files and folders with the cloud. There is a folder in the home directory into which all the files from the cloud will be synchronized. I want the application to be started in the background after boot and to work automatically.
How can I do it?
If you have systemd you can create a service as shown here.
Otherwise you have to use init.
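For the systemd route, a minimal per-user unit might look like this (the script path and unit name are hypothetical); save it as `~/.config/systemd/user/cloudsync.service` and run `systemctl --user enable --now cloudsync`:

```ini
[Unit]
Description=Cloud folder sync

[Service]
ExecStart=/usr/bin/python3 /path/to/sync.py
Restart=on-failure

[Install]
WantedBy=default.target
```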
If you have what is essentially a single-user system, you can use init/systemd to start background processes as a nominated, unprivileged user. However, that isn't the usual use of these techniques.
In a multi-user, graphical system, you probably want user-related background processes to start when the user's desktop session starts. Not only is this (usually) the proper timing for such operations, it allows multiple users to be supported.
The various graphical desktops available for Linux all provide slightly different ways to run user applications at login. It's probably impossible to find a method that will work for all desktops. For full coverage, you probably need to implement something that detects what desktop is in use, and uses the method appropriate to that desktop.
However, many desktops respect the use of $HOME/.config/autostart/. Files in that directory should have a .desktop extension, and be of the same format as application launchers. For example:
[Desktop Entry]
Name=MyThingie
GenericName=foo
Comment=foo
Exec=/path/to/my/executable
Terminal=false
Type=Application
Icon=foo
Categories=Network;FileTransfer;
StartupNotify=false

Automating services with Linux OS starting up and shutting down

I have a script to start and stop my services. My server is based on Linux. How do I automate the process such that when OS is shutdown the stop script runs and when it is starting up, the start script runs?
You should install an init script for your program. The standard way is to follow the Linux Standard Base, section 20, subsections 2-8.
The idea is to create a script that starts your application when called with the argument start, stops it when called with stop, restarts it when called with restart, and makes it reload its configuration when called with reload. This script should be installed in /etc/init.d and symlinked from the various /etc/rc*.d directories. The standard describes a comment block to put at the beginning of the script and a utility to handle the installation.
Please refer to the documentation; it is too complicated to explain everything in sufficient detail here.
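The comment block the standard describes looks roughly like this (the service name and descriptions are hypothetical); tools such as `update-rc.d` (Debian) or `chkconfig` (Red Hat) read it to create the runlevel links:

```shell
### BEGIN INIT INFO
# Provides:          myapp
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start and stop the myapp services
### END INIT INFO
```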
Now, that way should be supported by all Linux distributions. But the Linux community is currently searching for a better init system, and there are two newer, improved systems in use:
systemd is what most of the world seems to be moving to
upstart is a solution Ubuntu created and has stuck with so far
They provide some better options, like the ability to restart your application when it fails, but your script will then be specific to the chosen system.

NodeJS: how to run three servers acting as one single application?

My application is built from three distinct servers: each serves a different purpose, and they must stay separate (at least in order to use more than one core). As an example (this is not the real thing), you could think of this setup as one server managing user authentication, another serving as the game engine, and another as a pub/sub server. Logically the "application" is only one, and clients connect to one server or another depending on their specific need.
Now I'm trying to figure out the best way to run a setup like this in a production environment.
The simplest way would be a bash script that runs each server in the background one after the other. One problem with this approach is that if I need to restart the "application", I would have to have saved each server's PID and kill each one.
Another way would be to use a node process that runs each server as its own child (using child_process.spawn). Node spawning nodes. Is that stupid for some reason? This way I'd have a single process to kill when I need to stop or restart the whole application.
What do you think?
If you're on Linux or another *nix OS, you might try writing an init script that starts/stops/restarts your application. Here's an example.
Use specific tools for process monitoring. Monit, for example, can monitor your processes by their PID and restart them whenever they die, and you can manually restart each process with the monit command or its web GUI.
So in your example you would create 3 independent processes and tell monit to monitor each of them.
I ended up creating a wrapper/supervisor script in Node that uses child_process.spawn to execute all three processes.
It pipes each process's stdout/stderr to its own stdout/stderr.
It intercepts errors from each process, logs them, then exits (as if the error were its own).
It then forks and daemonizes itself.
I can stop the whole thing using the start/stop paradigm.
Now that I have a robust daemon, I can create an init script to start/stop it on boot/shutdown as usual (as @Levi says).
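The supervisor pattern itself is language-agnostic; here is a sketch of the same "one parent, three children, live or die together" idea in Python (the three server commands are hypothetical placeholders):

```python
import signal
import subprocess
import sys
import time

# Hypothetical entry points; substitute the real server commands.
COMMANDS = [
    [sys.executable, "auth_server.py"],
    [sys.executable, "game_engine.py"],
    [sys.executable, "pubsub_server.py"],
]

def supervise(commands):
    """Run every command as a child process. If any child exits, or the
    supervisor receives SIGTERM/SIGINT, tear the whole group down."""
    children = [subprocess.Popen(cmd) for cmd in commands]

    def shutdown(signum, frame):
        raise SystemExit(0)

    signal.signal(signal.SIGTERM, shutdown)
    signal.signal(signal.SIGINT, shutdown)
    try:
        while True:
            for child in children:
                code = child.poll()
                if code is not None:       # one server died: stop them all
                    raise SystemExit(code or 1)
            time.sleep(0.5)
    finally:
        for child in children:
            if child.poll() is None:
                child.terminate()
```

Killing the single supervisor PID then brings down all three servers, which is exactly the restart problem described above.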
See also my other (related) Q: NodeJS: will this code run multi-core or not?
