Why use nohup when the app can be run as a system service? - linux

I put this question on stackoverflow because I found lots of questions on the topic here already.
Short introduction
Simply put, nohup can be used to run apps in the background and keep them running after the user logs off or the terminal/SSH session is closed.
There are many example questions here on stackoverflow, like this one or that one.
My question is simple.
Why opt for nohup when there are options like upstart, systemd, ... which manage the app as a service in a much more convenient way (runlevels, ...)?
Reading the many questions on similar topics, the only option ever suggested seems to be nohup. The answer is almost never something like: "... use an upstart script, so it is all handled for you ...".
I would mainly go with e.g. upstart, except maybe for a quick-and-dirty test scenario.
Am I missing something important?

nohup is quick and easy, doesn't require root access, and doesn't make permanent changes to the system. That's why many people use it (or try to use it) instead of configuring services.
Running things in the background without any supervision is usually a bad idea, although there are many legitimate use cases which don't fit the traditional service model. For example:
The background process might be needed only sometimes, after certain user actions.
More than one instance might be needed, for example one per user or one per session.
The process might not need to be running all the time; it just quits after doing its job.
Some real-world examples (which use something like nohup and would be hard to implement as system services):
git will sometimes run git gc in the background to optimize a repository without blocking the user's work
adb will start its server in the background and keep it running until the user asks to terminate it
Some compilers have an option to keep running in the background to reduce the startup times of subsequent invocations
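For contrast, here is a minimal sketch of both approaches; /usr/local/bin/myapp is a placeholder for your own binary:

    # quick and dirty: survives logout, but nothing supervises it
    nohup /usr/local/bin/myapp > myapp.log 2>&1 &

    # roughly equivalent as a transient systemd unit: supervised,
    # and output lands in the journal instead of a stray log file
    systemd-run --unit=myapp /usr/local/bin/myapp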

Your understanding is correct: the way those questions were asked, the natural answer is nohup. upstart would be the answer to a different question, such as "How to make sure an application keeps running on Linux".

Related

Is there a way to nullify SIGSTOP for a certain script?

I have created a script and I want it to be virtually "immune" to SIGSTOP.
I understand that both SIGKILL and SIGSTOP cannot be trapped or ignored.
I know that the init system for Linux cannot receive a "fatal" signal because it has the SIGNAL_UNKILLABLE flag set in its signal struct flags (although the latter half of that sentence flies over my head for the most part).
I'm willing to edit my kernel to grant this script immunity; the only problem is that I don't know how.
So, my question is, is there a way to nullify SIGSTOP for a certain script/process?
I was able to deal with SIGKILL thanks to the Restart parameter in the service file for my script (using systemd), and while I have scrolled through the manuals looking for something similar for suspended processes, I haven't found anything yet.
Is there anything similar to Restart=always for process suspension caused by SIGSTOP?
I would rather not have to go through the process of changing things in or related to the kernel, but if it's the only way I will.
Thanks.
Okay, so the best solution I can come up with is SELinux.
SELinux is a kernel security module originally developed by the NSA and later released to the public. It is widely used on Linux systems and is enabled by default on Android devices. SELinux allows the creation of "contexts": additional labels applied to files and processes that allow the subdivision of responsibility and permissions.
How does this solve your problem? Well, you can limit the SELinux permissions of your user processes (even for the root user) so that you're not even allowed to signal this other process at all. In fact, you can prevent any interaction with it whatsoever. If you'd like, you could go so far as to prevent yourself from even being able to turn SELinux off (although from an operational perspective it's probably better that you don't, if you can avoid it). This is probably the closest you'll get to a solution that is anywhere near not-hackable. That being said, SELinux setup and configuration for this purpose is not exactly a walk in the park: documentation is limited (but exists), distro-specific, and in some cases even esoteric. I do have some experience with SELinux myself.
Edit:
Doing some quick googling, it appears possible to install SELinux on Arch, but like most things on Arch, it requires some effort - more than fits in a StackOverflow comment block. However, I'll briefly describe your set of goals once SELinux is installed:
Determine the context that you are currently in. The id command should print it (id -Z prints just the security context).
Use a context process transition so that when you execute your script, it runs in a new context. You will probably need to create a new context for your script to run in.
Create sepolicy rules allowing that script to interact with your processes however you need. Perhaps this includes the ability to kill processes in a different context, or to read from a TCP port, etc. You can use the audit2allow program to help you create these rules.
By default, SELinux denies anything it doesn't explicitly allow. Your goal now is to make sure that everything you might want to do on your system is allowed, and to add policy rules for all those things. Looking at the SELinux audit logs is a great way to see everything SELinux is complaining about - it's your job to go through and convert all those audit failures into allow rules.
Once all that is done, just make sure the context your processes/shell start in is not allowed to kill or signal the context your script runs in, and you should be done. Attempts to SIGSTOP or SIGKILL it should now fail with a "Permission denied" error.
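As a rough sketch of that allow-rule loop, the usual commands look like this (the module name myscript is a placeholder; exact invocations vary by distribution):

    # inspect what SELinux denied recently
    ausearch -m avc -ts recent

    # turn the recent denials into a loadable policy module and install it
    ausearch -m avc -ts recent | audit2allow -M myscript
    semodule -i myscript.pp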

What is the advantage of using a cron job inside your code vs. outside your code?

I have to run a repetitive task in Node.js, and I've seen there are existing packages for this, like this one:
https://www.npmjs.com/package/node-cron
The platform where I'm hosted also offers built-in cron-like scheduling:
https://www.netlify.com/docs/webhooks/
So my question is: when is it better to use the platform, and when is it better to use a package?
Thanks.
From the URL posted I didn't see any method of setting up a cron job using webhooks. Unless you were thinking of setting up a webhook that listens for a POST which is sent by a Linux cron job or the like?
Regardless, to the actual question about using a platform or a package: they each have pros and cons, but based purely on your question I would go with the platform.
If you choose to use a package, you will have to write the code that calls the package (which you need to test, maintain, and run). You need to ensure that the Node process is always up and running, that if it dies or exits it is re-spawned, and that if the operating system reboots the Node process gets kicked off again. All these problems can be easily solved (PM2, for instance), but the fact is you need to think of the problems and solve them yourself, or the cron job might not run when you want it to.
When using the platform you know that it is well tested, that it will work as documented, and that it will be resilient to failure modes that you might not be aware of.
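For completeness, the cron-job-plus-webhook variant mentioned above can be as small as a system crontab entry that POSTs to your hook; the URL below is a placeholder:

    # run every day at 03:00 (install with `crontab -e`)
    0 3 * * * curl -s -X POST https://api.example.com/hooks/run-task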

How to run an application on Linux in the background but leave the possibility to interact with it?

Requirements:
I want to run my application on Linux in the background (at startup, of course).
I want to be able to call start/stop/restart commands directly from the console (it has to be simple, just like for /etc/init.d - just call a simple command directly from the console).
I want to be able to call status - and I want this command to somehow return the actual status as reported by the application itself. I thought I could call some method which returns a String, or just use stdin to send a command, but when I do nohup .. & or start-stop-daemon, stdin is detached. Is there a simple way to attach stdin back to the application (I've seen that I can create a pipe, but this is pretty complicated)? Or what is the best way to communicate with the application after it is started as a daemon (I could open a socket and connect through telnet, for example, but I am looking for a simpler solution and the possibility to do it directly from the console, without starting telnet first)? Ideally it would be great to be able to send any command, but a simple status will be sufficient (but again - it has to communicate with the application to get that status somehow).
I have found many different answers. Some of them say to simply use nohup and &, and others say that nohup and & are old-fashioned. Some answers say to use start-stop-daemon or JSvc (for Java). But it seems that none of them satisfies all three of my requirements.
So... what is the simplest way to meet all three requirements?
PS. I don't want to use screen. The application must run as a Linux daemon.
PPS. The application is written in Java, but I am looking for a generic solution which is not limited to Java.
You should create a command-line tool to communicate with the daemon in whatever way you need. The tool itself can use TCP/IP or named pipes.
Then use cli-tool start|stop|restart|status from the console.
If you need to start the daemon during the startup sequence (before user login), you have to deal with the init system (init.d, systemd, OpenRC, etc.).
Dragons be here:
Make sure that init doesn't restart your daemon after a manual stop via the cli.
The command-line tool itself runs with unprivileged user rights, so restart may be hard if the startup script runs as the superuser or an application-specific user; especially with deep init integration, you might have to use sudo cli-tool start.
One possible way to avoid this is to make a wrapper daemon that runs forever via init and controls the underlying application (start/stop) with the proper rights. See the sketch below.
Cons: you develop two additional tools for one daemon.
Pros: the wrapper daemon can act as a circuit breaker between the superuser/application-specific user and userspace.
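A minimal sketch of the named-pipe variant (all names and paths are assumptions, and a real daemon would need locking and error handling):

    #!/bin/bash
    # daemon.sh - loops reading commands from a FIFO, answers via a status file
    PIPE=/run/myapp/control
    mkdir -p /run/myapp
    [ -p "$PIPE" ] || mkfifo "$PIPE"
    while read -r cmd < "$PIPE"; do
        case "$cmd" in
            status) echo "running, pid $$" > /run/myapp/status ;;
            stop)   rm -f "$PIPE"; exit 0 ;;
        esac
    done

    #!/bin/bash
    # cli-tool.sh - sends one command to the daemon and prints the reply
    echo "$1" > /run/myapp/control
    [ "$1" = "status" ] && sleep 0.1 && cat /run/myapp/status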

Intricacies of Launching a complex shell script from CGI

Ok, so over the past year I have built some rather complex automation scripts (mostly bash, but with some perl here and there) for some of the more common work we do at my place of business. They rely heavily on ImageMagick, Ghostscript, and PhantomJS to name just a few. They also traverse a huge number of directories across the network and several different file systems and host OSs... Frankly the fact that they work is a bit of a miracle and perhaps a testament to my willingness to keep beating my head against the wall... Also, trust me, this is easier and more effective than trying to corral my resources. Our archives are... organic... and certain high-ranking individuals in the company think of them as belonging to them and do not look out for the interests of the company in their management. They are, at least, relatively well backed-up these days.
In any case these scripts automate the production of a number of image-based print-ready products of varying degrees of complexity, up to multi-hundred-page image-heavy books, and as such some of them accept absurdly complex argument structures to do all the things they do. (P.S. embedded JavaScript in SVGs is a MAGICAL thing!) These systems have been in "working beta" for a while now, which basically means I've been hand-entering the commands at a terminal to run them. I want to move them up to production and offer them as a webservice, so that those in production who are not friends with the command line can use them, and to also potentially integrate them with our new custom-developed order management system.
TL;DR below
so that's the background, the problem is this:
I'm running everything on a headless CentOS 6.4 virtual machine with SELinux disabled.
Apache2 serves up my interface.sh CGI just fine, and the internet has already helped me turn the POST data into shell variables. Now I need to launch the worker scripts that actually direct the heavy lifting and coordinate the binaries... from the CGI:
# get POST data from the form and turn it into variables...
/bin/bash /path/to/script/worker.sh "$arg1" "$arg2" "$arg3" "$arg5" "$arg6" "$opt1" "$arg7"
Nothing.
httpd log shows permission denied, fair enough!
Ok, googling suggests that the script being called by the CGI must also be owned by the apache user and group, or by root with 755 permissions. Done!
Now the httpd log shows permission denied for things worker.sh is trying to do :/
Google has led me to believe that, for security reasons, fcgi requires that everything interacted with by the CGI process chain must have correctly controlled permissions, all the way down to the binaries and source files... Sure, this is smart for security and damage control, but almost impossible in my case. We have very dynamic data and terabytes of resources... :/
The script worker.sh exports its own environment and runs all its commands as root. This is largely to overcome the minefield of permissions disasters that I have to contend with, and CentOS's own paranoia about allowing stuff to happen. I had hoped this might be a workaround, but no.
One suggestion I have seen is to simply write out the composed command to a text file and have cron or incron do something with the text file. Seems like that would work... BUT, I'd love to be able to get STDIO back into my web page as there are verbose errors and notifications (though no interaction) in many of these worker scripts, and I would like to provide notification of completion as well. Is there any way to do this that doesn't require a permissions war to be waged?
To run certain commands as another user, you can use sudo.
Set up sudo to allow the apache user to run your command without a password. Then your CGI script can call sudo /path/to/script args to run it as root (or use -u for another user of your choice), as sketched below.
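A minimal sketch of that setup (the apache user name and the script path are assumptions; always edit sudoers with visudo):

    # /etc/sudoers.d/worker - let apache run exactly one script as root, no password
    apache ALL=(root) NOPASSWD: /path/to/script/worker.sh

    # in the CGI script:
    sudo /path/to/script/worker.sh "$arg1" "$arg2" "$arg3"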
It's very hard to make this secure, so you should make sure your CGI script is only accessible by trustworthy individuals.

How do I create an application that runs in the background and is interactive in Linux?

I want to create an application that runs in the background in Linux (a daemon) that will play a music file or any given sound at set times (5 times) every single day. I want this daemon to start when the computer is started in terminal mode (non-GUI). I want to know if this is possible, and if so, what considerations, tools, and programming language would be the most efficient in doing so? This will be a dedicated computer that will only be executing this task, so any recommendations on how I can maximize efficiency while disabling other features that are not required for this task will be appreciated. Also, could you please explain how processes and tasks work in the terminal (non-GUI)? I always thought the terminal was something like CMD in Windows and could only run tasks one at a time.
EDIT: I need the sound to run at variable times, I'll be fetching these times from a website. Any suggestions regarding how to achieve this?
Thanks for the help and sorry for any shortcoming in the questions or my research.
Look at using cron to run your tasks. cron is a very flexible scheduling utility built in to most Linux distributions.
Basically, with cron you specify a task to run (your main program, or maybe just a sound-playing program), all of its arguments, and when it runs. cron takes care of running it, and will even send you "mail" if the job produces any output (such as errors).
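For example, a crontab entry along these lines would cover five fixed daily times (the player and file path are assumptions; install it with crontab -e):

    # min hour day-of-month month day-of-week  command
    0 8,11,14,17,20 * * * /usr/bin/aplay /home/user/sounds/chime.wav

For the variable times mentioned in your edit, a common pattern is a cron job that runs every minute and checks a schedule file that your fetcher script keeps up to date.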
You can run commands as background jobs of your terminal, i.e. you can run more than one task at a time, by putting a & after your terminal command:
> cmd &
> [you can type other commands here, but the "cmd" program is still running]
However, for services you generally don't have to worry about starting them as subprocesses, because the system already knows to do this. Here's a good question from Super User that has an example of a working service. Simply place your service as a shell script in /etc/init.d and it will be automatically started as a service; a minimal sketch follows.
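A bare-bones example of such an init script (the daemon name and paths are assumptions; real scripts usually also handle restart and status):

    #!/bin/sh
    # /etc/init.d/sound-daemon - start/stop a hypothetical background player
    case "$1" in
        start) /usr/local/bin/sound-daemon & echo $! > /var/run/sound-daemon.pid ;;
        stop)  kill "$(cat /var/run/sound-daemon.pid)" && rm -f /var/run/sound-daemon.pid ;;
        *)     echo "Usage: $0 {start|stop}" >&2; exit 1 ;;
    esac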
