How to write a program that will run automatically whenever I turn on my computer, in Linux? [closed] - linux

I create a process every time, but after "kill -9 -1" I lose the process I created. I know why I lose it each time.
But is there any way to make my program run automatically every time I turn on my computer?
Thanks.

Most distributions still support SysV Init Scripts.
The easiest way to do it is to take a simple init script from /etc/init.d/ and change it to suit your needs:
sudo cp /etc/init.d/foo /etc/init.d/my_foo
sudo gedit /etc/init.d/my_foo
Then, you'll need to enable it:
sudo /sbin/chkconfig my_foo on
If chkconfig isn't available, you may need to install it. Alternatively, there are similar LSB tools such as insserv that might be available.
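For illustration, here is a minimal sketch of what such an init script can look like. The chkconfig/LSB header comments are what the runlevel tools read, and every path and name below is a placeholder, not something from the answer above:
#!/bin/sh
# chkconfig: 2345 90 10
### BEGIN INIT INFO
# Provides:          my_foo
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Run my program at boot
### END INIT INFO

DAEMON=/usr/local/bin/my_program     # placeholder: your program
PIDFILE=/var/run/my_foo.pid

case "$1" in
  start)
    nohup "$DAEMON" >/var/log/my_foo.log 2>&1 &
    echo $! > "$PIDFILE"
    ;;
  stop)
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac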

Ubuntu systems now come with Upstart, whose configuration files may be a bit less verbose than with System V init scripts. A simple job configuration for Upstart would look like this, and go into, say, /etc/init/example.conf:
# this is a comment
start on startup
stop on shutdown
exec /path/to/program --some-args maybe-another-arg
Then it'll start and stop, well, on startup and on shutdown, respectively. To manually start and stop it, use the start and stop commands as root:
$ sudo start example
$ sudo stop example
You can find more information about Upstart configuration in its Cookbook. Information is also available in the init man page in section 5 on systems where Upstart is installed. (man 5 init)

Supervisor will let you do that, and it has some other useful features as well: FastCGI support, automatically respawning services if they crash (while being smart about it and not endlessly restarting a service that keeps crashing), and keeping logs of their output.
After it is installed and itself configured to run on startup, you can modify its configuration file to add a section that runs a program. A simple example might be like this:
; this is a comment
[program:example]
command = /path/to/program --some-args maybe-another-arg
That's really all that's necessary for a simple program, but many other configuration options are available; see the documentation.
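For example, a slightly fuller program section might add auto-restart and logging. The option names below are standard Supervisor settings, but the paths, user and program name are placeholders, not taken from the answer above:
; hedged sketch of a fuller [program:x] section
[program:example]
command = /path/to/program --some-args maybe-another-arg
directory = /path/to/working/dir
user = someuser
autostart = true
autorestart = true
startretries = 3
stdout_logfile = /var/log/example.out.log
stderr_logfile = /var/log/example.err.log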
Once you've added your configuration, you can tell Supervisor to add/remove (and start/stop) any processes you've added or removed from the configuration:
$ sudo supervisorctl update
You can manually start and stop services if you want to, as well:
$ sudo supervisorctl start example
$ sudo supervisorctl stop example
$ sudo supervisorctl restart example
You can also see a nifty status display for all of your processes, e.g.:
$ sudo supervisorctl status
cgi-pass RUNNING pid 4223, uptime 68 days, 23:57:22
And also see what it's recorded of your program's output:
$ sudo supervisorctl tail example # stdout
$ sudo supervisorctl tail example stderr # stderr
$ sudo supervisorctl tail -f example # continuous
Documentation of the available commands is available with supervisorctl help.

Fedora comes with systemd, and many other Linux distributions are adopting it (except Ubuntu and, for now, Debian). The package includes several helper programs you might want to look at.
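For illustration, a minimal systemd unit is quite short. This is only a sketch with placeholder paths, saved as, say, /etc/systemd/system/example.service:
[Unit]
Description=Example program
After=network.target

[Service]
ExecStart=/path/to/program --some-args
Restart=on-failure

[Install]
WantedBy=multi-user.target
You would then run sudo systemctl daemon-reload, sudo systemctl enable example.service, and sudo systemctl start example.service.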

Related

How to run a process from a batch script

I have a simple batch script on Debian GNU/Linux 6.0 that stops a process, deletes its log files, and starts the process again:
#!/bin/bash
# stop rsyslogd so the deleted log files actually release their disk space
killall -KILL rsyslogd
sleep 5s
# remove the log files
rm /var/log/syslog
rm /var/log/messages
rm /var/log/kern.log
sleep 3s
# start rsyslogd again
rsyslogd
exit
The process name is rsyslogd. I have to stop it before deleting the log files so that Linux actually frees the space on disk.
I see that killall -KILL stops the process by its name, but what is the opposite, the run command?
Calling it by its name without any command doesn't seem to work. I will be glad for any tips, thank you.
Debian uses systemd to manage processes, so you should use systemd's commands to stop and start rsyslogd:
systemctl stop rsyslog
and
systemctl start rsyslog
If you are running a really old version of Debian (so old that you should upgrade), it may still be using SysV init. In that case there is a script under /etc/init.d, typically called rsyslog (use ls /etc/init.d to find the exact name), and the commands would be
sudo /etc/init.d/rsyslog stop
and
sudo /etc/init.d/rsyslog start
Or your systemd package may be broken; in that case it can be reinstalled:
apt-get --reinstall install systemd
To start rsyslogd:
systemctl start rsyslog
To stop it:
systemctl stop rsyslog
If you want to do both, use
systemctl restart rsyslog
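Putting this together with the original goal of emptying the logs, a sketch of the whole script using systemctl could look like the following. The use of truncate instead of rm is my own substitution (it keeps ownership and permissions intact), not part of the answer above:
#!/bin/bash
sudo systemctl stop rsyslog
# truncate to zero length instead of deleting, so the files keep their attributes
sudo truncate -s 0 /var/log/syslog /var/log/messages /var/log/kern.log
sudo systemctl start rsyslog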

Why might I get this error on a script that has been running fine for a year? - sudo: sorry, you must have a tty to run sudo

I have a script that runs nightly. The userid is set up in sudoers to perform these functions. I do not intend to disable "Defaults requiretty", particularly without knowing why it's suddenly a problem now.
Here's what it does with sudo:
sudo lvcreate --size 19000M --snapshot --name snap_u /dev/mapper/vg_u-lvu
sudo mount /dev/vg_u/snap_u /snapshot
sudo rsync -av --delete --bwlimit=12000 --exclude usr/spoolhold --exclude email --exclude tempfile /snapshot/ /u1/prev/dir
sudo umount /snapshot
sudo lvremove -f /dev/vg_u/snap_u
For the past few weeks it doesn't work most of the time. Sometimes when I run the commands "manually" it works fine. When it fails I see this message filling the log file:
sudo: sorry, you must have a tty to run sudo
The problem began when I switched some other scripts for a remote backup. The only things I changed in this script were comments. This script is invoked by an application program that uses ‘nohup’ to run it in the background.
During my testing I killed the process to stop it from running in the background when I wanted to run it again immediately. Since then I’ve had this problem. So, my questions are these:
Could this error be related to ‘killing’ those processes (Maybe I killed the wrong one)?
Any ideas for a solution?
1) Could this error be related to ‘killing’ those processes (Maybe I killed the wrong one)?
No
2) Any ideas for a solution?
This is caused by the requiretty option in /etc/sudoers. It was probably changed there, or in the packaged defaults, during an update. Turn it off and you should be good.
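If you would rather not relax requiretty globally (the question explicitly wants to keep the default), one hedged alternative is a per-user exception in sudoers. Here backupuser is a placeholder for the account the nightly script runs as, and the file should be edited with visudo:
# /etc/sudoers.d/backup-notty  (edit with: visudo -f /etc/sudoers.d/backup-notty)
Defaults:backupuser !requiretty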

Script not starting on boot with start-stop-daemon

My script (located in /etc/init.d) is creating a pid file ($PIDFILE), but there is no process running. My daemon script includes:
start-stop-daemon --start --quiet --pidfile $PIDFILE -m -b --startas $DAEMON --test > /dev/null || return 1
The script works fine when executing it manually.
You need to create the startup links:
sudo update-rc.d SCRIPT_NAME defaults
then reboot. SCRIPT_NAME is the name of the script in /etc/init.d (without the path).
I was able to get it working, but I tried so many things that I don't know exactly what fixed it (probably an error in the script or config). However, I learned a lot and wanted to share, since I can't find much of the same in the internet abyss.
It seems Ubuntu (and many other distros based on it, including Mint) has migrated to Upstart for job and service management. Upstart includes SysVinit compatibility (using /etc/init.d daemons), so you can still use update-rc.d to manage daemons if you are familiar with that usage. The Upstart method is to use a single .conf file in the /etc/init folder. My SCRIPT.conf file is very simple (I'm using a Python script):
start on filesystem or runlevel [2345]
stop on runlevel [016]
exec python /usr/share/python-support/SCRIPT/SCRIPT.py
This simple file completely replaces the standard /etc/init.d script, whose case statement provides the [start|stop|restart|reload] functions and the pointer to /usr/bin/SCRIPT. You can see that it includes the runlevel control that would normally be found in the /etc/rc*.d files (thus eliminating several files).
I tried update-rc.d to create the necessary /etc/rc*.d/ files for my daemon. My daemon bash script is located in /etc/init.d and includes the start-stop-daemon command as in my original question. (That command also works fine from the terminal.)
I had the /etc/rc*.d/ files, the bash script in /etc/init.d, and the /etc/init/SCRIPT.conf file in place during boot, and it seems Upstart looks at the .conf file first for its direction, because the SysVinit command service SCRIPT [start|stop|restart|reload] returns "Unknown instance", yet you can see that the process is running with ps -elf | grep SCRIPT_FILE.
One interesting thing to note is the forking of your daemon when using a .conf. The script as written above only spawns one fork of the daemon. However, total independence from the original script is possible by using expect fork or expect daemon together with respawn (see the Upstart Cookbook for reference). Using these ensures that your daemon will be restarted even if it is killed (at least with the kill command); a sketch with respawn added follows.
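For reference, such a .conf might look like this. The stanzas are standard Upstart keywords, and the script path is the same placeholder used above:
# /etc/init/SCRIPT.conf
description "SCRIPT daemon"
start on filesystem or runlevel [2345]
stop on runlevel [016]
respawn
respawn limit 10 5
exec python /usr/share/python-support/SCRIPT/SCRIPT.py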
I continued to test both my daemon and the boot process by using the sudo initctl reload-configuration command. This reloads the .conf files, after which you can test your daemon with the sudo [start|stop|restart] SCRIPT commands. The result of the start command is:
$ sudo start SCRIPT
SCRIPT start/running, process xxxx
$ sudo restart SCRIPT
SCRIPT start/running, process xxxx
$ sudo stop SCRIPT
SCRIPT stop/waiting
Also, there is a nice log in /var/log/upstart/SCRIPT.log that gives you useful information for your daemon during boot. Mine still has a very annoying bug that prevents root from displaying osd messages with notify-send from my daemon. My log file includes a gtk warning (I will open another question to solicit help).
Hope this helps others in developing their daemons.

Write a bash script to restart a daemon [closed]

I thought I could just use this related question: How do I write a bash script to restart a process if it dies? lhunath had a great answer there and told me that everything I might do about it was wrong, but I'm restarting a daemon process, and I'm hoping there's something I can do in a single script that works.
My process starts with a kick-off script that shows the startup log but then quits, leaving the process running detached from the shell:
>sudo ./start
R CMD Rserve --RS-conf /var/FastRWeb/code/rserve.conf --vanilla --no-save
...
Loading required package: FastRWeb
FastRWeb: TRUE
Loading data...
Rserv started in daemon mode.
>
The process is up and running:
ps -ale | grep Rserve
1 S 33 16534 1 0 80 0 - 60022 poll_s ? 00:00:00 Rserve
Is there a simple way to wrap or call the 'start' script from bash and restart when the process dies or is this a case where PID files are actually called for?
Dang - question got closed even after pointing to a very similar question that was not closed on stackoverflow. you guys suck
A very simple way to monitor the program is to use cron: check every minute (or so) whether the program is still alive, and ./start it otherwise.
As root, invoke crontab -e.
Append a line like this:
* * * * * if ! pidof Rserve >/dev/null 2>&1; then /full/path/to/start; fi
This method will stay persistent, i.e., it will be executed after a reboot etc. If this is not what you want, move it to a shell script:
#! /bin/bash
# monitor.sh
while true; do
    if ! pidof Rserve >/dev/null 2>&1; then /full/path/to/start; fi
    sleep 10
done
This script has to be started manually from the command line, and can be easily stopped with Ctrl-C.
The easiest solution, if you can run the process in non-daemon mode, is to wrap it in a script:
#!/bin/bash
while true
do
    xmessage "This is your process. Click OK to kill and respawn"
done
Edit
Many daemons leave a lock file, usually in /var/lock, that contains their PID. This keeps multiple copies of the daemon from running.
Under Linux, it is fairly simple to look through /proc and see if that process is still around.
Under other platforms you may need to play games with ps to check for the process's existence.

How to make sure an application keeps running on Linux

I'm trying to ensure a script remains running on a development server. It collates stats and provides a web service, so it's supposed to persist, yet a few times a day it dies off for unknown reasons. When we notice, we just launch it again, but it's a pain in the rear, and some users don't have permission (or the know-how) to launch it.
The programmer in me wants to spend a few hours getting to the bottom of the problem but the busy person in me thinks there must be an easy way to detect if an app is not running, and launch it again.
I know I could cron-script ps through grep:
ps -A | grep appname
But again, that's another hour of my life wasted on doing something that must already exist... Is there not a pre-made app to which I can pass an executable (optionally with arguments) and that will keep the process running indefinitely?
In case it makes any difference, it's Ubuntu.
I have used a simple script with cron to make sure that the program is running. If it is not, then it will start it up. This may not be the perfect solution you are looking for, but it is simple and works rather well.
#!/bin/bash
#make-run.sh
#make sure a process is always running.
export DISPLAY=:0 #needed if you are running a simple gui app.
process=YourProcessName
makerun="/usr/bin/program"
if ps ax | grep -v grep | grep $process > /dev/null
then
    exit
else
    $makerun &
fi
exit
Then add a cron job every minute, or every 5 minutes.
Monit is perfect for this :)
You can write simple config files which tell monit to watch e.g. a TCP port, a PID file etc
monit will run a command you specify when the process it is monitoring is unavailable/using too much memory/is pegging the CPU for too long/etc. It will also pop out an email alert telling you what happened and whether it could do anything about it.
We use it to keep a load of our websites running while giving us early warning when something's going wrong.
-- Your faithful employee, Monit
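For illustration, a minimal monit configuration block might look like the following. The process name, pidfile, port and init script paths are all placeholders, not something from this answer:
check process myapp with pidfile /var/run/myapp.pid
    start program = "/etc/init.d/myapp start"
    stop program = "/etc/init.d/myapp stop"
    if failed port 8080 protocol http then restart
    if 5 restarts within 5 cycles then timeout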
Notice: Upstart is in maintenance mode and was abandoned by Ubuntu, which now uses systemd. You should check the systemd manual for details on how to write a service definition.
Since you're using Ubuntu, you may be interested in Upstart, which has replaced the traditional SysV init. One key feature is that it can restart a service if it dies unexpectedly. Fedora has moved to Upstart, and it is in Debian's experimental repository, so it may be worth looking into.
This may be overkill for this situation though, as a cron script will take 2 minutes to implement.
#!/bin/bash
if [[ ! `pidof -s yourapp` ]]; then
invoke-rc.d yourapp start
fi
If you are using a systemd-based distro such as Fedora or a recent Ubuntu release, you can use systemd's "Restart" capability for services. It can be set up as a system service, or as a user service if it needs to be managed by, and run as, a particular user, which is more likely the case in the OP's situation.
The Restart option takes one of no, on-success, on-failure, on-abnormal, on-watchdog, on-abort, or always.
To run it as a user, simply place a file like the following into ~/.config/systemd/user/something.service:
[Unit]
Description=Something
[Service]
ExecStart=/path/to/something
Restart=on-failure
[Install]
WantedBy=graphical.target
then:
systemctl --user daemon-reload
systemctl --user [status|start|stop|restart] something
No root privilege / modification of system files needed, no cron jobs needed, nothing to install, flexible as hell (see all the related service options in the documentation).
See also https://wiki.archlinux.org/index.php/Systemd/User for more information about using the per-user systemd instance.
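If you also want the unit started automatically at login rather than only on demand (a step the answer above doesn't show), the usual additions are to enable it and, optionally, let the user instance linger so it keeps running with no active session; note that enabling hooks the unit to whatever target WantedBy= names:
systemctl --user enable something
sudo loginctl enable-linger $USER    # optional: run even without an active login session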
I have used, from cron, killall -0 programname || /etc/init.d/programname start. killall -0 fails if no process with that name exists; if one does exist, it delivers a null signal, which the kernel checks but never actually passes on to the process.
This idiom is simple to remember (IMHO). Generally I use it while I'm still trying to discover why the service itself is failing; IMHO a program shouldn't just disappear unexpectedly :)
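For reference, the corresponding crontab entry would look something like this (programname is a placeholder for your process and init script name):
* * * * * killall -0 programname >/dev/null 2>&1 || /etc/init.d/programname start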
Put your run command in a loop, so that when it exits it runs again: while(true){ run my app.. }
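A minimal bash version of that idea, where the path is a placeholder and the short sleep just keeps a crash loop from spinning the CPU:
#!/bin/bash
while true; do
    /path/to/myapp   # blocks until the app exits or crashes
    sleep 1          # avoid a tight restart loop
done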
I couldn't get Chris Wendt's solution to work for some reason, and it was hard to debug. This one is pretty much the same but easier to debug, and it excludes bash from the pattern matching. To debug, just run bash /root/makerun-mysql.sh. In the following mysql-server example, just replace the values of the process and makerun variables with those for your process.
Create a bash script like this (nano /root/makerun-mysql.sh):
#!/bin/bash
process="mysql"
makerun="/etc/init.d/mysql restart"
if ps ax | grep -v grep | grep -v bash | grep --quiet $process
then
    printf "Process '%s' is running.\n" "$process"
    exit
else
    printf "Starting process '%s' with command '%s'.\n" "$process" "$makerun"
    $makerun
fi
exit
Make sure it's executable by setting the proper file permissions (e.g. chmod 700 /root/makerun-mysql.sh).
Then add this to your crontab (crontab -e):
# Keep processes running every 5 minutes
*/5 * * * * bash /root/makerun-mysql.sh
The supervise tool from daemontools would be my preference - but then everything Dan J Bernstein writes is my preference :)
http://cr.yp.to/daemontools/supervise.html
You have to create a particular directory structure for your application startup script, but it's very simple to use.
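As a sketch, assuming svscan is watching /service (the myapp name and program path are placeholders): create an executable run script at /service/myapp/run, and supervise will start it and keep it running:
#!/bin/sh
# /service/myapp/run -- must be executable (chmod +x)
exec /path/to/myapp 2>&1
You can then check and control it with svstat /service/myapp, svc -d /service/myapp (down) and svc -u /service/myapp (up).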
First of all, how do you start this app? Does it fork itself into the background? Is it started with nohup ... &? If it's the latter, check nohup.out to see why it died; if it's the former, build in logging.
As for your main question: you could cron it, or run another process in the background (not the best choice) and use pidof in a bash script, easy enough:
if ! pidof -s app > /dev/null; then
    nohup app &
fi
You could make it a service launched from inittab (although some Linux distributions have moved on to something newer, such as /etc/event.d). These built-in systems make sure your service keeps running without you writing your own scripts or installing something new.
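For a classic inittab-based system, a respawn entry is a single line in /etc/inittab; the id field and program path below are placeholders, and telinit q makes init re-read the file afterwards:
ap:2345:respawn:/path/to/program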
It's a job for a DMD (daemon monitoring daemon). There are a few around, but I usually just write a script that checks whether the daemon is running and starts it if not, and put it in cron to run every minute.
Check out 'nanny' referenced in Chapter 9 (p197 or thereabouts) of "Unix Hater's Handbook" (one of several sources for the book in PDF).
A nice, simple way to do this is as follows:
Write your server to die if it can't listen on the port it expects
Set a cronjob to try to launch your server every minute
If it isn't running it'll start, and if it is running it won't. In any case, your server will always be up.
I think a better solution is to test the functionality too. For example, if you have to monitor Apache, it is not enough to test whether "apache" processes exist on the system.
If you want to test that Apache is actually OK, try to download a simple web page and check that your unique marker is in the output.
If it is not, kill Apache with -9 and then restart it, and send a mail to root (an address that should forward to whoever is responsible for the company/server/project).
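A hedged sketch of that idea, assuming a Debian-style apache2 service, a local /health page and a marker string HEALTH-OK (all of these names are assumptions, not from the answer above):
#!/bin/bash
# functional check: fetch a page and look for a known marker
if ! curl -fsS --max-time 10 http://localhost/health | grep -q "HEALTH-OK"; then
    killall -9 apache2 2>/dev/null      # hard kill, as the answer suggests
    service apache2 start
    echo "apache2 restarted on $(hostname) at $(date)" | mail -s "apache2 restarted" root
fi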
It's even simpler:
#!/bin/bash
export DISPLAY=:0
process=processname
makerun="/usr/bin/processname"
if ! pgrep $process > /dev/null
then
    $makerun &
fi
Remember, though, to make sure the process name is unique.
One can install a cron job that checks every minute, like this:
crontab -l > crontab; echo -e '* * * * * export DISPLAY=":0.0" && for app in "eiskaltdcpp-qt" "transmission-gtk" "nicotine"; do ps aux|grep -v grep|grep "$app"; done||"$app" &' >> crontab; crontab crontab
The disadvantage is that each app name you enter has to show up in the ps aux|grep "appname" output and, at the same time, be launchable under that same name ("appname" &).
You can also use pm2. It requires Node.js; install it globally with npm:
sudo npm install pm2 -g
Then you can run a service under it.
To run a program or script:
sudo pm2 start [service_name]
For a Node app:
pm2 start index.js
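To have pm2 bring your processes back up after a reboot (which is what the original question asks for), it also provides startup integration. A rough sequence, assuming a Node app and a placeholder name:
pm2 start index.js --name example   # register the app under a name
pm2 save                            # save the current process list
pm2 startup                         # prints an init/systemd registration command; run it as instructed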
