Shell script only starting applications when used through SSH - Linux

What can cause .sh scripts to work fine through an SSH shell, but not when executed through either PHP or crontab?
I have a VPS on which I run game servers, but in order to make it maintainable, I am planning to automate much of the tedious work (like setting up or deleting a server) and to make important features (like starting and stopping servers) easily accessible to the people who actually need them.
Now, when I made the shell scripts and tested them, they worked absolutely fine. startserver started the server, restartserver restarted it, etc. But when run from PHP, or - as I later figured out - crontab, starting servers magically does not work. Stopping them, checking whether they are running, updating, and all the other features worked as intended, but starting a server just did not do anything. It just returned 0 while printing nothing.
Here is an example of a script which works in either case (statusserver.sh):
/sbin/start-stop-daemon -v -t --start --exec ~mta/servers/$1/files/mta-server -- -d
And here is one which does not work in either case (startserver.sh):
/sbin/start-stop-daemon -v --start --exec ~mta/servers/$1/files/mta-server -- -d
The only difference is that statusserver.sh has "-t", which only tells you whether running the same command without -t would actually succeed. And executing statusserver.sh like so:
sudo -u mta ~mta/sh/statusserver.sh test
Indeed does work, printing something along the lines of "Would start ~mta/servers/test/files/mta-server -d". But doing this:
sudo -u mta ~mta/sh/startserver.sh $2
Does absolutely nothing. It does not print anything, and it actually returns 0. (which is supposed to mean the operation was successful)
Now for the fun part: when the server is already running, startserver.sh will do what it is supposed to do: say that the server is already running, and return an error code (because start-stop-daemon is kind enough to do that for me). But it flat out refuses to launch anything.
Replacing start-stop-daemon with something like:
sudo -u mta ~mta/servers/test/files/mta-server -d
Does exactly the same thing: It will just refuse to run, while still returning 0.
Oh by the way, it's not a sudo problem. Of that I am quite sure, since the following works fine too
sudo -u web1 sudo -u mta ~mta/scripts/startserver.sh test
So back to my question: what can cause Linux, the shell, Bash or whatever to flat out refuse to start an application when run through either PHP or crontab, while happily accepting it when launched through SSH? Is there any setting I need to switch? Any package that could be blocking what I want to do? Anything else I am just missing?

Look into using sudo.
Set up /etc/sudoers (using visudo) for the user that Apache runs as (usually the 'nobody' or 'apache' user). Grant that user sudo access to the commands you want to run, with the NOPASSWD option.
In your PHP script, use exec() to execute the commands that start/stop daemons, prefixing them with sudo.
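As a rough sketch (the Apache user, target user, and script paths here are assumptions; adjust them to your system), the sudoers entry might look like this:
# /etc/sudoers fragment, edited via visudo
# assumes Apache runs as www-data and the scripts live in /home/mta/sh
www-data ALL=(mta) NOPASSWD: /home/mta/sh/startserver.sh, /home/mta/sh/stopserver.sh
With that in place, a PHP call like exec('sudo -u mta /home/mta/sh/startserver.sh test', $output, $ret) should run without a password prompt.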
Here is an article about sudo:
http://www.cyberciti.biz/tips/allow-a-normal-user-to-run-commands-as-root.html

As I think Justin was touching on, but didn't say specifically, the problem is likely that the apache user account (which is deliberately quite limited) can't see into the other user's home directory because of the permissions: generally only the user themselves and root can see into a home directory. You can do a few things: use sudo to run the script in the home directory, move the script out of the user's home directory, or change permissions on the scripts/home directories so apache can run them where they are.
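One quick way to check this (assuming util-linux's namei is available) is to list the permissions on every component of the path to the script:
namei -l ~mta/sh/startserver.sh
If any directory along the way lacks execute permission for the apache user, that user cannot reach the script, no matter what the script's own permissions say.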

Related

Alias <cmd> to "do X then <cmd>" transparently

The title sucks but I'm not sure of the correct term for what I'm trying to do, if I knew that I'd probably have found the answer by now!
The problem:
Due to an over-zealous port scanner (customer's network monitor) and an overly simplistic telnet daemon (busybox linux) every time port 23 gets scanned, telnetd launches another instance of /bin/login waiting for user input via telnet.
As the port scanner doesn't actually try to login, there is no session, so there can be no session timeout, so we quickly end up with a squillion zombie copies of /bin/login running.
What I'm trying to do about it:
telnetd gives us the option (-l) of launching some other thing rather than /bin/login so I thought we could replace /bin/login with a bash script that kills old login processes then runs /bin/login as normal:
#!/bin/sh
# First kill off any existing dangling logins
# /bin/login disappears on successful login so
# there should only ever be one
killall -q login
# now run login
/bin/login
But this seems to return immediately (no error, but no login prompt). I also tried just chaining the commands in telnetd's arguments:
telnetd -- -l "killall -q login;/bin/login"
But this doesn't seem to work either (again - no error, but no login prompt). I'm sure there's some obvious wrinkle I'm missing here.
System is embedded Linux 2.6.x running Busybox so keeping it simple is the greatly preferred option.
EDIT: OK, I'm a prat for not making the script executable. With that done I get the login: prompt, but after entering the username I get nothing further.
Check that your script has the execute bit set. Permissions should be the same as for the original binary including ownership.
As for -l: my guess is that it tries to execute killall -q login;/bin/login as a single command name (that's all one word, not a command line to be parsed).
Since this is an embedded system, it might not write logs, but you should check /var/log anyway for error messages. If there are none, you should be able to configure logging using the documentation: http://wiki.openwrt.org/doc/howto/log.overview
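If the image includes Busybox's syslogd (an assumption about this particular system), the log may live only in an in-memory ring buffer; in that case the logread applet will print it:
logread | tail -n 20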
Right, I fixed it, as I suspected there was a wrinkle I was missing:
exec /bin/login
I needed exec to hand control over to /bin/login rather than just call it.
So the telnet daemon is started thusly:
/usr/sbin/telnetd -l /usr/sbin/not_really_login
The contents of the not_really_login script are:
#!/bin/sh
echo -n "Killing old logins..."
killall -q login
echo "...done"
exec /bin/login
And all works as it should, on telnet connect we get this:
**MOTD Etc...**
Killing old logins......done
login: zero_cool
password:
And we can login as usual.
The only thing I haven't figured out is whether we can detect the exit status of /bin/login (i.e. whether we killed it) and print a message saying Too slow, sucker! or similar. TBH though, that's a nicety that can wait for a rainy day; I'm just happy our stuff can't be DoS'ed over Telnet anymore!

How can I prevent a daemon started over SSH from terminating at logout?

EDIT this is fixed. See my answer below.
I have a headless server running transmission-daemon on Angstrom Linux. I am able to SSH into the machine and invoke transmission-daemon via this init script; however, the process terminates as soon as I log out.
The command issued in the script is:
start-stop-daemon --chuid transmission --start --pidfile /var/run/transmission-daemon.pid --make-pidfile --exec /usr/local/bin/transmission-daemon --background -- -f
After starting the daemon via # /etc/init.d/transmission-daemon start, I can verify using ps that the process is owned by the user transmission (which is not the user I am logging in as via SSH).
I've tried every variation of the above command that I am aware of, including:
With and without the --background option for start-stop-daemon
Appending > /dev/null 2>&1 & to the start-stop-daemon command (source -- although there seems to be mixed results in that thread as to whether this is the right approach)
Appending > /dev/null 2>&1 & </dev/null & (source)
Adding & to the end of the command
Using nohup
None of these seems to work -- the result is always the same: the process exits immediately after I close the SSH session.
What can/should I do to keep the daemon running after I disconnect the SSH session?
Have you tried using GNU Screen?
It allows you to keep your session open even if you disconnect (but not if you exit).
It's a simple case of:
apt-get install screen
or
yum install screen
Since I cannot leave comments yet :), here is a good site that explains some functions of Screen, http://www.tecmint.com/screen-command-examples-to-manage-linux-terminals/
I use screens all the time, to do exactly what you are talking about. You open a screen, in the terminal, do what you need to do, then you can log off and your process will still be running.
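A minimal sketch of that workflow (the session name is arbitrary):
screen -S transmission
# inside the session, start the daemon as usual, e.g.:
/etc/init.d/transmission-daemon start
# detach with Ctrl-A d; the session survives logout
screen -r transmission
The last command reattaches to the named session the next time you log in.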
sudo loginctl enable-linger your_user
# This allows users who are not logged in to keep long-running
# services alive after the ssh session ends
This is now resolved. Here's the background: at some point prior to running into this problem, something happened to my $PATH (I don't recall what) and the location where transmission-daemon lived (/sbin) was removed. Under the mistaken impression that transmission-daemon was no longer present on the system, I installed again from an ipk. This is the state the system was in when I initially asked this question.
I don't know why it made a difference, but once I corrected my $PATH and started running transmission-daemon installed at /sbin, everything worked again. The daemon keeps running after I log out.

Run script with rc.local: script works, but not at boot

I have a node.js script which need to start at boot and run under the www-data user. During development I always started the script with:
su www-data -c 'node /var/www/php-jobs/manager.js'
I saw exactly what happened, and manager.js now works great. Searching SO, I found I had to place this in my /etc/rc.local. I also learned to point the output to a log file, to append 2>&1 to "redirect stderr to stdout", and that it should run as a daemon, so the last character is an &.
Finally, my /etc/rc.local looks like this:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
su www-data -c 'node /var/www/php-jobs/manager.js >> /var/log/php-jobs.log 2>&1 &'
exit 0
If I run this myself (sudo /etc/rc.local): yes, it works! However, after a reboot no node process is running, /var/log/php-jobs.log does not exist, and thus manager.js is not working. What is happening?
In this example of an rc.local script, I use I/O redirection on the very first line of execution to send everything to my own log file:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
exec 1>/tmp/rc.local.log 2>&1 # send stdout and stderr from rc.local to a log file
set -x # tell sh to display commands before execution
/opt/stuff/somefancy.error.script.sh
exit 0
On some Linux distributions (CentOS and RHEL, e.g.), /etc/rc.local is initially just a symbolic link to /etc/rc.d/rc.local. On those systems, if the symbolic link is broken and /etc/rc.local is a separate file, then changes to /etc/rc.local won't be seen at bootup; the boot process will run the version in /etc/rc.d. (They'll work if one runs /etc/rc.local manually, but won't be run at bootup.)
It sounds like on dimadima's system they are separate files, but /etc/rc.d/rc.local calls /etc/rc.local.
The symbolic link from /etc/rc.local to the 'real' one in /etc/rc.d can get lost if one moves rc.local to a backup directory and copies it back, or creates it from scratch, not realizing the original one in /etc was just a symbolic link.
I ended up with upstart, which works fine.
In Ubuntu I noticed there are 2 files. The real one is /etc/init.d/rc.local; it seems the other /etc/rc.local is bogus?
Once I modified the correct one (/etc/init.d/rc.local) it did execute just as expected.
You might also have made it work by specifying the full path to node. Furthermore, when you want to run a shell command as a daemon you should close stdin by adding 0<&- before the &.
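Putting both suggestions together, a sketch of the rc.local line might be (the node path is an assumption; check it with which node):
su www-data -c '/usr/local/bin/node /var/www/php-jobs/manager.js >> /var/log/php-jobs.log 2>&1 0<&- &'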
I had the same problem (on CentOS 7) and I fixed it by giving execute permissions to /etc/rc.local:
chmod +x /etc/rc.local
If you are using Linux in the cloud, you usually have no chance to touch the real hardware with your hands, so you never see the configuration interface that appears on first boot and of course cannot complete it. As a result, the firstboot service will keep standing in the way of rc.local. The solution is to disable firstboot by doing:
sudo chkconfig firstboot off
If you are not sure why your rc.local does not run, you can always check /etc/rc.d/rc, because that file will always run and call the other subsystems (e.g. rc.local).
I got my script to work by editing /etc/rc.local then issuing the following 3 commands.
sudo mv /filename /etc/init.d/
sudo chmod +x /etc/init.d/filename
sudo update-rc.d filename defaults
Now the script works at boot.
I am using CentOS 7.
$ cd /etc/profile.d
$ vim yourstuffs.sh
Type the following into the yourstuffs.sh script.
type whatever you want here to execute
export LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:$LD_LIBRARY_PATH
Save and reboot the OS.
I have used rc.local in the past, but I have learned from experience that the most reliable way to run your script at system boot time is to use the @reboot directive in crontab. For example:
@reboot path_to_the_start_up_script.sh
This is most probably caused by a missing or incomplete PATH environment variable.
If you provide full absolute paths to your executables (su and node) it will work.
It is my understanding that if you place your script in a certain runlevel directory, you should use ln -s to link the script into the level you want it to work in.
First make the script executable:
sudo chmod 755 /path/of/the/file.sh
Now add the script to rc.local, before exit 0:
sh /path/of/the/file.sh
Next make rc.local itself executable:
sudo chmod 755 /etc/rc.local
Then initialize rc.local with:
sudo /etc/init.d/rc.local start
Now reboot the system. Done.
I found that because I was using a network-oriented command in my rc.local, it would sometimes fail. I fixed this by putting sleep 3 at the top of the script. I don't know why, but it seems that when the script runs, the network interfaces aren't fully configured yet, and the pause simply allows some time for DHCP to complete. I don't fully understand it, but I suppose you could give it a try.
I had exactly the same issue: the script was running fine locally but not on reboot/power-on.
I resolved the issue by changing the file paths; basically you need to give complete paths in the script. When running locally a relative path can be resolved, but when running at reboot it will not be understood.
1. I do not recommend using root to run apps such as a node app. You can do it, but you may catch more exceptions.
2. rc.local normally runs as root, so if your script should run as another user such as www, you should make sure the PATH and the rest of the environment are OK for that user.
3. I find an easy way to run a service as a specific user is:
sudo -u www -i /the/path/of/your/script
Please refer to the sudo manual:
-i [command]
The -i (simulate initial login) option runs the shell specified by the password database entry of the target user as a login shell...
rc.local only runs at startup. If you want a script to execute at shutdown or reboot time instead, it needs to go into the rc0.d (or rc6.d) directory with a K99 prefix.

Best practice to run Linux service as a different user

Services default to starting as root at boot time on my RHEL box. If I recall correctly, the same is true for other Linux distros which use the init scripts in /etc/init.d.
What do you think is the best way to instead have the processes run as a (static) user of my choosing?
The only method I'd arrived at was to use something like:
su my_user -c 'daemon my_cmd &>/dev/null &'
But this seems a bit untidy...
Is there some bit of magic tucked away that provides an easy mechanism to automatically start services as other, non-root users?
EDIT: I should have said that the processes I'm starting in this instance are either Python scripts or Java programs. I'd rather not write a native wrapper around them, so unfortunately I'm unable to call setuid() as Black suggests.
On Debian we use the start-stop-daemon utility, which handles pid-files, changing the user, putting the daemon into background and much more.
I'm not familiar with RedHat, but the daemon utility that you are already using (which is defined in /etc/init.d/functions, btw.) is mentioned everywhere as the equivalent to start-stop-daemon, so either it can also change the uid of your program, or the way you do it is already the correct one.
If you look around the net, there are several ready-made wrappers that you can use. Some may even be already packaged in RedHat. Have a look at daemonize, for example.
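For reference, a minimal sketch of the Debian idiom (names and paths are placeholders):
start-stop-daemon --start --chuid my_user --pidfile /var/run/my_cmd.pid --make-pidfile --background --exec /usr/local/bin/my_cmd
The --chuid option is what changes the user before the daemon is executed.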
After looking at all the suggestions here, I've discovered a few things which I hope will be useful to others in my position:
1. hop is right to point me back at /etc/init.d/functions: the daemon function already allows you to set an alternate user:
daemon --user=my_user my_cmd &>/dev/null &
This is implemented by wrapping the process invocation with runuser - more on this later.
2. Jonathan Leffler is right: there is setuid in Python:
import os
os.setuid(501) # UID of my_user is 501
I still don't think you can setuid from inside a JVM, however.
3. Neither su nor runuser gracefully handle the case where you ask to run a command as the user you already are. E.g.:
[my_user@my_host]$ id
uid=500(my_user) gid=500(my_user) groups=500(my_user)
[my_user@my_host]$ su my_user -c "id"
Password: # don't want to be prompted!
uid=500(my_user) gid=500(my_user) groups=500(my_user)
To work around that behaviour of su and runuser, I've changed my init script to something like:
if [[ "$USER" == "my_user" ]]
then
daemon my_cmd &>/dev/null &
else
daemon --user=my_user my_cmd &>/dev/null &
fi
Thanks all for your help!
Some daemons (e.g. apache) do this by themselves by calling setuid().
You could use the setuid file bit to run the process as a different user.
Of course, the solution you mentioned works as well.
If you intend to write your own daemon, then I recommend calling setuid().
This way, your process can
Make use of its root privileges (e.g. open log files, create pid files).
Drop its root privileges at a certain point during startup.
Just to add some other things to watch out for:
Sudo in an init.d script is no good since it needs a tty ("sudo: sorry, you must have a tty to run sudo")
If you are daemonizing a java application, you might want to consider Java Service Wrapper (which provides a mechanism for setting the user id)
Another alternative could be su --session-command=[cmd] [user]
On a CentOS (Red Hat) virtual machine for an svn server:
edited /etc/init.d/svnserve
to change the pidfile to a location svn can write:
pidfile=${PIDFILE-/home/svn/run/svnserve.pid}
and added the option --user=svn:
daemon --pidfile=${pidfile} --user=svn $exec $args
The original pidfile was /var/run/svnserve.pid. The daemon did not start because only root could write there.
These all work:
/etc/init.d/svnserve start
/etc/init.d/svnserve stop
/etc/init.d/svnserve restart
Some things to watch out for:
As you mentioned, su will prompt for a password if you are already the target user
Similarly, setuid(2) will fail if you are already the target user (on some OSs)
setuid(2) does not install privileges or resource controls defined in /etc/limits.conf (Linux) or /etc/user_attr (Solaris)
If you go the setgid(2)/setuid(2) route, don't forget to call initgroups(3) -- more on this here
I generally use /sbin/su to switch to the appropriate user before starting daemons.
Why not try the following in the init script:
setuid $USER application_name
It worked for me.
I needed to run a Spring .jar application as a service, and found a simple way to run this as a specific user:
I changed the owner and group of my jar file to the user I wanted to run as.
Then symlinked this jar in init.d and started the service.
So:
#chown myuser:myuser /var/lib/jenkins/workspace/springApp/target/springApp-1.0.jar
#ln -s /var/lib/jenkins/workspace/springApp/target/springApp-1.0.jar /etc/init.d/springApp
#service springApp start
#ps aux | grep java
myuser 9970 5.0 9.9 4071348 386132 ? Sl 09:38 0:21 /bin/java -Dsun.misc.URLClassPath.disableJarChecking=true -jar /var/lib/jenkins/workspace/springApp/target/springApp-1.0.jar

How to make sure an application keeps running on Linux

I'm trying to ensure a script remains running on a development server. It collates stats and provides a web service, so it's supposed to persist, yet a few times a day it dies off for unknown reasons. When we notice, we just launch it again, but it's a pain in the rear and some users don't have permission (or the know-how) to launch it.
The programmer in me wants to spend a few hours getting to the bottom of the problem but the busy person in me thinks there must be an easy way to detect if an app is not running, and launch it again.
I know I could cron-script ps through grep:
ps -A | grep appname
But again, that's another hour of my life wasted on doing something that must already exist... Is there not a pre-made app that I can pass an executable (optionally with arguments) and that will keep a process running indefinitely?
In case it makes any difference, it's Ubuntu.
I have used a simple script with cron to make sure that the program is running. If it is not, then it will start it up. This may not be the perfect solution you are looking for, but it is simple and works rather well.
#!/bin/bash
#make-run.sh
#make sure a process is always running.
export DISPLAY=:0 #needed if you are running a simple gui app.
process=YourProcessName
makerun="/usr/bin/program"
if ps ax | grep -v grep | grep $process > /dev/null
then
exit
else
$makerun &
fi
exit
Then add a cron job every minute, or every 5 minutes.
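Assuming the script was saved as /usr/local/bin/make-run.sh and made executable, the crontab entry (added with crontab -e) could be:
*/5 * * * * /usr/local/bin/make-run.sh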
Monit is perfect for this :)
You can write simple config files which tell monit to watch e.g. a TCP port, a PID file etc
monit will run a command you specify when the process it is monitoring is unavailable/using too much memory/is pegging the CPU for too long/etc. It will also pop out an email alert telling you what happened and whether it could do anything about it.
We use it to keep a load of our websites running while giving us early warning when something's going wrong.
-- Your faithful employee, Monit
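For illustration, a monit config stanza can be as short as this (the pidfile path, port, and init script are assumptions for a hypothetical app):
check process appname with pidfile /var/run/appname.pid
start program = "/etc/init.d/appname start"
stop program = "/etc/init.d/appname stop"
if failed port 8080 protocol http then restart
Monit then polls the process (and, in this sketch, the port) on every cycle and restarts it when a check fails.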
Notice: Upstart is in maintenance mode and was abandoned by Ubuntu, which now uses systemd. One should check the systemd manual for details on how to write a service definition.
Since you're using Ubuntu, you may be interested in Upstart, which has replaced the traditional sysV init. One key feature is that it can restart a service if it dies unexpectedly. Fedora has moved to upstart, and Debian has it in experimental, so it may be worth looking into.
This may be overkill for this situation though, as a cron script takes 2 minutes to implement.
#!/bin/bash
if [[ ! `pidof -s yourapp` ]]; then
invoke-rc.d yourapp start
fi
If you are using a systemd-based distro such as Fedora and recent Ubuntu releases, you can use systemd's "Restart" capability for services. It can be setup as a system service or as a user service if it needs to be managed by, and run as, a particular user, which is more likely the case in OP's particular situation.
The Restart option takes one of no, on-success, on-failure, on-abnormal, on-watchdog, on-abort, or always.
To run it as a user, simply place a file like the following into ~/.config/systemd/user/something.service:
[Unit]
Description=Something
[Service]
ExecStart=/path/to/something
Restart=on-failure
[Install]
WantedBy=graphical.target
then:
systemctl --user daemon-reload
systemctl --user [status|start|stop|restart] something
No root privilege / modification of system files needed, no cron jobs needed, nothing to install, flexible as hell (see all the related service options in the documentation).
See also https://wiki.archlinux.org/index.php/Systemd/User for more information about using the per-user systemd instance.
From cron I have used "killall -0 programname || /etc/init.d/programname start". kill will error if the process doesn't exist; if it does exist, it delivers a null signal to the process (which the kernel will ignore and not bother passing on).
This idiom is simple to remember (IMHO). Generally I use this while I'm still trying to discover why the service itself is failing. IMHO a program shouldn't just disappear unexpectedly :)
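As a crontab line, the idiom looks like this (programname is a placeholder):
* * * * * killall -0 programname || /etc/init.d/programname start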
Put your run in a loop, so when it exits, it runs again:
while true; do ./myapp; done
I couldn't get Chris Wendt's solution to work for some reason, and it was hard to debug. This one is pretty much the same but easier to debug, and it excludes bash from the pattern matching. To debug, just run: bash /root/makerun-mysql.sh. In the following example for mysql-server, just replace the values of the process and makerun variables with those for your process.
Create a bash script like this (nano /root/makerun-mysql.sh):
#!/bin/bash
process="mysql"
makerun="/etc/init.d/mysql restart"
if ps ax | grep -v grep | grep -v bash | grep --quiet $process
then
printf "Process '%s' is running.\n" "$process"
exit
else
printf "Starting process '%s' with command '%s'.\n" "$process" "$makerun"
$makerun
fi
exit
Make sure it's executable by adding proper file permissions (i.e. chmod 700 /root/makerun-mysql.sh)
Then add this to your crontab (crontab -e):
# Keep processes running every 5 minutes
*/5 * * * * bash /root/makerun-mysql.sh
The supervise tool from daemontools would be my preference - but then everything Dan J Bernstein writes is my preference :)
http://cr.yp.to/daemontools/supervise.html
You have to create a particular directory structure for your application startup script, but it's very simple to use.
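A minimal sketch, assuming svscan is already watching /service and your binary lives at /usr/local/bin/myapp:
mkdir -p /service/myapp
printf '#!/bin/sh\nexec /usr/local/bin/myapp 2>&1\n' > /service/myapp/run
chmod +x /service/myapp/run
svscan picks up the new directory and starts supervise, which re-runs ./run whenever the process exits.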
First of all, how do you start this app? Does it fork itself to the background? Is it started with nohup .. & etc.? If it's the latter, check nohup.out to see why it died; if it's the former, build in logging.
As for your main question: you could cron it, or run another process in the background (not the best choice) and use pidof in a bash script, easy enough:
if ! pidof -s app > /dev/null; then
nohup app &
fi
You could make it a service launched from inittab (although some Linuxes have moved on to something newer in /etc/event.d). These built in systems make sure your service keeps running without writing your own scripts or installing something new.
It's a job for a DMD (daemon monitoring daemon). There are a few around, but I usually just write a script that checks if the daemon is running and runs it if not, and put it in cron to run every minute.
Check out 'nanny' referenced in Chapter 9 (p197 or thereabouts) of "Unix Hater's Handbook" (one of several sources for the book in PDF).
A nice, simple way to do this is as follows:
Write your server to die if it can't listen on the port it expects
Set a cronjob to try to launch your server every minute
If it isn't running it'll start, and if it is running it won't. In any case, your server will always be up.
I think a better solution is to test the functionality, too. For example, if you have to monitor apache, it is not enough to test whether "apache" processes exist on the system.
If you want to test whether apache is actually OK, then try to download a simple web page and test whether your unique code is in the output.
If not, kill apache with -9 and then do a restart. And send a mail to root (which is a forwarded mail address to the people responsible for the company/server/project).
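A rough sketch of that idea (the URL, marker string, init script path, and an available mail command are all assumptions):
#!/bin/sh
# fetch a known page and look for a unique marker in the output
if ! curl -s http://localhost/health.html | grep -q "MY_UNIQUE_CODE"; then
killall -9 apache2
/etc/init.d/apache2 start
echo "apache restarted on $(hostname)" | mail -s "apache restart" root
fi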
It's even simpler:
#!/bin/bash
export DISPLAY=:0
process=processname
makerun="/usr/bin/processname"
if ! pgrep $process > /dev/null
then
$makerun &
fi
You have to remember though to make sure processname is unique.
One can install a minutely monitoring cronjob like this:
crontab -l > crontab; echo -e '* * * * * export DISPLAY=":0.0" && for app in "eiskaltdcpp-qt" "transmission-gtk" "nicotine"; do ps aux|grep -v grep|grep "$app"; done || "$app" &' >> crontab; crontab crontab
The disadvantage is that the app names you enter have to be found in the ps aux|grep "appname" output and at the same time be launchable using that name: "appname" &
You can also use the pm2 library. pm2 is distributed through npm, so for a Node app you can install it globally with:
sudo npm install pm2 -g
Then you can run the service.
For a generic Linux service:
sudo pm2 start [service_name]
For a Node app:
pm2 start index.js
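To have pm2 itself bring your apps back after a reboot, it provides two commands (the exact hook generated depends on your init system):
pm2 startup
pm2 save
pm2 startup generates an init/systemd startup hook, and pm2 save records the currently running apps so they are restored at boot.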
