How to run a cron job as a non-root user and log the job's output? - linux

Docker best practices state:
If a service can run without privileges, use USER to change to a non-root user.
In the case of cron, that doesn't seem practical, as cron needs root privileges to function properly. The executable that cron runs, however, does NOT need root privileges. Therefore, I run cron itself as the root user, but register the job that runs my executable (in this case, a simple Python FTP download script I wrote) under a non-root user via the crontab -u <user> command.
Community experience with running cron inside Docker still seems to be in its infancy, but there are some pretty good solutions out there. Utilizing lessons gleaned from this and this great post, I arrived at a Dockerfile that looks something like this:
FROM python:3.7.4-alpine
RUN adduser -S riptusk331
WORKDIR /home/riptusk331
... boilerplate not necessary to post here ...
COPY mycron /etc/cron.d/mycron
RUN chmod 644 /etc/cron.d/mycron
RUN crontab -u riptusk331 /etc/cron.d/mycron
CMD ["crond", "-f", "-l", "0"]
and the mycron file is just a simple python execution running every minute
* * * * * /home/riptusk331/venv/bin/python3 /home/riptusk331/ftp.py
This works perfectly fine, but I am unsure of how exactly logging is being handled here. I do not see anything saved in /var/log/cron. I can see the output of cron and ftp.py on my terminal, as well as in the container logs if I pull it up in Kitematic. But I have no idea what is actually going on here.
So my first question(s) are: how is logging & output being handled here (without any redirects after the cron job), and is this implementation method ok & secure?
VonC's answer to this post suggests appending > /proc/1/fd/1 2>/proc/1/fd/2 to your cron job to redirect output to Docker's stdout and stderr. This is where I both get a little confused, and run into trouble.
My crontab file now looks like this
* * * * * /home/riptusk331/venv/bin/python3 /home/riptusk331/ftp.py > /proc/1/fd/1 2>/proc/1/fd/2
The output without any redirection appeared to be going to stdout/stderr already, but I am not entirely sure. I just know it was showing up on my terminal. So why would this redirect be needed?
When I add this redirect, I run into permission issues. Recall that this crontab entry is run as the non-root user riptusk331. Because of this, I don't have root access and get the following error:
/bin/ash: can't create /proc/1/fd/1: Permission denied

The Alpine base images are based on a compact tool set called BusyBox, and when you run crond here you're getting the BusyBox cron and not any other implementation. Its documentation is a little sparse, but if you look at the crond source (in C) what you'll find is that there is not any redirection at all when it goes to run a job (see the non-sendmail version of start_one_job); the job's stdout and stderr are crond's stdout and stderr. In Docker, since crond is the container's primary process, that in turn becomes the container's output stream.
Anything that shows up in docker logs definitionally went to the stdout or stderr of the container's main process. If this cron implementation wrote your job's output directly there, there's nothing wrong or insecure about taking advantage of that.
In heavier-weight container orchestration systems, there is some way to run a container on a schedule (Kubernetes CronJobs, Nomad periodic jobs). You might find it easier and more consistent with these systems to set up a container that runs your job once and then exits, and then to set up the host's cron to run your container (necessarily, as root).
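For example, a host-side crontab entry along these lines would run such a one-shot container every minute (a sketch; the image name ftp-job is an assumption, and the image is presumed to run ftp.py once and exit):
# host /etc/crontab entry, executed by the host's cron as root
* * * * * root docker run --rm ftp-job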

You need to allow CAP_SETGID to run crond as a non-root user. This can be a security risk if it is granted to the whole BusyBox binary, but you can use the dcron package instead of BusyBox's built-in crond and set CAP_SETGID on just that program. Here is what you need to add for Alpine, using riptusk331 as the running user:
USER root
# crond needs root, so install dcron and cap package and set the capabilities
# on dcron binary https://github.com/inter169/systs/blob/master/alpine/crond/README.md
RUN apk add --no-cache dcron libcap && \
    chown riptusk331:riptusk331 /usr/sbin/crond && \
    setcap cap_setgid=ep /usr/sbin/crond
USER riptusk331
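Putting this together with the original Dockerfile, the tail end might look roughly like the sketch below. This is only a sketch: the crontab is still installed as root, and the chown of the spool directory (whose path is an assumption and may differ for your dcron build) is there so that crond, now running unprivileged, can read the installed crontab.
RUN apk add --no-cache dcron libcap && \
    chown riptusk331:riptusk331 /usr/sbin/crond && \
    setcap cap_setgid=ep /usr/sbin/crond && \
    chown -R riptusk331:riptusk331 /var/spool/cron  # spool path is an assumption
COPY mycron /etc/cron.d/mycron
RUN crontab -u riptusk331 /etc/cron.d/mycron
USER riptusk331
CMD ["crond", "-f", "-l", "0"]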

Related

Is there a way to make crontab run a gnu screen session?

I have a Discord bot running on a Raspberry Pi that I need to restart every day. I'm trying to do this through crontab, but with no luck.
It manages to kill the active screen processes but never starts a new instance; at least, not one that I can see with "screen -ls", and I can tell it isn't creating one I can't see, because the bot itself never comes online.
Here is the script:
#!/bin/bash
sudo pkill screen
sleep 2
screen -dmS discordBot
sleep 2
screen -S "discordBot" -X stuff "node discordBot/NEWNEWNEWN\n"
Here is also the crontab:
0 0 * * * /bin/bash /home/admin/discordBot/script.sh
Is it possible to have crontab run a screen session? And if so how?
Previously I tried putting the screen command straight into cron, but now I have it in a bash script instead.
If I run the script in the terminal it works perfectly, it’s just cron where it fails. Also replacing "screen" with the full path "/usr/bin/screen" does not change anything.
So the best way of doing it, without investigating the underlying problem, would be to create a systemd service and put a restart command into cron.
 
/etc/systemd/system/mydiscordbot.service:
[Unit]
Description=A very simple discord bot
[Service]
Type=simple
ExecStart=node /path/to/my/discordBot.js
[Install]
WantedBy=multi-user.target
Now you can run your bot with systemctl start mydiscordbot and view your logs with journalctl --follow -u mydiscordbot.
Now you only need to put
45 2 * * * systemctl restart mydiscordbot
into root's crontab and you should be good to go.
You probably should also write the logs into a logfile, maybe /var/log/mydiscordbot.log, but how and if you do that is up to you.
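One way to do that, assuming your systemd is recent enough (roughly v240+) to support append: targets, is to add something like this to the [Service] section (a sketch):
StandardOutput=append:/var/log/mydiscordbot.log
StandardError=append:/var/log/mydiscordbot.log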
OLD/WRONG ANSWER:
cron runs with a minimal environment, and depending on your OS, /usr/bin/ may not be in the $PATH variable. screen is most likely located at /usr/bin/screen, so cron can't run it because it can't find the screen binary. Try replacing screen in your script with /usr/bin/screen.
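If PATH really were the culprit, another common fix is to set it explicitly at the top of the crontab, for example (a sketch reusing the crontab line from the question):
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 0 * * * /bin/bash /home/admin/discordBot/script.sh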
But the other question here is: Why does your discord bot need to be restarted every day. I run multiple bots with 100k+ users, and they don't need to be restarted at all. Maybe you should open a new question with the error and some code snippets.
I don't know what OS your RPi is running, but I had a similar issue earlier today trying to get VMs on a server to launch a terminal and run a Python script with crontab. I found a very easy solution to restart it automatically on crashes and to keep it running in the background: don't rely on crontab or an init service. If your RPi has an X server running, or anything that can be launched on session start, there is a simple shell solution. Make a bash script like
while :; do
    /my/script/to/keep/running/here -foo -bar -baz >> /var/log/myscriptlog 2>&1
done
and then start it from .xprofile or some similar startup mechanism, like
/path/to/launcher.sh &
to have it run in the background. It will restart automatically every time it closes if started from something like .xprofile, .xinitrc, or anything run at startup.
Then maybe make a cron job to restart the Raspberry Pi or whatever system is running the script, but this script will restart the service whenever it closes anyway. Maybe you can put that cron job in a script, but I don't think it would launch the GUI.
Something else you can try for launching a GUI terminal from a cron job, though it didn't work for my situation on these custom Linux VMs (and I have read it is a security risk), is to specify the display:
* * * * * DISPLAY=:0 /your/gui/stuff/here
but you would need to make sure the user has the right permissions, and it's considered an unsafe hack to even do this, as far as I have read.
For my issue, I had to launch a terminal that stayed open, change to the directory of a Python script, and run the script, so that the local files in that directory would be picked up by the Python script. Here is a rough example of the "launcher.sh" I called from the startup method this strange Linux distro used, lol.
#!/bin/sh
while :; do
    /usr/bin/urxvt -e /bin/bash -c "cd /path/to/project && python loader.py"
done
Also check this source for process management
https://mywiki.wooledge.org/ProcessManagement

Jenkins: bash script run with nohup is neither working nor writing anything to the log

BACKGROUND
I would like to explain the scenario properly here.
I am running Jenkins 2.73.3 on my cloud server with Ubuntu 16.04.
Currently, there are 3 users on the server:
root
develop-user (which I created for various purposes such as testing, deploying, etc.)
jenkins (which was created by Jenkins, of course; I also added this jenkins user to the sudoers group)
PROBLEM
I have a bash script that I am calling from a build step in Jenkins. Within this bash script, there is a nohup command for calling a separate deploy script in the background, such as:
#!/bin/bash
nohup deployScript.sh > $WORKSPACE/app.log 2>&1 & echo $! > save_pid.txt
After the build step is completed, I see that an id is generated inside save_pid.txt, but app.log is surprisingly empty. I can't kill any processes with this generated pid, so that means there wasn't any process created in the first place. Also, the deployScript.sh does not seem to have any effect at all. It's just not working. This happens every time I run the build in Jenkins. I can assure you that there is nothing wrong with the deployScript.sh.
I have tried running this bash script with the develop-user manually without Jenkins and it works perfectly. Contents are written to the log file and also I can use the generated pid to kill the process. I have also tested this in my local environment and it works.
QUESTION
I have been looking at this for days. What might be the root cause here? Where can I look to see some logs or other info? How is the pid generated while the log file stays empty? Is it a permission issue with the jenkins user? Please help.
You can use the line below inside the Execute Shell step in Jenkins to run it in the background without the process being killed.
BUILD_ID=dontKillMe <command> &
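Applied to the script from the question, the Execute Shell step might then look something like this (a sketch; it assumes deployScript.sh sits in the job's workspace, and BUILD_ID=dontKillMe tells Jenkins' ProcessTreeKiller to leave the background process alone):
BUILD_ID=dontKillMe nohup ./deployScript.sh > $WORKSPACE/app.log 2>&1 &
echo $! > save_pid.txt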
So, it turned out to be a permission issue, and the script also wasn't executable, as pointed out in the comments above.
So, now the bash script looks like below:
#!/bin/bash
sudo chmod a+x deployScript.sh
sudo nohup deployScript.sh > $WORKSPACE/app.log 2>&1 & echo $! > save_pid.txt
This works.

Synology - Cron job

I'm trying to get cron jobs or the task scheduler working, but I cannot figure out why my script is not being taken into account.
I'm trying to simply archive a folder with:
tar -cvf /volume1/NetBackup/Backups/Monday.tgz /volume1/NetBackup/Backups/ns3268116.ovh.net/
Each time, the script starts working but never finishes the job. With either the task scheduler or crontab, a file Monday.tgz is created in the folder /volume1/NetBackup/Backups/, but this file is only 1024 bytes.
Synology Cron is really fussy.
Here are my own personal notes for Synology DS413j, DSM 5.2:
Hand edit /etc/crontab as root, crontab -e isn't available
Ensure you use tabs not spaces to separate the columns
Your crontab changes may not survive a reboot if there are syntax problems
The who column in crontab may not be reliable. Use root in the who column and /bin/su -c '<command>' <username> to run as another user (see the example line after this list)
Remember that it uses ash, not bash, so check for bashisms; e.g. use >> /path/to/logfile 2>&1, not &>> /path/to/logfile
It doesn't support 'MAILTO='
You need to restart crond (synoservicectl --reload crond) for the new crontab to take effect
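For example, an /etc/crontab line following these notes might look like this (a sketch; the fields must be separated by real tabs, and the target user admin plus the log path are assumptions):
0  3  *  *  1  root  /bin/su -c '/bin/tar -cvf /volume1/NetBackup/Backups/Monday.tgz /volume1/NetBackup/Backups/ns3268116.ovh.net/ >> /volume1/NetBackup/backup.log 2>&1' admin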
You may try adding some diagnostics to it. For instance:
Add MAILTO into the crontab file (at the top of crontab -e) to receive cron errors by email:
MAILTO=username@domain.com
Redirect output of your tar command to the file:
your command > ~/log.txt 2>&1
Check the cron log and look for anomalies. For instance (it may depend on your configuration):
/var/log/cron.log
You may also try searching through /var/log/messages at the time of your cron job.
Is volume1 a resource on a remote host? If so, it is worth checking that part of the system.
I agree about the really nagging nature of Crontab on Synology Linux OSs.
I would certainly suggest creating the desired job as a .sh shell script and calling it via a cron task added through the GUI, as suggested here.
As of today (March 2017), this is the best method I have found, since working with crontab via the CLI is nearly always a pain.

Shell script only starting applications when used through ssh

What can cause .sh scripts to work fine through an SSH shell, but not when executed through either PHP or crontab?
I have a VPS that I run game servers on, but in order to make it maintainable, I am planning on automating many of the tedious processes (like setting up or deleting a server) and making important features (like starting and stopping servers) easily accessible to the people who actually need them.
Now, when I made the shell scripts and tested them, they worked absolutely fine. startserver started the server, restartserver restarted it, etc. But when run from PHP, or - as I later figured out - crontab, starting servers magically does not work. Stopping them, checking whether they are running, updating, and all other features worked as intended, but starting a server just did not do anything. It just returned 0 while printing nothing.
For example, here is an example of a script which works in either case: (statusserver.sh)
/sbin/start-stop-daemon -v -t --start --exec ~mta/servers/$1/files/mta-server -- -d
And here is one which does not work in any case: (startserver.sh)
/sbin/start-stop-daemon -v --start --exec ~mta/servers/$1/files/mta-server -- -d
The only difference is that statusserver.sh has "-t", which will only tell you if doing the same command without -t will actually be successful. And executing statusserver.sh like so:
sudo -u mta ~mta/sh/statusserver.sh test
Indeed does work, printing something along the lines of "Would start ~mta/servers/test/files/mta-server -d". But doing this:
sudo -u mta ~mta/sh/startserver.sh $2
Does absolutely nothing. It does not print anything, and it actually returns 0. (which is supposed to mean the operation was successful)
Now for the fun part: When the server is already running, startserver.sh will do what it is supposed to do: Say that the server is already running, and returning an error code. (Because start-stop-daemon is kind enough to do that for me) But it flat out refuses to launch anything.
Replacing start-stop-daemon with something like:
sudo -u mta ~mta/servers/test/files/mta-server -d
Does exactly the same thing: It will just refuse to run, while still returning 0.
Oh by the way, it's not a sudo problem. Of that I am quite sure, since the following works fine too
sudo -u web1 sudo -u mta ~mta/scripts/startserver.sh test
So back to my question: What can cause Linux, Shell, Bash or whatever to flat out refuse to start an application when run through either PHP or crontab, while happily accepting it when launched through SSH? Is there any setting I need to switch? Any package that can be blocking up what I want to do? Any other thing I am just missing?
Look into using sudo.
Set up /etc/sudoers (using visudo) for the user that Apache runs as (usually the 'nobody' or 'apache' user). Grant sudo access to the commands you want to run, with the NOPASSWD option.
In your PHP script, use exec() to execute the commands to start/stop daemons and prefix the commands with the sudo command.
Here is an article about sudo:
http://www.cyberciti.biz/tips/allow-a-normal-user-to-run-commands-as-root.html
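A sketch of what such a sudoers entry could look like; the web-server user name (www-data here) and the expanded script path are assumptions, so adjust them to your system:
# added with visudo: lets the web-server user run the start script as mta without a password
www-data ALL=(mta) NOPASSWD: /home/mta/sh/startserver.sh
From PHP you would then call the script via exec(), prefixed with sudo -u mta, as described above.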
As I think Justin was touching on, but didn't say specifically, it would seem the problem of not being able to run the script is that the apache user account (which is generally quite limited on purpose) can't see into the user's home directory because of the permissions. Generally only the user and root can see into their own home directory. You can do a few things: use sudo to run the script in the home directory, move it out of the user's home directory, or possibly change permissions on the scripts/home directories so they can be run from the user's home directory by apache.

Run script with rc.local: script works, but not at boot

I have a node.js script which need to start at boot and run under the www-data user. During development I always started the script with:
su www-data -c 'node /var/www/php-jobs/manager.js'
I saw exactly what happened; manager.js now works great. Searching SO, I found I had to place this in my /etc/rc.local. Also, I learned to point the output to a log file, to append 2>&1 to "redirect stderr to stdout", and that it should run as a daemon, so the last character is an &.
Finally, my /etc/rc.local looks like this:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
su www-data -c 'node /var/www/php-jobs/manager.js >> /var/log/php-jobs.log 2>&1 &'
exit 0
If I run this myself (sudo /etc/rc.local): yes, it works! However, if I perform a reboot, no node process is running, /var/log/php-jobs.log does not exist, and thus manager.js does not work. What is happening?
In this example of an rc.local script, I use I/O redirection at the very first line of execution to send everything to my own log file:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
exec 1>/tmp/rc.local.log 2>&1 # send stdout and stderr from rc.local to a log file
set -x # tell sh to display commands before execution
/opt/stuff/somefancy.error.script.sh
exit 0
On some linux's (Centos & RH, e.g.), /etc/rc.local is initially just a symbolic link to /etc/rc.d/rc.local. On those systems, if the symbolic link is broken, and /etc/rc.local is a separate file, then changes to /etc/rc.local won't get seen at bootup -- the boot process will run the version in /etc/rc.d. (They'll work if one runs /etc/rc.local manually, but won't be run at bootup.)
Sounds like on dimadima's system, they are separate files, but /etc/rc.d/rc.local calls /etc/rc.local
The symbolic link from /etc/rc.local to the 'real' one in /etc/rc.d can get lost if one moves rc.local to a backup directory and copies it back or creates it from scratch, not realizing the original one in /etc was just a symbolic link.
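A quick way to check for this situation on RH/CentOS-style systems (a sketch):
ls -l /etc/rc.local                      # should be a symlink to /etc/rc.d/rc.local
# if it has become a regular file, merge your changes into /etc/rc.d/rc.local
# and then restore the link:
ln -sf /etc/rc.d/rc.local /etc/rc.local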
I ended up with upstart, which works fine.
In Ubuntu I noticed there are 2 files. The real one is /etc/init.d/rc.local; it seems the other /etc/rc.local is bogus?
Once I modified the correct one (/etc/init.d/rc.local) it did execute just as expected.
You might also have made it work by specifying the full path to node. Furthermore, when you want to run a shell command as a daemon, you should close stdin by adding 0<&- before the &.
I had the same problem (on CentOS 7), and I fixed it by giving execute permissions to /etc/rc.local:
chmod +x /etc/rc.local
If you are using Linux in the cloud, then you usually don't get a chance to touch the real hardware with your hands, so you don't see the configuration interface when booting for the first time and of course cannot complete it. As a result, the firstboot service will always get in the way of rc.local. The solution is to disable firstboot by doing:
sudo chkconfig firstboot off
If you are not sure why your rc.local does not run, you can always check the /etc/rc.d/rc file, because that file always runs and calls the other subsystems (e.g. rc.local).
I got my script to work by editing /etc/rc.local then issuing the following 3 commands.
sudo mv /filename /etc/init.d/
sudo chmod +x /etc/init.d/filename
sudo update-rc.d filename defaults
Now the script works at boot.
I am using CentOS 7.
$ cd /etc/profile.d
$ vim yourstuffs.sh
Type the following into the yourstuffs.sh script.
# type whatever you want here to execute
export LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:$LD_LIBRARY_PATH
Save and reboot the OS.
I have used rc.local in the past, but I have learned from experience that the most reliable way to run your script at system boot time is to use the @reboot directive in crontab. For example:
@reboot path_to_the_start_up_script.sh
This is most probably caused by a missing or incomplete PATH environment variable.
If you provide full absolute paths to your executables (su and node) it will work.
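Applied to the line from the question, that might look something like this (a sketch; /bin/su and /usr/bin/node are the usual Debian/Ubuntu locations, so verify them with which su and which node):
/bin/su www-data -c '/usr/bin/node /var/www/php-jobs/manager.js >> /var/log/php-jobs.log 2>&1 &'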
It is my understanding that if you place your script in a certain runlevel, you should use ln -s to link the script to the level you want it to work in.
First make the script executable using
sudo chmod 755 /path/of/the/file.sh
Now add the script to rc.local with the line
sh /path/of/the/file.sh
before exit 0 in rc.local.
Next, make rc.local itself executable with
sudo chmod 755 /etc/rc.local
Then, to initialize rc.local, use
sudo /etc/init.d/rc.local start
This will initiate rc.local.
Now reboot the system.
Done.
I found that because I was using a network-dependent command in my rc.local, it would sometimes fail. I fixed this by putting sleep 3 at the top of my script. I don't know why, but it seems that when the script runs, the network interfaces aren't properly configured yet, and this just allows some time for the DHCP server or something. I don't fully understand it, but I suppose you could give it a try.
I had exactly the same issue: the script ran fine locally, but not on reboot/power-on.
I resolved the issue by changing the file paths. Basically, you need to give the complete path in the script. When running locally, the file can be found with a relative path, but when running at reboot, a relative path will not be understood.
1. I do not recommend using root to run apps such as a node app. You can do it, but you may hit more exceptions.
2. rc.local normally runs as the root user, so if your script should run as another user such as www, you should make sure the PATH and other environment variables are OK.
3. I find an easy way to run a service as a specific user is:
sudo -u www -i /the/path/of/your/script
Please refer to the sudo manual:
-i [command]
The -i (simulate initial login) option runs the shell specified by the password database entry of the target user as a login shell...
rc.local only runs on startup. If you reboot and want the script to execute, it needs to go into the rc.0 file starting with the K99 prefix.

Resources