I've heard so much about how each process should run as its own unprivileged user, yet the crond process always runs as root. My question is: should it? If so, is it considered good practice to have the crond process run as root, but have individual cron jobs always run as an unprivileged user?
Docker best practices state:
If a service can run without privileges, use USER to change to a non-root user.
In the case of cron, that doesn't seem practical, as cron needs root privileges to function properly. The executable that cron runs, however, does NOT need root privileges. Therefore, I run cron itself as the root user, but install the crontab that runs the executable (in this case, a simple Python FTP download script I wrote) under a non-root user via the crontab -u <user> command.
Community experience with running cron under Docker still seems to be in its infancy, but there are some pretty good solutions out there. Utilizing lessons gleaned from two great posts (this and this), I arrived at a Dockerfile that looks something like this:
FROM python:3.7.4-alpine
RUN adduser -S riptusk331
WORKDIR /home/riptusk331
... boilerplate not necessary to post here ...
COPY mycron /etc/cron.d/mycron
RUN chmod 644 /etc/cron.d/mycron
RUN crontab -u riptusk331 /etc/cron.d/mycron
CMD ["crond", "-f", "-l", "0"]
and the mycron file is just a simple Python execution running every minute:
* * * * * /home/riptusk331/venv/bin/python3 /home/riptusk331/ftp.py
This works perfectly fine, but I am unsure of how exactly logging is being handled here. I do not see anything saved in /var/log/cron. I can see the output of cron and ftp.py on my terminal, as well as in the container logs if I pull it up in Kitematic. But I have no idea what is actually going on here.
So my first questions are: how are logging and output being handled here (without any redirect after the cron job), and is this implementation method OK and secure?
VonC's answer to this post suggests appending > /proc/1/fd/1 2>/proc/1/fd/2 to your cron job to redirect its output to Docker's stdout and stderr. This is where I both get a little confused and run into trouble.
My crontab file now looks like this:
* * * * * /home/riptusk331/venv/bin/python3 /home/riptusk331/ftp.py > /proc/1/fd/1 2>/proc/1/fd/2
The output without any redirection appeared to be going to stdout/stderr already, but I am not entirely sure. I just know it was showing up on my terminal. So why would this redirect be needed?
When I add this redirect, I run into permission issues. Recall that this crontab is installed for the non-root user riptusk331. Because of this, I don't have root access and get the following error:
/bin/ash: can't create /proc/1/fd/1: Permission denied
The Alpine base images are built around a compact tool set called BusyBox, and when you run crond here you're getting the BusyBox cron, not any other implementation. Its documentation is a little sparse, but if you look at the crond source (in C), what you'll find is that there is no redirection at all when it goes to run a job (see the non-sendmail version of start_one_job); the job's stdout and stderr are crond's stdout and stderr. In Docker, since crond is the container's primary process, that in turn becomes the container's output stream.
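If you want to confirm which implementation you're running, one quick check (a hedged example; the banner text varies by BusyBox version) is to ask the binary for its usage message:
crond --help 2>&1 | head -n 1
On a BusyBox-based image this typically prints a line like "BusyBox v1.30.1 ... multi-call binary".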
Anything that shows up in docker logs by definition went to the stdout or stderr of the container's main process. If this cron implementation writes your job's output directly there, there's nothing wrong or insecure about taking advantage of that.
In heavier-weight container orchestration systems, there is some way to run a container on a schedule (Kubernetes CronJobs, Nomad periodic jobs). You might find it easier and more consistent with these systems to set up a container that runs your job once and then exits, and then to set up the host's cron to run your container (necessarily, as root).
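For example, a single entry in the host's root crontab can replace the in-container crond entirely. This is only a sketch: myjob is a hypothetical image whose default command runs the script once and exits, and you may need the full path to docker, since cron's default PATH is minimal.
* * * * * docker run --rm myjob
The --rm flag removes the stopped container after each run, so a minutely schedule doesn't accumulate dead containers.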
You need the CAP_SETGID capability to run crond as a non-root user. Granting it to the whole BusyBox binary would be a security risk, but you can install the dcron package instead of using BusyBox's built-in crond and set CAP_SETGID on that one program. Here is what you need to add for Alpine, using riptusk331 as the running user:
USER root
# BusyBox crond needs root, so install dcron and the libcap package and
# set the capability on the dcron binary:
# https://github.com/inter169/systs/blob/master/alpine/crond/README.md
RUN apk add --no-cache dcron libcap && \
    chown riptusk331:riptusk331 /usr/sbin/crond && \
    setcap cap_setgid=ep /usr/sbin/crond
USER riptusk331
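As a sanity check, you can confirm the capability landed on the binary from a shell in the container; getcap ships alongside setcap in the libcap package installed above (exact output formatting varies by libcap version):
getcap /usr/sbin/crond
This should report cap_setgid on /usr/sbin/crond.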
I'm fiddling with libfuse, and I find it useful to have a make mount rule that starts the userspace FUSE daemon and a make umount rule that unmounts the directory. Unfortunately, if I start the daemon in the make mount rule, it gets killed as soon as make exits (when the rule completes).
Is it possible to spawn a daemon from a make rule such that the daemon persists after make exits?
Make is the wrong tool for the job here. It shouldn't be used as a supervisor for other processes, and anything it starts should end when it does.
That being said, you can easily unhitch processes so that kill signals are not propagated when make terminates. Running your FUSE daemon prefixed with nohup … should stop the signals from reaching the child process, and it will go on its merry way.
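A minimal Makefile sketch of that idea, under stated assumptions: the daemon binary is ./myfs, it runs in the foreground with -f, and the mountpoint is mnt (all hypothetical names; recipe lines must start with a literal tab):
mount:
	mkdir -p mnt
	# nohup makes the daemon ignore SIGHUP and detaches it from the terminal;
	# & backgrounds it so the rule's shell can exit without taking it down.
	nohup ./myfs -f mnt > fuse.log 2>&1 &

umount:
	fusermount -u mnt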
Is it possible to fork a process and run the program in the child as a normal user when the parent has sudo rights? Or, conversely, with sudo rights when the parent runs as a normal user?
If your process runs as root, then after a fork() you can call setgid() and setuid() to run as a normal user in the child process, without affecting the parent process, which continues to run as root. The reverse doesn't work: an unprivileged process cannot raise its own privileges this way; it would have to invoke a setuid-root helper or sudo instead.
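Here is a minimal C sketch of that pattern. It assumes a root parent and an existing nobody account, and runs /usr/bin/id in the child just to show the effective user; adjust the names to your system.
#include <pwd.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    struct passwd *pw = getpwnam("nobody");   /* target unprivileged user */
    if (pw == NULL) {
        perror("getpwnam");
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: drop the group first, then the user -- once setuid()
         * succeeds we no longer have the privilege to change the gid. */
        if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
            perror("drop privileges");
            _exit(1);
        }
        execl("/usr/bin/id", "id", (char *)NULL);  /* now runs as nobody */
        perror("execl");
        _exit(1);
    }

    waitpid(pid, NULL, 0);   /* parent: still running as root */
    return 0;
}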
Hi everyone,
I've got a few scripts running with crontab and I know they are actually running thanks to a log file.
The thing is, each time I type ps -ef | grep .sh (because my scripts are .sh files) I get no results.
I read that cron uses its own environment to execute its scripts, so I was wondering whether the ps command is able to detect them.
I'm a newbie to the Linux environment, so I'm sorry if my question seems obvious. Thanks!
If you run ps while your script is running, then ps will report that process.
crond is the cron process, and it belongs to root. When crond notices that it's time for your process to run, it will fork a process, change that process's user to your ID, then exec() your script.
This process will appear in ps if ps is run while it's active, but if the process is short-lived, you only have a short window of opportunity to glimpse it.
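If you want to catch a short-lived job in the act, one trick is to poll ps in a loop; the brackets in the pattern stop the grep process itself from showing up as a match (myscript.sh is a placeholder for your script's name):
watch -n 1 'ps -ef | grep "[m]yscript\.sh"'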
I have an embedded device running BusyBox. The device has crond installed and running, but has no atd daemon. I need to schedule a task to run at a given time (just once, not periodically). I know that the "kosher" way is to use the at command, but unfortunately I don't have it. So, how can I use cron as a workaround?
You can set up a cron entry to run your script, and when it succeeds, have the script comment out or remove its own cron entry.
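A minimal shell sketch of that idea, assuming a hypothetical /home/user/task.sh installed in the user's crontab; on success it filters its own line out of the installed crontab and reinstalls the result:
#!/bin/sh
# /home/user/task.sh -- hypothetical one-shot cron job
do_the_real_work || exit 1    # placeholder for the actual task; keep the entry on failure
# On success, remove this script's line so the job never fires again.
crontab -l | grep -v 'task\.sh' > /tmp/cron.$$ && crontab /tmp/cron.$$
rm -f /tmp/cron.$$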