How to log on as a non-root user in a Kubernetes pod/container - security

I am trying to log into a Kubernetes pod using the kubectl exec command. I am successful, but it logs me in as the root user. I have created some other users too as part of the system build.
The command being used is kubectl exec -it <pod-name> -- /bin/bash. I take this to mean: run /bin/bash in the pod, which results in a shell inside the container.
Can someone please guide me on the following -
How to logon using a non-root user?
Is there a way to disable root user login?
How can I bind our organization's LDAP into the container?
Please let me know if more information is needed from my end to answer this.
Thanks,
Anurag

You can use su - <USERNAME> to log in as a non-root user.
Run cat /etc/passwd to get a list of all available users, then identify a user with a valid login shell, e.g.
/bin/bash or /bin/sh
Users with /bin/nologin or /bin/false as their shell are used by system processes, and as such you can't log in as them.
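For example, inside the container (the user name appuser is just an illustration):
cat /etc/passwd
# a line such as appuser:x:1000:1000::/home/appuser:/bin/bash has a login shell
su - appuser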

I think it's because the container user is root; that is why, when you kubectl exec into it, the default user is root. If you run your container or pod as a non-root user, then the kubectl exec session will not be root either.
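For example, a minimal sketch of a pod spec that forces the main process (and any kubectl exec shell) to run as a non-root UID; the name, image, and UID here are just placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: non-root-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
With runAsNonRoot: true the kubelet will even refuse to start the container if it would run as UID 0.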

In most cases, there is only one process running in a Docker container inside a Kubernetes Pod. There are no other processes that could provide authentication or authorization features. You can try to run a wrapper with several nested processes in one container, but that way you spoil the containerization idea of running immutable application code with minimum overhead.
kubectl exec runs another process in the same container environment as the main process, and there is no option to set the user ID for this process.
However, you can do it by using docker exec with the additional option:
--user , -u Username or UID (format: <name|uid>[:<group|gid>])
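For example, on the node where the container is scheduled (the container ID and user name are placeholders; mqm is the usual IBM MQ administrative user):
docker exec -it --user mqm <container-id> /bin/bash
# or by numeric UID:GID
docker exec -it -u 1001:1001 <container-id> /bin/sh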
In any case, these two articles might be helpful for you to run IBM MQ in a Kubernetes cluster:
Availability and scalability of IBM MQ in containers
Administering Kubernetes

Related

Kubernetes Pod: Failed to get D-Bus Connection

I have a Docker container based on centos/systemd. I run the container with
docker run -d --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro <image>
Then I can access the container with:
docker exec -ti <containerID> /bin/bash
Then I can list all loaded units with the command systemctl. This works fine.
Now I want to deploy the image into a Kubernetes cluster. This also works fine, and I can access the running pod in the cluster via kubectl exec -ti <pod> -- /bin/bash
If I now type the command systemctl, I get the error message
Failed to get D-Bus connection: Operation not permitted
How is it possible to make systemd/systemctl available in the pod?
HINT: I need systemd because of the software running inside the container, so supervisord is not an option here.
It is a sad observation that the old proposal from Daniel Walsh (Red Hat) is still floating around, which includes a hint to run a "privileged container" to get some systemd behaviour by basically talking to the daemon outside of the container.
Drop that. Just forget it. You can't get that in a real cluster without violating its basic design.
And in most cases, the requirement for systemd in a container is not very strict when you look closer. There are quite a number of service-manager or init-daemon implementations for containers. You could try the docker-systemctl-replacement script, for example.
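A rough sketch of that pattern, based on the project's README (the script name and the httpd package are assumptions, check the README for details): the replacement script is copied over /usr/bin/systemctl and also serves as the container's init process:
FROM centos:7
COPY systemctl.py /usr/bin/systemctl
RUN yum install -y python httpd && systemctl enable httpd
CMD ["/usr/bin/systemctl"]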
The command to start systemd would have to be in a script in the container. I use /usr/sbin/init or /usr/lib/systemd/systemd --system --unit=basic.target. Additionally, you need to start systemd with a tmpfs mounted on /run to store runtime information. Scripting this is not easy, and Tableau is a good example of why it gets done anyway.
Also, I recommend avoiding --privileged at all costs, because it's a security risk, plus you may accidentally alter or bring down the host with changes made inside the container.

How to Find The User Who Stopped Docker Container

I want to know which user stopped a Docker container.
There are several user accounts on my server. I suspect that one of them sometimes stops the container.
How can I find the user that performed this operation?
You can use su - <username> -c 'history' (or read the user's ~/.bash_history directly) to check the command history of a user. I don't know how many users you have, but you could loop through them and grep for commands taking Docker containers down.
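A rough sketch of that loop (user names are examples; this assumes bash users with default history settings):
for u in alice bob; do
  echo "== $u =="
  grep -E 'docker (stop|kill|rm)' "/home/$u/.bash_history" 2>/dev/null
done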
You can install the GNU Accounting Utilities to be able to see commands executed by users:
# centos
yum install psacct
# ubuntu
apt-get install acct
# also make sure that the corresponding service is enabled:
/etc/init.d/psacct status
Then, after you realize that the container is stopped, execute:
lastcomm --command docker
# or
lastcomm --command kill
to see who executed the above command(s).
You can use the above in combination with:
docker container logs <name-of-the-container>
to see the exact time at which the container was stopped (e.g. you may see a message in the logs: "stopping service..") and match it with the lastcomm output.
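For example (the container name is a placeholder):
# stop time, from timestamped logs
docker container logs --timestamps <name-of-the-container> | tail
# who ran docker around that time, from process accounting
lastcomm --command docker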
Other useful commands that come with the above package: sa, ac

default user not added to docker group, have to do su $USER?

I have Ubuntu 18.04, and after installing Docker I added my user to the docker group with the command
sudo usermod -aG docker ${USER}
and logged in
su - ${USER}
and if I check id, my user is added to the docker group.
But when I reopen the terminal I can't run docker commands without sudo unless I explicitly do su ${USER}.
Also, I can't see the docker group with the default user.
What am I missing here?
@larsks already replied to the main question in a comment; however, I would like to elaborate on the implications of that change (adding your default user to the docker group).
Basically, the Docker daemon socket is owned by root:docker, so in order to use the Docker CLI commands, you need either to be in the docker group, or to prepend all docker commands by sudo.
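You can see this directly on the host; the socket is group-writable for docker:
ls -l /var/run/docker.sock
# srw-rw---- 1 root docker ... /var/run/docker.sock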
As indicated in the documentation of Docker, it is risky to follow the first solution on your personal workstation, because this just amounts to providing the default user with root permissions without sudo-like password prompt protection. Indeed, users in the docker group are de facto root on the host. See for example this article and that one.
Instead, you may want to follow the second solution, which can be somewhat simplified by adding to your ~/.bashrc file an alias such as:
alias docker="sudo /usr/bin/docker"
Thus, docker run --rm -it debian will be automatically expanded to sudo /usr/bin/docker run --rm -it debian, thereby preserving sudo’s protection for your default user.
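After re-sourcing ~/.bashrc, you can verify the expansion:
source ~/.bashrc
type docker
# docker is aliased to `sudo /usr/bin/docker'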

Managing directory permissions across the host and Docker container environments

I'm trying to use a stack built with Docker containers to run a Symfony2 application (SfDocker). The stack consists of interlinked containers where ubuntu:14.04 is the base:
mysql db
nginx
php-fpm
The recurring problem that I'm facing is managing directory permissions inside the container. When I mount a volume from the host, e.g.
volumes:
  - symfony-code:/var/www/app
The mounted directories will always be owned by root or an unidentified user (only a user ID is visible when running ls -al) inside the container.
This, essentially, makes it impossible to access the application through the browser. Of course, running chown -R root:www-data on the public directories solves the problem, but as soon as I want to write to e.g. the 'cache' directory from the host (where the user is ltarasiewicz) I get a permission denied error. On top of that, whenever an application running inside a container creates new directories (e.g. 'logs'), they are again owned by root and later inaccessible by the browser or my desktop user.
So my questions are:
How should I manage permissions across the host and container environments (when I want to run commands on the container from both environments)?
Is it possible to configure Docker so that directories mounted as volumes receive specific ownership/permissions (e.g. 'root:www-data') automatically?
Am I free to create new users and user groups inside my 'nginx' container built from the ubuntu:14.04 image?
A few general points, apologies if I don't answer your questions directly.
Don't run as root in the container. Create a user in the Dockerfile and switch to it, either with the USER statement or in an entrypoint or command script. See the Redis official image for a good example of this. (So the answer to Q3 is yes, and do, but via a Dockerfile - don't make changes to containers by hand).
Note that the official images often do a chown on volumes in the entrypoint script to avoid the issue you describe in your second question.
Consider using a data container rather than linking directly to host directories. See the official docs for more information.
Don't run commands from the host on the volumes. Just create a temporary container to do it or use docker exec (e.g. docker run -v /myvol:/myvol myimage touch /myvol/x).
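Putting the first two points together, a minimal sketch (the user name appuser and the paths are examples; ubuntu:14.04 already ships a www-data group):
FROM ubuntu:14.04
RUN useradd -r -g www-data appuser \
 && mkdir -p /var/www/app \
 && chown -R appuser:www-data /var/www/app
USER appuser
An entrypoint script that starts as root can instead chown the mounted volume first and then drop privileges (e.g. via gosu, as the Redis official image does) before starting the application.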

Why does start-stop-daemon need privileges?

I am writing a daemon and I want to use the start-stop-daemon command to do it, but when I use it on the command line I get:
The command could not be located because '/sbin' is not included in the PATH environment variable.
This is most likely caused by the lack of administrative privileges associated with your user account.
start-stop-daemon: command not found
but when I use it with sudo it runs perfectly. However, I need it to run in a daemon, and I think it is not good to use sudo in a bash script in a daemon, something like:
sudo start-stop-daemon --start --background ...
Isn't it? When I delete sudo from it, it gives me 'command not found'. How can I fix this, if it is wrong to use sudo in a daemon?
start-stop-daemon can also set the user ID for the daemon process.
That said, you'd generally use start-stop-daemon from a script in /etc/init.d, which is run with root privileges either by whatever init system is being used this week (sysvinit, upstart, systemd, ...) or by the service(8) command.
So, if a user should be able to start/stop the service (which is a rather uncommon scenario), you'd use the sudoers file to grant them access to the service command, with the name of your service as a mandatory first argument.
In general though, write your service so it can be simply started at boot or during installation, and used by users as long as it's running. If the user needs to be able to start and stop instances of the service, then your daemon is in the business of managing instances, and the instance manager should be continually running, and users then contact this service via a socket (so users don't need sudo at all, which would make the lives of many administrators who don't install sudo quite a bit easier).
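For reference, a typical init-script invocation that drops privileges (the names and paths are examples):
/sbin/start-stop-daemon --start --background \
  --chuid daemonuser:daemongroup \
  --make-pidfile --pidfile /var/run/mydaemon.pid \
  --exec /usr/local/bin/mydaemon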
That depends on your settings in /etc/sudoers.
If the environment is reset (the default), the 'secure_path' definition contains /sbin (excerpt from Ubuntu's /etc/sudoers):
Defaults env_reset
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
Otherwise you need to provide the full program path:
/sbin/start-stop-daemon
