Switching users inside Docker image to a non-root user - linux

I'm trying to switch to the tomcat7 user in order to set up SSH certificates.
When I do su tomcat7, nothing happens.
whoami still returns root after doing su tomcat7.
Doing a more /etc/passwd, I get the following result, which clearly shows that a tomcat7 user exists:
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
sys:x:3:3:sys:/dev:/bin/sh
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/bin/sh
man:x:6:12:man:/var/cache/man:/bin/sh
lp:x:7:7:lp:/var/spool/lpd:/bin/sh
mail:x:8:8:mail:/var/mail:/bin/sh
news:x:9:9:news:/var/spool/news:/bin/sh
uucp:x:10:10:uucp:/var/spool/uucp:/bin/sh
proxy:x:13:13:proxy:/bin:/bin/sh
www-data:x:33:33:www-data:/var/www:/bin/sh
backup:x:34:34:backup:/var/backups:/bin/sh
list:x:38:38:Mailing List Manager:/var/list:/bin/sh
irc:x:39:39:ircd:/var/run/ircd:/bin/sh
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/sh
nobody:x:65534:65534:nobody:/nonexistent:/bin/sh
libuuid:x:100:101::/var/lib/libuuid:/bin/sh
messagebus:x:101:104::/var/run/dbus:/bin/false
colord:x:102:105:colord colour management daemon,,,:/var/lib/colord:/bin/false
saned:x:103:106::/home/saned:/bin/false
tomcat7:x:104:107::/usr/share/tomcat7:/bin/false
What I'm trying to work around is this error in Hudson:
Command "git fetch -t git@________.co.za:_______/_____________.git +refs/heads/*:refs/remotes/origin/*" returned status code 128: Host key verification failed.
This is my Dockerfile. It takes an existing Hudson WAR file and config that are tarred up and builds an image. Hudson runs fine; it just can't access Git because the SSH certificates don't exist for the tomcat7 user.
FROM debian:wheezy
# install java on image
RUN apt-get update
RUN apt-get install -y openjdk-7-jdk tomcat7
# install hudson on image
RUN rm -rf /var/lib/tomcat7/webapps/*
ADD ./ROOT.tar.gz /var/lib/tomcat7/webapps/
# copy hudson config over to image
RUN mkdir /usr/share/tomcat7/.hudson
ADD ./dothudson.tar.gz /usr/share/tomcat7/
RUN chown -R tomcat7:tomcat7 /usr/share/tomcat7/
# add ssh certificates
RUN mkdir /root/.ssh
ADD ssh.tar.gz /root/
# install some dependencies
RUN apt-get update
RUN apt-get install -y maven
RUN apt-get install -y git
RUN apt-get install -y subversion
# background script
ADD run.sh /root/run.sh
RUN chmod +x /root/run.sh
# expose port 8080
EXPOSE 8080
CMD ["/root/run.sh"]
I'm using the latest version of Docker (Docker version 1.0.0, build 63fe64c/1.0.0). Is this a bug in Docker, or am I missing something in my Dockerfile?

You should not use su in a Dockerfile; use the USER instruction instead.
At each stage of the Dockerfile build a new container is created, so any change of user you make will not persist into the next build stage.
For example:
RUN whoami
RUN su test
RUN whoami
This would never report test, because a new container is spawned for the second whoami. The output would be root both times (unless of course you run USER beforehand).
If however you do:
RUN whoami
USER test
RUN whoami
You should see root then test.
Alternatively, you can run a command as a different user with sudo, with something like:
sudo -u test whoami
But it seems better to use the officially supported instruction.
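Applied to the original question, a minimal sketch might look like this (assuming, as in the original Dockerfile, that ssh.tar.gz unpacks to a .ssh directory, and with your.git.host standing in for the redacted Git host):
# Unpack the SSH keys into the tomcat7 user's home instead of root's
ADD ssh.tar.gz /usr/share/tomcat7/
# Pre-populate known_hosts so host key verification succeeds (needs openssh-client)
RUN ssh-keyscan your.git.host >> /usr/share/tomcat7/.ssh/known_hosts
RUN chown -R tomcat7:tomcat7 /usr/share/tomcat7/.ssh
# Run everything from here on (including the CMD) as tomcat7
USER tomcat7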

As a different approach to the other answer: instead of setting the user at image-build time in the Dockerfile, you can do so on the command line for a particular container, on a per-command basis.
With docker exec, use --user to specify which user account the interactive terminal will use (the container must be running, and the user has to exist in the containerized system):
docker exec -it --user [username] [container] bash
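For the original question, that would be something like (the container name is a placeholder):
docker exec -it --user tomcat7 my_hudson_container bash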
See https://docs.docker.com/engine/reference/commandline/exec/

In case you need to perform privileged tasks, like changing permissions of folders, you can perform them as the root user, then create a non-privileged user and switch to it.
FROM <some-base-image:tag>
# Switch to root user (usually not needed; depends on the base image)
USER root
# Run privileged command
RUN apt install <packages>
RUN apt <privileged command>
# Set user and group
ARG user=appuser
ARG group=appuser
ARG uid=1000
ARG gid=1000
RUN groupadd -g ${gid} ${group}
# The -m flag creates the user's home directory
RUN useradd -u ${uid} -g ${group} -s /bin/sh -m ${user}
# Switch to user
USER ${uid}:${gid}
# Run non-privileged command
RUN apt <non-privileged command>
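Since these are build arguments, the UID/GID can be overridden at build time to match the host user, for example (the image tag myimage is a placeholder):
docker build --build-arg uid=$(id -u) --build-arg gid=$(id -g) -t myimage .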

Add this line to your Dockerfile:
USER <your_user_name>
It uses Docker's USER instruction.

You should also be able to do:
apt install sudo
sudo -i -u tomcat
Then you should be the tomcat user. It's not clear which Linux distribution you're using, but this works with Ubuntu 18.04 LTS, for example.

There's no real way to do this. As a result, things like mysqld_safe fail, and you can't install mysql-server in a Debian Docker container without jumping through 40 hoops, because... well, it aborts if it's not root.
You can use USER, but you won't be able to apt-get install if you're not root.

Related

Nextcloud docker install with SSH access enabled

I'm trying to install SSH (and enable the service) on top of my Nextcloud installation in Docker, and have it work on reboot. Having run through many Dockerfile and docker-compose combinations, I can't seem to get this to work. I've tried using entrypoint.sh scripts with a Dockerfile, but it wants a CMD at the end, and then it doesn't execute the "normal" Nextcloud startup.
entrypoint.sh:
#!/bin/sh
# Start the ssh server
service ssh start
# Execute the CMD
exec "$#"
Dockerfile:
FROM nextcloud:latest
RUN apt update -y && apt-get install ssh -y
RUN apt-get install python3 -y && apt-get install sudo -y
RUN echo 'ansible ALL=(ALL:ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN useradd -m ansible -s /bin/bash
RUN sudo -u ansible mkdir /home/ansible/.ssh
RUN mkdir -p /var/run/sshd
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/sbin/sshd", "-D"]
Any help would be much appreciated. Thank you
In general I'd say - break the problem you're having down into smaller parts - it'll help isolate the source of the problem.
Here's how I'd approach the reported issue.
First - replace (in your Dockerfile)
apt-get install -y ssh
with the recommended
apt install -y openssh-server
Then test just the parts of your Dockerfile needed to address the issue - simplify it to just the following:
FROM nextcloud:latest
RUN apt update
RUN apt install -y openssh-server
Then build a test image using this Dockerfile via the command
docker build . -t test_nextcloud
This will build the image - giving it the name (tag) of test_nextcloud.
Then run a container from this newly built image via the docker run command
docker run -p 8080:80 -d --name nextcloud test_nextcloud
This will run the container on port 8080 in detached mode, and give the associated container the name nextcloud.
Then - with the container running - you should be able to enter into it using the following command
docker container exec -u 0 -it nextcloud bash
as root.
Now that you are in, you should be able to start up the SSH server via the command
service ssh start
Having followed a set of steps like this to confirm that you can indeed start up an SSH server in the Nextcloud container, begin adding back in your additional logic (beginning with the original Dockerfile).
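Once the simplified image works, a sketch of the remaining pieces might look like this (assuming the Apache variant of the nextcloud image, whose stock entrypoint is /entrypoint.sh and default command is apache2-foreground; the wrapper is named start.sh here so it doesn't clobber the image's own /entrypoint.sh):
#!/bin/sh
# start.sh - start sshd, then hand off to the stock Nextcloud entrypoint
service ssh start
exec /entrypoint.sh "$@"
And in the Dockerfile:
FROM nextcloud:latest
RUN apt update && apt install -y openssh-server
COPY start.sh /start.sh
RUN chmod +x /start.sh
ENTRYPOINT ["/start.sh"]
# Keep the image's normal command so Nextcloud still starts
CMD ["apache2-foreground"]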

Run sshd in Docker container

I found this Dockerfile sample here:
# version 1
FROM ubuntu:latest
RUN apt update && apt install ssh -y
RUN service ssh start
CMD ["/usr/sbin/sshd","-D"]
When I build and run this Dockerfile, it runs an SSH server in the foreground, which is great.
If I use the following Dockerfile though:
# version 2
FROM ubuntu:latest
RUN apt update && apt install ssh -y
RUN service ssh start
# CMD ["/usr/sbin/sshd","-D"]   # without this line
And then run the container:
~$ docker run -p 2222:22 -it ssh_server
And try to connect to it from another terminal, it doesn't work. Seemingly this call to sshd is necessary. On the other hand, if I just install SSH in the Dockerfile:
# version 3
FROM ubuntu:latest
RUN apt-get update && apt-get install -y ssh
And run the container like this:
~$ docker run -p 2222:22 -it ssh:test
~$ service ssh start
* Starting OpenBSD Secure Shell server sshd
Now I'm able to connect to the container. So I wonder: if the line RUN service ssh start in version 1 is necessary, why isn't it necessary for version 3?
To add more to the confusion, if I build and run version 4:
# version 4
FROM ubuntu:latest
RUN apt update && apt install ssh -y
# RUN service ssh start   # without this line
CMD ["/usr/sbin/sshd","-D"]
It doesn't work either.
Can someone please explain those behaviours? What is the relation between service ssh start and /usr/sbin/sshd?
OK, everything is clear now:
Basically, running /usr/sbin/sshd is what runs the SSH server. The reason it didn't work on its own (version 4) is that the script run by service ssh start - the script /etc/init.d/ssh - creates a directory /run/sshd which is required for sshd to run.
That script also calls the executable /usr/sbin/sshd, but since this happens as part of the build, the process didn't persist beyond the temporary container that the layer was made from.
What did persist is the /run/sshd directory! That's why running /usr/sbin/sshd as the CMD works!
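In other words, version 4 can be fixed by creating that directory explicitly instead of relying on the side effect of service ssh start; a sketch:
FROM ubuntu:latest
RUN apt update && apt install ssh -y
# sshd refuses to start without its privilege-separation directory
RUN mkdir -p /run/sshd
CMD ["/usr/sbin/sshd","-D"]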
Thanks all!
To build on @YoavKlein's answer, service ssh start can take arguments which are passed to sshd, so rather than
# Incidentally creates /run/sshd
RUN service ssh start
# Run the service in the foreground when starting the container
CMD ["/usr/sbin/sshd", "-D"]
you can just do
# Run the service in the foreground when starting the container
CMD ["service", "ssh", "start", "-D"]
which will start the SSH server through service, but run it in the foreground, avoiding the need for a separate RUN to do first-time setup.
I took the idea from @mark-raymond :)
The following docker run command with the -D flag worked for me:
docker run -itd -p 2222:22 <dockerImageName:Tag> /usr/sbin/sshd -D

sudo: command not found when I ssh into server

I am a newbie with server handling and Linux. I am trying to install Composer on my server so that I can host my Laravel project on it, as described in the tutorial Ultimate Guide: Deploy Laravel 5.3 App on LEMP Stack. I SSH into the server, and after installing Composer, when I run sudo mv composer.phar /usr/local/bin/composer I get this message in the terminal:
-bash: sudo: command not found
I desperately need some help.
sudo is probably not installed or not in your path.
Check to see if you are root; in that case sudo is not needed unless you are trying to impersonate another user. Just run your command without it: mv composer.phar /usr/local/bin/composer
See if sudo is in your path by running which sudo or echo $PATH. If sudo is not in your path, your path variable might be broken. You can try testing this by executing a common location for sudo, /usr/bin/sudo, or by running locate sudo | grep bin to attempt to find its location.
If you know that sudo was installed, or your path looks broken, try fixing your path. Check your distribution's env file (/etc/environment on Ubuntu) to make sure that it is formatted correctly (script commands are illegal in this file).
If you are not root and you want to run a command with root privileges, then you must install sudo. But if you don't have sudo and you are not root, then you can't install it. In this case I recommend switching to the root user with su.
If you do not have the root password and you own the machine, you can reset the root password with a tutorial such as https://askubuntu.com/questions/24006/how-do-i-reset-a-lost-administrative-password
After you manage to log in as root, install sudo with apt-get update; apt-get install sudo, since you are using Ubuntu.
Verify the name of your sudoers group with visudo and modify your sudoers file if you need to. https://www.digitalocean.com/community/tutorials/how-to-edit-the-sudoers-file-on-ubuntu-and-centos
If you have an existing sudoers group, or you create one, you can add yourself to it. For example, if your sudoers group is called sudo, run usermod -aG sudo myuser. The sudoers group in Ubuntu-based Linux is called sudo by default. A sudoers group entry looks like this: %sudo ALL=(ALL:ALL) ALL
If you are trying to impersonate another user and cannot install sudo, you can still use su if it is installed and you have permission / password for the other user.
e.g. su someuser
As suggested in this post, you may have to install sudo in your server.
To do that, log in as root with the following command: su -. Then install sudo with your package manager (if you're in Ubuntu: apt-get install sudo).
Then add your user to the sudo group: usermod -aG sudo <username>.
Finally type exit to log out of the root account and go back to your user.
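In short, the whole sequence looks like this (run on the server, with <username> replaced by your own account):
su -                          # log in as root
apt-get update
apt-get install sudo          # install sudo (Debian/Ubuntu)
usermod -aG sudo <username>   # add your user to the sudo group
exit                          # back to your own user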
Try installing sudo by first logging in as root (su -) and then using your package manager (apt-get install sudo or yum install sudo). Make sure your PATH variable is set so that you can find the binary:
which sudo
echo $PATH

Docker Busybox container add groups and user

I need users in my Docker containers. My build is from the busybox image, which is missing groupadd; I tried to add it using apt-get, but that's also missing. What do I need to add to my Dockerfile to get groupadd?
So far I have
FROM busybox
RUN apt-get install bash
RUN groupadd -r postgres && useradd -r -g postgres postgres
CMD /bin/sh
You're trying to run a Debian-based command on a non-Debian system. If you need apt-get and other tools like that, you should change your base image with FROM debian.
BusyBox does include addgroup, with the following syntax:
/ # addgroup --help
BusyBox v1.24.2 (2016-03-18 16:38:06 UTC) multi-call binary.
Usage: addgroup [-g GID] [-S] [USER] GROUP
Add a group or add a user to a group
-g GID Group id
-S Create a system group
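So the Dockerfile from the question can be rewritten with BusyBox's own applets; a sketch (adduser's -S creates a system user and -G sets its group):
FROM busybox
# Use BusyBox's addgroup/adduser instead of Debian's groupadd/useradd
RUN addgroup -S postgres && adduser -S -G postgres postgres
USER postgres
CMD ["/bin/sh"]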

how to set supervisor to run a shell script

I'm setting up a Dockerfile to install Node prerequisites and then set up supervisor in order to run the final npm install command. I'm running Docker in CoreOS under VirtualBox.
I have a Dockerfile that sets everything up correctly:
FROM ubuntu
MAINTAINER <<Me>>
# Install docker basics
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get upgrade -y
# Install dependencies and nodejs
RUN apt-get update
RUN apt-get install -y python-software-properties python g++ make
RUN add-apt-repository ppa:chris-lea/node.js
RUN apt-get update
RUN apt-get install -y nodejs
# Install git
RUN apt-get install -y git
# Install supervisor
RUN apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
# Add supervisor config file
ADD ./etc/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Bundle app source
ADD . /src
# create supervisord user
RUN /usr/sbin/useradd --create-home --home-dir /usr/local/nonroot --shell /bin/bash nonroot
RUN chown -R nonroot: /src
# set install script to executable
RUN /bin/chmod +x /src/etc/install.sh
#set up .env file
RUN echo "NODE_ENV=development\nPORT=5000\nRIAK_SERVERS={SERVER}" > /src/.env
#expose the correct port
EXPOSE 5000
# start supervisord when container launches
CMD ["/usr/bin/supervisord"]
And then I want to set up supervisord to launch one of a few possible processes, including an installation shell script that I've confirmed to work correctly, install.sh, which is located in the application's /etc directory:
#!/bin/bash
cd /src; npm install
export PATH=$PATH:node_modules/.bin
However, I'm very new to supervisor syntax, and I can't get it to launch the shell script correctly. This is what I have in my supervisord.conf file:
[supervisord]
nodaemon=true
[program:install]
command=install.sh
directory=/src/etc/
user=nonroot
When I build the Dockerfile, everything runs correctly, but when I launch the image, I get the following:
2014-03-15 07:39:56,854 CRIT Supervisor running as root (no user in config file)
2014-03-15 07:39:56,856 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2014-03-15 07:39:56,913 INFO RPC interface 'supervisor' initialized
2014-03-15 07:39:56,913 WARN cElementTree not installed, using slower XML parser for XML-RPC
2014-03-15 07:39:56,914 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2014-03-15 07:39:56,915 INFO supervisord started with pid 1
2014-03-15 07:39:57,918 INFO spawnerr: can't find command 'install.sh'
2014-03-15 07:39:58,920 INFO spawnerr: can't find command 'install.sh'
Clearly, I have not set up supervisor correctly to run this shell script -- is there part of the syntax that I'm screwing up?
The best way that I found was setting this (startsecs = 0 and autorestart = false stop supervisor from treating a one-shot script that exits quickly as a spawn failure):
[program:my-program-name]
command = /path/to/my/command.sh
startsecs = 0
autorestart = false
startretries = 1
I think I got this sorted: I needed the full path in command, and instead of having user=nonroot in the .conf file, I put su nonroot into the install.sh script.
I had a quick look at the supervisor source code and noticed that if the command does not contain a forward slash /, it will look in the PATH environment variable for that file. This imitates the behaviour of execution via a shell.
The following methods should fix your initial problem:
Specify the full path of the script (like you have done in your own answer)
Prefix the command with ./, i.e. ./install.sh (in theory, but untested)
Prefix the command with the shell executable, i.e. /bin/bash install.sh
I do not understand why user= does not work for you (have you tried it again after fixing execution?), but the problem you encountered in your own answer was probably due to incorrect usage of su, which does not work like sudo. su creates its own interactive shell and will therefore hang while waiting for standard input. To run commands with su, use the -c flag, i.e. su -c "some-program" nonroot. An explicit shell can also be specified with the -s flag if necessary.
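Putting those fixes together, a corrected config would look something like this sketch:
[supervisord]
nodaemon=true
[program:install]
command=/src/etc/install.sh
directory=/src/etc/
user=nonroot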
I had this issue too. For me, the root cause was failing to set the shebang line. Even if the script can run in bash fine, for supervisord to be able to exec() it, it has to begin with e.g. #!/bin/bash.
