I'm having a problem because I've installed and started Docker as "bad_user". The trouble is that this container generates static files (it's the jekyll/jekyll image), and those files end up owned by "bad_user", so I cannot edit them (I know I could add myself to the bad_user group or chown -R the directory, but doing that every time would be painful, and it just bugs me :).
I have tried reinstalling Docker and removing the /etc/docker directory, without any effect. Every time I reinstall, the Docker service/manager runs as "bad_user" and overwrites the directory owner.
My question is:
Would it be possible to make Docker run under a "docker" user? I have already created that user with that group (yes, I have already reinstalled docker-ce under that user).
I'm working on a Debian-based distro.
I guess in my case it's a Docker daemon issue: somehow, when it synchronizes shared volume files, it gives ownership to bad_user instead of the user who is running the container.
PS: This is the command I run, if that matters:
docker run --rm -p 8000:8000 \
--volume="/home/docker/blog:/srv/jekyll" \
-it tocttou/jekyll:3.5 \
jekyll serve --watch --port 8000
Okay, I figured it out. It turns out that when you run a Linux container that creates some files on the shared volume (the -v argument makes a shared volume), the files will be owned by the user with group id = 1000 and user id = 1000. In my case the user with id=1000 was "bad_user". If you want to work around that, you can use --user and specify the user id you're running under.
The key is to remember that Linux permissions are just numbers: on the host filesystem, number 1000 is (in my case) "bad_user" and 10001 is "docker_user". If you check permissions from inside the container, you might see that user id = 1000 means a very different user than on your host system.
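For example, here is a hedged sketch of that fix applied to the command from the question; the --user value comes from whoever runs it, and some images may need extra setup before they will run happily as an arbitrary uid:
docker run --rm -p 8000:8000 \
  --user "$(id -u):$(id -g)" \
  --volume="/home/docker/blog:/srv/jekyll" \
  -it tocttou/jekyll:3.5 \
  jekyll serve --watch --port 8000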
I hope that the next people who encounter this issue will find that useful.
You can find more information here: https://dille.name/blog/2018/07/16/handling-file-permissions-when-writing-to-volumes-from-docker-containers/
I read this article about why you shouldn't run containerized applications as the root user,
and I'd like someone to confirm my understanding:
Article brief
The article is basically saying that just as you wouldn't run binaries on your machine
as root, but rather as a least-privileged user, you shouldn't run containerized applications as root either.
The recommendation of the author is to create a user with a known uid in the Dockerfile
and run the process as that user.
The start of the Dockerfile should look like this:
FROM <base image>
RUN groupadd -g 999 appuser && \
    useradd -r -u 999 -g appuser appuser
USER appuser
... <rest of Dockerfile> ...
Validating my understanding, and some questions:
1. Why bother?
Ok, I understand that it's not good to run a container process as root,
just like it's not good to run any process as root.
That's why we should create a user in the Dockerfile and run the application process as that user.
But, if it's possible to run:
~$ docker run -u 0 some_docker_image
then why bother adding a user to my Dockerfile and switching to that user?
The question boils down to the question: What are we "afraid" of? What is the threat?
If the answer is that we're afraid of some untrusted user connected to the system (who is not a sudoer),
then this user can't even run docker containers, unless he is a member of the "docker" group,
in which case - again - he could run the container with -u 0?
So I guess that we're not concerned about the user. We're concerned about the binary itself.
In that case, two possible options exist:
a. The binary is of our creation. In that case: why are we concerned?
b. The binary is of someone else's creation. In that case I can understand why we would like to
switch users.
Am I missing something?
2. Why "Known uid"?
Why is it important to specify the uid of the newly created user, rather than just giving it a name?
3. Why in the start of the Dockerfile?
Is it important to create and switch to the new user at the beginning of the Dockerfile?
Seemingly, this is an approach that's hard to implement, since during the docker build process
you usually need to run a lot of tasks that require root privileges, such as apt-get install, etc.
4. What about adding a user and adding it to sudoers?
I have a case in which I need to create a Docker image that, when the container runs,
runs an SSH server. In order to run the SSH server, you need root privileges.
Is there a point in creating a user, adding it to sudoers, and then running the SSH server as root?
Running as root in Docker is dangerous for most of the same reasons as running as root directly on the host. The container has limited Linux privileges so there are some things it can't do (reconfigure the network, reboot the host), but it can do things like overwrite the application code inside the container.
Nobody's code is absolutely perfect, so one of the big reasons to run as non-root is to minimize the damage possible when a mistake does happen.
It doesn't matter what the user ID is, just that it's not 0. There's an argument to make it different from any uid the host might be using, but since your image could run on any host, it's just a guess.
You should create the user at the start of the Dockerfile, since that setup will change infrequently and Docker layer caching can skip it. But, you should use the USER directive and switch to the user at the end of the Dockerfile, after COPYing code in and RUNning the build. Do not RUN chown ... to make the non-root user own the code: you want most files to be owned by root, so that the non-root user can't overwrite them.
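As a rough sketch of that ordering (the base image, build command, and names below are placeholders, not taken from the question):
FROM <base image>
# Create the non-root user early; this layer rarely changes, so it stays cached
RUN useradd -r appuser
WORKDIR /app
# COPY the code in and RUN the build as root, so root owns the files
COPY . .
RUN <build command>
# Switch to the non-root user only at the end
USER appuser
CMD ["<your app>"]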
(In a compiled language, with a multi-stage build, you can consider the Dockerfile equivalent of the ./configure; make; sudo make install sequence, switching to a non-privileged user to do the build. I haven't seen this pattern in many Dockerfiles but I'd recognize it if I saw it.)
Do not add a user to /etc/sudoers. There are a couple of good reasons for this. The most basic one is that a container only runs a single process, and as already discussed we probably don't want it to be root. You can either configure it with no password (in which case you might as well be root) or hard-code a password in plain text in your Dockerfile (also a bad idea). You also usually don't want to run sudo inside a script (its behaviors of putting up random password prompts and hiding environment variables can cause trouble), and so correspondingly don't want to RUN sudo ... in a Dockerfile.
If you need to break into a container to debug it, you can always docker exec -u root ... to get a root shell there.
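For example (the container name here is just a placeholder):
docker exec -u root -it my-container /bin/sh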
What happens when a RUN useradd ... command is found in a Dockerfile?
Where is that user created? Where does it live? Is it added to the UID pool managed by the kernel?
What happens to it when the build context container exits?
A "user" in Linux is a numeric user ID, plus a mapping to some specific user name and other properties in /etc/passwd (optional). There's no such thing as a "uid pool"; if I happen to have user ID 12345 right now then things like filesystem ownership checks do a numeric comparison, and that's kind of the end of it.
So if a Dockerfile does a RUN useradd, it runs the useradd command, which (among other things) adds an entry to /etc/passwd inside the image's filesystem space. Docker knows to look things up in the image's /etc/passwd file for a Dockerfile USER directive or the docker run -u option. (But you can also use a numeric uid there too.)
Depending on the host OS and Docker daemon configuration, if you map a host directory into a container with the docker run -v option, either the container process must run as root or with the same numeric user ID as the directory owner to be able to access its files. The name of the respective host and container users is irrelevant.
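A small illustration of that point, assuming a host directory ./data owned by uid 1000 (paths and image are placeholders):
stat -c '%u %U' ./data
# on the host this might print: 1000 bad_user
docker run --rm -v "$PWD/data:/data" alpine stat -c '%u %U' /data
# inside the container the same numeric uid may map to a different name, or to none at all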
The usual best practice I've seen is to do all of the installation as root, then create a single non-root user (its user ID is irrelevant, so long as it's not 0) and make that be the default user when executing the container.
FROM ubuntu
# Defaults to root; root will own all installed files
RUN ... && make install
# Set up a non-root user
RUN useradd myapp
USER myapp
# Say how to run the container (as user "myapp")
CMD ["myapp"]
I need to make any files that are created in a specific Linux directory have 777 permissions.
I would like all users to be able to read, write, and execute all files under this folder. So what is the best way or Linux command to make that happen?
What I am doing is spinning up two separate containers, one for the Nginx server and one for the PHP-FPM app server, to host a Laravel 5.4 app.
Please consider the following scenario. I have a Docker application container A (PHP-FPM) which serves the web application files to Docker container B (Nginx). When I access the website, the web pages are delivered through the web container. Both containers are on the same network, and I share volumes from my app container to my web container. But when the web container tries to read the files on the app container, I get an error which looks something like this:
The stream or file "/var/www/storage/logs/laravel.log" could not be
opened: failed to open stream: Permission denied
So I added RUN chmod -R 777 storage in my Dockerfile.
However, it did not solve the issue.
So I also tried using SGID to fix the issue by adding one more line to my Dockerfile: RUN chmod -R ug+rwxs storage. Still, it did not solve the permission issue.
On a separate note, the funny thing is that in my Docker container on my Mac this works without any issue (I mean without adding chmod -R 777 to the folder or using SGID to set permissions on a folder in my Dockerfile). But when the same code is run on a Linux EC2 instance (Amazon Linux AMI), the permission issue starts to occur.
So how do I fix this ?
The solution is to launch both containers using the same user identified by the same uid. For instance you can choose root or any uid when running the container:
docker run --user root ...
Alternatively, you can switch to another user, before startup, inside your Dockerfile by adding the following before the CMD or ENTRYPOINT
USER root
I solved it by figuring out the user name under which the cache files are created when someone accesses the application URL, and then updating my Dockerfile to set SGID group ownership for that user on the root of the app folder where all the source code resides (so all subfolders and files added later, in whatever way and even at run time, are accessible from the web container for that user), and then using chmod 777 on the specific folders that need it.
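As a hedged sketch of what those Dockerfile lines could look like (the www-data group and the /var/www path are assumptions; substitute the user and paths you actually found):
RUN chgrp -R www-data /var/www \
 && chmod -R g+s /var/www \
 && chmod -R 777 /var/www/storage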
I have downloaded the sandbox from Hortonworks (CentOS), then tried to follow the tutorial. It seems like the ambari-admin-password-reset command is missing. I also tried to log in with PuTTY; the console asked me to change the password, so I did.
Now it seems like the command is there, but I have different passwords for the console and for PuTTY for the same user.
I have tried to figure out why, for the same user 'root', I have two different passwords (one for the VirtualBox console and one for PuTTY) that I can log in with. I see different commands on each box. More than that, when I share a folder I can only see it on the VirtualBox console but not on the PuTTY console, which is really frustrating.
How can I ensure that what I see from PuTTY is the same as what I see from the VirtualBox console?
I think it is somehow related to TTY, but I am not sure.
EDIT:
Output of running these commands on the VirtualBox machine:
grep "^passwd" /etc/nsswitch.conf
OUT: passwd: files sss
grep root /etc/passwd
OUT: root:x:0:0:root:/root:/bin/bash
operator:x:11:0:operator:/root:/sbin/nologin
getent passwd root
OUT: root:x:0:0:root:/root:/bin/bash
EDIT:
I think this is all about Docker containers. It seems like the machine's port 2222 is the SSH port for the HDP 2.5 container and not for the hosting machine.
Now I have another problem. When running
docker exec sandbox ls
it gets stuck. Any help?
Thanks to any helpers.
So now I have had the time to analyze the sandbox VM and write it up for other users.
As you stated correctly in your edit of the question, it's the Docker container setup of the sandbox that causes the confusion, with two separate root users:
via ssh root@127.0.0.1 -p 2222 you get into the Docker container called "sandbox". This is a CentOS release 6.8 (Final) containing all the HDP services, especially the Ambari service. The configuration enforces a password change at first login for the root user. Inside this container you can also execute ambari-admin-password-reset and set a password for the Ambari admin there.
via console access you reach the Docker host running CentOS 7.2; here you can log in with the default root password for the VM as found in the HDP docs.
Coming to your sub-question with the hanging docker exec, it seems to be a bug in that specific docker version. If you google that, you will find issues discussing this or similar problems with docker.
So I thought it would be a good idea to just update the host via yum update. However, this turned out to be a difficult path.
yum tried to update the kernel, but complained that there was not enough space on the boot partition.
So I moved the boot partition to the root partition:
edit /etc/fstab and comment out the boot entry
umount /boot
mv /boot
cp -a /boot.org /boot
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda
reboot
After that I have found out that the docker configuration is broken and docker does not start anymore. In the logs it complained about
"Error starting daemon: error initializing graphdriver:
\"/var/lib/docker\" contains other graphdrivers: devicemapper; Please
cleanup or explicitly choose storage driver (-s )"
So I edited /etc/systemd/system/multi-user.target.wants/docker.service and changed the ExecStart setting to:
ExecStart=/usr/bin/dockerd --storage-driver=overlay
After a service docker start and a docker start sandbox, the container worked again; I could log in to the container, and after an ambari-server restart everything worked again.
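Roughly, the recovery sequence described above looks like this (the systemctl daemon-reload step is my own assumption, needed after editing the unit file; the rest follows the text):
systemctl daemon-reload
service docker start
docker start sandbox
ssh root@127.0.0.1 -p 2222
ambari-server restart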
And now - with the new docker version 1.12.2, docker exec sandbox ls works again.
So, to sum up: the docker exec command has a bug in that specific version of the sandbox, but you should think twice before upgrading your sandbox.
I ran into the same issue.
The HDP 2.5 sandbox runs all of its components in a docker container, but commands like docker exec -it sandbox /bin/bash or docker attach sandbox got stuck.
When I ran a simple ps aux, I found several /usr/bin/docker-proxy commands which looked like :
/usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 60000 -container-ip 172.17.0.2 -container-port 60000
They probably forward the HTTP ports of the various UIs of HDP components.
I could ssh into the container ip (here 172.17.0.2) using root/hadoop to authenticate. From there, I could use all "missing" commands like ambari-admin-password-reset.
$ ssh root@172.17.0.2
... # change password
$ ambari-admin-password-reset
NB: I am new to docker, so there's probably a better way to deal with this.
I'd like to post the instructions for 3.0.1 here.
I followed the instructions of installing hortonworks version 3.0.1 here: https://youtu.be/5TJMudSNn9c
After running the Docker container, go to your browser and enter "localhost:4200"; that will take you to the in-browser terminal of the container that hosts Ambari. Enter "root" for the login and "hadoop" for the password, change the root password, and then enter "ambari-admin-password-reset" in order to reset the Ambari password.
In order to be able to use sandbox-hdp.hortonworks.com, you need to add the line "127.0.0.1 sandbox-hdp.hortonworks.com" at the end of the /private/etc/hosts file on your Mac.
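For instance, the hosts entry can be appended from a Mac terminal like this (just a sketch; editing the file by hand works equally well):
echo "127.0.0.1 sandbox-hdp.hortonworks.com" | sudo tee -a /private/etc/hosts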
Incorrect password? You can reset it from the recovery menu:
In the top-right corner, click the power button, open the power-off drop-down, and choose Restart; when the machine boots up, press the Esc key to get into the recovery menu.
Select "Advanced options" and hit Enter.
Select "Recovery mode" and hit Enter.
Select "root" and hit Enter to get a root shell.
Then run:
mount -rw -o remount /
ls /home
passwd username
(replace "username" with yours) and enter the new password twice.
Hopefully you changed the password (:
I'm having a slightly annoying issue while using a Docker container (I'm on Ubuntu, so no virtualization like VMWare or b2d). I've built my image, and have a running container that has one shared (mounted) directory from my host, and one shared (mounted) file from my host. Here's the docker run command in full:
docker run -dit \
-p 80:80 \
--name my-container \
-v $(pwd)/components:/var/www/components \
-v $(pwd)/index.php:/var/www/index.php \
my-image
This works great, and both /components (and its contents) and the file are shared appropriately. However, when I want to make changes to either the directory (e.g. adding a new file or folder) or edit the mounted file (or any file in the directory), I'm unable to do so due to incorrect permissions. Running ls -lFh shows that the owner and group for the mounted items have been changed to libuuid:libuuid. Modifying either the file or the parent directory requires root permissions, which impedes my workflow (as I'm working from Sublime Text, not the terminal, I'm presented with a popup for admin privileges).
Why does this occur? How can I work around this / handle this properly? From Managing Data Volumes: Mount a Host File as a Data Volume:
Note: Many tools used to edit files including vi and sed --in-place may result in an inode change. Since Docker v1.1.0, this will produce an error such as “sed: cannot rename ./sedKdJ9Dy: Device or resource busy”. In the case where you want to edit the mounted file, it is often easiest to instead mount the parent directory.
This would seem to suggest that instead of mounting /components and /index.php, I should instead mount the parent directory of both. Sounds great in theory, but based on the behavior of the -v option and how it interacts with /directory, it would seem that every file in my parent directory would be altered to be owned by libuuid:libuuid. Additionally, I have lots of things inside the parent directory that are not needed in the container - things like build tools, various files, some compressed folders, etc. Mounting the whole parent directory would seem to be wasteful.
Running chown user:group on /components and /index.php on my host machine allows me to work around this, and they seem to continue to sync with the container. Is this something I'll need to do every time I run a container with mounted host volumes? I'm guessing that there is a more efficient way to do this, and I'm just not finding an explanation for my particular use case anywhere.
I am using this container for development of a module for another program, and have no desire to manage a data-only container - the only files that matter are from my host; persistence isn't needed elsewhere (like a database, etc).
Dockerfile and /setup: created on pastebin to avoid an even longer post (never expires).
After creating the image, this is the run command I'm using:
docker run -dit \
-p 80:80 \
--name my-container \
-v $(pwd)/components:/var/www/wp-content/plugins/my-plugin-directory/components \
-v $(pwd)/index.php:/var/www/wp-content/plugins/my-plugin-directory/index.php \
my-image
It looks like your chown -R nginx:nginx ... commands inside your container are changing the ownership bits on your files to be owned by libuuid on your host machine.
See Understanding user file ownership in docker: how to avoid changing permissions of linked volumes for a basic explanation on how file ownership bits work between your host and your docker containers.
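If you want to confirm that mapping, here is a small hedged check (the container name and the uid 100 are placeholders): look up the numeric uid nginx has inside the container, then see what name your host gives that number:
docker exec my-container id -u nginx
# suppose it prints 100
getent passwd 100
# on the host, uid 100 may resolve to libuuid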