I want to set the default acl for some folders when building a docker image using setfacl but it has no effect. The default acl is unchanged. My aim is that every file that is created in /opt must have rwX permissions for any user, as the image will be run with an arbitrary uid later and needs full access to /opt.
Here's a quick example Dockerfile
FROM ubuntu:bionic
SHELL ["/bin/bash", "-c"]
RUN apt-get update > /dev/null && apt-get install -y --no-install-recommends acl > /dev/null
RUN chmod -R a+rwXs /opt
RUN setfacl -d -m o::rwx /opt
RUN getfacl /opt
and the output is
# file: opt
# owner: root
# group: root
# flags: ss-
user::rwx
group::rwx
other::rwx
which is wrong: the default ACL is missing. But if I run the same commands manually in a container, it works
docker run -ti --rm ubuntu:bionic bash
root@636bf8fdba41:/# apt-get update > /dev/null && apt-get install -y --no-install-recommends acl > /dev/null
debconf: delaying package configuration, since apt-utils is not installed
root@636bf8fdba41:/# chmod -R a+rwXs /opt
root@636bf8fdba41:/# setfacl -d -m o::rwx /opt
root@636bf8fdba41:/# getfacl /opt
getfacl: Removing leading '/' from absolute path names
# file: opt
# owner: root
# group: root
# flags: ss-
user::rwx
group::rwx
other::rwx
default:user::rwx
default:group::rwx
default:other::rwx
Any idea why docker does not correctly apply the acl changes when running setfacl in the Dockerfile?
Docker version 19.03.5, build 633a0ea838
Ubuntu 18.04 as host
Any idea why docker does not correctly apply the acl changes when running setfacl in the Dockerfile?
Don't take this as an authoritative answer, because I'm just guessing.
Docker images have to run on a variety of distributions, with different storage backends (possibly even more when you factor in image registries, like hub.docker.com). Even the filesystem-based backends may use different filesystems with different capabilities.
This means that in order for Docker images to run reliably and reproducibly in all situations, they have to minimize the number of extended filesystem features they preserve.
This is probably why the extended attributes necessary to implement filesystem ACLs are not preserved as part of the image.
It works in a container because at this point the files are stored on a specific local filesystem, so you can take advantage of any features supported by that filesystem.
Related
I am using smbnetfs within a Docker container (running on Ubuntu 22.04) to write files from my application to a mounted Windows Server share. Reading files from the share works properly, but writing files via smbnetfs gives me a headache. My Haskell application crashes with an Input/output error while writing files to the mounted share; only 0KB files without any content are written. Apart from the application, I have the same problem if I try to write files from the container's bash terminal or from Ubuntu 22.04 directly. So I assume that the problem is not related to Haskell and/or Docker. Therefore, let's focus on creating files via bash within a Docker container in this SO question.
Within the container I've tried the following different possibilities to write files, some with success and some non-success:
This works:
Either touch <mount-dir>/file.txt => 0KB file is generated. Editing the file with nano works properly.
Or echo "demo content" > <mount-dir>/file.txt works also.
(Hint: Consider the redirection operator)
Creating directories with mkdir -p <mount-dir>/path/to/file/ is also working without any problems.
These steps do not work:
touch <mount-dir>/file.txt => 0KB file is generated properly.
echo "demo-content" >> <mount-dir>/file.txt => Input/output error
(Hint: Consider the redirection operator)
Configuration
Following my configuration:
smbnetfs
smbnetfs.conf
...
show_$_shares "true"
...
include "smbnetfs.auth"
...
include "smbnetfs.host"
smbnetfs.auth
auth "<windows-server-fqdn>/<share>" "<domain>/<user>" "<password>"
smbnetfs.host
host <windows-server-fqdn> visible=true
Docker
Here the Docker configuration.
Docker run arguments:
...
--device=/dev/fuse \
--cap-add SYS_ADMIN \
--security-opt apparmor:unconfined \
...
Dockerfile:
FROM debian:bullseye-20220711-slim@sha256:f52f9aebdd310d504e0995601346735bb14da077c5d014e9f14017dadc915fe5
ARG DEBIAN_FRONTEND=noninteractive
# Prerequisites
RUN apt-get update && \
apt-get install -y --no-install-recommends \
fuse=2.9.9-5 \
locales=2.31-13+deb11u3 \
locales-all=2.31-13+deb11u3 \
libcurl4=7.74.0-1.3+deb11u1 \
libnuma1=2.0.12-1+b1 \
smbnetfs=0.6.3-1 \
tzdata=2021a-1+deb11u4 \
jq=1.6-2.1 && \
rm -rf /var/lib/apt/lists/*
# Set the locale
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
# Copy runtime artifacts
WORKDIR /app
COPY --from=build /home/vscode/.local/bin/Genesis-exe .
COPY entrypoint.sh .
## Prepare smbnetfs configuration files and create runtime user
ARG MOUNT_DIR=/home/moduleuser/mnt
ARG SMB_CONFIG_DIR=/home/moduleuser/.smb
RUN useradd -ms /bin/bash moduleuser && mkdir ${SMB_CONFIG_DIR}
# Set file permission so, that smbnetfs.auth and smbnetfs.host can be created later
RUN chmod -R 700 ${SMB_CONFIG_DIR} && chown -R moduleuser ${SMB_CONFIG_DIR}
# Copy smbnetfs.conf and restrict file permissions
COPY smbnetfs.conf ${SMB_CONFIG_DIR}/smbnetfs.conf
RUN chmod 600 ${SMB_CONFIG_DIR}/smbnetfs.conf && chown moduleuser ${SMB_CONFIG_DIR}/smbnetfs.conf
# Create module user and create mount directory
USER moduleuser
RUN mkdir ${MOUNT_DIR}
ENTRYPOINT ["./entrypoint.sh"]
Hint: The problem is not related to Docker, because I have the same problem on Ubuntu 22.04 directly.
Updates:
Update 1:
If I start smbnetfs in debug mode and run the command echo "demo-content" >> <mount-dir>/file.txt the following log is written:
open flags: 0x8401 /<windows-server-fqdn>/share/sub-dir/file.txt
2022-07-25 07:36:32.393 srv(26)->smb_conn_srv_open: errno=6, No such device or address
2022-07-25 07:36:34.806 srv(27)->smb_conn_srv_open: errno=6, No such device or address
2022-07-25 07:36:37.229 srv(28)->smb_conn_srv_open: errno=6, No such device or address
unique: 12, error: -5 (Input/output error), outsize: 16
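For what it's worth, the flag word in that first log line can be decoded with plain bash arithmetic, using the standard x86-64 <fcntl.h> values (O_WRONLY=0x1, O_APPEND=0x400, O_LARGEFILE=0x8000); this decoding is my own reading of the log, not something smbnetfs documents:

```shell
#!/bin/bash
# Decode smbnetfs's "open flags: 0x8401" against x86-64 fcntl.h constants.
flags=0x8401
(( flags & 0x1 ))    && echo O_WRONLY      # 0x1
(( flags & 0x400 ))  && echo O_APPEND      # 0x400 (02000 octal)
(( flags & 0x8000 )) && echo O_LARGEFILE   # 0x8000 (0100000 octal)
```

0x8401 decomposes exactly into those three flags, so the distinguishing bit in the failing >> case is O_APPEND (the working > case opens with O_TRUNC instead), which lines up with the redirection-operator hint above.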
Update 2:
If I use a Linux based smb-server, then I can write the files properly with the command echo "demo-content" >> <mount-dir>/file.txt
SMB-Server's Dockerfile
FROM alpine:3.7@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b
RUN apk add --no-cache --update \
samba-common-tools=4.7.6-r3 \
samba-client=4.7.6-r3 \
samba-server=4.7.6-r3
RUN mkdir -p /Shared && \
chmod 777 /Shared
COPY ./conf/smb.conf /etc/samba/smb.conf
EXPOSE 445/tcp
CMD ["smbd", "--foreground", "--log-stdout", "--no-process-group"]
SMB-Server's smb.conf
[global]
map to guest = Bad User
log file = /var/log/samba/%m
log level = 2
[guest]
public = yes
path = /Shared/
read only = no
guest ok = yes
Update 3:
It also works:
if I create the file locally in the container and then move it to the <mount-dir>.
if I remove a file that I created earlier (rm <mount-dir>/file.txt).
if I rename a file that I created earlier (mv <mount-dir>/file.txt <mount-dir>/fileMv.txt).
Update 4:
Found identical problem description here.
I have an ECS Fargate container running a Node.js application with non-root permissions; the container also has EFS mounted at /.user_data.
I followed this AWS tutorial. My setup is almost identical.
Here is the Docker file:
FROM node:12-buster-slim
RUN apt-get update && \
apt-get install -y build-essential \
wget \
python3 \
make \
gcc \
libc6-dev \
git
# delete old user
RUN userdel -r node
# Run as a non-root user
RUN addgroup "new_user_group" && \
useradd "new_user" --gid "new_user_group" \
--home-dir "/home/new_user"
RUN git clone https://github.com/test-app.git /home/new_user/app
RUN chown -R new_user:new_user_group /home/new_user
RUN mkdir -p /home/new_user/.user_data
RUN chown -R new_user:new_user_group /home/new_user/.user_data
RUN chmod -R 755 /home/new_user/
WORKDIR /home/new_user/app
RUN npm install
RUN npm run build
EXPOSE 1880
USER new_user
CMD [ "npm", "start" ]
When the Node app tries to write inside /.user_data, I get a read-write permission denied error.
If I run the container as root the app is able to read/write data.
I tried adding an access point to EFS with UID and permissions but that didn't help as well.
Please note: The Dockerfile works fine on my local machine.
Update
Read this blog post - Developers guide to using Amazon EFS with Amazon ECS and AWS Fargate – Part 2 > POSIX permissions
Might be related to the IAM Policy that was assigned to the ECS Task's IAM Role.
"...if the AWS policies do not allow the ClientRootAccess action, your user is going to be squashed to a pre-defined UID:GID that is 65534:65534. From this point on, standard POSIX permissions apply: what this user can do is determined by the POSIX file system permissions. For example, a folder owned by any UID:GID other than 65534:65534 that has 666 (rw for owner and rw for everyone) will allow this reserved user to create a file. However, a folder owned by any UID:GID other than 65534:65534 that has 644 (rw for owner and r for everyone) will NOT allow this squashed user to create a file."
Make sure that your root-dir permissions are set to 777. This way any UID can read/write this dir.
To be less permissive, set the root-dir to 755, which is set by default, see the docs. This provides read-write-execute to the root user, read-execute to group and read-execute to all other users.
A user (UID) can't access (read) a sub-directory if there's no read access to its parents (directories).
You can test it easily with Docker, here's a quick example
Create a Dockerfile -
FROM ubuntu:20.04
# Build-time arguments
ARG APP_NAME
ARG APP_ARTIFACT_DIR
ARG APP_HOME_DIR="/app"
ARG APP_USER_NAME="appuser"
ARG APP_GROUP_ID="appgroup"
# Define workdir
ENV HOME="${APP_HOME_DIR}"
WORKDIR "${HOME}"
RUN apt-get update -y && apt-get install -y tree
# Define env vars
ENV PATH="${HOME}/.local/bin:${PATH}"
# Run as a non-root user
RUN addgroup "${APP_GROUP_ID}" && \
useradd "${APP_USER_NAME}" --gid "${APP_GROUP_ID}" --home-dir "${HOME}" && \
chown -R ${APP_USER_NAME} .
RUN mkdir -p rootdir && \
mkdir -p rootdir/subdir && \
touch rootdir/root.file rootdir/subdir/sub.file && \
chown -R root:root rootdir && \
chmod 600 rootdir rootdir/root.file && \
chmod -R 775 rootdir/subdir
You should play with chmod 600 and chmod -R 775: try different permission sets, such as 777 and 644, and see if it makes sense.
Build an image, run a container, and test the permissions -
docker build -t boyfromnorth .
docker run --rm -it boyfromnorth bash
root@e0f043d9884c:~$ su appuser
$ ls -la
total 12
drwxr-xr-x 1 appuser root 4096 Jan 30 12:23 .
drwxr-xr-x 1 root root 4096 Jan 30 12:33 ..
drw------- 3 root root 4096 Jan 30 12:23 rootdir
$ ls rootdir
ls: cannot open directory 'rootdir': Permission denied
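If you then switch back to root inside the same container and loosen only the parent directory, the listing starts working; a quick sketch (755 here is just for the experiment, not a recommendation):

```shell
# as root inside the container: give group/others read+execute on rootdir
chmod 755 rootdir
stat -c '%a' rootdir          # prints 755
su appuser -c 'ls rootdir'    # now lists root.file and subdir
```

This shows that it was the parent directory's missing read/execute bits, not the files' own modes, that blocked appuser.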
I recently managed to install CentOS 7 with a kickstart.cfg file by using virt-install on my Arch Linux host.
However, if I try a similar approach with CentOS 8, it does not work at all.
I suspect this is because CentOS 8 does not have a minimal version, and you need to download a roughly 7 GB ISO file with the graphical installer.
sudo virt-install --name k8s-1 \
--description "this is my Centos 8 " \
--ram 2048 \
--vcpus 2 \
--disk path=/vm-disks/k8s-1.qcow2,size=15 \
--os-type linux \
--os-variant "centos8" \
--network bridge=virbr0 \
--graphics vnc,listen=127.0.0.1,port=5901 \
--location /cdimages/CentOS-8.1.1911-x86_64-dvd1.iso \
--noautoconsole \
--initrd-inject ks-1.cfg --extra-args="ks=file:/ks-1.cfg"
Setting input-charset to 'UTF-8' from locale.
Starting install...
Setting input-charset to 'UTF-8' from locale.
Retrieving file vmlinuz... | 7.7 MB 00:00:00
Setting input-charset to 'UTF-8' from locale.
Retrieving file initrd.img... | 59 MB 00:00:00
Domain installation still in progress. You can reconnect to
the console to complete the installation process.
Here is ks-1.cfg
# Install OS instead of upgrade
install
# Use network installation
cdrom
# Root password
rootpw Start123
# System authorization information
auth --useshadow --passalgo=sha512
# Firewall configuration
firewall --disabled
# SELinux configuration
selinux --permissive
# Installation logging level
logging --level=info
# Use text mode install
text
# Do not configure the X Window System
skipx
# System timezone, language and keyboard
timezone --utc Europe/Bratislava
lang en_US.UTF-8
# keyboard dk-latin1
# Network information
# network --bootproto=static --ip=192.168.122.110 --device=eth0 --onboot=on
# If you want to configure a static IP:
network --device eth0 --hostname k8s-1 --bootproto=static --ip=192.168.122.111 --netmask=255.255.255.0 --gateway=192.168.122.1 --nameserver 192.168.122.1
# System bootloader configuration
bootloader --location=mbr
# Partition clearing information
clearpart --all --initlabel
# Disk partitioning information
part /boot --fstype="ext4" --size=512
#part swap --fstype="swap" --recommended
part /var --fstype="ext4" --size=5120 --grow
part / --fstype="ext4" --size=1024 --grow
part /usr --fstype="ext4" --size=3072
part /home --fstype="ext4" --size=512
part /tmp --fstype="ext4" --size=1024
# Reboot after installation
reboot
%packages --nobase
#core
# #base
%end
%post --log=/root/ks-post.log
#---- Install packages used by kubernetes
#yum install -y socat libseccomp-devel btrfs-progs-devel util-linux nfs-utils conntrack-tools.x86_64
#---- Set bridge-nf-call
echo "net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1" > /etc/sysctl.conf
#---- Add user RKE -----
groupadd docker
adduser rke
echo "rke:praqma" | chpasswd
usermod -aG docker rke
#---- Install our SSH key ----
mkdir -m0700 /home/rke/.ssh/
cat <<EOF >/home/rke/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9F5hTts3U+E10PHRxViM3PX+DZPgBIcL7Uj/Py+udJWehhobnJmj2EoaUYbykm7VdpjImLpjas2Vhb/gNZ+wVWGho1mzWoCPl2fZ7oLXrGdDHXhlyocvfX3XPB6Y1kbFlfh7+4bUaA7w2Dg4x8LO/iXlF34z6IOa2xgx1R70Xc/97lkRMhsKszRBzwGVin6qUqdVmdXg3d0dRUnq039+q8NWUcKAz2w6F/HO7u3N7NhsSLnlpQ9+AztLvHEPeRP6UNex9a8sSHo5Jzc/mjVKGfInfWjp3nru88mwM4UQRbhhW5IeLXgALCa++H4qZw1ivZtVadXBHjK4JMKC1UWD1 rancher@k8s
EOF
### Disabling swap (now and permanently)
swapoff -a
sed -i '/^\/swapfile/ d' /etc/fstab
### set permissions
chmod 0600 /home/rke/.ssh/authorized_keys
chown -R rke:rke /home/rke/.ssh
### fix up selinux context
restorecon -R /home/rke/.ssh/authorized_keys
### Install Docker
#yum install docker -y
#systemctl enable docker
%end
If you take a look at the virt-manager GUI, you will always see a dracut shell error.
I had exactly the same issue. My solution:
For some reason the --initrd-inject parameter of virt-install breaks the process.
So I removed it and loaded the kickstart file via the network with --extra-args "ks=http://192.168.xxx.xxx:8000/centos8_ks.cfg"
Hint: to run simple web server for this installation you can execute python3 -m http.server 8000 in folder with your kickstart file.
Of course you need to update your kickstart for CentOS-8 according to this - a lot has been changed.
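Putting those two steps together, the flow might look like this (the IP, port, and file name are placeholders from this answer, not values you can copy verbatim):

```shell
# in the directory that contains centos8_ks.cfg: serve it over HTTP
python3 -m http.server 8000 &

# the virt-install call from the question, minus --initrd-inject;
# the installer fetches the kickstart over the network instead
sudo virt-install --name k8s-1 \
  --ram 2048 --vcpus 2 \
  --disk path=/vm-disks/k8s-1.qcow2,size=15 \
  --os-variant centos8 \
  --network bridge=virbr0 \
  --graphics vnc,listen=127.0.0.1,port=5901 \
  --location /cdimages/CentOS-8.1.1911-x86_64-dvd1.iso \
  --noautoconsole \
  --extra-args "ks=http://192.168.xxx.xxx:8000/centos8_ks.cfg"
```

The guest must be able to reach the host's port 8000 over the virbr0 bridge for the fetch to succeed.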
Another option is to create a small floppy image with label OEMDRV and with file ks.cfg on it and attach it as a CDROM: How to automate CentOS7 minimal kickstart installation using OEMDRV volume?
When I pass a volume like -v /dir:/dir it works like it should,
but when I use VOLUME in my Dockerfile it gets mounted empty.
My Dockerfile looks like this
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install nano
ENV Editor="/usr/bin/nano"
ARG UID=1000
RUN useradd -u "$UID" -G root writer
RUN mkdir -p "/home/writer" && chown -R "$UID":1000 "/home/writer"
RUN mkdir -p "/home/stepik"
RUN chown -R "$UID":1000 "/home/stepik"
VOLUME ["/home/stepik"]
USER writer
WORKDIR /home/stepik
ENTRYPOINT ["bash"]
Defining the volume in the Dockerfile only tells docker that the volume needs to exist inside the container, not where to get the volume from. It's the same as passing the option -v /dir instead of -v /dir:/dir: the result is an "anonymous" volume with a GUID you can see in docker volume ls. You can't specify inside the Dockerfile where to mount the volume from; by design, images you pull from Docker Hub can't mount an arbitrary directory from your host and send the contents of that directory to a black-hat machine on the internet.
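You can watch the anonymous volume get created yourself; a sketch, where the tag stepik-demo is made up and the Dockerfile is the one from the question (its ENTRYPOINT is bash, so -c '...' is handed to bash):

```shell
docker build -t stepik-demo .

# VOLUME in the Dockerfile: docker backs /home/stepik with an anonymous volume
docker run --name stepik-test stepik-demo -c 'ls -a /home/stepik'
docker volume ls                  # a new guid-named volume appears
docker rm -v stepik-test          # -v also removes the anonymous volume

# -v host:container: the host side decides the contents
docker run --rm -v /dir:/home/stepik stepik-demo -c 'ls -a /home/stepik'
```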
Note that I don't recommend defining volumes inside the Dockerfile. See my blog post on the topic for more details.
I am using the docker-solr image with docker, and I need to mount a directory inside it which I achieve using the -v flag.
The problem is that the container needs to write to the directory that I have mounted into it, but doesn't appear to have the permissions to do so unless I do chmod 777 on the entire directory. I don't think setting the permissions to allow all users to read and write is the solution; it is just a temporary workaround.
Can anyone guide me in finding a more canonical solution?
Edit: I've been running docker without sudo because I added myself to the docker group. I just found that the problem is solved if I run docker with sudo, but I am curious if there are any other solutions.
More recently, after looking through some official docker repositories I've realized the more idiomatic way to solve these permission problems is using something called gosu in tandem with an entry point script. For example if we take an existing docker project, for example solr, the same one I was having trouble with earlier.
The dockerfile on Github very effectively builds the entire project, but does nothing to account for the permission problems.
So to overcome this, first I added the gosu setup to the dockerfile (if you implement this, notice that version 1.4 is hardcoded; you can check for the latest releases here).
# grab gosu for easy step-down from root
RUN mkdir -p /home/solr \
&& gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
&& curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture)" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu
Now we can use gosu, which is basically the exact same as su or sudo, but works much more nicely with docker. From the description for gosu:
This is a simple tool grown out of the simple fact that su and sudo have very strange and often annoying TTY and signal-forwarding behavior.
The other changes I made to the dockerfile were adding these lines:
COPY solr_entrypoint.sh /sbin/entrypoint.sh
RUN chmod 755 /sbin/entrypoint.sh
ENTRYPOINT ["/sbin/entrypoint.sh"]
just to add my entrypoint file to the docker container.
and removing the line:
USER $SOLR_USER
So that by default you are the root user (which is why we have gosu, to step down from root).
Now as for my own entrypoint file, I don't think it's written perfectly, but it did the job.
#!/bin/bash
set -e
export PS1="\w:\u docker-solr-> "
# step down from root when just running the default start command
case "$1" in
start)
chown -R solr /opt/solr/server/solr
exec gosu solr /opt/solr/bin/solr -f
;;
*)
exec "$@"
;;
esac
A docker run command takes the form:
docker run <flags> <image-name> <passed in arguments>
Basically the entrypoint says if I want to run solr as per usual we pass the argument start to the end of the command like this:
docker run <flags> <image-name> start
and otherwise run the commands you pass as root.
The start option first gives the solr user ownership of the directories and then runs the default command. This solves the ownership problem because unlike the dockerfile setup, which is a one time thing, the entry point runs every single time.
So now if I mount directories using the -v flag, before the entrypoint actually runs solr, it will chown the files inside of the docker container for you.
As for what this does to your files outside the container, I've had mixed results, because docker acts a little weird on OSX. For me it didn't change the files outside of the container, but on another OS, where docker plays more nicely with the filesystem, it might change your files outside. I guess that's what you'll have to deal with if you want to mount files inside the container instead of just copying them in.