Object storage access with Docker FTP getting error 550 - Linux

I'm using object storage from Scaleway. I want to be able to access it over FTP and perform some actions on it. Right now I can connect and view the files/folders, but I can't perform actions like renaming a file or creating a directory.
I'm using CentOS 7 as the operating system.
Here is the mounted volume on my host:
drwxrwxr-x. 1 root root 0 Jan 1 1970 mnt
I'm using the following command to create the container:
docker run -d --name ftpd_server -p 21:21 -p 30000-30009:30000-30009 -e "PUBLICHOST=123.123.123.123" -v /mnt:/home/ftpusers/userA stilliard/pure-ftpd:latest
Then I enter the container with:
docker exec -it ftpd_server /bin/bash
And I create the user:
pure-pw useradd userA -f /etc/pure-ftpd/passwd/pureftpd.passwd -m -u ftpuser -d /home/ftpusers/userA
I can see my contents, but when I try to create a directory I get the 550 error (screenshot omitted).
I'm using stilliard/pure-ftpd as the Docker image.
I also tried to give ftpuser root privileges by changing 1000.1000 to 0.0 in /etc/pure-ftpd/passwd/pureftpd.passwd, but the problem persists.
I also found an issue on their GitHub similar to mine (https://github.com/stilliard/docker-pure-ftpd/issues/35#issuecomment-325583705), but I can't make it work.

FTP response code 550 indicates that the Scaleway object storage user does not have permission to complete the requested operation. The first place I would look is the Scaleway / remote host user account permissions; my guess is that the user account lacks the required rights.
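Since Scaleway object storage is S3-compatible, one quick way to check is to attempt a write directly against the API with the AWS CLI, bypassing the FTP layer. A minimal sketch, assuming your Scaleway API keys are configured; the bucket name and region here are placeholders:
echo test > /tmp/test-write.txt
aws s3api put-object \
  --endpoint-url https://s3.fr-par.scw.cloud \
  --bucket my-bucket \
  --key test-write.txt \
  --body /tmp/test-write.txt
If this fails with AccessDenied, the credentials themselves lack write permission, independent of anything pure-ftpd is doing.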

Related

CentOS 7 - I can't read /var/run/docker.sock even though the permission is 666

I am trying to set up Docker with Jenkins, and I need to read /var/run/docker.sock.
I temporarily set permission 666 on /var/run/docker.sock, but when I try to read it as the jenkins user it says permission denied.
As far as I know, if the file permission is 666, any user can read it.
srw-rw-rw- 1 root docker 0 Oct 17 17:05 docker.sock
drwxr-xr-x 31 root root 1100 Oct 17 17:05 run
Directory permission is not the issue; the /run directory has permission 755. SELinux is disabled. The jenkins user is part of the docker group.
I do not know what the problem is.
1. Create a jenkins user on your host.
2. Get this user's id.
3. Change the ownership of /var/jenkins_home to the fetched id.
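A sketch of those steps in shell; the jenkins user name and /var/jenkins_home path are taken from the question, so adjust as needed:
sudo useradd jenkins                                  # 1. create the user on the host
id -u jenkins                                         # 2. fetch its uid
sudo chown -R "$(id -u jenkins)" /var/jenkins_home    # 3. hand the volume over to that uid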
I found the problem: I was mounting /etc/passwd and /etc/group into the docker container, but for some reason docker didn't correctly add the jenkins user to the docker group inside the container.
I had to add group_add: - <docker_group_id> to the docker-compose file. Now everything is working as expected.
I thought there was some problem with CentOS, but I found out that someone had already documented this problem at this link: Linux user groups missing when user mounted to container
I hope this information will help someone.
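For reference, a minimal docker-compose sketch of that group_add fix; the image name and group id are placeholders (get the real id with getent group docker):
services:
  jenkins:
    image: jenkins/jenkins:lts
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    group_add:
      - "998"   # placeholder: the host's docker group id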

Docker Bind Mount: error while creating mount source path, permission denied

I am trying to run the NVIDIA PyTorch container nvcr.io/nvidia/pytorch:22.01-py3 on a Linux system, and I need to mount a directory of the host system (that I have R/W access to) in the container. I know that I need to use bind mounts, and here's what I'm trying:
I'm in a directory /home/<user>/test, which contains the directory dir-to-mount. (The <user> account is mine.)
docker run -it -v $(pwd)/dir-to-mount:/workspace/target nvcr.io/nvidia/pytorch:22.01-py3
Here's the error output:
docker: Error response from daemon: error while creating mount source path '/home/<user>/test/dir-to-mount': mkdir /home/<user>/test: permission denied.
ERRO[0000] error waiting for container: context canceled
As far as I know, docker only needs to create the directory to be mounted if it doesn't already exist. From the Docker docs:
The file or directory does not need to exist on the Docker host already. It is created on demand if it does not yet exist.
I suspected that maybe the docker process does not have access; I tried chmod 777 on dir-to-mount as well as on test, but that made no difference.
So what's going wrong?
[Edit 1]
I am able to mount my user's entire home directory with the same command, but cannot mount other directories inside the home directory.
[Edit 2]
Here are the permissions:
home directory: drwx------
test: drwxrwxrwx
dir-to-mount: drwxrwxrwx
Run the command with sudo:
sudo docker run -it -v $(pwd)/dir-to-mount:/workspace/target nvcr.io/nvidia/pytorch:22.01-py3
It appears that I can mount my home directory as a home directory (inside of /home/<username>), and this just works.
docker run -it -v $HOME:$HOME nvcr.io/nvidia/pytorch:22.01-py3
I don't know why the /home/<username> path is special; I've tried looking through the docs, but I could not find anything relevant.
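Not an answer from this thread, but a quick way to inspect the permissions along the whole mount source path (the daemon needs search/execute permission on every parent directory) is namei from util-linux:
namei -l /home/<user>/test/dir-to-mount   # prints owner and mode for each path component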

Automatically changing the docker container file permissions in a directory in Linux

We have a docker container running on Linux VMs. The container writes its logs to a directory inside the container.
Container log directory - /opt/log/
This directory is volume-mounted to the host machine so that all the log files are also available on the host.
Host directory - /var/log/
Here we see the container creating the log files with 600 (-rw-------+) permissions; no group read permission is assigned to these files.
The same permissions are reflected in the host directory. We need group read permission (640, -rw-r-----+) to be added automatically to all files created in this directory, so that other logging agents can read them.
I have also tried setting an ACL on the host to add this permission, but the permissions are not being set for the files inside this directory:
setfacl -Rdm g::r-- /var/log/
Is there a way we can add group read permission automatically for all the files getting created in this host directory?
From the following article, there is a parameter to set the user id and the group id of the container user when starting the container:
https://dille.name/blog/2018/07/16/handling-file-permissions-when-writing-to-volumes-from-docker-containers/
For example:
docker run -it --rm --volume $(pwd):/source --workdir /source --user $(id -u):$(id -g) ubuntu
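Worth noting: as far as I know, a default ACL cannot widen permissions here, because the group bits of a newly created file are ANDed with the mode the creating process requests, so a 600 creation request stays 600 regardless of the ACL. Running the container with --user sidesteps this by making the files belong to a host-side uid/gid. A quick check of that behavior, reusing the article's ubuntu example:
docker run -it --rm --volume $(pwd):/source --workdir /source \
  --user $(id -u):$(id -g) ubuntu sh -c 'touch test.log'
ls -l test.log   # the file is owned by your host uid/gid, so an agent running as that user can read it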

File permission issue inside container for non-root user

I am extending the docker image of a program from here, and I want to change some configs and create my own docker image. I have written a Dockerfile as follows, replacing the server.xml file in the image:
FROM exoplatform/exo-community
COPY server.xml /opt/exo/conf
RUN chmod 777 /opt/exo/conf/server.xml
When I create the docker image and run an instance of it, the program running in the container cannot access the file server.xml, because its owner is the root user, and I see a permission denied error. I tried to change the permissions in the Dockerfile with the chmod command, but I get an Operation not permitted error. The user of the running container is not root, and it cannot access the server.xml file that is owned by root. How can I resolve this issue?
If this is actually just a config file, I wouldn't build a custom image around it. Instead, use the docker run -v option to inject it at runtime
docker run \
-v $PWD/server.xml:/opt/exo/conf/server.xml \
... \
exoplatform/exo-community
(You might still hit the same permission issues.)
In your Dockerfile approach, the base image runs as an alternate USER, but a COPY instruction by default makes files owned by root. As of Docker 17.09 (if you're using Docker 1.13 on CentOS/RHEL 7, this won't work) you should be able to
COPY --chown=exo server.xml /opt/exo/conf
Or, if that won't work, you can explicitly switch to the root user and back:
COPY server.xml /opt/exo/conf
USER root
RUN chown exo /opt/exo/conf/server.xml
USER exo
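Either way, you can verify the ownership after building; the image tag below is a placeholder:
docker build -t exo-custom .
docker run --rm --entrypoint ls exo-custom -l /opt/exo/conf/server.xml   # should now show exo as the owner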

How to fix permission denied when accessing a directory that was added during docker build?

I'm using the following Dockerfile to extend a docker image:
FROM solr:6.6
COPY --chown=solr:solr ./services-core/search/A12Core /A12Core/
Note that solr:6.6 has a USER solr statement.
When running a container built from that Dockerfile, I get a permission denied error when trying to access files or directories under /A12Core:
$ docker run -it 2f3c58f093e6 /bin/bash
solr@c091f0cd9127:/opt/solr$ cd /A12Core
solr@c091f0cd9127:/A12Core$ cd conf
bash: cd: conf: Permission denied
solr@c091f0cd9127:/A12Core$ ls -l
total 8
drw-r--r-- 3 solr solr 4096 Aug 31 14:21 conf
-rw-r--r-- 1 solr solr 158 Jun 28 14:25 core.properties
solr@c091f0cd9127:/A12Core$ whoami
solr
solr@c091f0cd9127:/A12Core$
What do I need to do in order to get permission to access the files and folders in the /A12Core directory?
Note that I'm running the docker build on Windows 7. My docker version is 18.03.0-ce.
Your directory does not have execute permission:
drw-r--r-- 3 solr solr 4096 Aug 31 14:21 conf
Without that, you cannot cd into the directory, per Linux filesystem permissions. You can fix that on your host with chmod:
chmod +x conf
If you perform this command inside your Dockerfile (with a RUN line), every modified file gets copied into a new layer, so running it recursively could double the size of your image; hence the suggestion to fix it on your build host if possible.
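If the tree contains many directories, a sketch of the host-side fix using chmod's capital X, which adds execute only to directories (and to files that are already executable); this assumes a Linux build host, so the question's Windows 7 setup may need a different route:
chmod -R a+rX ./services-core/search/A12Core
Then rebuild the image.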
I had another answer here, which was wrong (but still solved your problem :), but now I see the typo in your Dockerfile. Let's take a look at this line.
COPY --chown=solr:solr ./services-core/search/A12Core /A12Core/
The COPY command checks whether the target path in the container exists, and creates it before copying if it does not. Here:
1. It takes A12Core from ./services-core/search.
2. Then it checks whether the path /A12Core exists.
3. Obviously, it does not, so the command creates it with permissions root:root.
4. Lastly, it copies the contents of A12Core into the newly created /A12Core.
In the end you have everything in /A12Core, but it belongs to root and you can't access it.
Since the solr docker image already sets USER solr, the way to go would be:
RUN mkdir /A12Core
COPY ./services-core/search/A12Core /A12Core
As the docs say
The USER instruction sets the user name ... the user group ... for any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile.
