Incorrect permissions for file with docker compose volume? 13: Permission denied

I have the following docker_compose.yaml:
version: "3.8"
services:
  reverse-proxy:
    image: nginx:1.17.10
    container_name: reverse_proxy
    volumes:
      - ../nginx/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "8050:8050"
      - "8051:8051"
  webapp:
    image: my-site
    command: --port 8050 8051 --debug yes
    volumes:
      - /home/user/data:/data
    ports:
      - "8050:8050"
      - "8051:8051"
    depends_on:
      - reverse-proxy
When I run it via docker-compose I get the following error:
$ sudo docker-compose -f /home/user/docker_compose.yaml up
...
reverse_proxy | 2022/03/09 00:49:19 [emerg] 1#1: open() "/etc/nginx/nginx.conf" failed (13: Permission denied)
reverse_proxy | nginx: [emerg] open() "/etc/nginx/nginx.conf" failed (13: Permission denied)
reverse_proxy exited with code 1
So to investigate I re-ran just the nginx container:
$ sudo docker run -v ../nginx/nginx.conf:/etc/nginx/nginx.conf -t docker.io/nginx tail -f /dev/null
ssh'd in and I see:
root@d8e84f89fcad:/# ls -la /etc/nginx/
ls: cannot access '/etc/nginx/nginx.conf': Permission denied
total 20
drwxr-xr-x. 3 root root 132 Mar 1 14:00 .
drwxr-xr-x. 1 root root 66 Mar 9 00:54 ..
drwxr-xr-x. 2 root root 26 Mar 1 14:00 conf.d
-rw-r--r--. 1 root root 1007 Jan 25 15:03 fastcgi_params
-rw-r--r--. 1 root root 5349 Jan 25 15:03 mime.types
lrwxrwxrwx. 1 root root 22 Jan 25 15:13 modules -> /usr/lib/nginx/modules
-?????????? ? ? ? ? ? nginx.conf
-rw-r--r--. 1 root root 636 Jan 25 15:03 scgi_params
-rw-r--r--. 1 root root 664 Jan 25 15:03 uwsgi_params
I consulted the following Q and others, and they seem to suggest just restarting the docker service; I did, and I still get ? permissions upon re-running.
I assume that this is what's causing the permission error? If so, how can I set the correct permissions on this nginx config file? Is this really a volume permission issue?
Versions:
Docker version 1.13.1, build 7d71120/1.13.1
docker-compose version 1.29.2, build 5becea4c
CentOS 7

I think it was an SELinux thing; appending :z to the volume fixed it.
volumes:
  - ../nginx/nginx.conf:/etc/nginx/nginx.conf:z
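For context, a hedged note on why this works: the ? marks in the ls output mean even stat() on the file was denied, which on CentOS typically points at SELinux rather than ordinary file modes. The :z suffix asks Docker to relabel the bind-mounted path with a shared SELinux content label (:Z does the same with a private, single-container label). The manual equivalent, assuming the standard container file context, is roughly:
# relabel the host file so containers may read it
# (on older CentOS releases the type may appear as svirt_sandbox_file_t,
#  an alias of container_file_t)
$ chcon -t container_file_t ../nginx/nginx.conf
$ ls -Z ../nginx/nginx.conf   # verify the new label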

Related

Getting file permission error in docker container ERROR in [eslint] EACCES: permission denied, open '/app/node_modules/.cache/.eslintcache'

I am trying to create a React app without installing anything on my machine, and for that I have used this Dockerfile:
FROM node:18.12
# Create app directory
WORKDIR /app
# Install app dependencies
COPY package.json package-lock.json ./
RUN npm install
# Bundle app source
COPY . .
EXPOSE 3000
CMD ["sudo", "npm", "start"]
and the following is my docker-compose.yml:
version: '3'
services:
  ui:
    build: ./ui
    container_name: UI
    ports:
      - "3000:3000"
    volumes:
      - "./ui:/app"
      - "/app/node_modules"
When I run docker-compose up, I get the following error:
UI | Failed to compile.
UI |
UI | [eslint] EACCES: permission denied, open '/app/node_modules/.cache/.eslintcache'
UI | ERROR in [eslint] EACCES: permission denied, open '/app/node_modules/.cache/.eslintcache'
UI |
UI | webpack compiled with 1 error
and here's the WSL2 file structure and permissions:
drwxr-xr-x 5 blank blank 4096 Jan 6 18:40 .
drwxr-xr-x 5 blank blank 4096 Jan 6 18:54 ..
-rw-r--r-- 1 blank blank 310 Jan 6 18:38 .gitignore
-rw-r--r-- 1 blank blank 209 Jan 6 18:55 Dockerfile
-rw-r--r-- 1 blank blank 3359 Jan 6 18:38 README.md
drwxr-xr-x 2 root root 4096 Jan 6 18:40 node_modules
-rw-r--r-- 1 blank blank 665979 Jan 6 18:38 package-lock.json
-rw-r--r-- 1 blank blank 827 Jan 6 18:38 package.json
drwxr-xr-x 2 blank blank 4096 Jan 6 18:38 public
drwxr-xr-x 2 blank blank 4096 Jan 6 18:38 src
thanks.
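No fix is recorded for this one, but the listing above gives a strong hint: node_modules is owned by root (it was created by the root-run image build), and sudo does not even ship in the node:18.12 image. A hedged sketch of the usual arrangement, building and running as the image's unprivileged node user so the eslint cache under /app/node_modules is writable; the file names match the question, everything else is an assumption:
FROM node:18.12
# work in a directory the unprivileged "node" user owns
WORKDIR /app
RUN chown node:node /app
USER node
# chown so the "node" user owns everything npm will write to
COPY --chown=node:node package.json package-lock.json ./
RUN npm install
COPY --chown=node:node . .
EXPOSE 3000
# no sudo: the node image does not include it, and it is not needed here
CMD ["npm", "start"]
After rebuilding, the stale root-owned anonymous volume also has to be dropped, e.g. docker-compose down -v && docker-compose up --build.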

docker can't stat directory on external device

Briefly
I'm looking to build a docker image from a Dockerfile in a directory on an external device.
Context
I have a directory /media/nathan/ext/test that is empty except for a Dockerfile.
The Dockerfile is: FROM alpine
The docker version is: Docker version 20.10.8, build 3967b7d28e
OS is Ubuntu 21.10
I am part of the docker group
mount options:
$> findmnt /media/nathan/ext
TARGET SOURCE FSTYPE OPTIONS
/media/nathan/ext /dev/sda1 ext4 rw,nosuid,nodev,relatime
docker daemon
$> ps aux | grep dockerd
root 919 0.0 0.5 2166356 85600 ? Ssl 09:03 0:08 dockerd --group docker --exec-root=/run/snap.docker --data-root=/var/snap/docker/common/var-lib-docker --pidfile=/run/snap.docker/docker.pid --config-file=/var/snap/docker/1125/config/daemon.json
nathan 19756 0.0 0.0 11844 2448 pts/0 S+ 11:44 0:00 grep --color=auto dockerd
$DOCKER_HOST is undefined
$> echo $DOCKER_HOST
docker info
$> docker info
Client:
Context: default
Debug Mode: false
Server:
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 263
Server Version: 20.10.8
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: e25210fe30a0a703442421b0f60afac609f950a3
runc version:
init version: de40ad0
Security Options:
Expected result
I get a docker image
True result
$> docker build .
error checking context: 'can't stat '/media/nathan/ext/test''.
What I have tried
Just sudo everything
$> sudo docker build .
error checking context: 'can't stat '/media/nathan/ext/test''.
The issue is not resolved.
Am I the owner of the context folder?
$> echo $USER
nathan
$> ls -la
total 12
drwxrwxr-x 2 nathan nathan 4096 nov. 12 10:33 .
drwxr-xr-x 8 nathan root 4096 nov. 12 09:39 ..
-rw-rw-r-- 1 nathan nathan 12 nov. 12 10:32 Dockerfile
As per the command above, I am the owner of the context directory. Am I missing something?
add everything to .dockerignore
I've created a .dockerignore that matches everything: '*'.
Running the command [sudo] docker build . gives a very baffling answer:
$> sudo docker build .
open /media/nathan/ext/test/.dockerignore: permission denied
I do not understand how sudo doesn't have the necessary permissions to read (?) the .dockerignore, whose permissions I have set to 777 out of astonishment:
ls -la
total 16
drwxrwxr-x 2 nathan nathan 4096 nov. 12 10:41 .
drwxr-xr-x 8 nathan root 4096 nov. 12 09:39 ..
-rw-rw-r-- 1 nathan nathan 12 nov. 12 10:32 Dockerfile
-rwxrwxrwx 1 nathan nathan 2 nov. 12 10:41 .dockerignore
Of course, other programs were capable of reading the file without any issue, as expected:
$> cat .dockerignore
*
Build outside of external drive
$> pwd
/home/nathan/Bureau/test
$> ls -la
total 12
drwxrwxr-x 2 nathan nathan 4096 nov. 12 10:58 .
drwxr-xr-x 3 nathan nathan 4096 nov. 12 10:56 ..
-rw-rw-r-- 1 nathan nathan 12 nov. 12 10:58 Dockerfile
$> docker build .
Sending build context to Docker daemon 2.048kB
Step 1/1 : FROM alpine
---> 14119a10abf4
Successfully built 14119a10abf4
The image is built, but I wish to replicate the result on the external drive.
running docker build . with journalctl
[...]
nov. 12 11:42:52 nathan-pc systemd[1746]: Started snap.docker.docker.ba3da9ef-34ee-4a63-8ff4-6a56327c5cd2.scope.
nov. 12 11:42:52 nathan-pc audit[19690]: AVC apparmor="DENIED" operation="open" profile="snap.docker.docker" name="/media/nathan/ext/workspace/dino/ntrip-client/RTKLIB/" pid=19690 comm="docker" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000
nov. 12 11:42:52 nathan-pc kernel: audit: type=1400 audit(1636713772.367:93): apparmor="DENIED" operation="open" profile="snap.docker.docker" name="/media/nathan/ext/workspace/dino/ntrip-client/RTKLIB/" pid=19690 comm="docker" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000
nov. 12 11:42:52 nathan-pc systemd[1746]: snap.docker.docker.ba3da9ef-34ee-4a63-8ff4-6a56327c5cd2.scope: Deactivated successfully.
[...]
Thank you for your time
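The journalctl excerpt is the real clue: the denial comes from the AppArmor profile snap.docker.docker, i.e. from snap confinement, not from file permissions, which is why sudo and chmod 777 change nothing. A hedged sketch of two common ways out, assuming the docker snap exposes a removable-media plug:
# let the snap-confined docker read /media mounts
$> sudo snap connect docker:removable-media
# or drop the snap and use the distro package, which is not confined this way
$> sudo snap remove docker
$> sudo apt install docker.io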

Permission for singularity

I got an issue when running the whole ChIP-seq pipeline using the singularity profile on my local PC (Windows, but with the Windows Subsystem for Linux):
Error executing process > 'output_documentation'
Caused by:
Failed to pull singularity image
command: singularity pull --name nfcore-chipseq-1.2.2.img.pulling.1630098407814 docker://nfcore/chipseq:1.2.2 > /dev/null
status : 255
message:
INFO: Using cached SIF image
FATAL: While making image from oci registry: error copying image out of cache: could not open temporary file for copy: failed to change permission of ./tmp-copy-2575820807: chmod ./tmp-copy-2575820807: operation not permitted
I'm using singularity 3.8.2
I have also set NXF_SINGULARITY_CACHEDIR to a hard drive instead of /home/.singularity.
I also checked the folder to make sure all the files can be accessed:
total 0
drwxrwxrwx 1 root root 4096 Aug 28 05:06 .
drwxrwxrwx 1 root root 4096 Aug 28 04:47 ..
-rwxrwxrwx 1 root root 0 Aug 28 04:53 tmp-copy-2299332276
-rwxrwxrwx 1 root root 0 Aug 28 05:06 tmp-copy-2575820807
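No resolution is recorded here, but the failure mode (chmod "operation not permitted" in a directory that is already rwxrwxrwx and root-owned) is characteristic of a Windows drive mounted into WSL via drvfs, where chmod is not honoured by default. A hedged sketch of two workarounds; /mnt/d stands in for wherever the external drive is actually mounted:
# 1) keep the cache on a Linux-native path inside the WSL filesystem
export NXF_SINGULARITY_CACHEDIR="$HOME/.singularity-cache"
mkdir -p "$NXF_SINGULARITY_CACHEDIR"
# 2) or remount the Windows drive with metadata so chmod/chown are honoured
sudo umount /mnt/d
sudo mount -t drvfs D: /mnt/d -o metadata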

docker-compose gives mongo error about not owning the data folder

Based on the following files:
Dockerfile
FROM node:8-alpine
RUN apk add --no-cache ffmpeg
RUN apk add --no-cache git
RUN apk add --no-cache tar
WORKDIR /app/
COPY package*.json /app/
COPY bower.json /app/
RUN npm i
RUN npm i -g bower
RUN bower install --allow-root
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
docker-compose.yml
version: '3.1'
services:
  node:
    container_name: nodetube
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "49161:3000"
    volumes:
      - .:/app/
      - /app/node_modules
      - ./upload:/app/upload
      - ./uploads:/app/uploads
    environment:
      - REDIS_HOST=redis
      - MONGODB_DOCKER_URI=mongodb://nodetube-mongo:27017/nodetube
    depends_on:
      - redis
      - mongo
    command: npm start
    networks:
      - nodetube-network
  mongo:
    container_name: nodetube-mongo
    image: mongo:3.6
    volumes:
      - ./data/db:/data/db
    ports:
      - "27011:27017"
    networks:
      - nodetube-network
  redis:
    container_name: nodetube-redis
    image: redis
    networks:
      - nodetube-network
networks:
  nodetube-network:
    driver: bridge
.dockerignore
.*
docker-compose.yml
*.md
node_modules
npm-debug.log
Running $ docker-compose up --build gives me an error:
nodetube-mongo | 2020-01-06T19:33:08.815+0000 I STORAGE [initandlisten] exception in initAndListen: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating
I'm not a Docker expert; how can I get this to work? Thanks.
On my local machine if I run $ ls -al /data
I receive:
total 0
drwxr-xr-x 4 root wheel 128 May 13 2018 .
drwxr-xr-x 33 root wheel 1056 Apr 11 2019 ..
drwxrwxrwx 432 anthony wheel 13824 Jan 6 12:21 db
drwxr-xr-x 385 anthony wheel 12320 May 13 2018 db2
anthony at Anthonys-MacBook-Pro in /data/db
$ ls -al
total 17565016
drwxrwxrwx 432 anthony wheel 13824 Jan 6 12:21 .
drwxr-xr-x 4 root wheel 128 May 13 2018 ..
-rw--w--w- 1 anthony wheel 48 May 13 2018 WiredTiger
-rw--w--w- 1 anthony wheel 21 May 13 2018 WiredTiger.lock
-rw--w--w- 1 anthony wheel 1088 Jan 6 12:21 WiredTiger.turtle
-rw--w--w- 1 anthony wheel 1216512 Jan 6 12:21 WiredTiger.wt
-rw--w--w- 1 anthony wheel 4096 Jan 6 12:20 WiredTigerLAS.wt
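No answer is recorded here, but the listing explains the symptom: mongod inside the mongo:3.6 container runs as the mongodb user, not as anthony, and the bind-mounted ./data/db carries the host files' odd -rw--w--w- modes (no read bit for group or other), so the data directory is effectively read-only to mongod. A hedged sketch of two common remedies, not a confirmed fix:
(a) quick and dirty: open the bind-mounted tree to the container user:
$ chmod -R o+rwX ./data/db
(b) more robust: replace the bind mount with a named volume that Docker manages:
  mongo:
    container_name: nodetube-mongo
    image: mongo:3.6
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data: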

convert spring boot tomcat azure k8s deployment to standalone application

I have created an Azure DevOps project for Java, Spring Boot and Kubernetes as a way to learn about the Azure technology set. It does work: the simple Spring Boot web application is deployed, runs, and is rebuilt if I make code changes.
However, the application uses a very old Spring Boot version, 1.5.7.RELEASE, and it is deployed in a Tomcat server in k8s.
I am looking for some guidance on how to run it as a standalone Spring Boot version 2 application in Kubernetes. My attempts so far have resulted in the deployment timing out after 15 minutes in the Helm Upgrade step.
The existing Dockerfile:
FROM maven:3.5.2-jdk-8 AS build-env
WORKDIR /app
COPY . /app
RUN mvn package
FROM tomcat:8
RUN rm -rf /usr/local/tomcat/webapps/ROOT
COPY --from=build-env /app/target/*.war /usr/local/tomcat/webapps/ROOT.war
How do I change the Dockerfile to build the image of a standalone Spring Boot app?
I changed the pom to generate a jar file, then modified the Dockerfile to this:
FROM maven:3.5.2-jdk-8 AS build-env
WORKDIR /app
COPY . /app
RUN mvn package
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY --from=build-env /app/target/ROOT.jar .
RUN ls -la
ENTRYPOINT ["java","-jar","ROOT.jar"]
This builds; see the output from the log for the 'Build an image' step:
...
2019-06-25T23:33:38.0841365Z Step 9/20 : COPY --from=build-env /app/target/ROOT.jar .
2019-06-25T23:33:41.4839851Z ---> b478fb8867e6
2019-06-25T23:33:41.4841124Z Step 10/20 : RUN ls -la
2019-06-25T23:33:41.6653383Z ---> Running in 4618c503ac5c
2019-06-25T23:33:42.2022890Z total 50156
2019-06-25T23:33:42.2026590Z drwxr-xr-x 1 root root 4096 Jun 25 23:33 .
2019-06-25T23:33:42.2026975Z drwxr-xr-x 1 root root 4096 Jun 25 23:33 ..
2019-06-25T23:33:42.2027267Z -rwxr-xr-x 1 root root 0 Jun 25 23:33 .dockerenv
2019-06-25T23:33:42.2027608Z -rw-r--r-- 1 root root 51290350 Jun 25 23:33 ROOT.jar
2019-06-25T23:33:42.2027889Z drwxr-xr-x 2 root root 4096 May 9 20:49 bin
2019-06-25T23:33:42.2028188Z drwxr-xr-x 5 root root 340 Jun 25 23:33 dev
2019-06-25T23:33:42.2028467Z drwxr-xr-x 1 root root 4096 Jun 25 23:33 etc
2019-06-25T23:33:42.2028765Z drwxr-xr-x 2 root root 4096 May 9 20:49 home
2019-06-25T23:33:42.2029376Z drwxr-xr-x 1 root root 4096 May 11 01:32 lib
2019-06-25T23:33:42.2029682Z drwxr-xr-x 5 root root 4096 May 9 20:49 media
2019-06-25T23:33:42.2029961Z drwxr-xr-x 2 root root 4096 May 9 20:49 mnt
2019-06-25T23:33:42.2030257Z drwxr-xr-x 2 root root 4096 May 9 20:49 opt
2019-06-25T23:33:42.2030537Z dr-xr-xr-x 135 root root 0 Jun 25 23:33 proc
2019-06-25T23:33:42.2030937Z drwx------ 2 root root 4096 May 9 20:49 root
2019-06-25T23:33:42.2031214Z drwxr-xr-x 2 root root 4096 May 9 20:49 run
2019-06-25T23:33:42.2031523Z drwxr-xr-x 2 root root 4096 May 9 20:49 sbin
2019-06-25T23:33:42.2031797Z drwxr-xr-x 2 root root 4096 May 9 20:49 srv
2019-06-25T23:33:42.2032254Z dr-xr-xr-x 12 root root 0 Jun 25 23:33 sys
2019-06-25T23:33:42.2032355Z drwxrwxrwt 2 root root 4096 May 9 20:49 tmp
2019-06-25T23:33:42.2032656Z drwxr-xr-x 1 root root 4096 May 11 01:32 usr
2019-06-25T23:33:42.2032945Z drwxr-xr-x 1 root root 4096 May 9 20:49 var
2019-06-25T23:33:43.0909881Z Removing intermediate container 4618c503ac5c
2019-06-25T23:33:43.0911258Z ---> 0d824ce4ae62
2019-06-25T23:33:43.0911852Z Step 11/20 : ENTRYPOINT ["java","-jar","ROOT.jar"]
2019-06-25T23:33:43.2880002Z ---> Running in bba9345678be
...
The build completes, but deployment fails in the Helm Upgrade step, timing out after 15 minutes. This is the log:
2019-06-25T23:38:06.6438602Z ##[section]Starting: Helm upgrade
2019-06-25T23:38:06.6444317Z ==============================================================================
2019-06-25T23:38:06.6444448Z Task : Package and deploy Helm charts
2019-06-25T23:38:06.6444571Z Description : Deploy, configure, update a Kubernetes cluster in Azure Container Service by running helm commands
2019-06-25T23:38:06.6444648Z Version : 0.153.0
2019-06-25T23:38:06.6444927Z Author : Microsoft Corporation
2019-06-25T23:38:06.6445006Z Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/deploy/helm-deploy
2019-06-25T23:38:06.6445300Z ==============================================================================
2019-06-25T23:38:09.1285973Z [command]/opt/hostedtoolcache/helm/2.14.1/x64/linux-amd64/helm upgrade --tiller-namespace dev2134 --namespace dev2134 --install --force --wait --set image.repository=stephenacr.azurecr.io/stephene991 --set image.tag=20 --set applicationInsights.InstrumentationKey=643a47f5-58bd-4012-afea-b3c943bc33ce --set imagePullSecrets={stephendockerauth} --timeout 900 azuredevops /home/vsts/work/r1/a/Drop/drop/sampleapp-v0.2.0.tgz
2019-06-25T23:53:13.7882713Z UPGRADE FAILED
2019-06-25T23:53:13.7883396Z Error: timed out waiting for the condition
2019-06-25T23:53:13.7885043Z Error: UPGRADE FAILED: timed out waiting for the condition
2019-06-25T23:53:13.7967270Z ##[error]Error: UPGRADE FAILED: timed out waiting for the condition
2019-06-25T23:53:13.7976964Z ##[section]Finishing: Helm upgrade
I have had another look at this now that I am more familiar with all the technologies, and I have located the problem.
The helm upgrade statement times out waiting for the newly deployed pod to become live, but this never happens because the k8s liveness probe defined for the pod is failing. This can be seen with this command:
kubectl get po -n dev5998 -w
NAME READY STATUS RESTARTS AGE
sampleapp-86869d4d54-nzd9f 0/1 CrashLoopBackOff 17 48m
sampleapp-c8f84c857-phrrt 1/1 Running 0 1h
sampleapp-c8f84c857-rmq8w 1/1 Running 0 1h
tiller-deploy-79f84d5f-4r86q 1/1 Running 0 2h
The new pod is repeatedly restarted then killed. It seems to repeat forever or until another deployment is run.
In the events for the pod:
kubectl describe po sampleapp-86869d4d54-nzd9f -n dev5998
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 39m default-scheduler Successfully assigned sampleapp-86869d4d54-nzd9f to aks-agentpool-24470557-1
Normal SuccessfulMountVolume 39m kubelet, aks-agentpool-24470557-1 MountVolume.SetUp succeeded for volume "default-token-v72n5"
Normal Pulling 39m kubelet, aks-agentpool-24470557-1 pulling image "devopssampleacreg.azurecr.io/devopssamplec538:52"
Normal Pulled 39m kubelet, aks-agentpool-24470557-1 Successfully pulled image "devopssampleacreg.azurecr.io/devopssamplec538:52"
Normal Created 37m (x3 over 39m) kubelet, aks-agentpool-24470557-1 Created container
Normal Started 37m (x3 over 39m) kubelet, aks-agentpool-24470557-1 Started container
Normal Killing 37m (x2 over 38m) kubelet, aks-agentpool-24470557-1 Killing container with id docker://sampleapp:Container failed liveness probe.. Container will be killed and recreated.
Warning Unhealthy 36m (x6 over 38m) kubelet, aks-agentpool-24470557-1 Liveness probe failed: HTTP probe failed with statuscode: 404
Warning Unhealthy 34m (x12 over 38m) kubelet, aks-agentpool-24470557-1 Readiness probe failed: HTTP probe failed with statuscode: 404
Normal Pulled 9m25s (x12 over 38m) kubelet, aks-agentpool-24470557-1 Container image "devopssampleacreg.azurecr.io/devopssamplec538:52" already present on machine
Warning BackOff 4m10s (x112 over 34m) kubelet, aks-agentpool-24470557-1 Back-off restarting failed container
So there must be a difference in what URLs the application serves depending on how it is deployed, Tomcat or standalone, which now seems obvious.
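To make the diagnosis concrete: a WAR deployed as ROOT.war answers on / via Tomcat's ROOT context, whereas the chart's probes evidently hit a path the standalone jar does not serve, hence the 404s. A hedged sketch of aligning the two, assuming the chart uses standard httpGet probes and the Spring Boot 2 app has spring-boot-starter-actuator on its classpath:
# application.properties: expose a health endpoint
management.endpoints.web.exposure.include=health
# deployment template (or the values feeding it): probe that endpoint
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080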
