Docker volumes on CentOS 7

I have run into a problem on CentOS 7 when attempting to map a volume to the host in a tomcat container. This happens with the public tomcat images as well as an image I have created (based on centos instead of debian).
Instantiating a container as follows will succeed:
docker run -it -d tomcat:8
Instantiating a container as follows will also succeed, but with errors in the log, and logs are not written to the host:
docker run -it -d -v /usr/local/tomcat:/usr/local/tomcat tomcat:8
[wpackard@eagle2 tomcat]$ dkr run -it -d -v /usr/local/tomcat:/usr/local/tomcat tomcat:8
34075701b1436f83a24212170b4d2113ae698df244c449203b1c9af9814485c9
[wpackard@eagle2 tomcat]$ dkr ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
34075701b143 tomcat:8 "catalina.sh run" 5 seconds ago Up 4 seconds 8080/tcp sharp_einstein
[wpackard@eagle2 tomcat]$ dkr logs sharp_einstein
Using CATALINA_BASE: /usr/local/tomcat
Using CATALINA_HOME: /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME: /usr
Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
java.util.logging.ErrorManager: 4
java.io.FileNotFoundException: /usr/local/tomcat/logs/catalina.2015-03-31.log (Permission denied)
...
31-Mar-2015 15:32:04.088 SEVERE [Catalina-startStop-1] org.apache.catalina.startup.HostConfig.start Unable to create directory for deployment: /usr/local/tomcat/conf/Catalina/localhost
31-Mar-2015 15:32:04.097 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /usr/local/tomcat/webapps/ROOT
31-Mar-2015 15:32:04.468 WARNING [localhost-startStop-1] org.apache.catalina.core.StandardContext.postWorkDirectory Failed to create work directory [/usr/local/tomcat/work/Catalina/localhost/ROOT] for context []
31-Mar-2015 15:32:05.966 SEVERE [localhost-startStop-1] org.apache.jasper.EmbeddedServletOptions.<init> The scratchDir you specified: /usr/local/tomcat/work/Catalina/localhost/ROOT is unusable.
31-Mar-2015 15:32:06.042 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /usr/local/tomcat/webapps/ROOT has finished in 1,929 ms
31-Mar-2015 15:32:06.043 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /usr/local/tomcat/webapps/docs
31-Mar-2015 15:32:06.093 WARNING [localhost-startStop-1] org.apache.catalina.core.StandardContext.postWorkDirectory Failed to create work directory [/usr/local/tomcat/work/Catalina/localhost/docs] for context [/docs]
31-Mar-2015 15:32:06.216 SEVERE [localhost-startStop-1] org.apache.jasper.EmbeddedServletOptions.<init> The scratchDir you specified: /usr/local/tomcat/work/Catalina/localhost/docs is unusable.
31-Mar-2015 15:32:06.219 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /usr/local/tomcat/webapps/docs has finished in 176 ms
31-Mar-2015 15:32:06.220 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /usr/local/tomcat/webapps/examples
31-Mar-2015 15:32:06.272 WARNING [localhost-startStop-1] org.apache.catalina.core.StandardContext.postWorkDirectory Failed to create work directory [/usr/local/tomcat/work/Catalina/localhost/examples] for context [/examples]
31-Mar-2015 15:32:07.952 SEVERE [localhost-startStop-1] org.apache.jasper.EmbeddedServletOptions.<init> The scratchDir you specified: /usr/local/tomcat/work/Catalina/localhost/examples is unusable.
[wpackard@eagle2 tomcat]$
Exec'ing into the container and attempting to write also fails.
[wpackard@eagle2 tomcat]$ dkr ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
34075701b143 tomcat:8 "catalina.sh run" 5 minutes ago Up 5 minutes 8080/tcp sharp_einstein
[wpackard@eagle2 tomcat]$ dkr exec -it sharp_einstein /bin/bash
root@34075701b143:/usr/local/tomcat# ls -l
total 96
-rw-rw-r--. 1 root root 56977 Jan 23 11:59 LICENSE
-rw-rw-r--. 1 root root 1397 Jan 23 11:59 NOTICE
-rw-rw-r--. 1 root root 6779 Jan 23 11:59 RELEASE-NOTES
-rw-rw-r--. 1 root root 16204 Jan 23 11:59 RUNNING.txt
drwxrwxr-x. 2 root root 4096 Mar 31 12:14 bin
drwxrwxr-x. 2 root root 4096 Jan 23 11:59 conf
drwxrwxr-x. 2 root root 4096 Mar 31 12:14 lib
drwxrwxr-x. 2 root root 6 Jan 23 11:56 logs
drwxrwxr-x. 2 root root 29 Mar 31 12:14 temp
drwxrwxr-x. 7 root root 76 Jan 23 11:57 webapps
drwxrwxr-x. 2 root root 6 Jan 23 11:56 work
root@34075701b143:/usr/local/tomcat# cd logs
root@34075701b143:/usr/local/tomcat/logs# echo "test" > test.log
bash: test.log: Permission denied
I have created an instance of the postgresql container on CentOS, and it successfully maps and uses the volume, verified by creating a db, stopping the instance, and then re-running the container.
[wpackard@eagle2 ~]$ uname --all
Linux eagle2 3.10.0-123.20.1.el7.x86_64 #1 SMP Thu Jan 29 18:05:33 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[wpackard@eagle2 ~]$
dkr is an alias for docker; I have created a docker group and added myself to it to eliminate the need for sudo.
The volume mapping works correctly on Ubuntu. On CentOS I have tried both the packaged version (as below) and an update to 1.5.
[wpackard@eagle2 ~]$ dkr --version
Docker version 1.3.2, build 39fa2fa/1.3.2
[wpackard@eagle2 ~]$
How do I make volumes work on CentOS?

I think your volumes are working :-) You have a permission problem. I run into this fairly often with the mapping of user ids between the host and the container. On your host, if you look at /usr/local/tomcat (ls -ld), you will see an owner, a group and the permissions. You probably have something like 0755 (read/write/exec by owner, read/exec by group, read/exec by world). You can test this theory easily: simply remember the current settings for /usr/local/tomcat/logs, then do:
chmod 777 /usr/local/tomcat/logs
from the Docker host (not the container). Then run your test in the container; the Permission denied should evaporate.
This is NOT a good fix, though. I don't know what the community says about user id mapping for Docker. One thing you could do is figure out the user and group that own that directory on your host. Then, when you create your image (or at run time), create a user with the same uid and a group with the same gid in the container, and run your Tomcat service as that user in the container.
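A minimal sketch of both approaches (the uid/gid value 991 is hypothetical; use whatever ls -lnd reports on your host):
# on the host: find the numeric owner of the directory
ls -lnd /usr/local/tomcat/logs
# option 1: at run time, run the container as that uid/gid
docker run -it -d -u 991:991 -v /usr/local/tomcat:/usr/local/tomcat tomcat:8
# option 2: at build time, create a matching user in your image
# (Dockerfile lines; the official tomcat:8 image is Debian-based)
RUN groupadd -g 991 tomcat && useradd -u 991 -g tomcat -M tomcat
USER tomcat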

This is due to SELinux.
You must attach the correct SELinux type to the host directory:
host$ chcon -Rt svirt_sandbox_file_t /usr/local/tomcat
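You can verify the label change with ls -Z; the "before" type below is just a typical default, and may differ on your system:
host$ ls -Zd /usr/local/tomcat
drwxr-xr-x. root root unconfined_u:object_r:usr_t:s0 /usr/local/tomcat
host$ chcon -Rt svirt_sandbox_file_t /usr/local/tomcat
host$ ls -Zd /usr/local/tomcat
drwxr-xr-x. root root unconfined_u:object_r:svirt_sandbox_file_t:s0 /usr/local/tomcat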

Related

chown not working when copying a file in a Dockerfile

I'm running Docker Engine on Windows and am trying to add my own file to the image. The problem is that when I copy the file, its ownership is always root:root, but it needs to be heartbeat:heartbeat (an existing user in the image). Mounting a single file with the -v parameter of docker run doesn't seem to be possible on Windows at the moment. That's why I tried to create my own image with a Dockerfile:
FROM docker.elastic.co/beats/heartbeat:7.16.3
USER root
COPY --chown=heartbeat:heartbeat yml/heartbeat.yml /usr/share/heartbeat/heartbeat.yml
RUN chown -R heartbeat:heartbeat /usr/share/heartbeat
The --chown parameter on the COPY does nothing. The file is still owned by root when I check, and the RUN chown command results in an error. Here is the output:
docker image build ./ -t custom/heartbeat:7.16.3
Sending build context to Docker daemon 10.75kB
Step 1/4 : FROM docker.elastic.co/beats/heartbeat:7.16.3
---> b64ad4b42006
Step 2/4 : USER root
---> Using cache
---> 922a9121e51b
Step 3/4 : COPY --chown=heartbeat:heartbeat yml/heartbeat.yml /usr/share/heartbeat/heartbeat.yml
---> Using cache
---> f30eb4934dca
Step 4/4 : RUN chown -R heartbeat:heartbeat /usr/share/heartbeat
---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (windows/amd64) and no specific platform was requested
---> Running in 2ae3bfdd5422
The command '/bin/sh -c chown -R heartbeat:heartbeat /usr/share/heartbeat' returned a non-zero code: 4294967295: failed to shutdown container: container 2ae3bfdd5422e81461a14896db0908e4cd67af1a6f99c629abff1e588f62fc32 encountered an error during hcsshim::System::waitBackground: failure in a Windows system call: The virtual machine or container with the specified identifier is not running. (0xc0370110): subsequent terminate failed container 2ae3bfdd5422e81461a14896db0908e4cd67af1a6f99c629abff1e588f62fc32 encountered an error during hcsshim::System::waitBackground: failure in a Windows system call: The virtual machine or container with the specified identifier is not running. (0xc0370110)
All help is welcome...
Running with --platform:
PS C:\SynteticMonitoring> docker image build ./ -t custom/heartbeat:7.16.3
Sending build context to Docker daemon 9.728kB
Step 1/4 : FROM --platform=linux/amd64 docker.elastic.co/beats/heartbeat:7.16.3
---> b64ad4b42006
Step 2/4 : USER root
---> Using cache
---> 922a9121e51b
Step 3/4 : COPY --chown=heartbeat:heartbeat yml/heartbeat.yml /usr/share/heartbeat/heartbeat.yml
---> Using cache
---> f30eb4934dca
Step 4/4 : RUN chmod +r /usr/share/heartbeat/heartbeat.yml
---> Using cache
---> e9a075d2ab53
Successfully built e9a075d2ab53
Successfully tagged custom/heartbeat:7.16.3
PS C:\SynteticMonitoring> docker run --interactive --tty --entrypoint /bin/sh custom/heartbeat:7.16.3
sh-4.2# ls -l
total 106916
-rw-r--r-- 1 root root 13675 Jan 7 00:47 LICENSE.txt
-rw-r--r-- 1 root root 1964303 Jan 7 00:47 NOTICE.txt
-rw-r--r-- 1 root root 851 Jan 7 00:47 README.md
drwxrwxr-x 2 root root 4096 Jan 7 00:48 data
-rw-r--r-- 1 root root 374197 Jan 7 00:47 fields.yml
-rwxr-xr-x 1 root root 107027952 Jan 7 00:47 heartbeat
-rw-r--r-- 1 root root 69196 Jan 7 00:47 heartbeat.reference.yml
-rw-rw-rw- 1 root root 1631 Jan 26 06:49 heartbeat.yml
drwxr-xr-x 2 root root 4096 Jan 7 00:47 kibana
drwxrwxr-x 2 root root 4096 Jan 7 00:48 logs
drwxr-xr-x 2 root root 4096 Jan 7 00:47 monitors.d
sh-4.2# pwd
/usr/share/heartbeat
You can't chown a file to a user that does not exist. It seems that the heartbeat user and group do not exist in your base image.
That's why the COPY --chown does nothing and you get files owned by root.
You can fix this by creating the user before COPYing. To do this, add a line before your COPY statement, such as:
RUN addgroup heartbeat && adduser -S -H heartbeat -G heartbeat
If you don't have addgroup and adduser in your base image, try the alternative:
RUN useradd -rUM -s /usr/sbin/nologin heartbeat
This will create the heartbeat group and user, and chown will then be able to change the ownership successfully.
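Putting it together, a minimal sketch of the Dockerfile (assuming the useradd variant works in this base image and the user really is missing, as diagnosed above):
FROM docker.elastic.co/beats/heartbeat:7.16.3
USER root
# create the user/group so that --chown has something to resolve
RUN useradd -rUM -s /usr/sbin/nologin heartbeat
COPY --chown=heartbeat:heartbeat yml/heartbeat.yml /usr/share/heartbeat/heartbeat.yml
USER heartbeat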
According to the Dockerfile documentation:
The optional --platform flag can be used to specify the platform of the image in case FROM references a multi-platform image. For example, linux/amd64, linux/arm64, or windows/amd64. By default, the target platform of the build request is used.
I suggest trying something like:
FROM [--platform=<platform>] <image> [AS <name>]
FROM --platform=linux/amd64 docker.elastic.co/beats/heartbeat:7.16.3
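If you would rather not hard-code the platform in the Dockerfile, it can also be passed at build time; a sketch, assuming a Docker version whose build command accepts --platform (BuildKit/buildx):
docker build --platform linux/amd64 -t custom/heartbeat:7.16.3 .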

Permission for singularity

I got an issue when running the whole ChIP-seq pipeline with the singularity profile on my local PC (Windows, with the Windows Subsystem for Linux).
Error executing process > 'output_documentation'
Caused by:
Failed to pull singularity image
command: singularity pull --name nfcore-chipseq-1.2.2.img.pulling.1630098407814 docker://nfcore/chipseq:1.2.2 > /dev/null
status : 255
message:
INFO: Using cached SIF image
FATAL: While making image from oci registry: error copying image out of cache: could not open temporary file for copy: failed to change permission of ./tmp-copy-2575820807: chmod ./tmp-copy-2575820807: operation not permitted
I'm using Singularity 3.8.2.
I have also pointed NXF_SINGULARITY_CACHEDIR at a hard drive instead of /home/.singularity.
I also checked the folder to make sure all the files can be accessed:
total 0
drwxrwxrwx 1 root root 4096 Aug 28 05:06 .
drwxrwxrwx 1 root root 4096 Aug 28 04:47 ..
-rwxrwxrwx 1 root root 0 Aug 28 04:53 tmp-copy-2299332276
-rwxrwxrwx 1 root root 0 Aug 28 05:06 tmp-copy-2575820807
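For reference, pointing the cache at another drive is just an environment variable; a sketch with a hypothetical WSL mount path:
export NXF_SINGULARITY_CACHEDIR=/mnt/d/singularity-cache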

Tomcat is not getting started: Permission denied

I am getting the below error when trying to start Tomcat via a systemd service:
systemd[1]: tomcat.service: Failed to execute command: Permission denied
systemd[1]: tomcat.service: Failed at step EXEC spawning /opt/tomcat/bin/startup.sh: Permission denied
Below is my tomcat.service configuration
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target
[Service]
Type=forking
Environment=JAVA_HOME=/usr/lib/jvm/jre
Environment=CATALINA_PID=/opt/tomcat/temp/tomcat.pid
Environment=CATALINA_HOME=/opt/tomcat
Environment=CATALINA_BASE=/opt/tomcat
Environment='CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC'
Environment='JAVA_OPTS=-Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom'
ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/bin/kill -15 $MAINPID
User=tomcat
Group=tomcat
[Install]
WantedBy=multi-user.target
These are my permissions on the files in the bin directory:
drwxrwx---. 2 tomcat tomcat 4096 Mar 22 05:56 .
drwx------. 9 tomcat tomcat 276 Mar 22 05:58 ..
-rw-r-----. 1 tomcat tomcat 35071 Mar 11 09:33 bootstrap.jar
-rw-r-----. 1 tomcat tomcat 15953 Mar 11 09:33 catalina.bat
-rwxr-x--x. 1 tomcat tomcat 23792 Mar 11 09:33 catalina.sh
-rw-r-----. 1 tomcat tomcat 1664 Mar 11 09:36 catalina-tasks.xml
-rw-r-----. 1 tomcat tomcat 2123 Mar 11 09:33 ciphers.bat
-rwxr-x--x. 1 tomcat tomcat 1997 Mar 11 09:33 ciphers.sh
-rw-r-----. 1 tomcat tomcat 25197 Mar 11 09:33 commons-daemon.jar
-rw-r-----. 1 tomcat tomcat 206895 Mar 11 09:33 commons-daemon-native.tar.gz
-rw-r-----. 1 tomcat tomcat 2040 Mar 11 09:33 configtest.bat
-rwxr-x--x. 1 tomcat tomcat 1922 Mar 11 09:33 configtest.sh
-rwxr-x--x. 1 tomcat tomcat 8675 Mar 11 09:33 daemon.sh
-rw-r-----. 1 tomcat tomcat 2091 Mar 11 09:33 digest.bat
-rwxr-x--x. 1 tomcat tomcat 1965 Mar 11 09:33 digest.sh
-rw-r-----. 1 tomcat tomcat 3606 Mar 11 09:33 makebase.bat
-rwxr-x--x. 1 tomcat tomcat 3382 Mar 11 09:33 makebase.sh
-rw-r-----. 1 tomcat tomcat 3460 Mar 11 09:33 setclasspath.bat
-rwxr-x--x. 1 tomcat tomcat 3708 Mar 11 09:33 setclasspath.sh
-rw-r-----. 1 tomcat tomcat 2020 Mar 11 09:33 shutdown.bat
-rwxr-x--x. 1 tomcat tomcat 1902 Mar 11 09:33 shutdown.sh
-rw-r-----. 1 tomcat tomcat 2022 Mar 11 09:33 startup.bat
-rwxr-x--x. 1 tomcat tomcat 1904 Mar 11 09:33 startup.sh
-rw-r-----. 1 tomcat tomcat 49372 Mar 11 09:33 tomcat-juli.jar
-rw-r-----. 1 tomcat tomcat 419428 Mar 11 09:33 tomcat-native.tar.gz
-rw-r-----. 1 tomcat tomcat 4574 Mar 11 09:33 tool-wrapper.bat
NOTE: I am able to start Tomcat with the sudo ./startup.sh command by navigating to the bin directory.
Can you check your /opt and /opt/tomcat permissions?
Looks like
chmod a+rx /opt /opt/tomcat/ /opt/tomcat/bin
should help
I suppose you followed one of the many copied online tutorials where the tomcat user is created with /opt/tomcat/ as its home directory, using something like:
sudo useradd -d /opt/tomcat -s /sbin/nologin tomcat
SELinux prevents applications from being launched from a home directory, with a message like the following in /var/log/audit/audit.log:
type=AVC msg=audit(1614250994.710:33614): avc: denied { execute } for pid=60244 comm="(artup.sh)" name="startup.sh" dev="dm-3" ino=19000615 scontext=system_u:system_r:init_t:s0 tcontext=unconfined_u:object_r:user_tmp_t:s0 tclass=file permissive=0
I don't believe the tomcat user needs a home folder, so either remove it from an existing user with:
sudo usermod -d / tomcat
Or create your new user with the following instead:
sudo useradd -M -s /sbin/nologin tomcat
Reset the SELinux properties with the following afterwards:
sudo restorecon -rv /opt/tomcat
I encountered the same problem and fixed it with restorecon.
I don't know if the reason the problem happened is the same as in the original question, but I think it depends on how Tomcat was installed.
In general, we download the tar.gz into a temp directory and run tar xzvf there. Next, we move it to /opt or /usr/local. At that point, if we use mv, the SELinux context is not changed and Permission denied happens, but you can fix it with restorecon. If we use cp -R instead, the SELinux context is changed and the Permission denied does not happen.
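A minimal sketch of the difference (the version number and paths are examples):
cd /tmp && tar xzvf apache-tomcat-9.0.0.tar.gz
mv apache-tomcat-9.0.0 /opt/tomcat   # mv keeps the /tmp label (e.g. user_tmp_t)
ls -Zd /opt/tomcat                   # shows the stale context
restorecon -rv /opt/tomcat           # relabels to the default for /opt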
In case someone follows the Google links to get here: there were three problems in my case that prevented Tomcat 9 (installed from a tar file) from starting on a RHEL 8 system with CIS-recommended security lock-downs. I think the DoD STIGs are similar, but I'm not sure. I had the exact same messages in the system journal as the OP.
First, our security folks went overboard and added the "noexec" option to the mount that Tomcat was on, which is a separate partition and LVM volume for both security and organizational reasons. I had to remove the "noexec" option from the mount in the "/etc/fstab" file, to wit:
Before:
/dev/mapper/vg01-mymount /mymount xfs defaults,nodev,noexec 0 0
After:
/dev/mapper/vg01-mymount /mymount xfs defaults,nodev 0 0
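After editing /etc/fstab, the filesystem can be remounted with the new options without a reboot:
mount -o remount /mymount
findmnt /mymount   # verify that noexec is no longer listed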
Second, I found they had installed the "fapolicyd" daemon, which acts as application allow-listing for execution of and access to files. Instead of using the standard method of adding individual binaries to the list in "/etc/fapolicyd/fapolicyd.trust", or creating files in the "/etc/fapolicyd/trust.d/" directory, I followed a recommendation from a reply on a blog entry here: https://computingforgeeks.com/install-apache-tomcat-9-on-linux-rhel-centos/#comment-7841 . This is the coward's way out: it adds a policy permission for the tomcat user to access the whole tomcat directory, and depends on file-level permissions to do the security from there:
allow perm=any uid=tomcat gid=tomcat : dir=/mymount/tomcat/
I'm not really sure this will pass scrutiny with any security policies where you work, but it gets the thing running. Individual fapolicyd rules can be made to allow running specific files, certain MIME types, read-only access on whole directories, etc. The major flaw I found is that the logging from the daemon is less than stellar (or non-existent in my case), which left me scratching my head for a couple of days as to what was blocking Tomcat from starting. Just knowing fapolicyd is installed is half the battle won.
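If you put the rule in a file under /etc/fapolicyd/rules.d/ instead, it typically has to be compiled and the daemon restarted; a sketch (package layouts vary between releases):
fagenrules --load
systemctl restart fapolicyd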
Third, checking SELinux reports (the aureport binary) showed that the systemd binary context of "init_t" did not have permission to execute files in the Tomcat dir because they had the wrong context ("default_t"). Here I only changed the context of the script files in tomcat/bin/ to "initrc_exec_t", which may also be bad, but it worked without disabling SELinux or doing weird things like compiling a new SELinux policy module to allow that access (i.e. allowing init_t to execute default_t files, which seems like it would be much worse). I used a command set similar to the below:
semanage fcontext --add --type initrc_exec_t /mymount/tomcat/bin/startup.sh
semanage fcontext --add --type initrc_exec_t /mymount/tomcat/bin/shutdown.sh
semanage fcontext --add --type initrc_exec_t /mymount/tomcat/bin/catalina.sh
semanage fcontext --add --type initrc_exec_t /mymount/tomcat/bin/setclasspath.sh
semanage fcontext --add --type initrc_exec_t /mymount/tomcat/bin/setenv.sh
restorecon -rv /mymount/tomcat/
I don't know if it needed the last three (catalina.sh, setclasspath.sh, setenv.sh), but I added them to be sure. This fixed my issue with systemd.

convert spring boot tomcat azure k8s deployment to standalone application

I have created an Azure DevOps project for Java, Spring Boot and Kubernetes as a way to learn about the Azure technology set. It does work: the simple Spring Boot web application is deployed, runs, and is rebuilt if I make code changes.
However, the application uses a very old version of Spring Boot (1.5.7.RELEASE), and it is deployed in a Tomcat server in k8s.
I am looking for some guidance on how to run it as a standalone Spring Boot version 2 application in Kubernetes. My attempts so far have resulted in the deployment timing out after 15 minutes in the Helm Upgrade step.
The existing Dockerfile:
FROM maven:3.5.2-jdk-8 AS build-env
WORKDIR /app
COPY . /app
RUN mvn package
FROM tomcat:8
RUN rm -rf /usr/local/tomcat/webapps/ROOT
COPY --from=build-env /app/target/*.war /usr/local/tomcat/webapps/ROOT.war
How do I change the Dockerfile to build an image of a standalone Spring Boot app?
I changed the pom to generate a jar file, then modified the Dockerfile to this:
FROM maven:3.5.2-jdk-8 AS build-env
WORKDIR /app
COPY . /app
RUN mvn package
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY --from=build-env /app/target/ROOT.jar .
RUN ls -la
ENTRYPOINT ["java","-jar","ROOT.jar"]
This builds; see the output from the log for the 'Build an image' step:
...
2019-06-25T23:33:38.0841365Z Step 9/20 : COPY --from=build-env /app/target/ROOT.jar .
2019-06-25T23:33:41.4839851Z ---> b478fb8867e6
2019-06-25T23:33:41.4841124Z Step 10/20 : RUN ls -la
2019-06-25T23:33:41.6653383Z ---> Running in 4618c503ac5c
2019-06-25T23:33:42.2022890Z total 50156
2019-06-25T23:33:42.2026590Z drwxr-xr-x 1 root root 4096 Jun 25 23:33 .
2019-06-25T23:33:42.2026975Z drwxr-xr-x 1 root root 4096 Jun 25 23:33 ..
2019-06-25T23:33:42.2027267Z -rwxr-xr-x 1 root root 0 Jun 25 23:33 .dockerenv
2019-06-25T23:33:42.2027608Z -rw-r--r-- 1 root root 51290350 Jun 25 23:33 ROOT.jar
2019-06-25T23:33:42.2027889Z drwxr-xr-x 2 root root 4096 May 9 20:49 bin
2019-06-25T23:33:42.2028188Z drwxr-xr-x 5 root root 340 Jun 25 23:33 dev
2019-06-25T23:33:42.2028467Z drwxr-xr-x 1 root root 4096 Jun 25 23:33 etc
2019-06-25T23:33:42.2028765Z drwxr-xr-x 2 root root 4096 May 9 20:49 home
2019-06-25T23:33:42.2029376Z drwxr-xr-x 1 root root 4096 May 11 01:32 lib
2019-06-25T23:33:42.2029682Z drwxr-xr-x 5 root root 4096 May 9 20:49 media
2019-06-25T23:33:42.2029961Z drwxr-xr-x 2 root root 4096 May 9 20:49 mnt
2019-06-25T23:33:42.2030257Z drwxr-xr-x 2 root root 4096 May 9 20:49 opt
2019-06-25T23:33:42.2030537Z dr-xr-xr-x 135 root root 0 Jun 25 23:33 proc
2019-06-25T23:33:42.2030937Z drwx------ 2 root root 4096 May 9 20:49 root
2019-06-25T23:33:42.2031214Z drwxr-xr-x 2 root root 4096 May 9 20:49 run
2019-06-25T23:33:42.2031523Z drwxr-xr-x 2 root root 4096 May 9 20:49 sbin
2019-06-25T23:33:42.2031797Z drwxr-xr-x 2 root root 4096 May 9 20:49 srv
2019-06-25T23:33:42.2032254Z dr-xr-xr-x 12 root root 0 Jun 25 23:33 sys
2019-06-25T23:33:42.2032355Z drwxrwxrwt 2 root root 4096 May 9 20:49 tmp
2019-06-25T23:33:42.2032656Z drwxr-xr-x 1 root root 4096 May 11 01:32 usr
2019-06-25T23:33:42.2032945Z drwxr-xr-x 1 root root 4096 May 9 20:49 var
2019-06-25T23:33:43.0909881Z Removing intermediate container 4618c503ac5c
2019-06-25T23:33:43.0911258Z ---> 0d824ce4ae62
2019-06-25T23:33:43.0911852Z Step 11/20 : ENTRYPOINT ["java","-jar","ROOT.jar"]
2019-06-25T23:33:43.2880002Z ---> Running in bba9345678be
...
The build completes but the deployment fails in the Helm Upgrade step, timing out after 15 minutes. This is the log:
2019-06-25T23:38:06.6438602Z ##[section]Starting: Helm upgrade
2019-06-25T23:38:06.6444317Z ==============================================================================
2019-06-25T23:38:06.6444448Z Task : Package and deploy Helm charts
2019-06-25T23:38:06.6444571Z Description : Deploy, configure, update a Kubernetes cluster in Azure Container Service by running helm commands
2019-06-25T23:38:06.6444648Z Version : 0.153.0
2019-06-25T23:38:06.6444927Z Author : Microsoft Corporation
2019-06-25T23:38:06.6445006Z Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/deploy/helm-deploy
2019-06-25T23:38:06.6445300Z ==============================================================================
2019-06-25T23:38:09.1285973Z [command]/opt/hostedtoolcache/helm/2.14.1/x64/linux-amd64/helm upgrade --tiller-namespace dev2134 --namespace dev2134 --install --force --wait --set image.repository=stephenacr.azurecr.io/stephene991 --set image.tag=20 --set applicationInsights.InstrumentationKey=643a47f5-58bd-4012-afea-b3c943bc33ce --set imagePullSecrets={stephendockerauth} --timeout 900 azuredevops /home/vsts/work/r1/a/Drop/drop/sampleapp-v0.2.0.tgz
2019-06-25T23:53:13.7882713Z UPGRADE FAILED
2019-06-25T23:53:13.7883396Z Error: timed out waiting for the condition
2019-06-25T23:53:13.7885043Z Error: UPGRADE FAILED: timed out waiting for the condition
2019-06-25T23:53:13.7967270Z ##[error]Error: UPGRADE FAILED: timed out waiting for the condition
2019-06-25T23:53:13.7976964Z ##[section]Finishing: Helm upgrade
I have had another look at this now that I am more familiar with all the technologies, and I have located the problem.
The helm upgrade step is timing out waiting for the newly deployed pod to become live, but this never happens because the k8s liveness probe defined for the pod is not working. This can be seen with this command:
kubectl get po -n dev5998 -w
NAME READY STATUS RESTARTS AGE
sampleapp-86869d4d54-nzd9f 0/1 CrashLoopBackOff 17 48m
sampleapp-c8f84c857-phrrt 1/1 Running 0 1h
sampleapp-c8f84c857-rmq8w 1/1 Running 0 1h
tiller-deploy-79f84d5f-4r86q 1/1 Running 0 2h
The new pod is repeatedly restarted then killed. It seems to repeat forever or until another deployment is run.
In the describe output for the pod:
kubectl describe po sampleapp-86869d4d54-nzd9f -n dev5998
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 39m default-scheduler Successfully assigned sampleapp-86869d4d54-nzd9f to aks-agentpool-24470557-1
Normal SuccessfulMountVolume 39m kubelet, aks-agentpool-24470557-1 MountVolume.SetUp succeeded for volume "default-token-v72n5"
Normal Pulling 39m kubelet, aks-agentpool-24470557-1 pulling image "devopssampleacreg.azurecr.io/devopssamplec538:52"
Normal Pulled 39m kubelet, aks-agentpool-24470557-1 Successfully pulled image "devopssampleacreg.azurecr.io/devopssamplec538:52"
Normal Created 37m (x3 over 39m) kubelet, aks-agentpool-24470557-1 Created container
Normal Started 37m (x3 over 39m) kubelet, aks-agentpool-24470557-1 Started container
Normal Killing 37m (x2 over 38m) kubelet, aks-agentpool-24470557-1 Killing container with id docker://sampleapp:Container failed liveness probe.. Container will be killed and recreated.
Warning Unhealthy 36m (x6 over 38m) kubelet, aks-agentpool-24470557-1 Liveness probe failed: HTTP probe failed with statuscode: 404
Warning Unhealthy 34m (x12 over 38m) kubelet, aks-agentpool-24470557-1 Readiness probe failed: HTTP probe failed with statuscode: 404
Normal Pulled 9m25s (x12 over 38m) kubelet, aks-agentpool-24470557-1 Container image "devopssampleacreg.azurecr.io/devopssamplec538:52" already present on machine
Warning BackOff 4m10s (x112 over 34m) kubelet, aks-agentpool-24470557-1 Back-off restarting failed container
So there must be a difference in what URLs the application serves depending on how it is deployed, Tomcat vs. standalone. Which now seems obvious.
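A quick way to confirm this kind of mismatch is to compare the probe path in the deployment with what the app actually serves; a sketch using the names from the output above (port 8080 is an assumption for a standalone Spring Boot app):
kubectl -n dev5998 get deploy sampleapp -o yaml | grep -B2 -A6 livenessProbe
kubectl -n dev5998 port-forward sampleapp-86869d4d54-nzd9f 8080:8080
curl -i http://localhost:8080/   # a 404 here matches the probe failures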

Cannot access mounted volume in docker container

I downloaded the latest node image from Docker and tried to run a container with the following command:
$ sudo docker run -it -v $(pwd)/app:/home/node/app --name node node /bin/bash
Then the container was created and I went into the /home/node/app dir. I tried the ls command and got 'Permission denied'.
I searched online; someone suggested changing the owner of app/ on the host to 1000, but it doesn't work.
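For reference, the suggested fix was along these lines, run on the host (it did not help here):
sudo chown -R 1000:1000 $(pwd)/app
Given the SELinux context visible in the id output below, the volume label may be the real blocker; on SELinux hosts Docker supports a relabeling suffix on the mount (an assumption about the cause, not confirmed in the thread):
sudo docker run -it -v $(pwd)/app:/home/node/app:Z --name node node /bin/bash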
Here is some information I think may be helpful:
$ id //at the host
uid=1000(qwang) gid=1000(qwang) groups=1000(qwang),10(wheel) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ id //in the container 'node'
uid=0(root) gid=0(root) groups=0(root)
$ id node //in the container 'node'
uid=1000(node) gid=1000(node) groups=1000(node)
$ ls -al //pwd => /home/node
drwxr-xr-x. 3 node node 69 Jul 19 13:51 .
drwxr-xr-x. 3 root root 18 Jul 8 04:16 ..
-rw-r--r--. 1 node node 220 Nov 5 2016 .bash_logout
-rw-r--r--. 1 node node 3515 Nov 5 2016 .bashrc
-rw-r--r--. 1 node node 675 Nov 5 2016 .profile
drwxrwxr-x. 2 node node 4096 Jul 19 13:50 app
