Convert Spring Boot Tomcat Azure k8s deployment to standalone application

I have created an Azure DevOps project for Java, Spring Boot and Kubernetes as a way to learn the Azure technology set. It works: the simple Spring Boot web application is deployed, runs, and is rebuilt if I make code changes.
However, the application uses a very old version of Spring Boot (1.5.7.RELEASE), and it is deployed in a Tomcat server in k8s.
I am looking for some guidance on how to run it as a standalone Spring Boot 2 application in Kubernetes. My attempts so far have resulted in the deployment timing out after 15 minutes in the Helm upgrade step.
The existing Dockerfile:
FROM maven:3.5.2-jdk-8 AS build-env
WORKDIR /app
COPY . /app
RUN mvn package
FROM tomcat:8
RUN rm -rf /usr/local/tomcat/webapps/ROOT
COPY --from=build-env /app/target/*.war /usr/local/tomcat/webapps/ROOT.war
How should the Dockerfile be changed to build the image of a standalone Spring Boot app?
I changed the POM to generate a jar file, then modified the Dockerfile to this:
FROM maven:3.5.2-jdk-8 AS build-env
WORKDIR /app
COPY . /app
RUN mvn package
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY --from=build-env /app/target/ROOT.jar .
RUN ls -la
ENTRYPOINT ["java","-jar","ROOT.jar"]
This Dockerfile builds; see the output from the log for the 'Build an image' step:
...
2019-06-25T23:33:38.0841365Z Step 9/20 : COPY --from=build-env /app/target/ROOT.jar .
2019-06-25T23:33:41.4839851Z ---> b478fb8867e6
2019-06-25T23:33:41.4841124Z Step 10/20 : RUN ls -la
2019-06-25T23:33:41.6653383Z ---> Running in 4618c503ac5c
2019-06-25T23:33:42.2022890Z total 50156
2019-06-25T23:33:42.2026590Z drwxr-xr-x 1 root root 4096 Jun 25 23:33 .
2019-06-25T23:33:42.2026975Z drwxr-xr-x 1 root root 4096 Jun 25 23:33 ..
2019-06-25T23:33:42.2027267Z -rwxr-xr-x 1 root root 0 Jun 25 23:33 .dockerenv
2019-06-25T23:33:42.2027608Z -rw-r--r-- 1 root root 51290350 Jun 25 23:33 ROOT.jar
2019-06-25T23:33:42.2027889Z drwxr-xr-x 2 root root 4096 May 9 20:49 bin
2019-06-25T23:33:42.2028188Z drwxr-xr-x 5 root root 340 Jun 25 23:33 dev
2019-06-25T23:33:42.2028467Z drwxr-xr-x 1 root root 4096 Jun 25 23:33 etc
2019-06-25T23:33:42.2028765Z drwxr-xr-x 2 root root 4096 May 9 20:49 home
2019-06-25T23:33:42.2029376Z drwxr-xr-x 1 root root 4096 May 11 01:32 lib
2019-06-25T23:33:42.2029682Z drwxr-xr-x 5 root root 4096 May 9 20:49 media
2019-06-25T23:33:42.2029961Z drwxr-xr-x 2 root root 4096 May 9 20:49 mnt
2019-06-25T23:33:42.2030257Z drwxr-xr-x 2 root root 4096 May 9 20:49 opt
2019-06-25T23:33:42.2030537Z dr-xr-xr-x 135 root root 0 Jun 25 23:33 proc
2019-06-25T23:33:42.2030937Z drwx------ 2 root root 4096 May 9 20:49 root
2019-06-25T23:33:42.2031214Z drwxr-xr-x 2 root root 4096 May 9 20:49 run
2019-06-25T23:33:42.2031523Z drwxr-xr-x 2 root root 4096 May 9 20:49 sbin
2019-06-25T23:33:42.2031797Z drwxr-xr-x 2 root root 4096 May 9 20:49 srv
2019-06-25T23:33:42.2032254Z dr-xr-xr-x 12 root root 0 Jun 25 23:33 sys
2019-06-25T23:33:42.2032355Z drwxrwxrwt 2 root root 4096 May 9 20:49 tmp
2019-06-25T23:33:42.2032656Z drwxr-xr-x 1 root root 4096 May 11 01:32 usr
2019-06-25T23:33:42.2032945Z drwxr-xr-x 1 root root 4096 May 9 20:49 var
2019-06-25T23:33:43.0909881Z Removing intermediate container 4618c503ac5c
2019-06-25T23:33:43.0911258Z ---> 0d824ce4ae62
2019-06-25T23:33:43.0911852Z Step 11/20 : ENTRYPOINT ["java","-jar","ROOT.jar"]
2019-06-25T23:33:43.2880002Z ---> Running in bba9345678be
...
The build completes, but deployment fails in the Helm upgrade step, timing out after 15 minutes. This is the log:
2019-06-25T23:38:06.6438602Z ##[section]Starting: Helm upgrade
2019-06-25T23:38:06.6444317Z ==============================================================================
2019-06-25T23:38:06.6444448Z Task : Package and deploy Helm charts
2019-06-25T23:38:06.6444571Z Description : Deploy, configure, update a Kubernetes cluster in Azure Container Service by running helm commands
2019-06-25T23:38:06.6444648Z Version : 0.153.0
2019-06-25T23:38:06.6444927Z Author : Microsoft Corporation
2019-06-25T23:38:06.6445006Z Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/deploy/helm-deploy
2019-06-25T23:38:06.6445300Z ==============================================================================
2019-06-25T23:38:09.1285973Z [command]/opt/hostedtoolcache/helm/2.14.1/x64/linux-amd64/helm upgrade --tiller-namespace dev2134 --namespace dev2134 --install --force --wait --set image.repository=stephenacr.azurecr.io/stephene991 --set image.tag=20 --set applicationInsights.InstrumentationKey=643a47f5-58bd-4012-afea-b3c943bc33ce --set imagePullSecrets={stephendockerauth} --timeout 900 azuredevops /home/vsts/work/r1/a/Drop/drop/sampleapp-v0.2.0.tgz
2019-06-25T23:53:13.7882713Z UPGRADE FAILED
2019-06-25T23:53:13.7883396Z Error: timed out waiting for the condition
2019-06-25T23:53:13.7885043Z Error: UPGRADE FAILED: timed out waiting for the condition
2019-06-25T23:53:13.7967270Z ##[error]Error: UPGRADE FAILED: timed out waiting for the condition
2019-06-25T23:53:13.7976964Z ##[section]Finishing: Helm upgrade

I have had another look at this now that I am more familiar with all the technologies, and I have located the problem.
The helm upgrade command is timing out waiting for the newly deployed pod to become live, but that never happens because the k8s liveness probe defined for the pod is failing. This can be seen with this command:
kubectl get po -n dev5998 -w
NAME READY STATUS RESTARTS AGE
sampleapp-86869d4d54-nzd9f 0/1 CrashLoopBackOff 17 48m
sampleapp-c8f84c857-phrrt 1/1 Running 0 1h
sampleapp-c8f84c857-rmq8w 1/1 Running 0 1h
tiller-deploy-79f84d5f-4r86q 1/1 Running 0 2h
The new pod is repeatedly killed and restarted. This seems to repeat forever, or until another deployment is run.
In the events for the pod:
kubectl describe po sampleapp-86869d4d54-nzd9f -n dev5998
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 39m default-scheduler Successfully assigned sampleapp-86869d4d54-nzd9f to aks-agentpool-24470557-1
Normal SuccessfulMountVolume 39m kubelet, aks-agentpool-24470557-1 MountVolume.SetUp succeeded for volume "default-token-v72n5"
Normal Pulling 39m kubelet, aks-agentpool-24470557-1 pulling image "devopssampleacreg.azurecr.io/devopssamplec538:52"
Normal Pulled 39m kubelet, aks-agentpool-24470557-1 Successfully pulled image "devopssampleacreg.azurecr.io/devopssamplec538:52"
Normal Created 37m (x3 over 39m) kubelet, aks-agentpool-24470557-1 Created container
Normal Started 37m (x3 over 39m) kubelet, aks-agentpool-24470557-1 Started container
Normal Killing 37m (x2 over 38m) kubelet, aks-agentpool-24470557-1 Killing container with id docker://sampleapp:Container failed liveness probe.. Container will be killed and recreated.
Warning Unhealthy 36m (x6 over 38m) kubelet, aks-agentpool-24470557-1 Liveness probe failed: HTTP probe failed with statuscode: 404
Warning Unhealthy 34m (x12 over 38m) kubelet, aks-agentpool-24470557-1 Readiness probe failed: HTTP probe failed with statuscode: 404
Normal Pulled 9m25s (x12 over 38m) kubelet, aks-agentpool-24470557-1 Container image "devopssampleacreg.azurecr.io/devopssamplec538:52" already present on machine
Warning BackOff 4m10s (x112 over 34m) kubelet, aks-agentpool-24470557-1 Back-off restarting failed container
So there must be a difference in which URLs the application serves depending on how it is deployed, Tomcat or standalone. Which now seems obvious: the probes return 404 because the path configured in the chart no longer exists in the standalone app.
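The fix, then, is to point the chart's probes at a path the standalone application actually serves. A minimal sketch of the probe section in the chart's deployment template, assuming Spring Boot Actuator is on the classpath (Spring Boot 2 serves health at /actuator/health, whereas 1.x used /health) and the app listens on the default port 8080:
livenessProbe:
  httpGet:
    path: /actuator/health   # any URL the app answers with 200 works; use / if actuator is absent
    port: 8080
  initialDelaySeconds: 60    # give the JVM time to start before the first probe
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10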

Related

chown not working when copying a file in a Dockerfile

I'm running Docker Engine on Windows and am trying to add my own file to the image. The problem is that when I copy the file, its ownership is always root:root, but it needs to be heartbeat:heartbeat (an existing user on the image). Mounting a single file with the -v parameter and docker run doesn't seem to be possible on Windows at the moment. That's why I tried to create my own image with a Dockerfile:
FROM docker.elastic.co/beats/heartbeat:7.16.3
USER root
COPY --chown=heartbeat:heartbeat yml/heartbeat.yml /usr/share/heartbeat/heartbeat.yml
RUN chown -R heartbeat:heartbeat /usr/share/heartbeat
The --chown parameter on the COPY does nothing. The owner is still root when I check, and the RUN chown command results in an error. Here is the output:
docker image build ./ -t custom/heartbeat:7.16.3
Sending build context to Docker daemon 10.75kB
Step 1/4 : FROM docker.elastic.co/beats/heartbeat:7.16.3
---> b64ad4b42006
Step 2/4 : USER root
---> Using cache
---> 922a9121e51b
Step 3/4 : COPY --chown=heartbeat:heartbeat yml/heartbeat.yml /usr/share/heartbeat/heartbeat.yml
---> Using cache
---> f30eb4934dca
Step 4/4 : RUN chown -R heartbeat:heartbeat /usr/share/heartbeat
---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (windows/amd64) and no specific platform was requested
---> Running in 2ae3bfdd5422
The command '/bin/sh -c chown -R heartbeat:heartbeat /usr/share/heartbeat' returned a non-zero code: 4294967295: failed to shutdown container: container 2ae3bfdd5422e81461a14896db0908e4cd67af1a6f99c629abff1e588f62fc32 encountered an error during hcsshim::System::waitBackground: failure in a Windows system call: The virtual machine or container with the specified identifier is not running. (0xc0370110): subsequent terminate failed container 2ae3bfdd5422e81461a14896db0908e4cd67af1a6f99c629abff1e588f62fc32 encountered an error during hcsshim::System::waitBackground: failure in a Windows system call: The virtual machine or container with the specified identifier is not running. (0xc0370110)
All help is welcome...
Running with --platform:
PS C:\SynteticMonitoring> docker image build ./ -t custom/heartbeat:7.16.3
Sending build context to Docker daemon 9.728kB
Step 1/4 : FROM --platform=linux/amd64 docker.elastic.co/beats/heartbeat:7.16.3
---> b64ad4b42006
Step 2/4 : USER root
---> Using cache
---> 922a9121e51b
Step 3/4 : COPY --chown=heartbeat:heartbeat yml/heartbeat.yml /usr/share/heartbeat/heartbeat.yml
---> Using cache
---> f30eb4934dca
Step 4/4 : RUN chmod +r /usr/share/heartbeat/heartbeat.yml
---> Using cache
---> e9a075d2ab53
Successfully built e9a075d2ab53
Successfully tagged custom/heartbeat:7.16.3
PS C:\SynteticMonitoring> docker run --interactive --tty --entrypoint /bin/sh custom/heartbeat:7.16.3
sh-4.2# ls -l
total 106916
-rw-r--r-- 1 root root 13675 Jan 7 00:47 LICENSE.txt
-rw-r--r-- 1 root root 1964303 Jan 7 00:47 NOTICE.txt
-rw-r--r-- 1 root root 851 Jan 7 00:47 README.md
drwxrwxr-x 2 root root 4096 Jan 7 00:48 data
-rw-r--r-- 1 root root 374197 Jan 7 00:47 fields.yml
-rwxr-xr-x 1 root root 107027952 Jan 7 00:47 heartbeat
-rw-r--r-- 1 root root 69196 Jan 7 00:47 heartbeat.reference.yml
-rw-rw-rw- 1 root root 1631 Jan 26 06:49 heartbeat.yml
drwxr-xr-x 2 root root 4096 Jan 7 00:47 kibana
drwxrwxr-x 2 root root 4096 Jan 7 00:48 logs
drwxr-xr-x 2 root root 4096 Jan 7 00:47 monitors.d
sh-4.2# pwd
/usr/share/heartbeat
You can't chown a file to a user that does not exist. It seems that the heartbeat user and group do not exist in your base image.
That's why the COPY --chown does nothing and you get files owned by root.
You can fix this by creating the user before COPYing. To do this, add a line before your COPY statement, such as:
RUN addgroup heartbeat && adduser -S -H heartbeat -G heartbeat
If you don't have addgroup and adduser in your base image, try this alternative:
RUN useradd -rUM -s /usr/sbin/nologin heartbeat
This will create the heartbeat group and user, and chown will then be able to change the ownership successfully.
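Putting it together, a minimal sketch of the Dockerfile with the user created before the COPY (assuming the base image really does lack the heartbeat user):
FROM docker.elastic.co/beats/heartbeat:7.16.3
USER root
# create the group and user first, so --chown has something to resolve
RUN addgroup heartbeat && adduser -S -H heartbeat -G heartbeat
COPY --chown=heartbeat:heartbeat yml/heartbeat.yml /usr/share/heartbeat/heartbeat.yml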
According to the Dockerfile documentation:
The optional --platform flag can be used to specify the platform of the image in case FROM references a multi-platform image. For example, linux/amd64, linux/arm64, or windows/amd64. By default, the target platform of the build request is used.
I suggest trying something like:
FROM [--platform=<platform>] <image> [AS <name>]
FROM --platform=linux/amd64 docker.elastic.co/beats/heartbeat:7.16.3
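Assuming a Docker version with BuildKit enabled, the same platform can also be requested at build time without editing the Dockerfile:
docker build --platform linux/amd64 ./ -t custom/heartbeat:7.16.3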

Execing docker image entrypoint, which is a compiled go app, fails with "not found"

I have built a small Go app and done local testing of it on my Linux VM.
I'm now trying to build a prototype Docker image for it and test running the image. The Dockerfile structure is pretty simple. I base it on Alpine, copy the executable to the root directory, and my entrypoint runs the executable.
It fails with "not found".
Now for more details.
Here is the Dockerfile, with some information elided:
FROM <registry>/<namespace>/alpine-base:3.12.3
COPY target/dist/linux-amd64/<appname> /
EXPOSE 8080
RUN echo hello
RUN ls -ltd .
RUN ls -lt
RUN whoami
#ENTRYPOINT ["./<appname>"]
ENTRYPOINT ./<appname>
This is approximately what I do when I build the image:
chmod 777 target/dist/linux-amd64/<appname>
docker build --no-cache -f Dockerfile -t <registry>/<namespace>/<appname>:dev-latest .
This is the output of that:
Sending build context to Docker daemon 14.48MB
Step 1/8 : FROM <registry>/<namespace>/alpine-base:3.12.3
---> d7eec24f3d29
Step 2/8 : COPY target/dist/linux-amd64/<appname> /
---> e056bbe44bd6
Step 3/8 : EXPOSE 8080
---> Running in 921cc1fe8804
Removing intermediate container 921cc1fe8804
---> 00b30c5a2770
Step 4/8 : RUN echo hello
---> Running in 9fb08d924d3c
hello
Removing intermediate container 9fb08d924d3c
---> 6788feafae4b
Step 5/8 : RUN ls -ltd .
---> Running in 78e6d4aea09f
drwxr-xr-x 1 root root 4096 Jan 10 23:02 .
Removing intermediate container 78e6d4aea09f
---> 711f3d247efe
Step 6/8 : RUN ls -lt
---> Running in 32e703a9d480
total 14200
drwxr-xr-x 5 root root 340 Jan 10 23:02 dev
drwxr-xr-x 1 root root 4096 Jan 10 23:02 etc
dr-xr-xr-x 324 root root 0 Jan 10 23:02 proc
dr-xr-xr-x 13 root root 0 Jan 10 23:02 sys
-rwxrwxrwx 1 root root 14480384 Jan 10 22:39 <appname>
drwxr-xr-x 1 root root 4096 Jan 12 2021 home
drwxr-xr-x 1 root root 4096 Jan 12 2021 opt
drwxr-xr-x 2 root root 4096 Dec 16 2020 bin
drwxr-xr-x 2 root root 4096 Dec 16 2020 sbin
drwxr-xr-x 1 root root 4096 Dec 16 2020 lib
drwxr-xr-x 5 root root 4096 Dec 16 2020 media
drwxr-xr-x 2 root root 4096 Dec 16 2020 mnt
drwx------ 2 root root 4096 Dec 16 2020 root
drwxr-xr-x 2 root root 4096 Dec 16 2020 run
drwxr-xr-x 2 root root 4096 Dec 16 2020 srv
drwxrwxrwt 2 root root 4096 Dec 16 2020 tmp
drwxr-xr-x 1 root root 4096 Dec 16 2020 usr
drwxr-xr-x 1 root root 4096 Dec 16 2020 var
Removing intermediate container 32e703a9d480
---> 68871e80b517
Step 7/8 : RUN whoami
---> Running in 40b2460bc349
kube
Removing intermediate container 40b2460bc349
---> 4cf57c0b5f10
Step 8/8 : ENTRYPOINT ./<appname>
---> Running in 3c57717800ab
Removing intermediate container 3c57717800ab
---> eaafc953da46
Successfully built eaafc953da46
Successfully tagged <registry>/<namespace>/<appname>:dev-latest
And this is what I run to test it:
docker rm <appname>-1
docker run -P --name=<appname>-1 -d -t <registry>/<namespace>/<appname>:dev-latest
docker logs <appname>-1
And this is the output:
docker rm <appname>-1
<appname>-1
docker run -P --name=<appname>-1 -d -t <registry>/<namespace>/<appname>:dev-latest
66bb4756783b3ef64d9a4b0d8b7227184ba3b5a3fde25ea0d19b9523285d76b7
docker logs <appname>-1
/bin/sh: ./<appname>: not found
It says "not found". I don't understand that. I showed the contents of the root directory. The file is clearly there. Is this error saying that some OTHER file is not found, like if it thought it was a shell script and the shebang pointed to a shell that doesn't exist?
Update:
So the one tiny little detail that I realized I didn't mention in the original post is that disabling CGO is not going to be possible. The entire reason for this app is to link with a C library and call functions in it, so I have to use Cgo.
What I conclude from these helpful comments and other threads like Go-compiled binary won't run in an alpine docker container on Ubuntu host, is that my "workaround" of changing to an Ubuntu base image is actually the only reasonable solution.
If disabling cgo is not an option, you can pass the "-static" parameter to the external linker.
Example:
package main

/*
#include <stdio.h>

void test_puts() {
    puts("puts() called");
}
*/
import "C"

func main() {
    C.test_puts()
}
Run:
go build --ldflags '-extldflags "-static"'
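As for why the shell printed "not found" in the first place: a cgo binary linked against glibc embeds an interpreter path such as /lib64/ld-linux-x86-64.so.2, which does not exist on musl-based Alpine, and the missing interpreter is reported as if the program itself were not found. A quick way to check the binary (run on the build host; <appname> is the placeholder from the question):
file target/dist/linux-amd64/<appname>
# "statically linked" will run on Alpine; "dynamically linked, interpreter
# /lib64/ld-linux-x86-64.so.2" is what produces the "not found" error there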

docker can't stat directory on external device

Briefly
I'm looking to build a docker image from a Dockerfile in a directory on an external device.
Context
I have a directory /media/nathan/ext/test that is empty except for a Dockerfile
The Dockerfile is: FROM alpine
The docker version is: Docker version 20.10.8, build 3967b7d28e
The OS is Ubuntu 21.10
I am part of the docker group
Mount options:
$> findmnt /media/nathan/ext
TARGET SOURCE FSTYPE OPTIONS
/media/nathan/ext /dev/sda1 ext4 rw,nosuid,nodev,relatime
docker daemon
$> ps aux | grep dockerd
root 919 0.0 0.5 2166356 85600 ? Ssl 09:03 0:08 dockerd --group docker --exec-root=/run/snap.docker --data-root=/var/snap/docker/common/var-lib-docker --pidfile=/run/snap.docker/docker.pid --config-file=/var/snap/docker/1125/config/daemon.json
nathan 19756 0.0 0.0 11844 2448 pts/0 S+ 11:44 0:00 grep --color=auto dockerd
$DOCKER_HOST is undefined
$> echo $DOCKER_HOST
docker info
$> docker info
Client:
Context: default
Debug Mode: false
Server:
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 263
Server Version: 20.10.8
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: e25210fe30a0a703442421b0f60afac609f950a3
runc version:
init version: de40ad0
Security Options:
Expected result
I get a docker image
Actual result
$> docker build .
error checking context: 'can't stat '/media/nathan/ext/test''.
What I have tried
Just sudo everything
$> sudo docker build .
error checking context: 'can't stat '/media/nathan/ext/test''.
The issue is not resolved.
Am I the owner of the context folder?
$> echo $USER
nathan
$> ls -la
total 12
drwxrwxr-x 2 nathan nathan 4096 nov. 12 10:33 .
drwxr-xr-x 8 nathan root 4096 nov. 12 09:39 ..
-rw-rw-r-- 1 nathan nathan 12 nov. 12 10:32 Dockerfile
As per the command above, I am the owner of the context directory. Am I missing something?
Add everything to .dockerignore
I've created a .dockerignore that matches everything: '*'.
Running the command [sudo] docker build . gives a very baffling answer:
$> sudo docker build .
open /media/nathan/ext/test/.dockerignore: permission denied
I do not understand how sudo doesn't have the necessary permissions to read (?) the .dockerignore file, whose permissions I have set to 777 out of astonishment:
ls -la
total 16
drwxrwxr-x 2 nathan nathan 4096 nov. 12 10:41 .
drwxr-xr-x 8 nathan root 4096 nov. 12 09:39 ..
-rw-rw-r-- 1 nathan nathan 12 nov. 12 10:32 Dockerfile
-rwxrwxrwx 1 nathan nathan 2 nov. 12 10:41 .dockerignore
Of course, other programs were capable of reading the file without any issue, as expected:
$> cat .dockerignore
*
Build outside of external drive
$> pwd
/home/nathan/Bureau/test
$> ls -la
total 12
drwxrwxr-x 2 nathan nathan 4096 nov. 12 10:58 .
drwxr-xr-x 3 nathan nathan 4096 nov. 12 10:56 ..
-rw-rw-r-- 1 nathan nathan 12 nov. 12 10:58 Dockerfile
$> docker build .
Sending build context to Docker daemon 2.048kB
Step 1/1 : FROM alpine
---> 14119a10abf4
Successfully built 14119a10abf4
The image is built, but I wish to replicate the result on the external drive.
Running docker build . while watching journalctl
[...]
nov. 12 11:42:52 nathan-pc systemd[1746]: Started snap.docker.docker.ba3da9ef-34ee-4a63-8ff4-6a56327c5cd2.scope.
nov. 12 11:42:52 nathan-pc audit[19690]: AVC apparmor="DENIED" operation="open" profile="snap.docker.docker" name="/media/nathan/ext/workspace/dino/ntrip-client/RTKLIB/" pid=19690 comm="docker" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000
nov. 12 11:42:52 nathan-pc kernel: audit: type=1400 audit(1636713772.367:93): apparmor="DENIED" operation="open" profile="snap.docker.docker" name="/media/nathan/ext/workspace/dino/ntrip-client/RTKLIB/" pid=19690 comm="docker" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000
nov. 12 11:42:52 nathan-pc systemd[1746]: snap.docker.docker.ba3da9ef-34ee-4a63-8ff4-6a56327c5cd2.scope: Deactivated successfully.
[...]
Thank you for your time
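For what it's worth, the journalctl output above points at the likely culprit: this is the snap-packaged Docker (note the snap.docker.docker AppArmor profile), and snap confinement denies it read access under /media. Assuming the docker snap exposes the usual removable-media interface, connecting it may be enough; otherwise, installing docker-ce from Docker's apt repository avoids the confinement entirely:
sudo snap connect docker:removable-media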

Permission for singularity

I got an issue when running the whole ChIP-seq pipeline using the singularity profile on my local PC (Windows, but with the Linux subsystem).
Error executing process > 'output_documentation'
Caused by:
Failed to pull singularity image
command: singularity pull --name nfcore-chipseq-1.2.2.img.pulling.1630098407814 docker://nfcore/chipseq:1.2.2 > /dev/null
status : 255
message:
INFO: Using cached SIF image
FATAL: While making image from oci registry: error copying image out of cache: could not open temporary file for copy: failed to change permission of ./tmp-copy-2575820807: chmod ./tmp-copy-2575820807: operation not permitted
I'm using singularity 3.8.2
I have also specified NXF_SINGULARITY_CACHEDIR to point to a hard drive instead of /home/.singularity.
I also checked the folder to make sure all the files can be accessed:
total 0
drwxrwxrwx 1 root root 4096 Aug 28 05:06 .
drwxrwxrwx 1 root root 4096 Aug 28 04:47 ..
-rwxrwxrwx 1 root root 0 Aug 28 04:53 tmp-copy-2299332276
-rwxrwxrwx 1 root root 0 Aug 28 05:06 tmp-copy-2575820807
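A note on the failure itself: the FATAL line is a chmod that is "not permitted", which is the symptom of a cache directory on a Windows drive mounted into WSL without metadata support, since such drvfs mounts cannot store Linux permission changes. Assuming that is the situation here, remounting the drive with metadata (or moving NXF_SINGULARITY_CACHEDIR to a path inside the Linux filesystem) may help:
sudo umount /mnt/d                          # hypothetical mount point for the drive
sudo mount -t drvfs D: /mnt/d -o metadata   # metadata enables chmod/chown on the mount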

Docker volumes on CentOS 7

I have run into a problem on CentOS 7 when attempting to map a volume to the host in a Tomcat container. This happens with the public Tomcat images as well as an image I have created (based on CentOS instead of Debian).
Instantiating a container as follows will succeed:
docker run -it -d tomcat:8
Instantiating a container as follows will also succeed, but with errors in the log, and logs are not written to the host:
docker run -it -d -v /usr/local/tomcat:/usr/local/tomcat tomcat:8
[wpackard@eagle2 tomcat]$ dkr run -it -d -v /usr/local/tomcat:/usr/local/tomcat tomcat:8
34075701b1436f83a24212170b4d2113ae698df244c449203b1c9af9814485c9
[wpackard@eagle2 tomcat]$ dkr ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
34075701b143 tomcat:8 "catalina.sh run" 5 seconds ago Up 4 seconds 8080/tcp sharp_einstein
[wpackard@eagle2 tomcat]$ dkr logs sharp_einstein
Using CATALINA_BASE: /usr/local/tomcat
Using CATALINA_HOME: /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME: /usr
Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
java.util.logging.ErrorManager: 4
java.io.FileNotFoundException: /usr/local/tomcat/logs/catalina.2015-03-31.log (Permission denied)
...
31-Mar-2015 15:32:04.088 SEVERE [Catalina-startStop-1] org.apache.catalina.startup.HostConfig.start Unable to create directory for deployment: /usr/local/tomcat/conf/Catalina/localhost
31-Mar-2015 15:32:04.097 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /usr/local/tomcat/webapps/ROOT
31-Mar-2015 15:32:04.468 WARNING [localhost-startStop-1] org.apache.catalina.core.StandardContext.postWorkDirectory Failed to create work directory [/usr/local/tomcat/work/Catalina/localhost/ROOT] for context []
31-Mar-2015 15:32:05.966 SEVERE [localhost-startStop-1] org.apache.jasper.EmbeddedServletOptions.<init> The scratchDir you specified: /usr/local/tomcat/work/Catalina/localhost/ROOT is unusable.
31-Mar-2015 15:32:06.042 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /usr/local/tomcat/webapps/ROOT has finished in 1,929 ms
31-Mar-2015 15:32:06.043 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /usr/local/tomcat/webapps/docs
31-Mar-2015 15:32:06.093 WARNING [localhost-startStop-1] org.apache.catalina.core.StandardContext.postWorkDirectory Failed to create work directory [/usr/local/tomcat/work/Catalina/localhost/docs] for context [/docs]
31-Mar-2015 15:32:06.216 SEVERE [localhost-startStop-1] org.apache.jasper.EmbeddedServletOptions.<init> The scratchDir you specified: /usr/local/tomcat/work/Catalina/localhost/docs is unusable.
31-Mar-2015 15:32:06.219 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /usr/local/tomcat/webapps/docs has finished in 176 ms
31-Mar-2015 15:32:06.220 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /usr/local/tomcat/webapps/examples
31-Mar-2015 15:32:06.272 WARNING [localhost-startStop-1] org.apache.catalina.core.StandardContext.postWorkDirectory Failed to create work directory [/usr/local/tomcat/work/Catalina/localhost/examples] for context [/examples]
31-Mar-2015 15:32:07.952 SEVERE [localhost-startStop-1] org.apache.jasper.EmbeddedServletOptions.<init> The scratchDir you specified: /usr/local/tomcat/work/Catalina/localhost/examples is unusable.
[wpackard@eagle2 tomcat]$
Exec'ing into the container and attempting to write also fails.
[wpackard@eagle2 tomcat]$ dkr ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
34075701b143 tomcat:8 "catalina.sh run" 5 minutes ago Up 5 minutes 8080/tcp sharp_einstein
[wpackard@eagle2 tomcat]$ dkr exec -it sharp_einstein /bin/bash
root@34075701b143:/usr/local/tomcat# ls -l
total 96
-rw-rw-r--. 1 root root 56977 Jan 23 11:59 LICENSE
-rw-rw-r--. 1 root root 1397 Jan 23 11:59 NOTICE
-rw-rw-r--. 1 root root 6779 Jan 23 11:59 RELEASE-NOTES
-rw-rw-r--. 1 root root 16204 Jan 23 11:59 RUNNING.txt
drwxrwxr-x. 2 root root 4096 Mar 31 12:14 bin
drwxrwxr-x. 2 root root 4096 Jan 23 11:59 conf
drwxrwxr-x. 2 root root 4096 Mar 31 12:14 lib
drwxrwxr-x. 2 root root 6 Jan 23 11:56 logs
drwxrwxr-x. 2 root root 29 Mar 31 12:14 temp
drwxrwxr-x. 7 root root 76 Jan 23 11:57 webapps
drwxrwxr-x. 2 root root 6 Jan 23 11:56 work
root@34075701b143:/usr/local/tomcat# cd logs
root@34075701b143:/usr/local/tomcat/logs# echo "test" > test.log
bash: test.log: Permission denied
I have created an instance of the PostgreSQL container on CentOS, and it successfully maps and uses the volume, verified by creating a db, stopping the instance, and then re-running the container.
[wpackard@eagle2 ~]$ uname --all
Linux eagle2 3.10.0-123.20.1.el7.x86_64 #1 SMP Thu Jan 29 18:05:33 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[wpackard@eagle2 ~]$
dkr is an alias for docker; I have created a docker group and added myself to it to eliminate the need for sudo.
The volume mapping seems to work correctly on Ubuntu. On CentOS I have tried both the package version (as below) and also updating it to 1.5.
[wpackard@eagle2 ~]$ dkr --version
Docker version 1.3.2, build 39fa2fa/1.3.2
[wpackard@eagle2 ~]$
How do I make volumes work on CentOS?
I think your volumes are working :-) You have a permission problem. I run into this fairly often with the mapping of user ids between the host and the container. On your host, if you look at /usr/local/tomcat (ls -ld), you will see an owner, a group and the permissions. You probably have something like 0755 (read/write/exec by owner, read/exec by group, read/exec by world). You can test this theory easily: simply remember the current settings for /usr/local/tomcat/logs, then do:
chmod 777 /usr/local/tomcat/logs
from the docker host (not the container). Then run your test in the container; the Permission denied should evaporate.
This is NOT a good fix, though. I don't know what the community says about user id mapping for docker. One thing you could do is figure out the user and group for that directory on your host. Then, when you create your image (or at run time), create a user with the same id and a group with the same id in the container, and run your Tomcat service as that user in the container.
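A minimal sketch of that id-matching idea in a Dockerfile (the 1000:1000 ids are an assumption; check the directory's real numeric owner with ls -n /usr/local/tomcat on the host):
FROM tomcat:8
# create a user/group whose numeric ids match the host directory's owner
RUN groupadd -g 1000 tomcat && \
    useradd -u 1000 -g tomcat -M tomcat && \
    chown -R tomcat:tomcat /usr/local/tomcat
USER tomcat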
This is due to SELinux.
You must attach the correct type to the host directory:
host$ chcon -Rt svirt_sandbox_file_t /usr/local/tomcat
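On Docker versions newer than the 1.3.2 in the question, the same relabeling can instead be requested per volume at run time with the :z (shared) or :Z (private) suffix, for example:
docker run -it -d -v /usr/local/tomcat:/usr/local/tomcat:Z tomcat:8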
