skaffold with kaniko: registry address not resolved - skaffold

I've deployed a registry service into a namespace registry:
$ helm install registry stable/docker-registry
Service:
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
registry-docker-registry ClusterIP 10.43.119.11 <none> 5000/TCP 18h
This is my skaffold.yaml:
apiVersion: skaffold/v2beta1
kind: Config
metadata:
  name: spring-boot-slab
build:
  artifacts:
  - image: skaffold-covid-backend
    kaniko:
      dockerfile: Dockerfile-multistage
      image: gcr.io/kaniko-project/executor:debug
      cache: {}
  cluster: {}
deploy:
  kubectl:
    manifests:
    - k8s/*
Everything works fine, up to the point when kaniko tries to push the image to the registry above:
Get "http://registry-docker-registry.registry.svc.cluster.local:5000/v2/": dial tcp: lookup registry-docker-registry.registry.svc.cluster.local on 127.0.0.53:53: no such host
The Skaffold command is:
$ skaffold build --default-repo=registry-docker-registry.registry.svc.cluster.local:5000
This is the log:
$ skaffold build --default-repo=registry-docker-registry.registry.svc.cluster.local:5000
INFO[0000] Skaffold &{Version:v1.7.0 ConfigVersion:skaffold/v2beta1 GitVersion: GitCommit:145f59579470eb1f0a7f40d8e0924f8716c6f05b GitTreeState:clean BuildDate:2020-04-02T21:49:58Z GoVersion:go1.14 Compiler:gc Platform:linux/amd64}
DEBU[0000] validating yamltags of struct SkaffoldConfig
DEBU[0000] validating yamltags of struct Metadata
DEBU[0000] validating yamltags of struct Pipeline
DEBU[0000] validating yamltags of struct BuildConfig
DEBU[0000] validating yamltags of struct Artifact
DEBU[0000] validating yamltags of struct ArtifactType
DEBU[0000] validating yamltags of struct KanikoArtifact
DEBU[0000] validating yamltags of struct KanikoCache
DEBU[0000] validating yamltags of struct TagPolicy
DEBU[0000] validating yamltags of struct GitTagger
DEBU[0000] validating yamltags of struct BuildType
DEBU[0000] validating yamltags of struct ClusterDetails
DEBU[0000] validating yamltags of struct DeployConfig
DEBU[0000] validating yamltags of struct DeployType
DEBU[0000] validating yamltags of struct KubectlDeploy
DEBU[0000] validating yamltags of struct KubectlFlags
INFO[0000] Using kubectl context: k3s-traefik-v2
DEBU[0000] Using builder: cluster
DEBU[0000] setting Docker user agent to skaffold-v1.7.0
Generating tags...
- skaffold-covid-backend -> DEBU[0000] Running command: [git describe --tags --always]
DEBU[0000] Command output: [c5dfd81
]
DEBU[0000] Running command: [git status . --porcelain]
DEBU[0000] Command output: [ M Dockerfile-multistage
M skaffold.yaml
?? k8s/configmap.yaml
?? kaniko-pod.yaml
?? run_in_docker.sh
]
registry-docker-registry.registry.svc.cluster.local:5000/skaffold-covid-backend:c5dfd81-dirty
INFO[0000] Tags generated in 3.479451ms
Checking cache...
DEBU[0000] Found dependencies for dockerfile: [{pom.xml /tmp true} {src /tmp/src true}]
- skaffold-covid-backend: Not found. Building
INFO[0000] Cache check complete in 3.995675ms
Building [skaffold-covid-backend]...
DEBU[0000] getting client config for kubeContext: ``
INFO[0000] Waiting for kaniko-rjsn5 to be initialized
DEBU[0001] Running command: [kubectl --context k3s-traefik-v2 exec -i kaniko-rjsn5 -c kaniko-init-container -n registry -- tar -xf - -C /kaniko/buildcontext]
DEBU[0001] Found dependencies for dockerfile: [{pom.xml /tmp true} {src /tmp/src true}]
DEBU[0001] Running command: [kubectl --context k3s-traefik-v2 exec kaniko-rjsn5 -c kaniko-init-container -n registry -- touch /tmp/complete]
INFO[0001] Waiting for kaniko-rjsn5 to be complete
DEBU[0001] unable to get kaniko pod logs: container "kaniko" in pod "kaniko-rjsn5" is waiting to start: PodInitializing
DEBU[0002] unable to get kaniko pod logs: container "kaniko" in pod "kaniko-rjsn5" is waiting to start: PodInitializing
DEBU[0000] Getting source context from dir:///kaniko/buildcontext
DEBU[0000] Build context located at /kaniko/buildcontext
DEBU[0000] Copying file /kaniko/buildcontext/Dockerfile-multistage to /kaniko/Dockerfile
DEBU[0000] Skip resolving path /kaniko/Dockerfile
DEBU[0000] Skip resolving path /kaniko/buildcontext
DEBU[0000] Skip resolving path /cache
DEBU[0000] Skip resolving path
DEBU[0000] Skip resolving path
DEBU[0000] Skip resolving path
INFO[0000] Resolved base name maven:3-jdk-8-slim to maven:3-jdk-8-slim
INFO[0000] Resolved base name java:8-jre-alpine to java:8-jre-alpine
INFO[0000] Resolved base name maven:3-jdk-8-slim to maven:3-jdk-8-slim
INFO[0000] Resolved base name java:8-jre-alpine to java:8-jre-alpine
INFO[0000] Retrieving image manifest maven:3-jdk-8-slim
DEBU[0003] No file found for cache key sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f stat /cache/sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f: no such file or directory
DEBU[0003] Image maven:3-jdk-8-slim not found in cache
INFO[0003] Retrieving image manifest maven:3-jdk-8-slim
INFO[0005] Retrieving image manifest java:8-jre-alpine
DEBU[0007] No file found for cache key sha256:6a8cbe4335d1a5711a52912b684e30d6dbfab681a6733440ff7241b05a5deefd stat /cache/sha256:6a8cbe4335d1a5711a52912b684e30d6dbfab681a6733440ff7241b05a5deefd: no such file or directory
DEBU[0007] Image java:8-jre-alpine not found in cache
INFO[0007] Retrieving image manifest java:8-jre-alpine
DEBU[0009] Resolved /tmp/target/*.jar to /tmp/target/*.jar
DEBU[0009] Resolved /app/spring-boot-application.jar to /app/spring-boot-application.jar
INFO[0009] Built cross stage deps: map[0:[/tmp/target/*.jar]]
INFO[0009] Retrieving image manifest maven:3-jdk-8-slim
DEBU[0011] No file found for cache key sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f stat /cache/sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f: no such file or directory
DEBU[0011] Image maven:3-jdk-8-slim not found in cache
INFO[0011] Retrieving image manifest maven:3-jdk-8-slim
DEBU[0012] Resolved pom.xml to pom.xml
DEBU[0012] Resolved /tmp/ to /tmp/
DEBU[0012] Getting files and contents at root /kaniko/buildcontext for /kaniko/buildcontext/pom.xml
DEBU[0012] Using files from context: [/kaniko/buildcontext/pom.xml]
DEBU[0012] optimize: composite key for command COPY pom.xml /tmp/ {[sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f COPY pom.xml /tmp/ 7176510dcac61a3d406beab8d864708f21db23201dba11185866015a8dcd55b0]}
DEBU[0012] optimize: cache key for command COPY pom.xml /tmp/ fc6a0ec8876277261e83ab9b647595b1df258352ba9acf92ec19c761415fb23e
INFO[0012] Checking for cached layer registry-docker-registry.registry.svc.cluster.local:5000/skaffold-covid-backend/cache:fc6a0ec8876277261e83ab9b647595b1df258352ba9acf92ec19c761415fb23e...
INFO[0012] Using caching version of cmd: COPY pom.xml /tmp/
DEBU[0012] optimize: composite key for command RUN mvn -B dependency:go-offline -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml {[sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f COPY pom.xml /tmp/ 7176510dcac61a3d406beab8d864708f21db23201dba11185866015a8dcd55b0 RUN mvn -B dependency:go-offline -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml]}
DEBU[0012] optimize: cache key for command RUN mvn -B dependency:go-offline -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml 18ffc2eda5a9ef5481cc865da06e9a4e3d543bf9befb35bd7ac3cb9dc3b62fc7
INFO[0012] Checking for cached layer registry-docker-registry.registry.svc.cluster.local:5000/skaffold-covid-backend/cache:18ffc2eda5a9ef5481cc865da06e9a4e3d543bf9befb35bd7ac3cb9dc3b62fc7...
INFO[0012] Using caching version of cmd: RUN mvn -B dependency:go-offline -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml
DEBU[0012] Resolved src to src
DEBU[0012] Resolved /tmp/src/ to /tmp/src/
DEBU[0012] Using files from context: [/kaniko/buildcontext/src]
DEBU[0012] optimize: composite key for command COPY src /tmp/src/ {[sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f COPY pom.xml /tmp/ 7176510dcac61a3d406beab8d864708f21db23201dba11185866015a8dcd55b0 RUN mvn -B dependency:go-offline -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml COPY src /tmp/src/ 13724ad65fa9678727cdfb4446f71ed586605178d3252371934493e90d7fc7c5]}
DEBU[0012] optimize: cache key for command COPY src /tmp/src/ 177d8852ce5ec30e7ac1944b43363857d249c3fb4cdb4a26724ea88660102e52
INFO[0012] Checking for cached layer registry-docker-registry.registry.svc.cluster.local:5000/skaffold-covid-backend/cache:177d8852ce5ec30e7ac1944b43363857d249c3fb4cdb4a26724ea88660102e52...
INFO[0012] Using caching version of cmd: COPY src /tmp/src/
DEBU[0012] optimize: composite key for command WORKDIR /tmp/ {[sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f COPY pom.xml /tmp/ 7176510dcac61a3d406beab8d864708f21db23201dba11185866015a8dcd55b0 RUN mvn -B dependency:go-offline -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml COPY src /tmp/src/ 13724ad65fa9678727cdfb4446f71ed586605178d3252371934493e90d7fc7c5 WORKDIR /tmp/]}
DEBU[0012] optimize: cache key for command WORKDIR /tmp/ cc93f6a4e941f6eb0b907172ea334a00cdd93ba12f07fe5c6b2cddd89f1ac16c
DEBU[0012] optimize: composite key for command RUN mvn -B -s /usr/share/maven/ref/settings-docker.xml package {[sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f COPY pom.xml /tmp/ 7176510dcac61a3d406beab8d864708f21db23201dba11185866015a8dcd55b0 RUN mvn -B dependency:go-offline -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml COPY src /tmp/src/ 13724ad65fa9678727cdfb4446f71ed586605178d3252371934493e90d7fc7c5 WORKDIR /tmp/ RUN mvn -B -s /usr/share/maven/ref/settings-docker.xml package]}
DEBU[0012] optimize: cache key for command RUN mvn -B -s /usr/share/maven/ref/settings-docker.xml package f09ec8d47c0476fe4623fbb7bedd628466d43cd623c82a298c84d43c028c4518
INFO[0012] Checking for cached layer registry-docker-registry.registry.svc.cluster.local:5000/skaffold-covid-backend/cache:f09ec8d47c0476fe4623fbb7bedd628466d43cd623c82a298c84d43c028c4518...
INFO[0012] Using caching version of cmd: RUN mvn -B -s /usr/share/maven/ref/settings-docker.xml package
DEBU[0012] Mounted directories: [{/kaniko false} {/etc/mtab false} {/tmp/apt-key-gpghome true} {/var/run false} {/proc false} {/dev false} {/dev/pts false} {/dev/mqueue false} {/sys false} {/sys/fs/cgroup false} {/sys/fs/cgroup/systemd false} {/sys/fs/cgroup/cpu,cpuacct false} {/sys/fs/cgroup/devices false} {/sys/fs/cgroup/net_cls,net_prio false} {/sys/fs/cgroup/pids false} {/sys/fs/cgroup/rdma false} {/sys/fs/cgroup/memory false} {/sys/fs/cgroup/freezer false} {/sys/fs/cgroup/cpuset false} {/sys/fs/cgroup/perf_event false} {/sys/fs/cgroup/blkio false} {/sys/fs/cgroup/hugetlb false} {/busybox false} {/kaniko/buildcontext false} {/etc/hosts false} {/dev/termination-log false} {/etc/hostname false} {/etc/resolv.conf false} {/dev/shm false} {/var/run/secrets/kubernetes.io/serviceaccount false} {/proc/asound false} {/proc/bus false} {/proc/fs false} {/proc/irq false} {/proc/sys false} {/proc/sysrq-trigger false} {/proc/acpi false} {/proc/kcore false} {/proc/keys false} {/proc/timer_list false} {/proc/sched_debug false} {/proc/scsi false} {/sys/firmware false}]
DEBU[0014] Not adding /dev because it is whitelisted
DEBU[0014] Not adding /etc/hostname because it is whitelisted
DEBU[0014] Not adding /etc/resolv.conf because it is whitelisted
DEBU[0018] Not adding /proc because it is whitelisted
DEBU[0019] Not adding /sys because it is whitelisted
DEBU[0026] Not adding /var/run because it is whitelisted
DEBU[0080] Whiting out /var/lib/apt/lists/.wh.auxfiles
DEBU[0080] not including whiteout files
INFO[0085] Taking snapshot of full filesystem...
INFO[0085] Resolving paths
FATA[0095] build failed: building [skaffold-covid-backend]: getting image: Get "http://registry-docker-registry.registry.svc.cluster.local:5000/v2/": dial tcp: lookup registry-docker-registry.registry.svc.cluster.local on 127.0.0.53:53: no such host
While the kaniko Pod is running, I've been able to perform some actions:
$ kubectl exec -ti kaniko-8nph4 -c kaniko -- sh
/ # wget registry-docker-registry.registry.svc.cluster.local:5000/v2/_catalog
Connecting to registry-docker-registry.registry.svc.cluster.local:5000 (10.43.119.11:5000)
saving to '_catalog'
_catalog 100% |**************************************************************************************************************| 75 0:00:00 ETA
'_catalog' saved
/ # cat _catalog
{"repositories":["skaffold-covid-backend","skaffold-covid-backend/cache"]}
So it seems the pod is able to connect to the registry, yet the logs say it cannot.
Any ideas on how to access this registry deployed inside the same Kubernetes cluster?
I've also tried to access the registry from another pod:
$ kubectl exec -ti graylog-1 -- curl registry-docker-registry.registry:5000/v2/_catalog
{"repositories":["skaffold-covid-backend","skaffold-covid-backend/cache"]}
As you can see, that pod is able to access the registry.
I've also taken a look at the container's /etc/resolv.conf:
$ kubectl exec -ti kaniko-zqhgf -c kaniko -- cat /etc/resolv.conf
search registry.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
I've also checked the connections while the container is running:
$ kubectl exec -ti kaniko-sgs5x -c kaniko -- netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 210 kaniko-sgs5x:40104 104.18.124.25:443 ESTABLISHED
tcp 0 0 kaniko-sgs5x:46006 registry-docker-registry.registry.svc.cluster.local:5000 ESTABLISHED
tcp 0 0 kaniko-sgs5x:45884 registry-docker-registry.registry.svc.cluster.local:5000 ESTABLISHED
tcp 0 0 kaniko-sgs5x:39772 ec2-52-3-104-67.compute-1.amazonaws.com:443 ESTABLISHED
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags Type State I-Node Path
As you can see, the container has established connections to registry-docker-registry.registry.svc.cluster.local:5000. However, when it tries to push to the registry, the error appears...
It's really strange.

If you look at the log timestamps, they jump from 0026 to 0080. I suspect the final lines, in particular the FATA[0095] error, come from your local Skaffold process, which is attempting to retrieve image details from the remote registry; the cluster-internal name is not resolvable from your machine (the lookup goes to 127.0.0.53, your host's local resolver).
You might consider describing your situation in the following issue:
https://github.com/GoogleContainerTools/skaffold/issues/3841#issuecomment-603582206
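In the meantime, a possible workaround (just a sketch, using the Service name and namespace from your question) is to make the cluster-internal registry address reachable from your machine as well, so the local Skaffold process can resolve and query it, for example with a port-forward plus an /etc/hosts entry:
$ kubectl -n registry port-forward service/registry-docker-registry 5000:5000
$ echo "127.0.0.1 registry-docker-registry.registry.svc.cluster.local" | sudo tee -a /etc/hosts
$ skaffold build --default-repo=registry-docker-registry.registry.svc.cluster.local:5000
If Skaffold then complains about the registry not serving HTTPS, adding --insecure-registry=registry-docker-registry.registry.svc.cluster.local:5000 (or an insecureRegistries entry in skaffold.yaml) may also be needed.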

Related

how to mount a disk partition in docker

I have the below SD card partitions from sudo blkid:
/dev/sdb1: PARTLABEL="uboot" PARTUUID="5e6c4af7-015f-46df-9426-d27fb38f1d87"
...
...
...
/dev/sdb8: UUID="5f38be2e-3d5d-4c42-8d66-8aa6edc3eede" BLOCK_SIZE="1024" TYPE="ext2" PARTLABEL="userdata" PARTUUID="dceeb110-7c3e-4973-b6ba-c60f8734c988"
/dev/sdb9: UUID="51e83a43-830f-48de-bcea-309a784ea35c" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="rootfs" PARTUUID="c58164a5-704a-4017-aeea-739a0941472f"
I am trying to mount /dev/sdb9 into a Docker container so that I can reformat it and do other stuff with it.
But I am not able to attach it as a volume in docker container.
This is what I've done:
docker volume create --driver=local --opt type=ext4 --opt device=/dev/disk/by-uuid/51e83a43-830f-48de-bcea-309a784ea35c my-vol
docker run <image id> -v my-vol:/my-vol -it bash
However, it came up with the error: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "-v": executable file not found in $PATH: unknown.
Any ideas how I can mount /dev/sdb9 into a Docker container?
You need to change the order of your docker run command so that the options come before the image name; everything after the image is treated as the command and its arguments. From the docker run docs (https://docs.docker.com/engine/reference/commandline/container_run/):
docker container run [OPTIONS] IMAGE [COMMAND] [ARG...]
$ docker run -it ubuntu -v $(pwd):/local
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "-v": executable file not found in $PATH: unknown.
$ docker run -it -v $(pwd):/local ubuntu
root@8fa69b8861d8:/#
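Applied to the command from the question, with the volume created above, the fixed invocation would look like this (note the image comes last):
$ docker run -it -v my-vol:/my-vol <image id> bash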

Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380... while using docker on Ubuntu WSL 2

I am following an already written guide on running Docker and MariaDB on a VM, but I am using WSL Ubuntu on Windows.
(sudo) apt update
(sudo) apt upgrade
Docker
(sudo) apt-get install docker.io
Portainer
(sudo) docker volume create portainer_data
(sudo) docker run --name portainer -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer --log-opt max-size=50m --log-opt max-file=5 -d --privileged -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v /path/on/host/data:/data portainer/portainer
MariaDB
(sudo) mkdir /mnt/raid/data/mariadb
(sudo) mkdir /mnt/raid/data/mariadb/storage
(sudo) touch /mnt/raid/data/mariadb/config.cnf
(sudo) nano /mnt/raid/data/mariadb/config.cnf
But I get an error whenever I run docker run --log-opt max-size=50m --log-opt max-file=5 --name mariadb -v /mnt/raid/data/mariadb/storage:/var/lib/mysql -v /mnt/raid/data/mariadb/config.cnf:/etc/mysql/my.cnf -p 3306:3306 -e MYSQL_ROOT_PASSWORD=admin -d mariadb/server:latest
This is the error:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:75: mounting "/run/desktop/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu/e18d5bf9d7f9627840069cbdafadd22ec458ffe154082d3c685ed8b1a4f15eb2" to rootfs at "/etc/mysql/my.cnf" caused: mount through procfd: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I guess it might be an error with the mounting, because I cannot locate the created path or folder anywhere in the system.
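The error message itself points at a directory-vs-file mismatch: if the host path does not exist when the container starts, Docker creates it as a directory and then cannot bind-mount that directory onto the /etc/mysql/my.cnf file. A quick check along those lines (just a sketch, using the paths from the question):
$ ls -ld /mnt/raid/data/mariadb/config.cnf
$ sudo rm -rf /mnt/raid/data/mariadb/config.cnf
$ sudo mkdir -p /mnt/raid/data/mariadb/storage
$ sudo touch /mnt/raid/data/mariadb/config.cnf
The ls should show a regular file; the remaining commands recreate it as one if it is missing or was created as a directory.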

Docker overlay2: error walking file system: OSError [Errno 40] Too many levels of symbolic links

Main app: a uvicorn server on Starlette (Python) web app.
While I was trying to debug the error in the title (the troubleshooting log follows below), I ran the command below against the host's FS (/var/lib/docker/overlay2/[IMAGE_HASH_FOLDER]):
find -L ./ -mindepth 15
and found the files involved in the loop.
Locally it is /usr/bin/X11, and on the server I'm getting the following:
error walking file system: OSError [Errno 40] Too many levels of symbolic links: '/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/stderr'
The owner of the conflicting files (the FS shown is the host's), pending a Docker service restart after the pruning:
➜ overlay2 find -L ./ -mindepth 15
find: File system loop detected; ‘./d2dba43e7cdbdec81bac529bb85908a7f859f227cda0149389f164272cb372e8/diff/usr/bin/X11’ is part of the same file system loop as ‘./d2dba43e7cdbdec81bac529bb85908a7f859f227cda0149389f164272cb372e8/diff/usr/bin’.
find: File system loop detected; ‘./6ec18b03535c1dac329e05b2abdc68fb0eea742a06878d90f84c4de73ea6a4a9/merged/usr/bin/X11’ is part of the same file system loop as ‘./6ec18b03535c1dac329e05b2abdc68fb0eea742a06878d90f84c4de73ea6a4a9/merged/usr/bin’.
find: File system loop detected; ‘./l/GCDLBXTJXAL5PFTI4BE3MM3OE2/usr/bin/X11’ is part of the same file system loop as ‘./l/GCDLBXTJXAL5PFTI4BE3MM3OE2/usr/bin’.
➜ overlay2 ls -l ./d2dba43e7cdbdec81bac529bb85908a7f859f227cda0149389f164272cb372e8/diff/usr/bin/X11
lrwxrwxrwx 1 root root 1 May 3 2017 ./d2dba43e7cdbdec81bac529bb85908a7f859f227cda0149389f164272cb372e8/diff/usr/bin/X11 -> .
The dockerfile:
FROM python:3.8
COPY src/ ./
RUN /usr/local/bin/python -m pip install --upgrade pip || true
RUN pip install -r requirements.txt || true
ARG POSTGRES_USER
ENV POSTGRES_USER=$POSTGRES_USER
ARG POSTGRES_PASSWORD
ENV POSTGRES_PASSWORD=$POSTGRES_PASSWORD
ARG POSTGRES_SERVER
ENV POSTGRES_SERVER=$POSTGRES_SERVER
ARG POSTGRES_DB
ENV POSTGRES_DB=$POSTGRES_DB
ARG POSTGRES_PORT
ENV POSTGRES_PORT=$POSTGRES_PORT
ARG SESSION_SECRET
ENV SESSION_SECRET=$SESSION_SECRET
ARG DO_YOU_WANT_USERS
ENV DO_YOU_WANT_USERS=$DO_YOU_WANT_USERS
ARG WHERE_AM_I
ENV WHERE_AM_I=$WHERE_AM_I
# SSL
ARG FORWARDED_ALLOW_IPS
ENV FORWARDED_ALLOW_IPS=$FORWARDED_ALLOW_IPS
ARG SSL_CERTIFICATE
ENV SSL_CERTIFICATE=$SSL_CERTIFICATE
ARG SSL_KEYFILE
ENV SSL_KEYFILE=$SSL_KEYFILE
ARG UPLOADS_PATH
ENV UPLOADS_PATH=$UPLOADS_PATH
RUN echo "FINAL STAGE - RUN APP"
EXPOSE 7000
CMD ["python", "run.py"]
Whether I run the container with the volume I usually bind:
UPLOADS_PATH=/var/opt/tmp
LOCAL_UPLOADS_PATH=/var/containers/TEST_UPLOADS
docker build --build-arg POSTGRES_USER --build-arg POSTGRES_PASSWORD --build-arg POSTGRES_SERVER --build-arg POSTGRES_DB --build-arg POSTGRES_PORT --build-arg UPLOADS_PATH --build-arg WHERE_AM_I --build-arg SESSION_SECRET --build-arg DO_YOU_WANT_USERS -t test .
docker run -d --name test_container -v ${LOCAL_UPLOADS_PATH}:${UPLOADS_PATH} -p 7000:7000 test
or without the binding, I still get the same error logs, and the app keeps restarting after every request.
How is it possible to have such a loop (linked files?) inside the image?
UPDATE
The container was running smoothly until I replaced the bcrypt library with pybcrypt and uvicorn with its Cythonized version.
I'd much appreciate any suggestions on what to explore further.
P.S. I've also tried docker system prune -a, and although it did remove some old stuff, nothing changed.
P.S. 2: @jordanvrtanoski I've separated the question as you've suggested.
UPDATE #2
Following @jordanvrtanoski's inspect command:
➜ docker image inspect -f $'{{.RepoTags}}\t{{.GraphDriver.Data.LowerDir}}' $(docker images -q)
[test:latest] /var/lib/docker/overlay2/99e3b5db623ae543d045cc86c2d7d36400c8d1780ec4b86c297f5055bbdfe81a/diff:/var/lib/docker/overlay2/4ed6de1627ba5957c8fa9834c797a60d277c76e61f138d1b6909c55ef5475523/diff:/var/lib/docker/overlay2/7f790257bc4e6ed9e6ea6ef5bed0eb0cf3af213ea913484a40946a45639d8188/diff:/var/lib/docker/overlay2/c8e04185bdc7714e116615a3599a9832ebe2080b43f09b68331cca5d7c109371/diff:/var/lib/docker/overlay2/9ef94affd46bbcc11d62999ab0c59d6bf28cc6d51f13a7513b93bb209738940a/diff:/var/lib/docker/overlay2/62438cdccba1f312f34e8458e4ec695019e6af65107b2e16c3d7eaa53ca03c06/diff:/var/lib/docker/overlay2/9ec57b8b2680944690cdceae73c1c49b31716bd5efbed78bd3d54810bffdc7b6/diff:/var/lib/docker/overlay2/b2c4ce8d2b6764476a452489f58e615fcce939eaecb3d65466f81f5f115a5b5d/diff:/var/lib/docker/overlay2/f8609908601489fb7e3e28a32c423ee556ec041c69ba274a02de316ccbef5c48/diff:/var/lib/docker/overlay2/dcd13187b642277de35f299c1abb1d7d9695972e8b8893267a62f65338679080/diff:/var/lib/docker/overlay2/e2ed1696e3a34e69ed493da3a2c10b942f09384b1cebac54afebea6fef9c4521/diff
[python:3.8] /var/lib/docker/overlay2/c8e04185bdc7714e116615a3599a9832ebe2080b43f09b68331cca5d7c109371/diff:/var/lib/docker/overlay2/9ef94affd46bbcc11d62999ab0c59d6bf28cc6d51f13a7513b93bb209738940a/diff:/var/lib/docker/overlay2/62438cdccba1f312f34e8458e4ec695019e6af65107b2e16c3d7eaa53ca03c06/diff:/var/lib/docker/overlay2/9ec57b8b2680944690cdceae73c1c49b31716bd5efbed78bd3d54810bffdc7b6/diff:/var/lib/docker/overlay2/b2c4ce8d2b6764476a452489f58e615fcce939eaecb3d65466f81f5f115a5b5d/diff:/var/lib/docker/overlay2/f8609908601489fb7e3e28a32c423ee556ec041c69ba274a02de316ccbef5c48/diff:/var/lib/docker/overlay2/dcd13187b642277de35f299c1abb1d7d9695972e8b8893267a62f65338679080/diff:/var/lib/docker/overlay2/e2ed1696e3a34e69ed493da3a2c10b942f09384b1cebac54afebea6fef9c4521/diff
UPDATE #3
So after following both @jordanvrtanoski's advice and this post (@Janith Shanilka): Docker overlay2 eating Disk Space
I was missing the following file:
nano /etc/docker/daemon.json
and populated it with:
{
  "storage-driver": "aufs"
}
then sudo systemctl restart docker
Now the app doesn't crash, but I'm still getting the same loop message in the logs:
error walking file system: OSError [Errno 40] Too many levels of symbolic links: '/usr/bin/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/cc'
also @jordanvrtanoski
➜ docker image inspect -f $'{{.RepoTags}}\t{{.GraphDriver.Data.LowerDir}}' $(docker images -q)
[test:latest] <no value>
[python:3.8] <no value>
I've also noticed that df looks a little weird; it looks like Docker's volume is a 'clone' of the host's base filesystem?
➜ df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 4046520 0 4046520 0% /dev
tmpfs 815676 3276 812400 1% /run
/dev/sda3 49014600 20123088 26798560 43% /
tmpfs 4078368 304 4078064 1% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 4078368 0 4078368 0% /sys/fs/cgroup
/dev/sda1 474730 148714 296986 34% /boot
tmpfs 815672 0 815672 0% /run/user/0
none 49014600 20123088 26798560 43% /var/lib/docker/aufs/mnt/0d98503bd3ea82e353f6776c2d813a642536ad6dd4300299a8fc22b5d6348bc8
UPDATE #4
So after @jordanvrtanoski's suggestion I returned Docker to overlay2 from 'aufs'.
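(For reference, a sketch of what that switch amounts to: point the storage driver in /etc/docker/daemon.json back at overlay2, or remove the entry so Docker falls back to its default, then restart the daemon.)
{
  "storage-driver": "overlay2"
}
$ sudo systemctl restart docker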
The below results are from the host:
➜ cd /var/lib/docker/overlay2
➜ find -L ./ -mindepth 15
find: File system loop detected; ‘./2ecf467259235c8c4605b058bff4f80100790ee7f5010d4954d6aab1a7f28686/merged/usr/bin/X11’ is part of the same file system loop as ‘./2ecf467259235c8c4605b058bff4f80100790ee7f5010d4954d6aab1a7f28686/merged/usr/bin’.
find: File system loop detected; ‘./6f39e8e2089c99f636da9a534e2ccbe7e41202eeb2ce645efa9387dd0ef0b908/diff/usr/bin/X11’ is part of the same file system loop as ‘./6f39e8e2089c99f636da9a534e2ccbe7e41202eeb2ce645efa9387dd0ef0b908/diff/usr/bin’.
find: File system loop detected; ‘./l/5AOADDMRCAKLG2FQDDJEYC6CY2/usr/bin/X11’ is part of the same file system loop as ‘./l/5AOADDMRCAKLG2FQDDJEYC6CY2/usr/bin’.
UPDATE #5
Found the cause: the uvicorn[standard] package is the Cythonized version of uvicorn. Once I removed it, all errors were gone. So I'll move this to uvicorn's GitHub.
@jordanvrtanoski Thank you once again for your help!
This problem is caused by a self-referencing symbolic link in the python:3.8 image.
~# docker run -ti --rm python:3.8 bash
root@ef6c6f4e18ff:/# ls -l /usr/bin/X11/X11
lrwxrwxrwx 1 root root 1 May 3 2017 /usr/bin/X11/X11 -> .
To fix the circular reference caused by the python:3.8 image, you can simply delete the /usr/bin/X11/X11 symbolic link:
root@ef6c6f4e18ff:/# rm /usr/bin/X11/X11
You can add this to your build file as follows:
FROM python:3.8
COPY src/ ./
RUN rm /usr/bin/X11/X11
RUN /usr/local/bin/python -m pip install --upgrade pip || true
RUN pip install -r requirements.txt || true
ARG POSTGRES_USER
ENV POSTGRES_USER=$POSTGRES_USER
ARG POSTGRES_PASSWORD
ENV POSTGRES_PASSWORD=$POSTGRES_PASSWORD
ARG POSTGRES_SERVER
ENV POSTGRES_SERVER=$POSTGRES_SERVER
ARG POSTGRES_DB
ENV POSTGRES_DB=$POSTGRES_DB
ARG POSTGRES_PORT
ENV POSTGRES_PORT=$POSTGRES_PORT
ARG SESSION_SECRET
ENV SESSION_SECRET=$SESSION_SECRET
ARG DO_YOU_WANT_USERS
ENV DO_YOU_WANT_USERS=$DO_YOU_WANT_USERS
ARG WHERE_AM_I
ENV WHERE_AM_I=$WHERE_AM_I
# SSL
ARG FORWARDED_ALLOW_IPS
ENV FORWARDED_ALLOW_IPS=$FORWARDED_ALLOW_IPS
ARG SSL_CERTIFICATE
ENV SSL_CERTIFICATE=$SSL_CERTIFICATE
ARG SSL_KEYFILE
ENV SSL_KEYFILE=$SSL_KEYFILE
ARG UPLOADS_PATH
ENV UPLOADS_PATH=$UPLOADS_PATH
RUN echo "FINAL STAGE - RUN APP"
EXPOSE 7000
CMD ["python", "run.py"]
In case someone else stumbles across this later - it looks like uvicorn accesses all subfolders of the path you invoke it from. If you're not explicitly setting a working directory in your dockerfile/compose.yaml, this will be the file system root, which gets into all the bind mount infrastructure in proc that you probably don't care about for running an ASGI server.
WORKDIR /home in a Dockerfile or working_dir: /home in a compose.yaml should generally be a fine workaround for this error in most Docker use cases; point it at your app directory instead if you're volume-mounting code in.
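For a container started directly with docker run, as in the question above, the same effect can be had with the --workdir flag (a sketch reusing the image and port from that question):
$ docker run -d --name test_container -w /home -p 7000:7000 test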

How do you deploy multiple docker containers to gcloud using Travis CI?

I am having trouble accessing my Google Cloud Compute Engine instance via Travis CI so I can have CI/CD capabilities.
So far, using my current code, I am able to use my Git repository to start up Docker containers on Travis CI to see that they work.
I am then able to get them to build, tag, and deploy to the Google Cloud Container Registry with no issues.
However, when I get to the step where I want to SSH into my compute instance to pull and run my containers, I run into issues.
I have tried using gcloud compute ssh --command, but I run into issues with gcloud not being installed on my instance. Error received:
If I try running a gcloud command it just says gcloud is missing.
bash: gcloud: command not found
The command "gcloud compute ssh --quiet --project charged-formula-262616 --zone us-west1-b instance-1 --command="gcloud auth configure-docker "" failed and exited with 127 during.
I have also tried downloading the gcloud SDK and running the Docker config again, but I start receiving the error below.
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
Using default tag: latest
I am able to SSH into it using PuTTY as another user, pull from the repository with no issues, start the containers, and the gcloud command exists there.
The only thing I could think of is that the two accounts used for SSH are different, but both keys are added to the instance and I don't see where I can control their permissions. I also created a service account for Travis CI and granted it all the same permissions as the compute service account, and still no dice...
Any help or advice would be much appreciated!
My Travis file looks like this:
sudo: required
language: generic
services:
  - docker
env:
  global:
    - SHA=$(git rev-parse HEAD)
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
cache:
  directories:
    - "$HOME/google-cloud-sdk/"
before_install:
  - openssl aes-256-cbc -K $encrypted_0c35eebf403c_key -iv $encrypted_0c35eebf403c_iv
    -in secrets.tar.enc -out secrets.tar -d
  - tar xvf secrets.tar
  - if [ ! -d "$HOME/google-cloud-sdk/bin" ]; then rm -rf $HOME/google-cloud-sdk; export
    CLOUDSDK_CORE_DISABLE_PROMPTS=1; curl https://sdk.cloud.google.com | bash; fi
  - source $HOME/google-cloud-sdk/path.bash.inc
  - gcloud auth activate-service-account --key-file service-account.json
  - gcloud components update
  - gcloud components install docker-credential-gcr
  - gcloud version
  - eval $(ssh-agent -s)
  - chmod 600 deploy_key_open
  - echo -e "Host $SERVER_IP_ADDRESS\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
  - ssh-add deploy_key_open
  - gcloud auth configure-docker
  # - sudo docker pull gcr.io/charged-formula-262616/web-client
  # - sudo docker pull gcr.io/charged-formula-262616/web-nginx
deploy:
  provider: script
  script: bash ./deploy.sh
  on:
    branch: master
and the bash script is
# docker build -t gcr.io/charged-formula-262616/web-client:latest -t gcr.io/charged-formula-262616/web-client:$SHA -f ./client/Dockerfile ./client
# docker build -t gcr.io/charged-formula-262616/web-nginx:latest -t gcr.io/charged-formula-262616/web-nginx:$SHA -f ./nginx/Dockerfile ./nginx
# docker build -t gcr.io/charged-formula-262616/web-server:latest -t gcr.io/charged-formula-262616/web-server:$SHA -f ./server/Dockerfile ./server
docker push gcr.io/charged-formula-262616/web-client
docker push gcr.io/charged-formula-262616/web-nginx
docker push gcr.io/charged-formula-262616/web-server
# curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-274.0.1-linux-x86_64.tar.gz
# tar zxvf google-cloud-sdk-274.0.1-linux-x86_64.tar.gz google-cloud-sdk
# ./google-cloud-sdk/install.sh
# sudo docker container stop $(docker container ls -aq)
# echo "1 " | gcloud init
ssh -o StrictHostKeyChecking=no -i deploy_key_open travis-ci@104.196.226.118 << EOF
source /home/travis-ci/google-cloud-sdk/path.bash.inc
gcloud auth configure-docker
sudo docker-credential-gcloud list
sudo docker pull gcr.io/charged-formula-262616/web-nginx
sudo docker pull gcr.io/charged-formula-262616/web-client
sudo docker pull gcr.io/charged-formula-262616/web-server
sudo docker run --rm -d -p 3000:3000 gcr.io/charged-formula-262616/web-client
sudo docker run --rm -d -p 80:80 -p 443:443 gcr.io/charged-formula-262616/web-nginx
sudo docker run --rm -d -p 5000:5000 gcr.io/charged-formula-262616/web-server
sudo docker run --rm -d -v /database_data:/var/lib/postgresql/data -e POSTGRES_USER -e POSTGRES_PASSWORD -e POSTGRES_DB postgres
EOF
The error you posted includes a link to Authentication methods, which suggests some mechanisms for authenticating Docker, such as:
gcloud auth configure-docker
and other more advanced authentication methods. I recommend that you check this out, as it will guide you to solving your issue.
To install the gcloud command, you can follow the guide in Installing Google Cloud SDK, which includes the instructions for Linux.
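As a rough sketch, the SDK could be installed once on the Compute Engine instance for the user the deploy script connects as, in the same way the .travis.yml above installs it on the build machine, and the Docker credential helper configured there:
$ curl https://sdk.cloud.google.com | bash
$ source $HOME/google-cloud-sdk/path.bash.inc
$ gcloud auth configure-docker
After that, the gcloud and docker pull commands inside the SSH heredoc should find a working gcloud on the instance.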

Docker Redis start with persistent storage using -v gives error (chown: changing ownership of '.': Permission denied)

I'm using the following system version/spec for the docker-redis setup, using the default redis.conf.
Redhat version: 7.6 (Red Hat Enterprise Linux Server)
Redis Version: 5.0.4
Docker Version: 1.13.1, build b2f74b2/1.13.1
When I run the following command, it works perfectly fine.
sudo docker run -d -v $PWD/redis.conf:/usr/local/etc/redis/redis.conf --name redis-persistance --net tyk -p 7070:6379 redis redis-server /usr/local/etc/redis/redis.conf --appendonly yes
I need to get the Redis data (which is in /data inside the container) into the host directory /usr/local/etc/redis/data (-v $PWD/data:/data). So when I run the following command, I get the error below.
Note $PWD = /usr/local/etc/redis/
sudo docker run -d -v $PWD/redis.conf:/usr/local/etc/redis/redis.conf -v $PWD/data:/data --name redis-persistance --net tyk -p 7070:6379 redis redis-server /usr/local/etc/redis/redis.conf --appendonly yes
Error in docker logs:
journal: chown: changing ownership of '.': Permission denied
level=warning msg="05ce842f052e28566aed0e2eab32281138462cead771033790266ae145fce116 cleanup: failed to unmount secrets: invalid argument"
I also tried changing the ownership of the data folder on the host as follows: chown redis:redis data
drwxrwxrwx. 2 redis redis 6 May 3 07:11 data
Can someone help me out with this? Thanks.
First create a volume:
docker volume create redis_data
Check that the volume was created (note the Mountpoint):
docker volume inspect redis_data
Then use this volume to start your container:
sudo docker run -d -v $PWD/redis.conf:/usr/local/etc/redis/redis.conf -v redis_data:/data --name redis-persistance --net tyk -p 7070:6379 redis redis-server /usr/local/etc/redis/redis.conf --appendonly yes
You can then check the contents of the "Mountpoint", which should contain the Redis data.
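For example (a sketch; the exact mountpoint depends on your Docker installation, the default local driver keeps it under /var/lib/docker/volumes):
$ docker volume inspect -f '{{ .Mountpoint }}' redis_data
$ sudo ls "$(docker volume inspect -f '{{ .Mountpoint }}' redis_data)"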
