Spark kubernetes pod fails with no discernable errors - apache-spark

I'm using spark-submit on version 2.4.5 to create a spark driver pod on my k8s cluster. When I run
bin/spark-submit \
--master k8s://https://my-cluster-url:443 \
--deploy-mode cluster \
--name spark-test \
--class com.my.main.Class \
--conf spark.executor.instances=3 \
--conf spark.kubernetes.allocation.batch.size=3 \
--conf spark.kubernetes.namespace=my-namespace \
--conf spark.kubernetes.container.image.pullSecrets=my-cr-secret \
--conf spark.kubernetes.container.image.pullPolicy=Always \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.my-vol.mount.path=/opt/spark/work-dir/src/main/resources/ \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.my-vol.options.claimName=my-pvc \
--conf spark.kubernetes.container.image=my-registry.io/spark-test:test-2.4.5 \
local:///opt/spark/work-dir/my-service.jar
spark-submit successfully creates a pod in my k8s cluster, and the pod makes it into the running state. The pod then quickly stops with an error status. Looking at the pod's logs I see
++ id -u
+ myuid=0
++ id -g
+ mygid=0
+ set +e
++ getent passwd 0
+ uidentry=root:x:0:0:root:/root:/bin/bash
+ set -e
+ '[' -z root:x:0:0:root:/root:/bin/bash ']'
+ SPARK_K8S_CMD=driver
+ case "$SPARK_K8S_CMD" in
+ shift 1
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ sed 's/[^=]*=\(.*\)/\1/g'
+ sort -t_ -k4 -n
+ grep SPARK_JAVA_OPT_
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' -n '' ']'
+ PYSPARK_ARGS=
+ '[' -n '' ']'
+ R_ARGS=
+ '[' -n '' ']'
+ '[' '' == 2 ']'
+ '[' '' == 3 ']'
+ case "$SPARK_K8S_CMD" in
+ CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$#")
+ exec /usr/bin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=<SPARK_DRIVER_BIND_ADDRESS> --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class com.my.main.Class spark-internal
20/03/04 16:44:37 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (org.apache.spark.deploy.SparkSubmit$$anon$2).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
But no other errors. The + lines in the log correspond to the commands executed in kubernetes/dockerfiles/spark/entrypoint.sh in the Spark distribution. So it looks like it makes it through the entire entrypoint script, and attempts to run the final command exec /usr/bin/tini -s -- "${CMD[@]}"
before failing after those log4j warnings. How can I debug this issue further?
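For reference, a minimal sketch of the inspection commands behind the details below (assuming kubectl access to my-namespace; the driver pod name is printed by spark-submit):
# Check pod status and any scheduling/mount events
kubectl get pods -n my-namespace
kubectl describe pod <driver-pod-name> -n my-namespace
# Full driver log
kubectl logs <driver-pod-name> -n my-namespace
# Namespace-wide events, oldest first
kubectl get events -n my-namespace --sort-by=.metadata.creationTimestamp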
edit for more details:
Pod events, as seen in kubectl describe po ...:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m41s default-scheduler Successfully assigned my-namespace/spark-test-1583356942292-driver to aks-agentpool-12301882-10
Warning FailedMount 3m40s kubelet, aks-agentpool-12301882-10 MountVolume.SetUp failed for volume "spark-conf-volume" : configmap "spark-test-1583356942292-driver-conf-map" not found
Normal Pulling 3m37s kubelet, aks-agentpool-12301882-10 Pulling image "my-registry.io/spark-test:test-2.4.5"
Normal Pulled 3m37s kubelet, aks-agentpool-12301882-10 Successfully pulled image "my-registry.io/spark-test:test-2.4.5"
Normal Created 3m36s kubelet, aks-agentpool-12301882-10 Created container spark-kubernetes-driver
Normal Started 3m36s kubelet, aks-agentpool-12301882-10 Started container spark-kubernetes-driver
My Dockerfile – slightly adapted from the provided spark Dockerfile, and built using ./bin/docker-image-tool.sh:
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
FROM openjdk:8-jdk-slim
ARG spark_jars=jars
ARG img_path=kubernetes/dockerfiles
ARG k8s_tests=kubernetes/tests
ARG work_dir=/opt/spark/work-dir
# Before building the docker image, first build and make a Spark distribution following
# the instructions in http://spark.apache.org/docs/latest/building-spark.html.
# If this docker file is being used in the context of building your images from a Spark
# distribution, the docker build command should be invoked from the top level directory
# of the Spark distribution. E.g.:
# docker build -t spark:latest -f kubernetes/dockerfiles/spark/Dockerfile .
RUN set -ex && \
apt-get update && \
ln -s /lib /lib64 && \
apt install -y bash tini libc6 libpam-modules libnss3 && \
mkdir -p /opt/spark && \
mkdir -p ${work_dir} && \
mkdir -p /opt/spark/conf && \
touch /opt/spark/RELEASE && \
rm /bin/sh && \
ln -sv /bin/bash /bin/sh && \
echo "auth required pam_wheel.so use_uid" >> /etc/pam.d/su && \
chgrp root /etc/passwd && chmod ug+rw /etc/passwd && \
rm -rf /var/cache/apt/* && \
mkdir -p ${work_dir}/src/main/resources && \
mkdir -p /var/run/my-service && \
mkdir -p /var/log/my-service
COPY ${spark_jars} /opt/spark/jars
COPY bin /opt/spark/bin
COPY sbin /opt/spark/sbin
COPY ${img_path}/spark/entrypoint.sh /opt/
COPY examples /opt/spark/examples
COPY ${k8s_tests} /opt/spark/tests
COPY data /opt/spark/data
ADD conf/log4j.properties.template /opt/spark/conf/log4j.properties
ADD kubernetes/jars/my-service-*-bin.tar.gz ${work_dir}
RUN mv "${work_dir}/my-service-"*".jar" "${work_dir}/my-service.jar"
ENV SPARK_HOME /opt/spark
WORKDIR ${work_dir}
ENTRYPOINT [ "/opt/entrypoint.sh" ]
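For completeness, a rough sketch of how an image like this gets built and pushed with the distribution's helper script (registry and tag taken from the spark-submit command above; note that docker-image-tool.sh names the image spark by default, so the exact image name here is illustrative):
# Run from the root of the unpacked Spark 2.4.5 distribution
./bin/docker-image-tool.sh -r my-registry.io -t test-2.4.5 build
./bin/docker-image-tool.sh -r my-registry.io -t test-2.4.5 push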

Related

podman inside podman: works only with "privileged" while it works without for the official podman image

I am trying to create a podman image that allows me to run rootless podman inside rootless podman.
I have read https://www.redhat.com/sysadmin/podman-inside-container
and tried to build an image analogous to quay.io/podman/stable:latest based on top of docker.io/python:3.10-slim-bullseye or docker.io/ubuntu:22.04,
but somehow my images require --privileged which the quay.io/podman fedora-based image does not.
For reference, here what does work for quay.io/podman/stable:latest:
$ podman run --rm \
--security-opt label=disable \
--device /dev/fuse \
--user podman \
quay.io/podman/stable:latest podman info
prints the podman info with no warnings or errors; podman run hello-world also works inside the container as expected.
I have created a dockerfile for a debian/ubuntu-based image that allows running rootless podman inside. The dockerfile closely follows https://www.redhat.com/sysadmin/podman-inside-container and https://github.com/containers/podman/blob/main/contrib/podmanimage/stable/Containerfile
and is shown at the bottom.
However, the resulting image (call it podinpodtest) does not work as expected:
$ podman run --rm \
--security-opt label=disable \
--device /dev/fuse \
--user podman \
podinpodtest podman info
results in Error: cannot setup namespace using newuidmap: exit status 1.
Adding --privileged makes the image work:
$ podman run --rm \
--security-opt label=disable \
--device /dev/fuse \
--user podman \
--privileged \
podinpodtest podman info
correctly prints the podman info.
Why does the debian/ubuntu based image require --privileged for running rootless podman inside of it?
I do not want to run the image with --privileged – can the debian/ubuntu based image be fixed to work similarly to the quay.io/podman image?
#FROM docker.io/python:3.10-slim-bullseye
FROM docker.io/ubuntu:22.04
RUN apt-get update && apt-get install -y \
containers-storage \
fuse-overlayfs \
libvshadow-utils \
podman \
&& rm -rf /var/lib/apt/lists/*
RUN useradd podman; \
echo "podman:1:999\npodman:1001:64535" > /etc/subuid; \
echo "podman:1:999\npodman:1001:64535" > /etc/subgid;
ARG _REPO_URL="https://raw.githubusercontent.com/containers/podman/main/contrib/podmanimage/stable"
ADD $_REPO_URL/containers.conf /etc/containers/containers.conf
ADD $_REPO_URL/podman-containers.conf /home/podman/.config/containers/containers.conf
RUN mkdir -p /home/podman/.local/share/containers && \
chown podman:podman -R /home/podman && \
chmod 644 /etc/containers/containers.conf
# Copy & modify the defaults to provide reference if runtime changes needed.
# Changes here are required for running with fuse-overlay storage inside container.
RUN sed -e 's|^#mount_program|mount_program|g' \
-e '/additionalimage.*/a "/var/lib/shared",' \
-e 's|^mountopt[[:space:]]*=.*$|mountopt = "nodev,fsync=0"|g' \
/usr/share/containers/storage.conf \
> /etc/containers/storage.conf
# Note VOLUME options must always happen after the chown call above
# RUN commands can not modify existing volumes
VOLUME /var/lib/containers
VOLUME /home/podman/.local/share/containers
RUN mkdir -p /var/lib/shared/overlay-images \
/var/lib/shared/overlay-layers \
/var/lib/shared/vfs-images \
/var/lib/shared/vfs-layers && \
touch /var/lib/shared/overlay-images/images.lock && \
touch /var/lib/shared/overlay-layers/layers.lock && \
touch /var/lib/shared/vfs-images/images.lock && \
touch /var/lib/shared/vfs-layers/layers.lock
ENV _CONTAINERS_USERNS_CONFIGURED=""

How to avoid changing permissions on node_modules for a non-root user in docker

The issue with my current setup is that in my entrypoint.sh file I have to change the ownership of my entire project directory to the non-administrative user (chown -R node /node-servers). However, when a lot of npm packages are installed, this takes a long time. Is there a way to avoid having to chown the node_modules directory?
Background: The reason I create everything as root in the Dockerfile is because this way I can match the UID and GID of a developer's local user. This enables mounting volumes more easily. The downside is that I have to step-down from root in an entrypoint.sh file and ensure that the permissions of the entire project files have all been changed to the non-administrative user.
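For reference, a sketch of the kind of docker run invocation the entrypoint below expects (the image name is illustrative; HOST_UID and HOST_GID are the variables the entrypoint reads):
docker run -it --rm \
  -e HOST_UID="$(id -u)" \
  -e HOST_GID="$(id -g)" \
  -v "$PWD:/node-servers" \
  my-node-image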
my docker file:
FROM node:10.24-alpine
#image already has user node and group node which are 1000, that's what we will use
# grab gosu for easy step-down from root
# https://github.com/tianon/gosu/releases
ENV GOSU_VERSION 1.14
RUN set -eux; \
\
apk add --no-cache --virtual .gosu-deps \
ca-certificates \
dpkg \
gnupg \
; \
\
dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
\
# verify the signature
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
command -v gpgconf && gpgconf --kill all || :; \
rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
\
# clean up fetch dependencies
apk del --no-network .gosu-deps; \
\
chmod +x /usr/local/bin/gosu; \
# verify that the binary works
gosu --version; \
gosu nobody true
COPY ./ /node-servers
# Setting the working directory
WORKDIR /node-servers
# Install app dependencies
# Install openssl
RUN apk add --update openssl ca-certificates && \
apk --no-cache add shadow && \
apk add libcap && \
npm install -g && \
chmod +x /node-servers/entrypoint.sh && \
setcap cap_net_bind_service=+ep /usr/local/bin/node
# Entrypoint used to load the environment and start the node server
#ENTRYPOINT ["/bin/sh"]
my entrypoint.sh
# In Prod, this may be configured with a GID already matching the container
# allowing the container to be run directly as Jenkins. In Dev, or on unknown
# environments, run the container as root to automatically correct docker
# group in container to match the docker.sock GID mounted from the host
set -x
if [ -z ${HOST_UID+x} ]; then
echo "HOST_UID not set, so we are not changing it"
else
echo "HOST_UID is set, so we are changing the container UID to match"
# get group of notadmin inside container
usermod -u ${HOST_UID} node
CUR_GID=`getent group node | cut -f3 -d: || true`
echo ${CUR_GID}
# if they don't match, adjust
if [ ! -z "$HOST_GID" -a "$HOST_GID" != "$CUR_GID" ]; then
groupmod -g ${HOST_GID} -o node
fi
if ! groups node | grep -q node; then
usermod -aG node node
fi
fi
# gosu drops from root to node user
set -- gosu node "$@"
[ -d "/node-servers" ] && chown -v -R node /node-servers
exec "$#"
You shouldn't need to run chown at all here. Leave the files owned by root (or by the host user). So long as they're world-readable the application will still be able to run; but if there's some sort of security issue or other bug, the application won't be able to accidentally overwrite its own source code.
You can then go on to simplify this even further. For most purposes, users in Unix are identified by their numeric user ID; there isn't actually a requirement that the user be listed in /etc/passwd. If you don't need to change the node user ID and you don't need to chown files, then the entrypoint script reduces to "switch user IDs and run the main script"; but then Docker can provide an alternate user ID for you via the docker run -u option. That means you don't need to install gosu either, which is a lot of the Dockerfile content.
All of this means you can reduce the Dockerfile to:
FROM node:10.24-alpine
# Install OS-level dependencies (before you COPY anything in)
RUN apk add openssl ca-certificates
# (Do not install gosu or its various dependencies)
# Set (and create) the working directory
WORKDIR /node-servers
# Copy language-level dependencies in
COPY package.json package-lock.json ./
RUN npm ci
# Copy the rest of the application in
# (make sure `node_modules` is in .dockerignore)
COPY . .
# (Do not call setcap here)
# Set the main command to run
USER node
CMD npm run start
Then when you run the container, you can use Docker options to specify the current user and additional capability.
docker run \
-d \ # in the background
-u $(id -u) \ # as an alternate user
-v "$PWD/data:/node-servers/data" \ # mounting a data directory
-p 8080:80 \ # publishing a port
my-image
Docker grants the NET_BIND_SERVICE capability by default so you don't need to specially set it.
This same permission setup will work if you're using bind mounts to overwrite the application code; again, without a chown call.
docker run ... \
-u $(id -u) \
-v "$PWD:/node-servers" \ # run the application from the host, not the image
-v /node-servers/node_modules \ # with libraries that will not be updated ever
...

Class not found when running a jar on spark managed with google kubernetes engine

I am trying to follow this link to run my jar as a spark job on google kubernetes engine.
I have tried a few things: I copied my jar into /examples/jars and tried running it.
However, when I run
sudo /opt/bin/spark-submit --master k8s://https://35.192.214.68 --deploy-mode cluster --name sparkIgnite --class org.blk.igniteSparkResearch.ScalarSharedRDDExample --conf spark.executor.instances=3 --conf spark.app.name=sharedSparkIgnite --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark --conf spark.kubernetes.container.image=us.gcr.io/nlp-research-198620/spark:k8s-spark-2.3 local:///opt/spark/examples/jars/igniteSpark-1.0-SNAPSHOT-jar-with-dependencies.jar
and check the logs of my pod, I get
++ id -u
+ myuid=0
++ id -g
+ mygid=0
++ getent passwd 0
+ uidentry=root:x:0:0:root:/root:/bin/ash
+ '[' -z root:x:0:0:root:/root:/bin/ash ']'
+ SPARK_K8S_CMD=driver
+ '[' -z driver ']'
+ shift 1
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sed 's/[^=]*=\(.*\)/\1/g'
+ readarray -t SPARK_JAVA_OPTS
+ '[' -n /opt/spark/examples/jars/igniteSpark-1.0-SNAPSHOT-jar-with-dependencies.jar:/opt/spark/examples/jars/igniteSpark-1.0-SNAPSHOT-jar-with-dependencies.jar ']'
+ SPARK_CLASSPATH=':/opt/spark/jars/*:/opt/spark/examples/jars/igniteSpark-1.0-SNAPSHOT-jar-with-dependencies.jar:/opt/spark/examples/jars/igniteSpark-1.0-SNAPSHOT-jar-with-dependencies.jar'
+ '[' -n '' ']'
+ case "$SPARK_K8S_CMD" in
+ CMD=(${JAVA_HOME}/bin/java "${SPARK_JAVA_OPTS[@]}" -cp "$SPARK_CLASSPATH" -Xms$SPARK_DRIVER_MEMORY -Xmx$SPARK_DRIVER_MEMORY -Dspark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS $SPARK_DRIVER_CLASS $SPARK_DRIVER_ARGS)
+ exec /sbin/tini -s -- /usr/lib/jvm/java-1.8-openjdk/bin/java -Dspark.kubernetes.driver.pod.name=sparkignite-a20d8d85b6b6389293be4b1fe8a12803-driver -Dspark.driver.port=7078 -Dspark.jars=/opt/spark/examples/jars/igniteSpark-1.0-SNAPSHOT-jar-with-dependencies.jar,/opt/spark/examples/jars/igniteSpark-1.0-SNAPSHOT-jar-with-dependencies.jar -Dspark.app.name=sparkIgnite -Dspark.driver.blockManager.port=7079 -Dspark.driver.host=sparkignite-a20d8d85b6b6389293be4b1fe8a12803-driver-svc.default.svc -Dspark.kubernetes.authenticate.driver.serviceAccountName=spark -Dspark.master=k8s://https://35.192.214.68 -Dspark.app.id=spark-b0523468df0b4751a2d94c3b9513c19f -Dspark.submit.deployMode=cluster -Dspark.executor.instances=3 -Dspark.kubernetes.container.image=us.gcr.io/nlp-research-198620/spark:k8s-spark-2.3 -Dspark.kubernetes.executor.podNamePrefix=sparkignite-a20d8d85b6b6389293be4b1fe8a12803 -cp ':/opt/spark/jars/*:/opt/spark/examples/jars/igniteSpark-1.0-SNAPSHOT-jar-with-dependencies.jar:/opt/spark/examples/jars/igniteSpark-1.0-SNAPSHOT-jar-with-dependencies.jar' -Xms1g -Xmx1g -Dspark.driver.bindAddress=10.8.2.54 org.blk.igniteSparkResearch.ScalarSharedRDDExample
Error: Could not find or load main class org.blk.igniteSparkResearch.ScalarSharedRDDExample
I am only able to run jars that are already packaged with the default Spark 2.3.0 distribution. What I want to know is how to run any custom jar on Spark.
Thanks in advance.
It seems I need to build an image that contains my jar and then run that image. When I follow this approach, it works.
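A minimal sketch of that approach (base image and jar name taken from the command and logs above; the derived tag is illustrative):
# Bake the custom jar into a derived image, then reference it with local://
cat > Dockerfile.custom <<'EOF'
FROM us.gcr.io/nlp-research-198620/spark:k8s-spark-2.3
COPY igniteSpark-1.0-SNAPSHOT-jar-with-dependencies.jar /opt/spark/examples/jars/
EOF
docker build -t us.gcr.io/nlp-research-198620/spark:k8s-spark-2.3-ignite -f Dockerfile.custom .
docker push us.gcr.io/nlp-research-198620/spark:k8s-spark-2.3-ignite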

Set docker image username at container creation time?

I have an OpenSuse 42.3 docker image that I've configured to run a code. The image has a single user(other than root) called "myuser" that I create during the initial Image generation via the Dockerfile. I have three script files that generate a container from the image based on what operating system a user is on.
Question: Can the username "myuser" in the container be set to the username of the user that executes the container generation script?
My goal is to let a user pop into the container interactively and be able to run the code from within the container. The code is just a single binary that executes and has some IO, so I want the user's directory to be accessible from within the container so that they can navigate to a folder on their machine and run the code to generate output in their filesystem.
Below is what I have constructed so far. I tried setting the USER environment variable during the linux script's call to docker run, but that didn't change the user from "myuser" to say "bob" (the username on the host machine that started the container). The mounting of the directories seems to work fine. I'm not sure if it is even possible to achieve my goal.
Linux Container script:
username="$USER"
userID="$(id -u)"
groupID="$(id -g)"
home="${1:-$HOME}"
imageName="myImage:ImageTag"
containerName="version1Image"
docker run -it -d --name ${containerName} -u $userID:$groupID \
-e USER=${username} --workdir="/home/myuser" \
--volume="${home}:/home/myuser" ${imageName} /bin/bash \
Mac Container script:
username="$USER"
userID="$(id -u)"
groupID="$(id -g)"
home="${1:-$HOME}"
imageName="myImage:ImageTag"
containerName="version1Image"
docker run -it -d --name ${containerName} \
--workdir="/home/myuser" \
--v="${home}:/home/myuser" ${imageName} /bin/bash \
Windows Container script:
ECHO OFF
SET imageName="myImage:ImageTag"
SET containerName="version1Image"
docker run -it -d --name %containerName% --workdir="/home/myuser" -v="%USERPROFILE%:/home/myuser" %imageName% /bin/bash
echo "Container %containerName% was created."
echo "Run the ./startWindowsLociStream script to launch container"
The below code has been checked into https://github.com/bmitch3020/run-as-user.
I would handle this in an entrypoint.sh that checks the ownership of /home/myuser and updates the uid/gid of the user inside your container. It can look something like:
#!/bin/sh
set -x
# get uid/gid
USER_UID=`ls -nd /home/myuser | cut -f3 -d' '`
USER_GID=`ls -nd /home/myuser | cut -f4 -d' '`
# get the current uid/gid of myuser
CUR_UID=`getent passwd myuser | cut -f3 -d: || true`
CUR_GID=`getent group myuser | cut -f3 -d: || true`
# if they don't match, adjust
if [ ! -z "$USER_GID" -a "$USER_GID" != "$CUR_GID" ]; then
groupmod -g ${USER_GID} myuser
fi
if [ ! -z "$USER_UID" -a "$USER_UID" != "$CUR_UID" ]; then
usermod -u ${USER_UID} myuser
# fix other permissions
find / -uid ${CUR_UID} -mount -exec chown ${USER_UID}.${USER_GID} {} \;
fi
# drop access to myuser and run cmd
exec gosu myuser "$@"
And here's some lines from a relevant Dockerfile:
FROM debian:9
ARG GOSU_VERSION=1.10
# run as root, let the entrypoint drop back to myuser
USER root
# install prereq debian packages
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
apt-transport-https \
ca-certificates \
curl \
vim \
wget \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install gosu
RUN dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')" \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch" \
&& chmod 755 /usr/local/bin/gosu \
&& gosu nobody true
RUN useradd -d /home/myuser -m myuser
WORKDIR /home/myuser
# entrypoint is used to update uid/gid and then run the users command
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD /bin/sh
Then to run it, you just need to mount /home/myuser as a volume and it will adjust permissions in the entrypoint. e.g.:
$ docker build -t run-as-user .
$ docker run -it --rm -v $(pwd):/home/myuser run-as-user /bin/bash
Inside that container you can run id and ls -l to see that you have access to /home/myuser files.
Usernames are not important. What is important are the uid and gid values.
User myuser inside your container will have a uid of 1000 (the first non-root user id). Thus when you start your container and look at the container process from the host machine, you will see that the process is owned by whichever user has a uid of 1000 on the host machine.
You can override this by specifying the user once you run your container using:
docker run --user 1001 ...
Therefore, if you want the user inside the container to be able to access files on the host machine owned by a user having a uid of 1005, say, just run the container using --user 1005.
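Applied to the scripts in the question, that could look like this sketch (image name from the question; the uid/gid come from whoever invokes the script):
docker run -it --rm \
  -u "$(id -u):$(id -g)" \
  --workdir /home/myuser \
  -v "$HOME:/home/myuser" \
  myImage:ImageTag /bin/bash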
To better understand how users map between the container and host take a look at this wonderful article. https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf
First of all (https://docs.docker.com/engine/reference/builder/#arg):
Warning: It is not recommended to use build-time variables for passing
secrets like github keys, user credentials etc. Build-time variable
values are visible to any user of the image with the docker history
command.
But if you still need to do this, read https://docs.docker.com/engine/reference/builder/#arg:
A Dockerfile may include one or more ARG instructions. For example,
the following is a valid Dockerfile:
FROM busybox
ARG user1
ARG buildno
...
and https://docs.docker.com/engine/reference/builder/#user:
The USER instruction sets the user name (or UID) and optionally the
user group (or GID) to use when running the image and for any RUN, CMD
and ENTRYPOINT instructions that follow it in the Dockerfile.
USER <user>[:<group>] or
USER <UID>[:<GID>]
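A minimal sketch putting the two excerpts together (all names here are illustrative, not taken from the question):
# Dockerfile.user: create the user at build time from a build argument,
# then switch to it for subsequent instructions and the default command.
cat > Dockerfile.user <<'EOF'
FROM busybox
ARG user1=myuser
RUN adduser -D "$user1"
USER $user1
CMD ["id"]
EOF
docker build --build-arg user1=bob -t user-demo -f Dockerfile.user .
docker run --rm user-demo    # prints the uid/gid of "bob"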

Gunicorn around 100% CPU usage

Sometimes my web server just stops responding.
What I found out is that at these moments the Gunicorn processes' CPU load is around 100%. I have not changed the codebase in a while, so I don't think that is the cause.
Here's the bash script I use to run gunicorn:
#!/bin/bash
source /etc/profile.d/myapp.sh
NAME="myapp-web-services"
DJANGODIR="/home/myapp/myapp-web-services"
SOCKFILE=/tmp/myapp-web-services.sock
USER=myapp
GROUP=myapp
NUM_WORKERS=9
TIMEOUT=100
DJANGO_WSGI_MODULE=settings.wsgi
echo "Starting $NAME as `whoami`"
cd $DJANGODIR
source /home/myapp/.virtualenvs/myapp-web-services/bin/activate
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
exec newrelic-admin run-program /home/myapp/.virtualenvs/myapp-web-services/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--log-level=warning \
--timeout=$TIMEOUT \
--log-file=- \
--max-requests=1200
I have 4 CPUs in the system, so according to the documentation 9 workers should be fine.
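For reference, the rule of thumb from the Gunicorn documentation is (2 × number of cores) + 1 workers; a quick sketch of the arithmetic:
# 4 cores -> (2 * 4) + 1 = 9 workers
NUM_WORKERS=$((2 * $(nproc) + 1))
echo "$NUM_WORKERS"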
