How to avoid changing permissions on node_modules for a non-root user in docker - node.js

The issue with my current setup is that in my entrypoint.sh file I have to change the ownership of my entire project directory to the non-administrative user (chown -R node /node-servers). However, when a lot of npm packages are installed, this takes a lot of time. Is there a way to avoid having to chown the node_modules directory?
Background: The reason I create everything as root in the Dockerfile is that this way I can match the UID and GID of a developer's local user, which makes mounting volumes easier. The downside is that I have to step down from root in an entrypoint.sh file and ensure that the permissions of all the project files have been changed to the non-administrative user.
My Dockerfile:
FROM node:10.24-alpine
# image already has user node and group node, both with ID 1000; that's what we will use
# grab gosu for easy step-down from root
# https://github.com/tianon/gosu/releases
ENV GOSU_VERSION 1.14
RUN set -eux; \
    \
    apk add --no-cache --virtual .gosu-deps \
        ca-certificates \
        dpkg \
        gnupg \
    ; \
    \
    dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
    wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
    wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
    \
# verify the signature
    export GNUPGHOME="$(mktemp -d)"; \
    gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
    gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
    command -v gpgconf && gpgconf --kill all || :; \
    rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
    \
# clean up fetch dependencies
    apk del --no-network .gosu-deps; \
    \
    chmod +x /usr/local/bin/gosu; \
# verify that the binary works
    gosu --version; \
    gosu nobody true
COPY ./ /node-servers
# Setting the working directory
WORKDIR /node-servers
# Install app dependencies
# Install openssl
RUN apk add --update openssl ca-certificates && \
apk --no-cache add shadow && \
apk add libcap && \
npm install -g && \
chmod +x /node-servers/entrypoint.sh && \
setcap cap_net_bind_service=+ep /usr/local/bin/node
# Entrypoint used to load the environment and start the node server
#ENTRYPOINT ["/bin/sh"]
My entrypoint.sh:
#!/bin/sh
# In Prod, this may be configured with a GID already matching the container
# allowing the container to be run directly as Jenkins. In Dev, or on unknown
# environments, run the container as root to automatically correct docker
# group in container to match the docker.sock GID mounted from the host
set -x

if [ -z "${HOST_UID+x}" ]; then
    echo "HOST_UID not set, so we are not changing it"
else
    echo "HOST_UID is set, so we are changing the container UID to match"
    usermod -u ${HOST_UID} node
    # get the current GID of node inside the container
    CUR_GID=`getent group node | cut -f3 -d: || true`
    echo ${CUR_GID}
    # if they don't match, adjust
    if [ ! -z "$HOST_GID" -a "$HOST_GID" != "$CUR_GID" ]; then
        groupmod -g ${HOST_GID} -o node
    fi
    if ! groups node | grep -q node; then
        usermod -aG node node
    fi
fi

# gosu drops from root to the node user
set -- gosu node "$@"
[ -d "/node-servers" ] && chown -v -R node /node-servers
exec "$@"

You shouldn't need to run chown at all here. Leave the files owned by root (or by the host user). So long as they're world-readable the application will still be able to run; but if there's some sort of security issue or other bug, the application won't be able to accidentally overwrite its own source code.
You can then go on to simplify this even further. For most purposes, users in Unix are identified by their numeric user ID; there isn't actually a requirement that the user be listed in /etc/passwd. If you don't need to change the node user ID and you don't need to chown files, then the entrypoint script reduces to "switch user IDs and run the main script"; but then Docker can provide an alternate user ID for you via the docker run -u option. That means you don't need to install gosu either, which is a lot of the Dockerfile content.
All of this means you can reduce the Dockerfile to:
FROM node:10.24-alpine
# Install OS-level dependencies (before you COPY anything in)
RUN apk add --no-cache openssl ca-certificates
# (Do not install gosu or its various dependencies)
# Set (and create) the working directory
WORKDIR /node-servers
# Copy language-level dependencies in
COPY package.json package-lock.json ./
RUN npm ci
# Copy the rest of the application in
# (make sure `node_modules` is in .dockerignore)
COPY . .
# (Do not call setcap here)
# Set the main command to run
USER node
CMD npm run start
Then, when you run the container, you can use Docker options to specify the current user and any additional capabilities.
# run detached as an alternate user, mounting a data directory
# and publishing a port
docker run \
  -d \
  -u $(id -u) \
  -v "$PWD/data:/node-servers/data" \
  -p 8080:80 \
  my-image
Docker grants the NET_BIND_SERVICE capability by default so you don't need to specially set it.
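If you have restricted the default capability set and need that one back, it is also a docker run flag rather than a build step; a minimal sketch, only needed when capabilities have been dropped:
docker run \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  -p 8080:80 \
  my-image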
This same permission setup will work if you're using bind mounts to overwrite the application code; again, without a chown call.
# run the application code from the host instead of the image, with an
# anonymous volume so the image's node_modules is not hidden by the mount
docker run ... \
  -u $(id -u) \
  -v "$PWD:/node-servers" \
  -v /node-servers/node_modules \
  ...
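One caveat with this anonymous-volume trick: the node_modules volume is seeded from the image only when the container is first created, not on every start. A sketch of refreshing it after package.json changes (the container name my-container is illustrative):
docker build -t my-image .
docker rm -v my-container    # -v also removes the container's anonymous volumes
docker run --name my-container -u $(id -u) -v "$PWD:/node-servers" -v /node-servers/node_modules my-image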

Related

Run Ubuntu Docker container as User with sudo access

I'm trying to write a Dockerfile that creates a user with a home directory, who is part of the sudoers group, and that launches the container as this user.
The problem I'm facing is that, from within the container, every command needs to be prefixed with sudo, which obviously creates permission issues for every file that's created.
My reasoning behind doing this is that I want a container that mimics a clean Linux environment from which I can write install scripts for users.
Here is a copy of my Dockerfile so far:
FROM ubuntu:20.04
# Make user home
RUN mkdir -p /home/nick
# Create a nick user
RUN useradd -r -d /home/nick -m -s /sbin/nologin -c "Docker image user" nick
# Add to sudoers
RUN usermod -a -G sudo nick
# Change ownership of home directory
RUN chown -R nick:nick $HOME
# Set password
RUN echo "nick:******" | chpasswd
# Install sudo
RUN apt-get -y update && apt-get -y install sudo
ENV HOME=/home/nick
WORKDIR $HOME
USER nick
I don't understand why this doesn't work:
FROM continuumio/miniconda3
# FROM --platform=linux/amd64 continuumio/miniconda3
MAINTAINER Brando Miranda "brandojazz@gmail.com"
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ssh \
git \
m4 \
libgmp-dev \
opam \
wget \
ca-certificates \
rsync \
strace \
gcc \
rlwrap \
sudo
# https://github.com/giampaolo/psutil/pull/2103
RUN useradd -m bot
# format for chpasswd user_name:password
RUN echo "bot:bot" | chpasswd
RUN adduser bot sudo
WORKDIR /home/bot
USER bot
# CMD /bin/bash

How to add SSH access to a docker container

I have the following Dockerfile:
FROM alpine:3.10 as builder
ARG VERSION=7.12.0
ARG DISTRO=tomcat
ARG SNAPSHOT=true
ARG EE=false
ARG USER
ARG PASSWORD
RUN apk add --no-cache \
ca-certificates \
maven \
tar \
wget \
xmlstarlet
COPY settings.xml download.sh camunda-tomcat.sh camunda-wildfly.sh /tmp/
RUN /tmp/download.sh
#Enable Basic AUTH
COPY web.xml /camunda/webapps/engine-rest/WEB-INF/web.xml
##### FINAL IMAGE #####
FROM alpine:3.10
ARG VERSION=7.12.0
ENV CAMUNDA_VERSION=${VERSION}
ENV DB_DRIVER=com.microsoft.sqlserver.jdbc.SQLServerDriver
ENV DB_URL=xx
ENV DB_USERNAME=dbname#xx
ENV DB_PASSWORD=xx
ENV DB_CONN_MAXACTIVE=20
ENV DB_CONN_MINIDLE=5
ENV DB_CONN_MAXIDLE=20
ENV DB_VALIDATE_ON_BORROW=true
ENV DB_VALIDATION_QUERY="SELECT 1"
ENV SKIP_DB_CONFIG=
ENV WAIT_FOR=
ENV WAIT_FOR_TIMEOUT=120
ENV TZ=UTC
ENV DEBUG=TRUE
ENV JAVA_OPTS="-Xmx768m -XX:MaxMetaspaceSize=256m"
EXPOSE 8080 8000
# Downgrading wait-for-it is necessary until this PR is merged
# https://github.com/vishnubob/wait-for-it/pull/68
RUN apk add --no-cache \
bash \
ca-certificates \
openjdk11-jre-headless \
tzdata \
tini \
xmlstarlet \
&& wget -O /usr/local/bin/wait-for-it.sh \
"https://raw.githubusercontent.com/vishnubob/wait-for-it/a454892f3c2ebbc22bd15e446415b8fcb7c1cfa4/wait-for-it.sh" --no-check-certificate \
&& chmod +x /usr/local/bin/wait-for-it.sh
RUN addgroup -g 1000 -S camunda && \
adduser -u 1000 -S camunda -G camunda -h /camunda -s /bin/bash -D camunda
WORKDIR /camunda
USER camunda
#MSSQL SERVER JDBC DRIVER INSTALL
COPY mssql-jdbc-7.2.2.jre11.jar /camunda/lib/
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["./camunda.sh"]
COPY --chown=camunda:camunda --from=builder /camunda .
This runs a Camunda workflow engine with an external SQL PaaS database and it works perfectly fine.
However, in order to troubleshoot, I need to be able to SSH into the container.
I found how to do it on this website:
https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image
However, the problem is that both ENTRYPOINT and CMD only allow ONE command, so I am not sure how to start up SSH.
# ssh
ENV SSH_PASSWD "root:xyz"
RUN apt-get update \
&& apt-get install -y --no-install-recommends dialog \
&& apt-get update \
&& apt-get install -y --no-install-recommends openssh-server \
&& echo "$SSH_PASSWD" | chpasswd
COPY sshd_config /etc/ssh/
COPY init.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/init.sh
EXPOSE 8000 2222
# end ssh config
The Azure docs on this could be a bit better but you're almost there.
Firstly, since you're using Alpine Linux, your Dockerfile steps are a bit different from their example. Notably, you use apk add instead of apt-get install. Take a look at this guide which has examples of setting up SSH for Azure with Alpine.
RUN apk add openssh \
&& echo "root:Docker!" | chpasswd
COPY ./path/to/sshd_config /etc/ssh/
The sshd_config should look something like this:
Port 2222
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp
PidFile /etc/ssh/run/sshd.pid
HostKey /etc/ssh/ssh_host_rsa_key
The last step is to make sure that sshd gets started when the container starts up. While you're right that CMD can only take one command, that command can be a script which runs multiple things. By default, sshd forks a background process rather than running in the foreground so you should be ok. Your startup command could look like this for example:
#!/bin/sh
# ...
# Start sshd for Azure
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
mkdir -p /etc/ssh/run
/usr/sbin/sshd
# Run any additional commands like ./camunda.sh
Azure has some repositories with full sample projects including the SSH setup. Here's a good example although it is Ubuntu and your container is Alpine so it's a bit different.
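For example, the Dockerfile wiring could look like this (a sketch; startup.sh is a hypothetical name for the script above, which should end by exec-ing your real command such as ./camunda.sh):
COPY startup.sh /usr/local/bin/startup.sh
RUN chmod +x /usr/local/bin/startup.sh
EXPOSE 8080 2222
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["/usr/local/bin/startup.sh"]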
Here are some suggestions:
create a custom script that you run at container startup (via CMD) that starts the SSH daemon and your other services
(more hacky) like in this answer, simply put everything in your CMD, as sketched below
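A sketch of that hackier variant, assuming sshd has already been installed and configured as above (sshd daemonizes itself, so the shell goes on to start the main process):
CMD ["/bin/sh", "-c", "/usr/sbin/sshd && exec ./camunda.sh"]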

APK Docker Unable to lock database: Permission denied

I have the following errors on docker build
ERROR: Unable to lock database: Permission denied
ERROR: Failed to open apk database: Permission denied
The weird thing is that the first apk add section works fine:
Step 31/41 : RUN apk add --no-cache bash ca-certificates openjdk11-jre-headless tzdata
tini xmlstarlet && wget -O /usr/local/bin/wait-for-it.sh "https://raw.githubusercontent.com/vishnubob/wait-for-it/a454892f3c2ebbc22bd15e446415b8fcb7c1cfa4/wait-for-it.sh" --no-check-certificate && chmod +x /usr/local/bin/wait-for-it.sh
but the second part doesn't:
Step 36/41 : RUN apk add openssh && echo "root:Docker!" | chpasswd
---> Running in 5626e233c96d
ERROR: Unable to lock database: Permission denied
ERROR: Failed to open apk database: Permission denied
My Dockerfile is below:
FROM alpine:3.10 as builder
ARG VERSION=7.12.0
ARG DISTRO=tomcat
ARG SNAPSHOT=true
ARG EE=false
ARG USER
ARG PASSWORD
RUN apk add --no-cache \
ca-certificates \
maven \
tar \
wget \
xmlstarlet
COPY settings.xml download.sh camunda-tomcat.sh camunda-wildfly.sh /tmp/
RUN /tmp/download.sh
#Enable Basic AUTH
COPY web.xml /camunda/webapps/engine-rest/WEB-INF/web.xml
##### FINAL IMAGE #####
FROM alpine:3.10
ARG VERSION=7.12.0
ENV CAMUNDA_VERSION=${VERSION}
ENV DB_DRIVER=com.microsoft.sqlserver.jdbc.SQLServerDriver
ENV DB_URL=xxx
ENV DB_USERNAME=xx
ENV DB_PASSWORD=xx
ENV DB_CONN_MAXACTIVE=20
ENV DB_CONN_MINIDLE=5
ENV DB_CONN_MAXIDLE=20
ENV DB_VALIDATE_ON_BORROW=true
ENV DB_VALIDATION_QUERY="SELECT 1"
ENV SKIP_DB_CONFIG=
ENV WAIT_FOR=
ENV WAIT_FOR_TIMEOUT=120
ENV TZ=UTC
ENV DEBUG=TRUE
ENV JAVA_OPTS="-Xmx768m -XX:MaxMetaspaceSize=256m"
EXPOSE 8080 8000
# Downgrading wait-for-it is necessary until this PR is merged
# https://github.com/vishnubob/wait-for-it/pull/68
RUN apk add --no-cache \
bash \
ca-certificates \
openjdk11-jre-headless \
tzdata \
tini \
xmlstarlet \
&& wget -O /usr/local/bin/wait-for-it.sh \
"https://raw.githubusercontent.com/vishnubob/wait-for-it/a454892f3c2ebbc22bd15e446415b8fcb7c1cfa4/wait-for-it.sh" --no-check-certificate \
&& chmod +x /usr/local/bin/wait-for-it.sh
RUN addgroup -g 1000 -S camunda && \
adduser -u 1000 -S camunda -G camunda -h /camunda -s /bin/bash -D camunda
WORKDIR /camunda
USER camunda
#MSSQL SERVER JDBC DRIVER INSTALL
COPY mssql-jdbc-7.2.2.jre11.jar /camunda/lib/
# ssh
RUN apk add openssh \
&& echo "root:Docker!" | chpasswd
COPY sshd_config /etc/ssh/
EXPOSE 80 2222
# end ssh config
ENTRYPOINT ["/sbin/tini", "--"]
CMD "./camunda.sh" && "/usr/sbin/sshd"
COPY --chown=camunda:camunda --from=builder /camunda .
USER camunda
...
RUN apk add openssh
The camunda user can't install apk packages; it doesn't have permission to do so. Install all packages before switching the user, or switch the user later, just before setting CMD, depending on what you want to do. Or install sudo, add a NOPASSWD entry for camunda to the sudoers file, and do it with sudo. Either way, make sure you have permission to run apk and the chpasswd that follows.
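A minimal sketch of that reordering, keeping every install step while the build is still running as root:
# still root here, so package installation works
RUN apk add --no-cache openssh \
    && echo "root:Docker!" | chpasswd
COPY sshd_config /etc/ssh/
# ...anything else that needs root...
# drop privileges only once nothing else needs root
USER camunda
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["./camunda.sh"]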

Is it possible to map a user inside the docker container to an outside user?

I know that one can use the --user option with Docker to run a container as a certain user, but in my case, my Docker image has a user inside it, let us call that user manager. Now is it possible to map that user to a user on host? For example, if there is a user john on the host, can we map john to manager?
Yes, you can set the user from the host, but you should modify your Dockerfile a bit to deal with the runtime user.
FROM alpine:latest
# Override user name at build. If build-arg is not passed, will create user named `default_user`
ARG DOCKER_USER=default_user
# Create a group and user
RUN addgroup -S $DOCKER_USER && adduser -S $DOCKER_USER -G $DOCKER_USER
# Tell docker that all future commands should run as this user
USER $DOCKER_USER
Now, build the Docker image:
docker build --build-arg DOCKER_USER=$(whoami) -t docker_user .
The new user in Docker will be the host user:
docker run --rm docker_user ash -c "whoami"
Another way is to pass host user ID and group ID without creating the user in Dockerfile.
export UID=$(id -u)
export GID=$(id -g)
docker run -it \
--user $UID:$GID \
--workdir="/home/$USER" \
--volume="/etc/group:/etc/group:ro" \
--volume="/etc/passwd:/etc/passwd:ro" \
--volume="/etc/shadow:/etc/shadow:ro" \
alpine ash -c "whoami"
You can further read more about the user in docker here and here.
Another way is through an entrypoint.
Example
This example relies on gosu which is present in recent Debian derivatives, not yet in Alpine 3.13 (but is in edge).
You could run this image as follows:
docker run --rm -it \
--env UID=$(id -u) \
--env GID=$(id -g) \
-v "$(pwd):$(pwd)" -w "$(pwd)" \
imagename
tree
.
├── Dockerfile
└── files/
└── entrypoint
Dockerfile
FROM ...
# [...]
ARG DOCKER_USER=default_user
RUN addgroup "$DOCKER_USER" \
&& adduser "$DOCKER_USER" -G "$DOCKER_USER"
RUN wget -O- https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64 |\
install /dev/stdin /usr/local/bin/gosu
COPY files /
RUN chmod 0755 /entrypoint \
&& sed "s/\$DOCKER_USER/$DOCKER_USER/g" -i /entrypoint
ENTRYPOINT ["/entrypoint"]
files/entrypoint
#!/bin/sh
set -e
set -u

: "${UID:=0}"
: "${GID:=${UID}}"

if [ "$#" = 0 ]
then set -- "$(command -v bash 2>/dev/null || command -v sh)" -l
fi

if [ "$UID" != 0 ]
then
    usermod -u "$UID" "$DOCKER_USER" 2>/dev/null && {
        groupmod -g "$GID" "$DOCKER_USER" 2>/dev/null ||
        usermod -a -G "$GID" "$DOCKER_USER"
    }
    set -- gosu "${UID}:${GID}" "$@"
fi

exec "$@"
Notes
UID is normally a read-only variable in bash, but it will work as expected if set by the docker --env flag
I chose gosu for its simplicity, but you could make it work with su or sudo; it would need more configuration, however
if you don't want to specify two --env switches, you could do something like --env user="$(id -u):$(id -g)" and, in the entrypoint, uid=${user%:*} gid=${user#*:}. Note that at this point the UID variable would be read-only in bash, which is why I switched to lower case; the rest of the adaptation is left to the reader, as sketched below
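A sketch of that single-variable variant (the variable name user is illustrative):
docker run --rm -it --env user="$(id -u):$(id -g)" imagename
# and in the entrypoint:
uid=${user%:*}
gid=${user#*:}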
There is no simple solution that handles all use cases. Solving these problems is continuous work, a part of life in the containerized world.
There is no magical parameter that you could add to a docker exec or docker run invocation and reliably cause the containerized software to no longer run into permissions issues during operations on host-mapped volumes. Unless your mapped directories are chmod-0777-and-come-what-may (DON'T), you will be running into permissions issues and you will be solving them as you go, and this is the task you should try becoming efficient at, instead of trying to find a miracle once-and-forever solution that will never exist.

Set docker image username at container creation time?

I have an OpenSuse 42.3 docker image that I've configured to run a code. The image has a single user(other than root) called "myuser" that I create during the initial Image generation via the Dockerfile. I have three script files that generate a container from the image based on what operating system a user is on.
Question: Can the username "myuser" in the container be set to the username of the user that executes the container generation script?
My goal is to let a user pop into the container interactively and be able to run the code from within the container. The code is just a single binary that executes and has some IO, so I want the user's directory to be accessible from within the container so that they can navigate to a folder on their machine and run the code to generate output in their filesystem.
Below is what I have constructed so far. I tried setting the USER environment variable during the Linux script's call to docker run, but that didn't change the user from "myuser" to, say, "bob" (the username on the host machine that started the container). The mounting of the directories seems to work fine. I'm not sure if it is even possible to achieve my goal.
Linux Container script:
username="$USER"
userID="$(id -u)"
groupID="$(id -g)"
home="${1:-$HOME}"
imageName="myImage:ImageTag"
containerName="version1Image"
docker run -it -d --name ${containerName} -u $userID:$groupID \
-e USER=${username} --workdir="/home/myuser" \
--volume="${home}:/home/myuser" ${imageName} /bin/bash
Mac Container script:
username="$USER"
userID="$(id -u)"
groupID="$(id -g)"
home="${1:-$HOME}"
imageName="myImage:ImageTag"
containerName="version1Image"
docker run -it -d --name ${containerName} \
--workdir="/home/myuser" \
--volume="${home}:/home/myuser" ${imageName} /bin/bash
Windows Container script:
ECHO OFF
SET imageName="myImage:ImageTag"
SET containerName="version1Image"
docker run -it -d --name %containerName% --workdir="/home/myuser" -v="%USERPROFILE%:/home/myuser" %imageName% /bin/bash
echo "Container %containerName% was created."
echo "Run the ./startWindowsLociStream script to launch container"
The below code has been checked into https://github.com/bmitch3020/run-as-user.
I would handle this in an entrypoint.sh that checks the ownership of /home/myuser and updates the uid/gid of the user inside your container. It can look something like:
#!/bin/sh
set -x

# get the uid/gid of the owner of /home/myuser (the mounted volume)
USER_UID=`ls -nd /home/myuser | cut -f3 -d' '`
USER_GID=`ls -nd /home/myuser | cut -f4 -d' '`

# get the current uid/gid of myuser
CUR_UID=`getent passwd myuser | cut -f3 -d: || true`
CUR_GID=`getent group myuser | cut -f3 -d: || true`

# if they don't match, adjust
if [ ! -z "$USER_GID" -a "$USER_GID" != "$CUR_GID" ]; then
    groupmod -g ${USER_GID} myuser
fi
if [ ! -z "$USER_UID" -a "$USER_UID" != "$CUR_UID" ]; then
    usermod -u ${USER_UID} myuser
    # fix other permissions
    find / -uid ${CUR_UID} -mount -exec chown ${USER_UID}.${USER_GID} {} \;
fi

# drop access to myuser and run cmd
exec gosu myuser "$@"
And here are some lines from a relevant Dockerfile:
FROM debian:9
ARG GOSU_VERSION=1.10
# run as root, let the entrypoint drop back to myuser
USER root
# install prereq debian packages
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
apt-transport-https \
ca-certificates \
curl \
vim \
wget \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install gosu
RUN dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')" \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch" \
&& chmod 755 /usr/local/bin/gosu \
&& gosu nobody true
RUN useradd -d /home/myuser -m myuser
WORKDIR /home/myuser
# entrypoint is used to update uid/gid and then run the users command
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD /bin/sh
Then to run it, you just need to mount /home/myuser as a volume, and it will adjust permissions in the entrypoint, e.g.:
$ docker build -t run-as-user .
$ docker run -it --rm -v $(pwd):/home/myuser run-as-user /bin/bash
Inside that container you can run id and ls -l to see that you have access to /home/myuser files.
Usernames are not important. What is important are the uid and gid values.
The user myuser inside your container will have a uid of 1000 (the first non-root user id). Thus, when you start your container and look at the container process from the host machine, you will see that the process is owned by whatever host user has a uid of 1000.
You can override this by specifying the user once you run your container using:
docker run --user 1001 ...
Therefore, if you want the user inside the container to be able to access files on the host machine owned by a user with a uid of, say, 1005, just run the container using --user 1005.
To better understand how users map between the container and the host, take a look at this wonderful article: https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf
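For example, to have the containerized process act as your current host user when working on a mounted directory (a sketch):
docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v "$PWD:/data" \
  alpine ls -l /data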
First of all (https://docs.docker.com/engine/reference/builder/#arg):
Warning: It is not recommended to use build-time variables for passing
secrets like github keys, user credentials etc. Build-time variable
values are visible to any user of the image with the docker history
command.
But if you still need to do this, read https://docs.docker.com/engine/reference/builder/#arg:
A Dockerfile may include one or more ARG instructions. For example,
the following is a valid Dockerfile:
FROM busybox
ARG user1
ARG buildno
...
and https://docs.docker.com/engine/reference/builder/#user:
The USER instruction sets the user name (or UID) and optionally the
user group (or GID) to use when running the image and for any RUN, CMD
and ENTRYPOINT instructions that follow it in the Dockerfile.
USER <user>[:<group>] or
USER <UID>[:<GID>]
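Putting ARG and USER together, a minimal sketch (BusyBox's adduser -D creates the user without assigning a password):
FROM busybox
ARG user1=appuser
RUN adduser -D "$user1"
USER $user1
CMD ["sh"]
which you would build with something like docker build --build-arg user1="$(whoami)" -t myimage .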