Puppet Git Connection - puppet

Just wondering: I am currently deploying VMs using Kickstart and Puppet. Kickstart works fine and the installs go brilliantly. However, when it comes to Puppet it seems to only install a file called puppet-html.cfg. I have other scripts in my Git that I need Puppet to fetch, such as puppet-common.cfg, puppet-users.cfg and so on. Can someone please look at this script and tell me if there are any errors in it?
##
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
##
## Note: these commands are all run by "root" on the VM itself
## .. the finished file is found at /root/anaconda-ks.cfg
##
## adding "echo" lines in here doesn't actually write anything to the screen
##
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
##
## -- UMASK
# strengthen the default umask
# we do this post-deploy so all users inherit the setting after modifying
# /etc/bashrc
#
# resulting permissions are: 700 dirs, 600 files
#
sed -i 's/umask\s022/umask 077/' /etc/bashrc
sed -i 's/umask\s022/umask 077/' /etc/profile
sed -i 's/umask\s022/umask 077/' /etc/csh.cshrc
### install git and use that to begin deploying puppet configs
yum -y install git
## -- create SSH keys in root's home dir:
# DEPLOY account. This key is pushed to gitea for puppet to use - the install actually uses hal's key first
ssh-keygen -q -b 4096 -t rsa -f /root/.ssh/id_rsa_deploy -N "" -C "deploy@$(hostname -s)"
# ROOT's key...
ssh-keygen -q -b 4096 -t rsa -f /root/.ssh/id_rsa -N ""
### Add "DEPLOY" alias to the SSH CONFIG file - this will be used to pull down Puppet updates
cat << EODEP > /root/.ssh/config
Host deploy
User git
Hostname config.hostname.com
Port 22022
## port 23000
StrictHostKeyChecking no
IdentityFile /root/.ssh/id_rsa_deploy
EODEP
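## (optional) sanity-check how the "deploy" alias resolves - needs OpenSSH 6.8+ for -G
#ssh -G deploy | grep -iE 'hostname|port|identityfile'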
### Register this "deploy" account with gitea
curl -X POST "https://config.tombstones.org.uk:23000/api/v1/user/keys" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: token 2b2182bbbb7e52b3193c4c9718c6e96c372f8156" \
-d "{ \"key\": \"$(cat /root/.ssh/id_rsa_deploy.pub)\", \"read_only\": true, \"title\": \"$(hostname -s)-deploy-$(date +'%s')\"}"
## .. note: this key shows up in the list of keys for the "deploy" gitea user
## ...but also means files can be fetched over ssh using "git@deploy" as an alias
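## (optional) confirm the key registered - same endpoint, GET instead of POST:
#curl -s -H "Authorization: token 2b2182bbbb7e52b3193c4c9718c6e96c372f8156" \
#  "https://config.tombstones.org.uk:23000/api/v1/user/keys" | grep "$(hostname -s)"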
## -- REGISTER GIT HOST KEY AS KNOWN HOST
#ssh -o 'StrictHostKeyChecking no' config.tombstones.org.uk -p 22022 2>/dev/null | echo > /dev/null
ssh -o 'StrictHostKeyChecking no' deploy 2>/dev/null | echo > /dev/null
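## a more direct alternative, assuming ssh-keyscan is present on the minimal install:
#ssh-keyscan -p 22022 config.hostname.com >> /root/.ssh/known_hosts 2>/dev/null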
### -- begin Puppet common stuff (uses "deploy" key)
mkdir -p /var/lib/puppet/manifests
cd /var/lib/puppet/manifests
## -- may be an issue with this syntax, not sure...
#git clone git@deploy:/tombstones/puppet-common.git
#git clone ssh://deploy:/somegitrepo/puppet-common.git
git clone deploy:/somegitrepo/puppet-common.git
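## if the clone still only brings down one repo's files, check what the deploy
## key can actually reach before suspecting the clone syntax (paths are guesses):
#ssh -T deploy
#git ls-remote deploy:/somegitrepo/puppet-common.git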

Related

Using SSH inside docker with correct file permissions?

There are a few posts on how to use Docker + SSH. There are also posts on how to edit files mounted in a docker container, such that editing them won't cause the permissions to become root.
I'm trying to combine the 2 things, so I can SSH into a docker container and edit files without messing up their permissions.
For the correct file permissions, I use:
- /etc/passwd:/etc/passwd:ro
- /etc/group:/etc/group:ro
in my docker-compose.yml and
docker compose -f commands/dev/docker-compose.yml run \
--service-ports \
--user $(id -u) \
develop \
bash
so that when I start the docker container, my user is the same user as my local computer.
However, this breaks my SSH setup inside the Docker container:
useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
echo 'ubuntu:ubuntu' | chpasswd
# passwd -d ubuntu
apt install -y --no-install-recommends openssh-server vim-tiny sudo
# See: https://stackoverflow.com/questions/22886470/start-sshd-automatically-with-docker-container
sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
mkdir /var/run/sshd
bash -c 'install -m755 <(printf "#!/bin/sh\nexit 0") /usr/sbin/policy-rc.d'
ex +'%s/^#\zeListenAddress//g' -scwq /etc/ssh/sshd_config
ex +'%s/^#\zeHostKey .*ssh_host_.*_key//g' -scwq /etc/ssh/sshd_config
RUNLEVEL=1 dpkg-reconfigure openssh-server
ssh-keygen -A -v
update-rc.d ssh defaults
# Configure sudo
ex +"%s/^%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/g" -scwq! /etc/sudoers
Here I'm creating a user called ubuntu with password ubuntu for SSH-ing. This lets me SSH in as ubuntu@localhost using the password ubuntu.
The issue is that by mounting the /etc/passwd file into my container, I erase the ubuntu user inside the container. This means when I try to ssh in with ssh -p 9002 ubuntu@localhost, the authentication fails (9002 is what I bind port 22 in the container to on the host).
Does anyone have a solution?
Here's a first pass answer.
I can use:
useradd -rm -d /home/yourusername -s /bin/bash -g root -G sudo yourusername
instead of
useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
echo 'ubuntu:ubuntu' | chpasswd
then, I:
Run the ssh server in the container with:
su root
/usr/sbin/sshd -D -o ListenAddress=0.0.0.0 -o PermitRootLogin=yes
I can ssh into the container as root (using the root password "root", which I set with RUN echo 'root:root' | chpasswd in the Dockerfile).
Then, I can do su yourusername, to switch my user.
While this works, it is pretty annoying since I need to bake the user name into the Docker container.
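One way around baking the name in (a sketch, assuming the user is created at image build time) is to pass the name as a build argument in the Dockerfile:
ARG username=ubuntu
RUN useradd -rm -d /home/$username -s /bin/bash -g root -G sudo $username
and then build with the host user's name (dev-ssh is just a placeholder tag):
docker build --build-arg username=$(whoami) -t dev-ssh .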

How to avoid changing permissions on node_modules for a non-root user in docker

The issue with my current files is that in my entrypoint.sh file, I have to change the ownership of my entire project directory to the non-administrative user (chown -R node /node-servers). However, when a lot of npm packages are installed, this takes a lot of time. Is there a way to avoid having to chown the node_modules directory?
Background: The reason I create everything as root in the Dockerfile is because this way I can match the UID and GID of a developer's local user. This enables mounting volumes more easily. The downside is that I have to step-down from root in an entrypoint.sh file and ensure that the permissions of the entire project files have all been changed to the non-administrative user.
my docker file:
FROM node:10.24-alpine
#image already has user node and group node which are 1000, that's what we will use
# grab gosu for easy step-down from root
# https://github.com/tianon/gosu/releases
ENV GOSU_VERSION 1.14
RUN set -eux; \
\
apk add --no-cache --virtual .gosu-deps \
ca-certificates \
dpkg \
gnupg \
; \
\
dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
\
# verify the signature
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
command -v gpgconf && gpgconf --kill all || :; \
rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
\
# clean up fetch dependencies
apk del --no-network .gosu-deps; \
\
chmod +x /usr/local/bin/gosu; \
# verify that the binary works
gosu --version; \
gosu nobody true
COPY ./ /node-servers
# Setting the working directory
WORKDIR /node-servers
# Install app dependencies
# Install openssl
RUN apk add --update openssl ca-certificates && \
apk --no-cache add shadow && \
apk add libcap && \
npm install -g && \
chmod +x /node-servers/entrypoint.sh && \
setcap cap_net_bind_service=+ep /usr/local/bin/node
# Entrypoint used to load the environment and start the node server
#ENTRYPOINT ["/bin/sh"]
my entrypoint.sh
# In Prod, this may be configured with a GID already matching the container
# allowing the container to be run directly as Jenkins. In Dev, or on unknown
# environments, run the container as root to automatically correct docker
# group in container to match the docker.sock GID mounted from the host
set -x
if [ -z ${HOST_UID+x} ]; then
echo "HOST_UID not set, so we are not changing it"
else
echo "HOST_UID is set, so we are changing the container UID to match"
# get group of notadmin inside container
usermod -u ${HOST_UID} node
CUR_GID=`getent group node | cut -f3 -d: || true`
echo ${CUR_GID}
# if they don't match, adjust
if [ ! -z "$HOST_GID" -a "$HOST_GID" != "$CUR_GID" ]; then
groupmod -g ${HOST_GID} -o node
fi
if ! groups node | grep -q node; then
usermod -aG node node
fi
fi
# gosu drops from root to node user
set -- gosu node "$@"
[ -d "/node-servers" ] && chown -v -R node /node-servers
exec "$@"
You shouldn't need to run chown at all here. Leave the files owned by root (or by the host user). So long as they're world-readable the application will still be able to run; but if there's some sort of security issue or other bug, the application won't be able to accidentally overwrite its own source code.
You can then go on to simplify this even further. For most purposes, users in Unix are identified by their numeric user ID; there isn't actually a requirement that the user be listed in /etc/passwd. If you don't need to change the node user ID and you don't need to chown files, then the entrypoint script reduces to "switch user IDs and run the main script"; but then Docker can provide an alternate user ID for you via the docker run -u option. That means you don't need to install gosu either, which is a lot of the Dockerfile content.
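For example, any numeric ID works even when no matching account exists in the image:
# run as an arbitrary UID/GID with no /etc/passwd entry
docker run --rm -u 4242:4242 node:10.24-alpine id
# prints something like: uid=4242 gid=4242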
All of this means you can reduce the Dockerfile to:
FROM node:10.24-alpine
# Install OS-level dependencies (before you COPY anything in)
RUN apk add openssl ca-certificates
# (Do not install gosu or its various dependencies)
# Set (and create) the working directory
WORKDIR /node-servers
# Copy language-level dependencies in
COPY package.json package-lock.json ./
RUN npm ci
# Copy the rest of the application in
# (make sure `node_modules` is in .dockerignore)
COPY . .
# (Do not call setcap here)
# Set the main command to run
USER node
CMD npm run start
Then when you run the container, you can use Docker options to specify the current user and additional capability.
# Run in the background, as an alternate user, mounting a data
# directory, and publishing a port (shell comments cannot follow
# a line-continuation backslash, so they live up here):
docker run \
  -d \
  -u $(id -u) \
  -v "$PWD/data:/node-servers/data" \
  -p 8080:80 \
  my-image
Docker grants the NET_BIND_SERVICE capability by default so you don't need to specially set it.
This same permission setup will work if you're using bind mounts to overwrite the application code; again, without a chown call.
# Run the application code from the host, not the image, with
# node_modules in an anonymous volume that the host never overwrites:
docker run ... \
  -u $(id -u) \
  -v "$PWD:/node-servers" \
  -v /node-servers/node_modules \
  ...

When trying to install OpenFOAM inside a Docker container, how do I set the workDir to a location in my Mac user directory?

I'm using the following script to install openFOAM in a docker container:
#!/bin/sh
#------------------------------------------------------------------------------
# ========= |
# \\ / F ield | OpenFOAM: The Open Source CFD Toolbox
# \\ / O peration |
# \\ / A nd | Copyright (C) 2017-2020 OpenFOAM Foundation
# \\/ M anipulation |
#-------------------------------------------------------------------------------
# License
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# for more details.
#
# You should have received a copy of the GNU General Public License
# along with OpenFOAM. If not, see <http://www.gnu.org/licenses/>.
#
# Script
# openfoam8-macos
#
# Description
# Run script for an OpenFOAM 8 Docker image at:
# https://hub.docker.com/r/openfoam
#
#------------------------------------------------------------------------------
Script=${0##*/}
VER=8
usage () {
exec 1>&2
while [ "$#" -ge 1 ]; do echo "$1"; shift; done
cat <<USAGE
Usage: ${0##*/} [OPTIONS]
options:
-d | -dir host directory mounted (defaults to current directory)
-x | -xhost use custom X authority and give container host network
-h | -help help
-p | -paraview include ParaView in the Docker image
Launches the OpenFOAM ${VER} Docker image.
- Requires installation of docker-engine.
- Runs a "containerized" bash shell environment where the user can run OpenFOAM
and, optionally, ParaView (see below).
- The container mounts the user's file system so that case files are stored
permanently. The container mounts the current directory by default, but the
user can also specify a particular directory using the "-d" option.
- Mounting the user's HOME directory is disallowed.
- The '-xhost' option is useful when accessing the host via 'ssh -X'.
This option should only be used when strictly necessary, as it relies on the
option '--net=host' when launching the container in Docker, which will
give to the container full access to the Docker host network stack and
potentially the host's system services that rely on network communication,
making it potentially insecure.
ParaView:
Graphical applications from the Docker container require installation of the
Xquartz X server to display on the host machine. While applications such as
Gedit, Emacs and GnuPlot will run effectively using Xquartz, more intensive
OpenGL applications, in particular ParaView, can be prohibitively slow.
Therefore, the default Docker image does not contain ParaView and users can
instead install ParaView directly from the vendor and use the built-in reader
module for OpenFOAM: http://www.paraview.org/download
However, if the user wishes to include ParaView with the official OpenFOAM
reader module in their Docker container, they can do so with the "-p" option.
Example:
To store data in ${HOME}/OpenFOAM/${USER}-${VER}, the user can launch
${Script} either by:
cd ${HOME}/OpenFOAM/${USER}-${VER} && ${Script}
or
${Script} -d ${HOME}/OpenFOAM/${USER}-${VER}
Further Information:
http://openfoam.org/download/8-macos
Note:
The container user name appears as "openfoam" but it is just an alias.
USAGE
exit 1
}
DOCKER_IMAGE='openfoam/openfoam8-graphical-apps'
MOUNT_DIR=$(pwd)
CUSTOM_XAUTH=""
DOCKER_OPTIONS=""
while [ "$#" -gt 0 ]
do
case "$1" in
-d | -dir)
[ "$#" -ge 2 ] || usage "'$1' option requires an argument"
MOUNT_DIR=$2
shift 2
;;
-x | -xhost)
CUSTOM_XAUTH=yes
shift
;;
-h | -help)
usage
;;
-p | -paraview)
DOCKER_IMAGE='openfoam/openfoam8-paraview56'
shift
;;
*)
usage "Invalid option '$1'"
;;
esac
done
[ -d "$MOUNT_DIR" ] || usage "No directory exists: $MOUNT_DIR"
MOUNT_DIR=$(cd "$MOUNT_DIR" && pwd -P)
[ "$MOUNT_DIR" = "$(cd "$HOME" && pwd -P)" ] && \
usage "Mount directory cannot be the user's home directory" \
"Make a subdirectory and run from there, e.g." \
" mkdir -p ${HOME}/OpenFOAM/$(whoami)-${VER}" \
" ${Script} -d ${HOME}/OpenFOAM/$(whoami)-${VER}"
if [ -n "$CUSTOM_XAUTH" ]
then
XAUTH_PATH="${MOUNT_DIR}/.docker.xauth.$$"
touch "${XAUTH_PATH}"
# Generate a custom X-authority file that allows any hostname
xauth nlist "$DISPLAY" | sed -e 's/^..../ffff/' | \
xauth -f "$XAUTH_PATH" nmerge -
DOCKER_OPTIONS="-e XAUTHORITY=$XAUTH_PATH
-v $XAUTH_PATH:$XAUTH_PATH
--net=host"
fi
USER_ID=$(id -u 2> /dev/null)
[ -n "$USER_ID" ] || usage "Cannot determine current user ID"
GROUP_ID=$(id -g)
HOME_DIR='/home/openfoam'
echo "Launching $0"
echo "User: \"$(id -un)\" (ID $USER_ID, group ID $GROUP_ID)"
IFACES=$(ifconfig | grep ^en | cut -d: -f1)
[ "$IFACES" ] || \
usage "Cannot find a network interface for DISPLAY with ifconfig" \
"Please report an issue at http://bugs.openfoam.org" \
" providing the output of the command: ifconfig"
for I in $IFACES
do
IP=$(ifconfig "$I" | grep inet | awk '$1=="inet" {print $2}')
[ "$IP" ] && break
done
[ "$IP" ] || \
usage "Cannot find a network IP for DISPLAY with ifconfig" \
"Please report an issue at http://bugs.openfoam.org" \
" providing the output of the command: ifconfig"
xhost + "$IP"
docker run -it \
--rm \
-e DISPLAY=$IP:0 \
-u $USER_ID:$GROUP_ID \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $MOUNT_DIR:$HOME_DIR \
$DOCKER_OPTIONS \
$DOCKER_IMAGE
[ -n "$CUSTOM_XAUTH" -a -e "${XAUTH_PATH}" ] && rm "${XAUTH_PATH}"
This creates a user 'ofuser'. Why? Once I run the above script and then
xhost +local:of_v2006
docker start of_v2006
docker attach of_v2006
I end up in docker with:
[ofuser@3032b6018d82 woo]$ whoami
ofuser
I follow instructions to do a simulation and it creates files that are supposed to be available from the mac os itself, i.e. outside the docker container, but they seem to be in locations like /home/ofuser/...../ofuser/run/..., which don't seem to exist outside the container. Part of output:
[ofuser@5b3db0ac969b woo]$ mkdir $FOAM_RUN
mkdir: cannot create directory '/home/ofuser/OpenFOAM/ofuser-v2006/run': No such file or directory
So, how do I make the user 'foam' or 'woo' instead of ofuser? How did that user come to be? How can I set the workDir to be something like /Users/woo/Containers/foam? Do I have to have that /tmp/.X11-unix stuff in the setup? The display doesn't connect to the XQuartz display I've set up, etc.
I have the same setup running and yes, it is possible. It took some time for me to get it running as well, but I think I can clarify a few things.
First, about the directory question: this can be set up in the container's mount configuration. The default configuration provided was not sufficient for me either.
See the extract below (as shown by docker inspect):
"Mounts": [
...
{
"Type": "bind",
"Source": "/home/*****/OpenFOAM/run",
"Destination": "/home/openfoam",
"Mode": "Z",
"RW": true,
"Propagation": "rprivate"
},
......
This binds the "Source" directory on your machine to the "Destination" directory inside the container. The crucial option here for me was the "Z" flag, which was not present in the default configuration.
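For reference, the same bind, including the Z mode, can also be requested straight from docker run rather than by editing the container configuration (a sketch using the image from the question):
docker run -it --rm \
  -v "$HOME/OpenFOAM/run:/home/openfoam:Z" \
  openfoam/openfoam8-graphical-apps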
The X11 setup also depends on the rights set by the "Z" flag and can/should be set the same way. I am not completely sure about your actual setup, but this works fine for me:
"Mounts": [
...
{
"Type": "bind",
"Source": "/tmp/.X11-unix",
"Destination": "/tmp/.X11-unix",
"Mode": "Z",
"RW": true,
"Propagation": "rprivate"
},
......
Regarding the newly added user, I still don't get the reason behind it myself, but I can tell you I had to use a second user and it's working fine.
Normally you only use this user to log in to the container and nothing else. Under Linux, execute these steps and verify the new user is in the docker group:
$ sudo groupadd docker
$ sudo usermod -aG docker <user>
$ sudo groups <user>
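As for pointing the work directory at a Mac path: the run script in the question already accepts one through its -d option, so an untested sketch would be:
mkdir -p /Users/woo/Containers/foam
./openfoam8-macos -d /Users/woo/Containers/foam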

Set docker image username at container creation time?

I have an OpenSuse 42.3 docker image that I've configured to run a code. The image has a single user(other than root) called "myuser" that I create during the initial Image generation via the Dockerfile. I have three script files that generate a container from the image based on what operating system a user is on.
Question: Can the username "myuser" in the container be set to the username of the user that executes the container generation script?
My goal is to let a user pop into the container interactively and be able to run the code from within the container. The code is just a single binary that executes and has some IO, so I want the user's directory to be accessible from within the container so that they can navigate to a folder on their machine and run the code to generate output in their filesystem.
Below is what I have constructed so far. I tried setting the USER environment variable during the linux script's call to docker run, but that didn't change the user from "myuser" to say "bob" (the username on the host machine that started the container). The mounting of the directories seems to work fine. I'm not sure if it is even possible to achieve my goal.
Linux Container script:
username="$USER"
userID="$(id -u)"
groupID="$(id -g)"
home="${1:-$HOME}"
imageName="myImage:ImageTag"
containerName="version1Image"
docker run -it -d --name ${containerName} -u $userID:$groupID \
-e USER=${username} --workdir="/home/myuser" \
--volume="${home}:/home/myuser" ${imageName} /bin/bash
Mac Container script:
username="$USER"
userID="$(id -u)"
groupID="$(id -g)"
home="${1:-$HOME}"
imageName="myImage:ImageTag"
containerName="version1Image"
docker run -it -d --name ${containerName} \
--workdir="/home/myuser" \
--volume="${home}:/home/myuser" ${imageName} /bin/bash
Windows Container script:
ECHO OFF
SET imageName="myImage:ImageTag"
SET containerName="version1Image"
docker run -it -d --name %containerName% --workdir="/home/myuser" -v="%USERPROFILE%:/home/myuser" %imageName% /bin/bash
echo "Container %containerName% was created."
echo "Run the ./startWindowsLociStream script to launch container"
The below code has been checked into https://github.com/bmitch3020/run-as-user.
I would handle this in an entrypoint.sh that checks the ownership of /home/myuser and updates the uid/gid of the user inside your container. It can look something like:
#!/bin/sh
set -x
# get uid/gid
USER_UID=`ls -nd /home/myuser | cut -f3 -d' '`
USER_GID=`ls -nd /home/myuser | cut -f4 -d' '`
# get the current uid/gid of myuser
CUR_UID=`getent passwd myuser | cut -f3 -d: || true`
CUR_GID=`getent group myuser | cut -f3 -d: || true`
# if they don't match, adjust
if [ ! -z "$USER_GID" -a "$USER_GID" != "$CUR_GID" ]; then
groupmod -g ${USER_GID} myuser
fi
if [ ! -z "$USER_UID" -a "$USER_UID" != "$CUR_UID" ]; then
usermod -u ${USER_UID} myuser
# fix other permissions
find / -uid ${CUR_UID} -mount -exec chown ${USER_UID}.${USER_GID} {} \;
fi
# drop access to myuser and run cmd
exec gosu myuser "$@"
And here's some lines from a relevant Dockerfile:
FROM debian:9
ARG GOSU_VERSION=1.10
# run as root, let the entrypoint drop back to myuser
USER root
# install prereq debian packages
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
apt-transport-https \
ca-certificates \
curl \
vim \
wget \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install gosu
RUN dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')" \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch" \
&& chmod 755 /usr/local/bin/gosu \
&& gosu nobody true
RUN useradd -d /home/myuser -m myuser
WORKDIR /home/myuser
# entrypoint is used to update uid/gid and then run the users command
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD /bin/sh
Then to run it, you just need to mount /home/myuser as a volume and it will adjust permissions in the entrypoint. e.g.:
$ docker build -t run-as-user .
$ docker run -it --rm -v $(pwd):/home/myuser run-as-user /bin/bash
Inside that container you can run id and ls -l to see that you have access to /home/myuser files.
Usernames are not important. What is important are the uid and gid values.
User myuser inside your container will have a uid of 1000 (first non-root user id). Thus when you start your container and look at the container process from the host machine, you will see that the container is owned by whatever user having a uid of 1000 on the host machine.
You can override this by specifying the user once you run your container using:
docker run --user 1001 ...
Therefore if you want the user inside the container, to be able to access files on the host machine owned by a user having a uid of 1005 say, just run the container using --user 1005.
To better understand how users map between the container and host, take a look at this wonderful article: https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf
First of all (https://docs.docker.com/engine/reference/builder/#arg):
Warning: It is not recommended to use build-time variables for passing
secrets like github keys, user credentials etc. Build-time variable
values are visible to any user of the image with the docker history
command.
But if you still need to do this, read https://docs.docker.com/engine/reference/builder/#arg:
A Dockerfile may include one or more ARG instructions. For example,
the following is a valid Dockerfile:
FROM busybox
ARG user1
ARG buildno
...
and https://docs.docker.com/engine/reference/builder/#user:
The USER instruction sets the user name (or UID) and optionally the
user group (or GID) to use when running the image and for any RUN, CMD
and ENTRYPOINT instructions that follow it in the Dockerfile.
USER <user>[:<group>] or
USER <UID>[:<GID>]
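Putting ARG and USER together, a minimal sketch of a build-time username (all names here are illustrative):
FROM busybox
ARG user1=appuser
RUN adduser -D "$user1"
USER $user1
built with, for example:
docker build --build-arg user1=bob -t myimage .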

Getting values from properties file using shell from a specific section

I'm trying to get values from a properties file (an Ansible hosts file) using a shell script, from a specific section of the hosts file.
So I have this hosts file:
[windows]
myd-vm14945.company.net
myd-vm01431.company.net
[windows-web]
vmpweb314.company.net
[linux]
myd-vm11409.company.net
myd-vm14296.company.net
myd-vm20125.company.net
mydvm0091.company.net
And this is the script I want to run, where every server under the [linux] section should replace the parameter ${REMOTE_SERVER} in the shell script:
#add remote server to ansible host known_host file
ssh-keyscan ${REMOTE_SERVER} >> /root/.ssh/known_hosts
#remember password
sshpass -p ROOT_PASSWORD ssh root@${REMOTE_SERVER}
So that the final result will be like that:
#add remote server to ansible host known_host file
ssh-keyscan myd-vm11409.company.net >> /root/.ssh/known_hosts
ssh-keyscan myd-vm14296.company.net >> /root/.ssh/known_hosts
#remember password
sshpass -p ROOT_PASSWORD ssh root@myd-vm11409.company.net
sshpass -p ROOT_PASSWORD ssh root@myd-vm14296.company.net
And so on...for all the values under Linux.
If you really want to do it from bash, see the following awk magic, partially taken from Read certain key from certain section of ini file (sed/awk ?)
So you can create the following script, adjust for your inventory file and section and run it!
addkeys.sh
#!/bin/bash
INVENTORY="inventory.ini"
SECTION="[linux]"
I_HOSTS="$(awk -v section="$SECTION" '
    # Enable a flag when the line is like your section
    $0==section{ f=1; next }
    # For any lines with [ disable the flag
    /\[/{ f=0; next }
    # If flag is set - print the line
    f && $0' "$INVENTORY")"
for I_HOST in $I_HOSTS
do
#add remote server to ansible host known_host file
echo "ssh-keyscan "$I_HOST" >> /root/.ssh/known_hosts"
#remember password
echo "sshpass -p ROOT_PASSWORD ssh "root@$I_HOST""
done
Results, with the echoed sshpass and keyscan commands:
ssh-keyscan myd-vm11409.company.net >> /root/.ssh/known_hosts
sshpass -p ROOT_PASSWORD ssh root@myd-vm11409.company.net
ssh-keyscan myd-vm14296.company.net >> /root/.ssh/known_hosts
sshpass -p ROOT_PASSWORD ssh root@myd-vm14296.company.net
ssh-keyscan myd-vm20125.company.net >> /root/.ssh/known_hosts
sshpass -p ROOT_PASSWORD ssh root@myd-vm20125.company.net
ssh-keyscan mydvm0091.company.net >> /root/.ssh/known_hosts
sshpass -p ROOT_PASSWORD ssh root@mydvm0091.company.net
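As an aside, since the file is an Ansible inventory, ansible itself can expand the group (assuming ansible is installed), which avoids the awk parsing entirely:
I_HOSTS="$(ansible linux -i inventory.ini --list-hosts | tail -n +2 | tr -d ' ')"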
