Docker within Docker container not mapping .ssh volume - linux

I'm trying to map a volume from my host machine into a Docker container, which then creates another Docker container.
My first container is created as per the below:
docker run --rm -it \
-e JOB="release" \
-e LOG_FOLDER="$logDirectory" \
-v "$logDirectory":"$logDirectory" \
-e TEMP_FOLDER="$tempFolder" \
-v ~/.ssh:/root/.ssh \
-v "$tempFolder":"$tempFolder" \
-v /var/run/docker.sock:/var/run/docker.sock \
release
The above works great, and when I /bin/bash into this container I can see that the .ssh folder has been mapped and shows the contents from my host machine.
But then when I try to create ANOTHER Docker container within this one, using the below:
docker run --rm -it \
-e JOB=summary \
-e TEMP_FOLDER="$TEMP_FOLDER" \
-v "$TEMP_FOLDER":"$TEMP_FOLDER" \
-v ~/.ssh:/root/.ssh \
-v /var/run/docker.sock:/var/run/docker.sock \
summary /bin/bash
The container is created with no issues, but the .ssh folder content hasn't been mapped. However, the TEMP_FOLDER has been mapped correctly and shows the content from the host machine. I don't know why the .ssh folder isn't doing the same.
Is there a permissions problem?

I'm not sure why, but the workaround I've found is below; perhaps the root user was not the same across containers.
docker run --rm -it \
-e JOB="release" \
-e LOG_FOLDER="$logDirectory" \
-v "$logDirectory":"$logDirectory" \
-e TEMP_FOLDER="$tempFolder" \
-v ~/.ssh:/root/.ssh \
-v ~/.ssh:/home/user/.ssh \
-v /home/user/.ssh:/root/.ssh \
-v "$tempFolder":"$tempFolder" \
-v /var/run/docker.sock:/var/run/docker.sock \
release
docker run --rm -it \
-e JOB=summary \
-e TEMP_FOLDER="$TEMP_FOLDER" \
-v "$TEMP_FOLDER":"$TEMP_FOLDER" \
-v /home/user/.ssh:/root/.ssh \
-v /var/run/docker.sock:/var/run/docker.sock \
summary /bin/bash
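A likely explanation (my assumption, not something confirmed in the original post): because /var/run/docker.sock is shared, the inner docker run talks to the host's Docker daemon, so -v source paths are resolved on the host's filesystem, not inside the first container. Inside that container ~ expands to /root, and /root/.ssh probably doesn't exist on the host, while TEMP_FOLDER works because it carries the real host path. A sketch of the same idea, passing the host path through an environment variable (HOST_SSH_DIR is a name made up for illustration):
# On the host: hand the host-side path of ~/.ssh to the first container
docker run --rm -it \
-e HOST_SSH_DIR="$HOME/.ssh" \
-v ~/.ssh:/root/.ssh \
-v /var/run/docker.sock:/var/run/docker.sock \
release
# Inside the first container: use the host path as the bind-mount source
docker run --rm -it \
-v "$HOST_SSH_DIR":/root/.ssh \
-v /var/run/docker.sock:/var/run/docker.sock \
summary /bin/bash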

Related

Use iso image for virt-builder

The virt-builder command works fine with the distributions from its list (debian-11, for example), but can I somehow use my own ISO file as a source? As output, I need a qcow2 file with an OS and preinstalled commands, like this:
virt-builder debian-11 \
--size 8G \
--output /var/lib/libvirt/images/gitlab-runner-base.qcow2 \
--format qcow2 \
--hostname gitlab-runner-bullseye \
--network \
--install curl \
--run-command 'curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | bash' \
--run-command 'curl -s "https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh" | bash' \
--run-command 'useradd -m -p "" gitlab-runner -s /bin/bash' \
--install gitlab-runner,git,git-lfs,openssh-server \
--run-command "git lfs install --skip-repo" \
--ssh-inject gitlab-runner:file:/root/.ssh/id_rsa.pub \
--run-command "echo 'gitlab-runner ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers" \
--run-command "sed -E 's/GRUB_CMDLINE_LINUX=\"\"/GRUB_CMDLINE_LINUX=\"net.ifnames=0 biosdevname=0\"/' -i /etc/default/grub" \
--run-command "grub-mkconfig -o /boot/grub/grub.cfg" \
--run-command "echo 'auto eth0' >> /etc/network/interfaces" \
--run-command "echo 'allow-hotplug eth0' >> /etc/network/interfaces" \
--run-command "echo 'iface eth0 inet dhcp' >> /etc/network/interfaces"

When I execute the docker command I get an "invalid reference format" error

How can I fix this problem?
docker: invalid reference format.
I have executed this command:
docker run -d --rm -p 3000:8000 -env port=8000 --name feedback-app -v feedback:/app/feedback -v "c:/workspace/d/data-volumes-07-added-dockerignore/data-volumes-07-added-dockerignore:/app:ro" -v /app/node_modules -v /app/temp feedback-node:env
I had forgotten something in my command: -env should be --env. Docker parses -env as the short flag -e with the value nv, so port=8000 is then read as the image name, which is what triggers the "invalid reference format" error.
With --env the command works:
docker run -d --rm -p 3000:8000 --env port=8000 --name feedback-app -v feedback:/app/feedback -v "c:/workspace/d/data-volumes-07-added-dockerignore/data-volumes-07-added-dockerignore:/app:ro" -v /app/node_modules -v /app/temp feedback-node:env
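For reference, a minimal reproduction of the parsing issue (my own sketch, not from the original post), using the public alpine image:
# '-env' is read as '-e nv', so 'port=8000' becomes the image reference and the run fails
docker run --rm -env port=8000 alpine true
# '--env' is the long form of '-e', so 'alpine' is the image reference as intended
docker run --rm --env port=8000 alpine true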

letsencrypt with nginx blacklabelops on azure

I successfully got https://github.com/blacklabelops/letsencrypt and nginx running on my virtual machine on Azure.
In Docker I now have two containers: one with my Go application running and reachable on port 8080 on the web, and one with the blacklabelops nginx container, which is bound to ports 80 and 443. I followed the "Letsencrypt and Nginx" tutorial on GitHub for steps 1-3 and replaced "http://yourserver" in step 3 with the URL of my Go application and port 8080, where I can reach it via HTTP.
When I call the domain over HTTPS, nothing happens. Ports 8080, 443 and 80 are open in the Azure network security group.
Can you give me a hint?
Update: I can post the commands I performed here.
I have "my" application running on http://myapp.westeurope.cloudapp.azure.com:8080
I performed step 1:
sudo docker run -d \
-p 80:80 \
-p 443:443 \
-e "SERVER1REVERSE_PROXY_LOCATION1=/" \
-e "SERVER1REVERSE_PROXY_PASS1=172.17.0.2" \
-e "SERVER1CERTIFICATE_DNAME=/CN=Chatbot/OU=Kundenservice/O=myapp.westeurope.cloudapp.azure.com/L=Frankfurt/C=DE" \
-e "SERVER1HTTPS_ENABLED=true" \
--name nginx \
blacklabelops/nginx
Then I performed step 2:
sudo docker run --rm \
-p 80:80 \
-p 443:443 \
-v letsencrypt_certificates:/etc/letsencrypt \
-e "LETSENCRYPT_EMAIL=mymail#azure.com" \
-e "LETSENCRYPT_DOMAIN1=myapp.westeurope.cloudapp.azure.com" \
blacklabelops/letsencrypt install
I did step 3:
sudo docker volume create letsencrypt_challenges
And step 4:
sudo docker run -d \
-p 443:443 \
-p 80:80 \
-v letsencrypt_certificates:/etc/letsencrypt \
-v letsencrypt_challenges:/var/www/letsencrypt \
-e "NGINX_REDIRECT_PORT80=true" \
-e "SERVER1REVERSE_PROXY_LOCATION1=/" \
-e "SERVER1REVERSE_PROXY_PASS1=http://myapp.westeurope.cloudapp.azure.com" \
-e "SERVER1HTTPS_ENABLED=true" \
-e "SERVER1HTTP_ENABLED=true" \
-e "SERVER1LETSENCRYPT_CERTIFICATES=true" \
-e "SERVER1CERTIFICATE_FILE=/etc/letsencrypt/live/myapp.westeurope.cloudapp.azure.com/fullchain.pem" \
-e "SERVER1CERTIFICATE_KEY=/etc/letsencrypt/live/myapp.westeurope.cloudapp.azure.com/privkey.pem" \
-e "SERVER1CERTIFICATE_TRUSTED=/etc/letsencrypt/live/myapp.westeurope.cloudapp.azure.com/fullchain.pem" \
--name nginx \
blacklabelops/nginx
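Not part of the original post, but a generic way to narrow down where such a setup fails is to check the container logs and the TLS endpoint directly (the container name nginx matches the commands above):
# Check whether nginx started and whether the certificate paths were picked up
sudo docker logs nginx
# Check what the server actually presents on port 443 (-k skips certificate verification)
curl -vk https://myapp.westeurope.cloudapp.azure.com/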

How to run reaction commerce in the background forever?

I am using reaction commerce https://github.com/reactioncommerce/reaction.
I tried reaction &. It will eventually die.
How do I run reaction commence in background forever?
by deploying
or in short, build a docker image:
docker build --build-arg TOOL_NODE_FLAGS="--max-old-space-size=2048" -t mycustom .
then run it:
docker run -d \
-p 80:3000 \
-e ROOT_URL="http://<your app url>" \
-e MONGO_URL="mongodb://<your mongo url>" \
-e REACTION_EMAIL="youradmin@yourdomain.com" \
-e REACTION_USER="admin-username" \
-e REACTION_AUTH="admin-password" \
mydockerhubuser/mycustom:mytag
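One detail the command above does not cover: -d only detaches the container; it does not bring it back after a crash or a host reboot. Adding a restart policy (a standard docker run flag, not anything specific to Reaction Commerce) keeps it running "forever" in the sense of the question. The same command again with that one extra flag:
docker run -d \
--restart unless-stopped \
-p 80:3000 \
-e ROOT_URL="http://<your app url>" \
-e MONGO_URL="mongodb://<your mongo url>" \
-e REACTION_EMAIL="youradmin@yourdomain.com" \
-e REACTION_USER="admin-username" \
-e REACTION_AUTH="admin-password" \
mydockerhubuser/mycustom:mytag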

Internal cmake error while building trilinos on ubuntu 32bit

I'm trying to build the Trilinos library on a 32-bit Ubuntu virtual machine. I wrote the following configuration script:
cmake \
-D CMAKE_INSTALL_PREFIX:FILEPATH=./ \
-D Trilinos_ENABLE_ALL_OPTIONAL_PACKAGES:BOOL=OFF \
-D Trilinos_ENABLE_Anasazi:BOOL=ON \
-D Trilinos_ENABLE_Epetra:BOOL=ON \
-D Trilinos_ENABLE_EpetraExt:BOOL=ON \
-D Trilinos_ENABLE_Triutils:BOOL=ON \
-D Trilinos_ENABLE_Belos:BOOL=ON \
-D Trilinos_ENABLE_Ifpack:BOOL=ON \
-D Trilinos_ENABLE_TESTS:BOOL=ON \
-D TPL_BLAS_LIBRARIES=/usr/lib/libblas.so.3 \
-D TPL_LAPACK_LIBRARIES=/usr/lib/liblapack.so.3 \
-D CMAKE_VERBOSE_MAKEFILE:BOOL=ON \
-D Trilinos_ENABLE_DEBUG:BOOL=ON \
-D CMAKE_BUILD_TYPE:STRING=DEBUG \
-D Trilinos_ENABLE_EXPLICIT_INSTANTIATION:BOOL=ON \
../
When I execute it with the ksh command in the terminal, I get the following error:
CMake Error: CMAKE_Fortran_COMPILER not set, after EnableLanguage
It appears you do not have a Fortran compiler installed. That is why CMake cannot set CMAKE_Fortran_COMPILER on its own and asks you to specify one manually.
Since you are using Ubuntu, I would recommend gfortran from the GCC suite. If you install the compiler from the repository, CMake should pick it up on its own.
You can install the compiler using
sudo apt-get install gfortran
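If CMake still cannot locate a Fortran compiler after installing gfortran, you can also point it at the compiler explicitly by adding one more -D line to the cmake call above. CMAKE_Fortran_COMPILER is the standard CMake variable for this (a general CMake option, not something from the original answer; the path assumes the default Ubuntu install location):
-D CMAKE_Fortran_COMPILER=/usr/bin/gfortran \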
