Let's Encrypt with blacklabelops nginx on Azure

I successfully got https://github.com/blacklabelops/letsencrypt and the nginx container running on my virtual machine on Azure.
In Docker I now have two containers: one running my Go application, reachable on port 8080 from the web, and the blacklabelops nginx container, which is bound to ports 80 and 443. I followed the "Letsencrypt and Nginx" tutorial on GitHub for steps 1-3 and replaced "http://yourserver" in step 3 with the URL of my Go application on port 8080, where I can reach it via HTTP.
When I call the domain over HTTPS, nothing happens. Ports 8080, 443 and 80 are open in the Azure network security group.
Can you give me a hint?
Update: I can post the commands I performed here.
I have "my" application running on http://myapp.westeurope.cloudapp.azure.com:8080
I performed step 1:
sudo docker run -d \
-p 80:80 \
-p 443:443 \
-e "SERVER1REVERSE_PROXY_LOCATION1=/" \
-e "SERVER1REVERSE_PROXY_PASS1=172.17.0.2" \
-e "SERVER1CERTIFICATE_DNAME=/CN=Chatbot/OU=Kundenservice/O=myapp.westeurope.cloudapp.azure.com/L=Frankfurt/C=DE" \
-e "SERVER1HTTPS_ENABLED=true" \
--name nginx \
blacklabelops/nginx
Then I performed step 2:
sudo docker run --rm \
-p 80:80 \
-p 443:443 \
-v letsencrypt_certificates:/etc/letsencrypt \
-e "LETSENCRYPT_EMAIL=mymail#azure.com" \
-e "LETSENCRYPT_DOMAIN1=myapp.westeurope.cloudapp.azure.com" \
blacklabelops/letsencrypt install
I did step 3:
sudo docker volume create letsencrypt_challenges
And step 4, running the nginx container again with the certificates:
sudo docker run -d \
-p 443:443 \
-p 80:80 \
-v letsencrypt_certificates:/etc/letsencrypt \
-v letsencrypt_challenges:/var/www/letsencrypt \
-e "NGINX_REDIRECT_PORT80=true" \
-e "SERVER1REVERSE_PROXY_LOCATION1=/" \
-e "SERVER1REVERSE_PROXY_PASS1=http://myapp.westeurope.cloudapp.azure.com" \
-e "SERVER1HTTPS_ENABLED=true" \
-e "SERVER1HTTP_ENABLED=true" \
-e "SERVER1LETSENCRYPT_CERTIFICATES=true" \
-e "SERVER1CERTIFICATE_FILE=/etc/letsencrypt/live/myapp.westeurope.cloudapp.azure.com/fullchain.pem" \
-e "SERVER1CERTIFICATE_KEY=/etc/letsencrypt/live/myapp.westeurope.cloudapp.azure.com/privkey.pem" \
-e "SERVER1CERTIFICATE_TRUSTED=/etc/letsencrypt/live/myapp.westeurope.cloudapp.azure.com/fullchain.pem" \
--name nginx \
blacklabelops/nginx
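For debugging, a few quick sanity checks (nothing specific to blacklabelops; the container name and domain come from the commands above):
sudo docker ps
# startup, proxy and certificate errors from the nginx container
sudo docker logs nginx
# does the upstream Go app answer over plain HTTP?
curl -v http://myapp.westeurope.cloudapp.azure.com:8080/
# does nginx answer over HTTPS? (-k skips certificate validation)
curl -vk https://myapp.westeurope.cloudapp.azure.com/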

Related

Use iso image for virt-builder

The virt-builder command works fine with the distributions from its list (debian-11, for example), but can I somehow use my own ISO file as a source? As output, I need a qcow2 file with an OS and the packages/commands preinstalled, like this:
virt-builder debian-11 \
--size 8G \
--output /var/lib/libvirt/images/gitlab-runner-base.qcow2 \
--format qcow2 \
--hostname gitlab-runner-bullseye \
--network \
--install curl \
--run-command 'curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | bash' \
--run-command 'curl -s "https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh" | bash' \
--run-command 'useradd -m -p "" gitlab-runner -s /bin/bash' \
--install gitlab-runner,git,git-lfs,openssh-server \
--run-command "git lfs install --skip-repo" \
--ssh-inject gitlab-runner:file:/root/.ssh/id_rsa.pub \
--run-command "echo 'gitlab-runner ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers" \
--run-command "sed -E 's/GRUB_CMDLINE_LINUX=\"\"/GRUB_CMDLINE_LINUX=\"net.ifnames=0 biosdevname=0\"/' -i /etc/default/grub" \
--run-command "grub-mkconfig -o /boot/grub/grub.cfg" \
--run-command "echo 'auto eth0' >> /etc/network/interfaces" \
--run-command "echo 'allow-hotplug eth0' >> /etc/network/interfaces" \
--run-command "echo 'iface eth0 inet dhcp' >> /etc/network/interfaces"

Google Pay API split revenue/profit?

Is there any API available where I can split revenue, similar to Stripe?
# Create a PaymentIntent:
curl https://api.stripe.com/v1/payment_intents \
-u sk_test_4eC39HqLyjWDarjtT1zdp7dc: \
-d amount=10000 \
-d currency=usd \
-d "transfer_group={ORDER10}"
# Create a Transfer to a connected account (later):
curl https://api.stripe.com/v1/transfers \
-u sk_test_4eC39HqLyjWDarjtT1zdp7dc: \
-d amount=7000 \
-d currency=usd \
-d "destination={{CONNECTED_STRIPE_ACCOUNT_ID}}" \
-d "transfer_group={ORDER10}"
# Create a second Transfer to another connected account (later):
curl https://api.stripe.com/v1/transfers \
-u sk_test_4eC39HqLyjWDarjtT1zdp7dc: \
-d amount=2000 \
-d currency=usd \
-d "destination={{OTHER_CONNECTED_STRIPE_ACCOUNT_ID}}" \
-d "transfer_group={ORDER10}"
Stripe's fees are too expensive for me.
Then I looked at Google Pay and found out that it doesn't charge any fees, but it also doesn't have any feature to split revenue.
Question: Is there any payment processor that offers split revenue/profit functionality with cheap fees?

SonarQube already-scanned projects disappear after Azure Container Instance or Azure Docker-based App Service restarts

I have created/used a Dockerfile for SonarQube Community Edition, based on the one provided on Docker Hub. I have also added volumes accordingly in the Dockerfile and set the ACI/App Service restart policy to "Never", but still, whenever the ACI/App Service restarts, there is no history for the already scanned projects (and signing in to the SonarQube Azure container-based serverless instance always asks me to create a new project and a new token all over again).
Could anyone help me troubleshoot this issue? Below are the Dockerfile and a SonarQube screenshot for reference.
FROM alpine:3.11
ENV JAVA_VERSION="jdk-11.0.6+10" \
LANG='en_US.UTF-8' \
LANGUAGE='en_US:en' \
LC_ALL='en_US.UTF-8'
#
# glibc setup
#
RUN set -eux; \
apk add --no-cache --virtual .build-deps curl binutils; \
GLIBC_VER="2.31-r0"; \
ALPINE_GLIBC_REPO="https://github.com/sgerrand/alpine-pkg-glibc/releases/download"; \
GCC_LIBS_URL="https://archive.archlinux.org/packages/g/gcc-libs/gcc-libs-9.1.0-2-x86_64.pkg.tar.xz"; \
GCC_LIBS_SHA256="91dba90f3c20d32fcf7f1dbe91523653018aa0b8d2230b00f822f6722804cf08"; \
ZLIB_URL="https://archive.archlinux.org/packages/z/zlib/zlib-1%3A1.2.11-3-x86_64.pkg.tar.xz"; \
ZLIB_SHA256=17aede0b9f8baa789c5aa3f358fbf8c68a5f1228c5e6cba1a5dd34102ef4d4e5; \
curl -LfsS https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub -o /etc/apk/keys/sgerrand.rsa.pub; \
SGERRAND_RSA_SHA256="823b54589c93b02497f1ba4dc622eaef9c813e6b0f0ebbb2f771e32adf9f4ef2"; \
echo "${SGERRAND_RSA_SHA256} */etc/apk/keys/sgerrand.rsa.pub" | sha256sum -c -; \
curl -LfsS ${ALPINE_GLIBC_REPO}/${GLIBC_VER}/glibc-${GLIBC_VER}.apk > /tmp/glibc-${GLIBC_VER}.apk; \
apk add --no-cache /tmp/glibc-${GLIBC_VER}.apk; \
curl -LfsS ${ALPINE_GLIBC_REPO}/${GLIBC_VER}/glibc-bin-${GLIBC_VER}.apk > /tmp/glibc-bin-${GLIBC_VER}.apk; \
apk add --no-cache /tmp/glibc-bin-${GLIBC_VER}.apk; \
curl -LfsS ${ALPINE_GLIBC_REPO}/${GLIBC_VER}/glibc-i18n-${GLIBC_VER}.apk > /tmp/glibc-i18n-${GLIBC_VER}.apk; \
apk add --no-cache /tmp/glibc-i18n-${GLIBC_VER}.apk; \
/usr/glibc-compat/bin/localedef --force --inputfile POSIX --charmap UTF-8 "$LANG" || true; \
echo "export LANG=$LANG" > /etc/profile.d/locale.sh; \
curl -LfsS ${GCC_LIBS_URL} -o /tmp/gcc-libs.tar.xz; \
echo "${GCC_LIBS_SHA256} */tmp/gcc-libs.tar.xz" | sha256sum -c -; \
mkdir /tmp/gcc; \
tar -xf /tmp/gcc-libs.tar.xz -C /tmp/gcc; \
mv /tmp/gcc/usr/lib/libgcc* /tmp/gcc/usr/lib/libstdc++* /usr/glibc-compat/lib; \
strip /usr/glibc-compat/lib/libgcc_s.so.* /usr/glibc-compat/lib/libstdc++.so*; \
curl -LfsS ${ZLIB_URL} -o /tmp/libz.tar.xz; \
echo "${ZLIB_SHA256} */tmp/libz.tar.xz" | sha256sum -c -; \
mkdir /tmp/libz; \
tar -xf /tmp/libz.tar.xz -C /tmp/libz; \
mv /tmp/libz/usr/lib/libz.so* /usr/glibc-compat/lib; \
apk del --purge .build-deps glibc-i18n; \
rm -rf /tmp/*.apk /tmp/gcc /tmp/gcc-libs.tar.xz /tmp/libz /tmp/libz.tar.xz /var/cache/apk/*;
#
# AdoptOpenJDK/openjdk11 setup
#
RUN set -eux; \
apk add --no-cache --virtual .fetch-deps curl; \
ARCH="$(apk --print-arch)"; \
case "${ARCH}" in \
aarch64|arm64) \
ESUM='7ed04ed9ed7271528e7f03490f1fd7dfbbc2d391414bd6fe4dd80ec3bad76d30'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.6%2B10/OpenJDK11U-jre_aarch64_linux_hotspot_11.0.6_10.tar.gz'; \
;; \
ppc64el|ppc64le) \
ESUM='49231f2c36487b53141ade3f7eb291e2855138b14b1129f9acf435ea9cc0e899'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.6%2B10/OpenJDK11U-jre_ppc64le_linux_hotspot_11.0.6_10.tar.gz'; \
;; \
s390x) \
ESUM='bcb3f46cbad742b08c81e922e313549c029f436ac7d91ef3c9bed8e4049d67d2'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.6%2B10/OpenJDK11U-jre_s390x_linux_hotspot_11.0.6_10.tar.gz'; \
;; \
amd64|x86_64) \
ESUM='c5a4e69e2be0e3e5f5bb7c759960b20650967d0f571baad4a7f15b2c03bda352'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.6%2B10/OpenJDK11U-jre_x64_linux_hotspot_11.0.6_10.tar.gz'; \
;; \
*) \
echo "Unsupported arch: ${ARCH}"; \
exit 1; \
;; \
esac; \
curl -LfsSo /tmp/openjdk.tar.gz ${BINARY_URL}; \
echo "${ESUM} */tmp/openjdk.tar.gz" | sha256sum -c -; \
mkdir -p /opt/java/openjdk; \
cd /opt/java/openjdk; \
tar -xf /tmp/openjdk.tar.gz --strip-components=1; \
apk del --purge .fetch-deps; \
rm -rf /var/cache/apk/*; \
rm -rf /tmp/openjdk.tar.gz;
#
# SonarQube setup
#
ARG SONARQUBE_VERSION=8.4.0.35506
ARG SONARQUBE_ZIP_URL=https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-${SONARQUBE_VERSION}.zip
ENV JAVA_HOME=/opt/java/openjdk \
PATH="/opt/java/openjdk/bin:$PATH" \
SONARQUBE_HOME=/opt/sonarqube \
SONAR_VERSION="${SONARQUBE_VERSION}" \
SQ_DATA_DIR="/opt/sonarqube/data" \
SQ_EXTENSIONS_DIR="/opt/sonarqube/extensions" \
SQ_LOGS_DIR="/opt/sonarqube/logs" \
SQ_TEMP_DIR="/opt/sonarqube/temp"
RUN set -ex \
&& addgroup -S -g 1000 sonarqube \
&& adduser -S -D -u 1000 -G sonarqube sonarqube \
&& apk add --no-cache --virtual build-dependencies gnupg unzip curl \
&& apk add --no-cache bash su-exec ttf-dejavu \
# pub 2048R/D26468DE 2015-05-25
# Key fingerprint = F118 2E81 C792 9289 21DB CAB4 CFCA 4A29 D264 68DE
# uid sonarsource_deployer (Sonarsource Deployer) <infra@sonarsource.com>
# sub 2048R/06855C1D 2015-05-25
&& sed --in-place --expression="s?securerandom.source=file:/dev/random?securerandom.source=file:/dev/urandom?g" "${JAVA_HOME}/conf/security/java.security" \
&& for server in $(shuf -e ha.pool.sks-keyservers.net \
hkp://p80.pool.sks-keyservers.net:80 \
keyserver.ubuntu.com \
hkp://keyserver.ubuntu.com:80 \
pgp.mit.edu) ; do \
gpg --batch --keyserver "${server}" --recv-keys F1182E81C792928921DBCAB4CFCA4A29D26468DE && break || : ; \
done \
&& mkdir --parents /opt \
&& cd /opt \
&& curl --fail --location --output sonarqube.zip --silent --show-error "${SONARQUBE_ZIP_URL}" \
&& curl --fail --location --output sonarqube.zip.asc --silent --show-error "${SONARQUBE_ZIP_URL}.asc" \
&& gpg --batch --verify sonarqube.zip.asc sonarqube.zip \
&& unzip -q sonarqube.zip \
&& mv "sonarqube-${SONARQUBE_VERSION}" sonarqube \
&& rm sonarqube.zip* \
&& rm -rf ${SONARQUBE_HOME}/bin/* \
&& chown -R sonarqube:sonarqube ${SONARQUBE_HOME} \
# this 777 will be replaced by 700 at runtime (allows semi-arbitrary "--user" values)
&& chmod -R 777 "${SQ_DATA_DIR}" "${SQ_EXTENSIONS_DIR}" "${SQ_LOGS_DIR}" "${SQ_TEMP_DIR}" \
&& apk del --purge build-dependencies
COPY --chown=sonarqube:sonarqube run.sh sonar.sh ${SONARQUBE_HOME}/bin/
VOLUME ["/opt/sonarqube/data","/opt/sonarqube/extensions","/opt/sonarqube/logs","/opt/sonarqube/temp"]
WORKDIR ${SONARQUBE_HOME}
EXPOSE 9000
# These steps give the *.sh files execute permissions so they can run in the Docker-based Web App service.
RUN chmod 755 ./bin/run.sh
RUN chmod 755 ./bin/sonar.sh
ENTRYPOINT ["bin/run.sh"]
CMD ["bin/sonar.sh"]
SonarQube server: [screenshot]
According to the documentation: "By default, Azure Container Instances are stateless. If the container crashes or stops, all of its state is lost. To persist state beyond the lifetime of the container, you must mount a volume from an external store."
The documentation explains how to mount an Azure file share in Azure Container Instances.
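A minimal sketch of that approach with the Azure CLI; the resource group, storage account, file share and image names below are placeholders, and only the data directory is mounted here:
az container create \
--resource-group my-rg \
--name sonarqube-aci \
--image myregistry.azurecr.io/sonarqube-custom:latest \
--ports 9000 \
--azure-file-volume-account-name mystorageaccount \
--azure-file-volume-account-key "$STORAGE_ACCOUNT_KEY" \
--azure-file-volume-share-name sonarqube-data \
--azure-file-volume-mount-path /opt/sonarqube/data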

Docker within Docker container not mapping .ssh volume

I'm trying to map a volume from my main host machine into a Docker container, which then creates another Docker container.
My first container is created as per the below:
docker run --rm -it \
-e JOB="release" \
-e LOG_FOLDER="$logDirectory" \
-v "$logDirectory":"$logDirectory" \
-e TEMP_FOLDER="$tempFolder" \
-v ~/.ssh:/root/.ssh \
-v "$tempFolder":"$tempFolder" \
-v /var/run/docker.sock:/var/run/docker.sock \
release
The above works great, and when I /bin/bash into this container I can see that the .ssh folder has been mapped and shows the contents from my host machine.
But then, when I try to create ANOTHER Docker container within this one, using the below:
docker run --rm -it \
-e JOB=summary \
-e TEMP_FOLDER="$TEMP_FOLDER" \
-v "$TEMP_FOLDER":"$TEMP_FOLDER" \
-v ~/.ssh:/root/.ssh \
-v /var/run/docker.sock:/var/run/docker.sock \
summary /bin/bash
The container is created with no issues, but the .ssh folder content hasn't been mapped. However, the TEMP_FOLDER has been mapped correctly and shows the content from the host machine. I don't know why the .ssh folder isn't doing the same.
Is there a permission problem?
Not sure why, but the workaround I've found is below; perhaps the root user was not the same across containers.
docker run --rm -it \
-e JOB="release" \
-e LOG_FOLDER="$logDirectory" \
-v "$logDirectory":"$logDirectory" \
-e TEMP_FOLDER="$tempFolder" \
-v ~/.ssh:/root/.ssh \
-v ~/.ssh:/home/user/.ssh \
-v /home/user/.ssh:/root/.ssh \
-v "$tempFolder":"$tempFolder" \
-v /var/run/docker.sock:/var/run/docker.sock \
release
docker run --rm -it \
-e JOB=summary \
-e TEMP_FOLDER="$TEMP_FOLDER" \
-v "$TEMP_FOLDER":"$TEMP_FOLDER" \
-v /home/user/.ssh:/root/.ssh \
-v /var/run/docker.sock:/var/run/docker.sock \
summary /bin/bash
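A quick way to check what was actually mounted (the container name below is a placeholder):
# Mount sources are resolved by whichever daemon runs the container; since
# /var/run/docker.sock is the host's socket here, the Source paths shown are host paths.
docker inspect -f '{{ json .Mounts }}' <inner-container-name-or-id>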

How to run reaction commerce in the background forever?

I am using reaction commerce https://github.com/reactioncommerce/reaction.
I tried reaction &. It will eventually die.
How do I run reaction commence in background forever?
By deploying it. In short, build a Docker image:
docker build --build-arg TOOL_NODE_FLAGS="--max-old-space-size=2048" -t mycustom .
then run it:
docker run -d \
-p 80:3000 \
-e ROOT_URL="http://<your app url>" \
-e MONGO_URL="mongodb://<your mongo url>" \
-e REACTION_EMAIL="youradmin#yourdomain.com" \
-e REACTION_USER="admin-username" \
-e REACTION_AUTH="admin-password" \
mydockerhubuser/mycustom:mytag
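If the goal is just that the container keeps running and comes back after a crash or a host reboot, a restart policy can be added to the same docker run (a sketch, with the same placeholder values as above):
docker run -d \
--restart unless-stopped \
-p 80:3000 \
-e ROOT_URL="http://<your app url>" \
-e MONGO_URL="mongodb://<your mongo url>" \
-e REACTION_EMAIL="youradmin@yourdomain.com" \
-e REACTION_USER="admin-username" \
-e REACTION_AUTH="admin-password" \
mydockerhubuser/mycustom:mytag
# or attach the policy to a container that is already running:
docker update --restart unless-stopped <container-name>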
