Azure Linux container on Web App environment variables - azure

I have a configuration that runs a Node Docker container on Azure as a Web App.
Everywhere I read, it clearly says that App Settings (environment variables) set on the Web App will be injected into the container at runtime, but that is clearly not the case for me.
"When your app runs, the App Service app settings are injected into
the process as environment variables automatically"
https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#configure-environment-variables
Dockerfile:
FROM node
# Open SSH for Azure
RUN apt-get update && apt-get install -y ssh
RUN echo "root:Docker!" | chpasswd
RUN mkdir /run/sshd
COPY sshd_config /etc/ssh/
# Copy resources
WORKDIR /app
COPY package*.json ./
# Run install
RUN npm install
COPY . .
# Copy startup script and make it executable
COPY startup.sh /app
RUN ["chmod", "+x", "/app/startup.sh"]
startup.sh:
#!/bin/sh
echo "Start sshd"
/usr/sbin/sshd
echo "Start node"
node server.js
And yet (as an example), my code reads:
secret: process.env.ADDSEARCH_SECRET
And still, process.env.ADDSEARCH_SECRET clearly returns null, as do all the rest of the variables.
What am I missing?
Passing the values as build args during the Docker image build works, but I really don't want to do it that way, for several reasons.

You need to prefix the environment variable with APPSETTING_, e.g. APPSETTING_ADDSEARCH_SECRET.
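A quick way to check which name actually arrives in the container is to log both forms at startup; a minimal sketch (the variable names come from the question and this answer, and the fallback is only for diagnosis):
// server.js (sketch): prefer the APPSETTING_-prefixed name, fall back to the plain one
const secret =
  process.env.APPSETTING_ADDSEARCH_SECRET || process.env.ADDSEARCH_SECRET;
console.log('ADDSEARCH_SECRET present:', Boolean(secret));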

Related

Docker container bound to local volume doesn't update

I created a new docker container for a Node.js app.
My Dockerfile is:
FROM node:14
# app directory
WORKDIR /home/my-username/my-proj-name
# Install app dependencies
COPY package*.json ./
RUN npm install
# bundle app source
COPY . .
EXPOSE 3016
CMD ["node", "src/app.js"]
After this I ran:
docker build . -t my-username/node-web-app
Then I ran: docker run -p 8160:3016 -d -v /home/my-username/my-proj-name:/my-proj-name my-username/node-web-app
The app is successfully hosted at my-public-ip:8160.
However, any changes I make on my server do not propagate to the Docker container. For example, if I run touch test.txt on my server, I will not be able to GET /test.txt online or see it in the container. The only way I can make changes is to rebuild the image, which is quite tedious.
Did I miss something when binding the volume? How can I make it so that the changes I make locally also appear in the container?
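One thing worth noting: the bind mount target /my-proj-name does not match WORKDIR /home/my-username/my-proj-name, the directory the code was copied into and is served from, so the mounted host directory is never the one the app reads. A sketch of a run command whose mount target matches the container's working directory (this is an assumption about the fix, and the mount will shadow the node_modules installed in the image):
docker run -p 8160:3016 -d \
  -v /home/my-username/my-proj-name:/home/my-username/my-proj-name \
  my-username/node-web-app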

How to run feed-consumers and consumers multiple times for Kafka in Docker?

So I have this Dockerfile, and I want to run feed-consumers and consumers multiple times; I tried to do so below. We have a Node.js application for the feed-consumers and the consumer, and we pass user_level to it.
I just want to ask: is this the right approach?
FROM ubuntu:18.04
# Set Apt to noninteractive mode
ENV DEBIAN_FRONTEND noninteractive
# Install Helper Commands
ADD scripts/bin/* /usr/local/bin/
RUN chmod +x /usr/local/bin/*
RUN apt-install-and-clean curl \
build-essential \
git >> /dev/null 2>&1
RUN install-node-12.16.1
RUN mkdir -p /usr/src/app
COPY . /usr/src/app
WORKDIR /usr/src/app
#RUN yarn init-cache
#RUN yarn init-temp
#RUN yarn init-user
RUN yarn install
RUN yarn build
RUN node ./feedsconsumer/consumer.js user_level=0
RUN for i in {1..10}; do node ./feedsconsumer/consumer.js user_level=1; done
RUN for i in {1..20}; do node ./feedsconsumer/consumer.js user_level=2; done
RUN for i in {1..20}; do node ./feedsconsumer/consumer.js user_level=3; done
RUN for i in {1..30}; do node ./feedsconsumer/consumer.js user_level=4; done
RUN for i in {1..40}; do node ./feedsconsumer/consumer.js user_level=5; done
RUN for i in {1..10}; do node ./consumer/consumer.js; done
ENTRYPOINT ["tail", "-f", "/dev/null"]
Or is there another way to go about this?
Thanks
A container runs exactly one process. Your container's is
ENTRYPOINT ["tail", "-f", "/dev/null"]
This translates to "do absolutely nothing, in a way that's hard to override". I typically recommend using CMD over ENTRYPOINT, and the main container command shouldn't ever be an artificial "do nothing but keep the container running" command.
Before that, you're trying to RUN the process(es) that are the main container process. The RUN only happens during the image build phase, the running process(es) aren't persisted in the image, the build will block until these processes complete, and they can't connect to other containers or data stores. These are the lines you want to be the CMD.
A container only runs one process, but you can run multiple containers off the same image. It's somewhat easier to add parameters by setting environment variables than by adjusting the command line (you have to replace the whole thing), so in your code look for process.env.USER_LEVEL. Also make sure the process stays in the foreground and doesn't use a package to daemonize itself.
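For instance, the consumer entry point could read its level from the environment along these lines (a sketch; the default value and logging are assumptions, not the poster's actual code):
// feedsconsumer/consumer.js (sketch)
const userLevel = parseInt(process.env.USER_LEVEL || '0', 10);
console.log(`feed consumer starting with user_level=${userLevel}`);
// ...connect to Kafka and keep consuming in the foreground here...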
Then the final part of the Dockerfile just needs to set a default CMD that launches one copy of your application:
...
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
RUN yarn build
CMD node ./feedsconsumer/consumer.js
Now you can start a single container running this process:
docker build -t my/consumer .
docker run -d --name consumer my/consumer
And you can start multiple containers to run the whole set of them:
for user_level in `seq 5`; do
  for i in `seq 10`; do
    docker run -d \
      --name "feed-consumer-$user_level-$i" \
      -e "USER_LEVEL=$user_level" \
      my/consumer
  done
done
for i in `seq 10`; do
  docker run -d --name "consumer-$i" \
    my/consumer \
    node ./consumer/consumer.js
done
Notice this last invocation overrides the CMD to run the alternate script; this becomes a more contorted invocation if it needs to override ENTRYPOINT instead. (docker run --entrypoint node my/consumer ./consumer/consumer.js)
If you're looking ahead to cluster environments like Kubernetes, it's often straightforward to run multiple identical copies of a container, which is what you're trying to do here. A Kubernetes Deployment object has a replicas: count, and you can kubectl scale deployment feed-consumer-5 --replicas=40 to change the counts from the question, or potentially configure a HorizontalPodAutoscaler to set them dynamically based on the topic length (this last is involved, but possible and rewarding).
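For illustration, a minimal Deployment for one of the consumer levels could look roughly like this (a sketch; the names, labels, and image tag are assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: feed-consumer-5
spec:
  replicas: 40
  selector:
    matchLabels:
      app: feed-consumer
      user-level: "5"
  template:
    metadata:
      labels:
        app: feed-consumer
        user-level: "5"
    spec:
      containers:
        - name: consumer
          image: my/consumer
          env:
            - name: USER_LEVEL
              value: "5"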

Azure Container generates error '503' reason 'site unavailable'

The issue
I have built a docker image on my local machine running windows 10
running docker-compose build, docker-compose up -d and docker-compose logs -f generates the expected results (no errors)
the app runs correctly by running winpty docker container run -i -t -p 8000:8000 --rm altf1be.plotly.docker-compose:2019-12-17
I upload the docker image on a private Azure Container Registry
I deploy the web app based on the docker image Azure Portal > Container registry > Repositories > altf1be.plotly.docker-compose > v2019-12-17 > context-menu > deploy to web app
I run the web app and I get The service is unavailable
What is wrong with my method?
Thank you in advance for the time you will invest on this issue
docker-compose.yml
version: '3.7'
services:
  twikey-plot_ly_service:
    # container_name: altf1be.plotly.docker-container-name
    build: .
    image: altf1be.plotly.docker-compose:2019-12-17
    command: gunicorn --config=app/conf/gunicorn.conf.docker.staging.py app.webapp:server
    ports:
      - 8000:8000
    env_file: .env.staging
.env.staging
apiUrl=https://api.beta.alt-f1.be
authorizationUrl=/api/auth/authorization/code
serverUrl=https://dunningcashflow-api.alt-f1.be
transactionFeedUrl=/creditor/tx
api_token=ANICETOKEN
Dockerfile
# read the Dockerfile reference documentation
# https://docs.docker.com/engine/reference/builder
# build the docker
# docker build -t altf1be.plotly.docker-compose:2019-12-17 .
# https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image#use-a-docker-image-from-any-private-registry-optional
# Use the docker images used by Microsoft on Azure
FROM mcr.microsoft.com/oryx/python:3.7-20190712.5
LABEL Name=altf1.be/plotly Version=1.19.0
LABEL maintainer="abo+plotly_docker_staging#alt-f1.be"
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
# copy the code from the local drive to the docker
ADD . /code/
# non interactive front-end
ARG DEBIAN_FRONTEND=noninteractive
# update the software repository
ENV SSH_PASSWD 'root:!astrongpassword!'
RUN apt-get update && apt-get install -y \
apt-utils \
# enable SSH
&& apt-get install -y --no-install-recommends openssh-server \
&& echo "$SSH_PASSWD" | chpasswd
RUN chmod u+x /code/init_container.sh
# update the python packages and libraries
RUN pip3 install --upgrade pip
RUN pip3 install --upgrade setuptools
RUN pip3 install --upgrade wheel
RUN pip3 install -r requirements.txt
# copy sshd_config file. See https://man.openbsd.org/sshd_config
COPY sshd_config /etc/ssh/
EXPOSE 8000 2222
ENV PORT 8000
ENV SSH_PORT 2222
# install dependencies
ENV ACCEPT_EULA=Y
ENV APPENGINE_INSTANCE_CLASS=F2
ENV apiUrl=https://api.beta.alt-f1.be
ENV serverUrl=https://dunningcashflow-api.alt-f1.be
ENV DOCKER_REGISTRY altf1be.azurecr.io
ENTRYPOINT ["/code/init_container.sh"]
/code/init_container.sh
gunicorn --config=app/conf/gunicorn.conf.docker.staging.py app.webapp:server
app/conf/gunicorn.conf.docker.staging.py
# -*- coding: utf-8 -*-
workers = 1
# print("workers: {}".format(workers))
bind = '0.0.0.0'
timeout = 600
log_level = "debug"
reload = True
print(
f"workers={workers} bind={bind} timeout={timeout} --log-level={log_level} --reload={reload}"
)
(Screenshots: container settings; application settings; the running web app showing 'the service is unavailable'; Kudu showing 'the service is unavailable'; Kudu HTTP pings on port 8000 while the app was not running.)
ERROR - Container altf1be-plotly-docker_0_ee297002 for site altf1be-plotly-docker has exited, failing site start
ERROR - Container altf1be-plotly-docker_0_ee297002 didn't respond to HTTP pings on port: 8000, failing site start. See container logs for debugging.
What I can see is that you are using the wrong environment variable; it should be WEBSITES_PORT, you are missing the s after WEBSITE. You can add it and try to deploy the image again.
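For reference, that app setting can be added from the Azure CLI roughly like this (a sketch; the resource group and app name are placeholders):
az webapp config appsettings set \
  --resource-group <resource-group> \
  --name <webapp-name> \
  --settings WEBSITES_PORT=8000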
And I think Azure Web App will not set the environment variables for you the way the docker-compose file does with the env_file option. So I suggest you build the image with docker build and test it locally, setting the environment variables through -e. When the image runs well, you can push it to ACR and deploy it from ACR, also setting the environment variables in the Web App.
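For example, a local test could look roughly like this (a sketch; only some of the variables from the question's .env.staging are shown):
docker build -t altf1be.plotly.docker-compose:2019-12-17 .
docker run -p 8000:8000 \
  -e apiUrl=https://api.beta.alt-f1.be \
  -e serverUrl=https://dunningcashflow-api.alt-f1.be \
  -e api_token=ANICETOKEN \
  altf1be.plotly.docker-compose:2019-12-17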
Or you can still use a docker-compose file to deploy your image to the Web App once the image is in ACR, replacing env_file and build with environment and image.

Dockerfile not running my mock API and ReactJS app

Good morning. I am trying to run the Dockerfile below to start my mock API and my UI.
When I run them in individual terminals, I can see the UI up and running, but when I run them inside a Docker container the API doesn't start for some reason.
Can you help me with this?
# My Docker file.
FROM node:11
# Set working directory for API
RUN mkdir /usr/src/api
WORKDIR /usr/src/api
COPY ./YYY/. /usr/src/api/.
RUN npm install
RUN npm start &
# set working directory for UI
RUN mkdir /usr/src/app/
WORKDIR /usr/src/app/
COPY ./ZZZ/. /usr/src/app/.
ENV PATH /usr/src/app/node_modules/.bin:$PATH
EXPOSE 3000
RUN npm install
RUN npm start
Thanks,
Ranjith
The command npm start starts a web server that only listens on the loopback interface of the container. To fix this, in package.json, under start, add --host 0.0.0.0. This will allow you to access the app in your browser using the container IP.
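For instance, if the UI is started with webpack-dev-server (an assumption about the project), the scripts entry would look roughly like this:
"scripts": {
  "start": "webpack-dev-server --host 0.0.0.0 --port 3000"
}
With Create React App the equivalent is setting the HOST=0.0.0.0 environment variable rather than a --host flag.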

Docker cannot run on build when running container with a different user

I don't know the specifics of why the Node application does not run. Basically I added a Dockerfile to a Node.js app, and here is my Dockerfile:
FROM node:0.10-onbuild
RUN mv /usr/src/app /ghost && useradd ghost --home /ghost && \
cd /ghost
ENV NODE_ENV production
VOLUME ["/ghost/content"]
WORKDIR /ghost
EXPOSE 2368
CMD ["bash", "start.bash"]
Where start.bash looks like this:
#!/bin/bash
GHOST="/ghost"
chown -R ghost:ghost /ghost
su ghost << EOF
cd "$GHOST"
NODE_ENV={$NODE_ENV:-production} npm start
EOF
I usually run docker like so:
docker run --name ghost -d -p 80:2368 user/ghost
With that I cannot see what is going on, and I decided to run it like this:
docker run --name ghost -it -p 80:2368 user/ghost
And I got this output:
> ghost#0.5.2 start /ghost
> node index
It seems like it's starting, but when I check the status of the container with docker ps -a, it is stopped.
Here is the repo for that, but the start.bash and Dockerfile are different because I haven't committed the latest, since both are not working:
JoeyHipolito/Ghost
I managed to make it work. There is no error in the start.bash file nor in the Dockerfile; it's just that I had failed to rebuild the image.
With that said, you can check out the final Dockerfile and start.bash file in my repository:
Ghost-blog__Docker (https://github.com/joeyhipolito/ghost)
At the time I write this answer, you can see it in the feature branch, feature/dockerize.
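For anyone hitting the same thing, the fix amounts to rebuilding the image and recreating the container before running it again; a sketch using the names from the question:
docker build -t user/ghost .
docker rm -f ghost
docker run --name ghost -d -p 80:2368 user/ghost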
