Healthcheck for Cosmos DB emulator in Docker - Azure

I've got a docker-compose file with the following container defined:
cosmosdb:
  build:
    context: .
    dockerfile: Dockerfile.cosmosdb
  environment:
    - NODE_TLS_REJECT_UNAUTHORIZED=0
    - AZURE_COSMOS_EMULATOR_IP_ADDRESS_OVERRIDE=xxx.xxx.xxx.xxx
  ports:
    - "8081:8081"
    - "8900:8900"
    - "8901:8901"
    - "8979:8979"
    - "10250:10250"
    - "10251:10251"
    - "10252:10252"
    - "10253:10253"
    - "10254:10254"
    - "10255:10255"
    - "10256:10256"
    - "10350:10350"
  networks:
    my-network:
      ipv4_address: xxx.xxx.xxx.xxx
And a Dockerfile:
FROM mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
RUN apt update && apt upgrade -y
RUN apt install curl -y
CMD /usr/local/bin/cosmos/start.sh
My question is, what is a good way to add a healthcheck here?
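One option, sketched here on the assumption that the emulator serves its self-signed certificate at https://localhost:8081/_explorer/emulator.pem once it has finished starting up (and relying on the curl installed by the Dockerfile above), is a curl-based healthcheck in the compose file:

healthcheck:
  # -k accepts the emulator's self-signed certificate, -f turns HTTP errors into a non-zero exit
  test: ["CMD", "curl", "-fk", "https://localhost:8081/_explorer/emulator.pem"]
  interval: 30s
  timeout: 10s
  retries: 10
  start_period: 60s

The emulator is slow to start, hence the generous start_period and retries; depending on your Compose version, dependent services can then gate on it with depends_on: condition: service_healthy.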

Related

Docker compose connection refused

I am getting a 'connection refused' error when trying to hit my NodeJS server running in a Docker container. If I try to cURL the server from the host machine, I get the error "curl: (56) Recv failure: Connection reset by peer".
When I run sudo docker-compose up -d it starts up my services and running sudo docker ps -a shows the following:
CONTAINER ID   IMAGE                COMMAND                  CREATED              STATUS              PORTS                    NAMES
e56b30b1de9c   mongo:4.1.8-xenial   "docker-entrypoint.s…"   About a minute ago   Up About a minute   27017/tcp                db
0d82b0a881e5   nodejs               "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:3000->3000/tcp   nodejs
The containers are running on a server with IP '192.168.0.24' so I try to hit an endpoint in my 'nodejs' app via '192.168.0.24:3000/items' but this results in a connection refused error.
docker-compose.yml
version: '3'
services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: nodejs
    container_name: nodejs
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      - MONGO_HOSTNAME=db
      - MONGO_PORT=$MONGO_PORT
      - MONGO_DB=$MONGO_DB
    ports:
      - "3000:3000"
    volumes:
      - .:/home/node/app
      - /home/node/app/node_modules
    networks:
      - app-network
    command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
  db:
    image: mongo:4.1.8-xenial
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
      - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
    volumes:
      - dbdata:/data/db
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  dbdata:
  node_modules:
Dockerfile
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 3000
CMD [ "npm", "app.js" ]

Unable to authenticate to company LDAP using flask-ldap3-login in Docker container

I'm trying to connect to my company's LDAP server to authenticate users in my Flask web app. I'm constantly getting this error:
2020-06-22 09:55:07,459 ERROR flask_ldap3_login MainThread : no active server available in server pool after maximum number of tries
I also tried to telnet to the LDAP server from the web container, and no connection can be made. What do I need to do to allow my containers, running on our network, to access LDAP?
I tried enabling SSL and added the certs, but still no success.
docker-compose file
# docker-compose.yml
version: '3'
services:
  db:
    build: ./application/db
    container_name: dqm_db
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    container_name: dqm_web
    restart: always
    ports:
      - 5000:5000
      - 389:389
      - 636:636
    env_file:
      - .env
    depends_on:
      - db
    links:
      - redis
    volumes:
      - .:/data-quality-management
  nginx:
    build: ./nginx
    container_name: dqm_nginx
    restart: always
    ports:
      - 80:80
    depends_on:
      - web
  redis:
    container_name: dqm_redis
    env_file:
      - .env
    image: redis:latest
    restart: always
    command: redis-server
    ports:
      - 6379:6379
    volumes:
      - .:/data-quality-management
  worker:
    build: .
    hostname: worker
    container_name: dqm_worker
    entrypoint: celery
    command: -A application.run_celery:celery worker --loglevel=info
    links:
      - redis
      - web
    depends_on:
      - web
      - redis
    env_file:
      - .env
    volumes:
      - .:/data-quality-management
volumes:
  postgres_data:
Dockerfile:
FROM python:3.7-buster
RUN apt-get update
RUN apt-get install python-dev -y
RUN apt-get install libsasl2-dev -y
RUN apt-get install libldap2-dev -y
RUN apt-get install libssl-dev -y
RUN apt-get clean -y
WORKDIR /data-quality-management
ENV PYTHONUNBUFFERED 1
COPY requirements.txt .
EXPOSE 5000
EXPOSE 389
EXPOSE 636
COPY *.crt /etc/ssl/certs/
RUN update-ca-certificates
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . /data-quality-management
CMD gunicorn -w $WEB_CONCURRENCY -b $WEB_BIND wsgi:app
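One detail worth flagging in the compose file above: the 389:389 and 636:636 entries under ports publish inbound ports on the host; they are not needed for outbound connections from the container to a company LDAP server. A minimal connectivity probe from inside the web container, sketched with a placeholder hostname since the real LDAP host isn't shown, could look like:

# placeholder hostname and port; substitute your LDAP server
docker exec -it dqm_web python -c "import socket; socket.create_connection(('ldap.example.com', 636), timeout=5)"

If that raises a timeout or refusal, the problem is network reachability (DNS, routing, firewall) rather than flask-ldap3-login itself.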

Flask App runs locally but errors out on DigitalOcean

I think it has to do with my docker-compose.yml file. The app builds locally, but when I run "docker-compose up -d" in my DigitalOcean Droplet I get this error:
ERROR: Couldn't find env file: /root/.env
The following is my docker-compose.yml file.
version: '2'
services:
  postgres:
    image: 'postgres:9.5'
    container_name: postgress
    env_file:
      - '.env'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - '5432:5432'
    networks:
      - db_nw
  redis:
    image: 'redis:3.0-alpine'
    container_name: redis
    command: redis-server --requirepass pass123456word
    volumes:
      - 'redis:/var/lib/redis/data'
    ports:
      - '6379:6379'
  website:
    restart: always
    build: .
    container_name: website
    command: >
      gunicorn -c "python:config.gunicorn" --reload "app.app:create_app()"
    env_file:
      - '.env'
    volumes:
      - '.:/app'
    ports:
      - 8000:8000
    expose:
      - 8000
    networks:
      - db_nw
      - web_nw
    depends_on:
      - postgres
    links:
      - celery
      - redis
      - postgres
  celery:
    build: .
    container_name: celery
    command: celery worker -B -l info -A app.blueprints.contact.tasks
    env_file:
      - '.env'
    volumes:
      - '.:/app'
  nginx:
    restart: always
    build: ./nginx
    image: 'nginx:1.13'
    container_name: nginx
    volumes:
      - /www/static
      - .:/app
    ports:
      - 80:80
    networks:
      - web_nw
    links:
      - website
    depends_on:
      - website
networks:
  db_nw:
    driver: bridge
  web_nw:
    driver: bridge
volumes:
  postgres:
  redis:
My Dockerfile
FROM python:3.7.5-slim-buster
RUN apt-get update \
&& apt-get install -qq -y \
build-essential libpq-dev --no-install-recommends
ENV INSTALL_PATH /app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install --upgrade pip -r requirements.txt
COPY . .
RUN pip install --editable .
CMD gunicorn -c "python:config.gunicorn" "app.app:create_app()"
Is something wrong with the volumes in my docker-compose.yml file? Or am I doing something weird in my Dockerfile with the ENV, where it's hard-coded to a local machine rather than the "root" directory on DigitalOcean?
I'm new to hosting docker images so this is my first go at something like this. Thanks!
When you access a Droplet, you're generally running as root.
You appear to have copied the docker-compose.yml correctly to the Droplet but you have not copied the .env file on which it depends to the Droplet's /root/.env.
If you copy the .env file to /root/.env on the Droplet, it should work.
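For example, assuming you run this from the local project directory and that the compose file lives in /root on the Droplet (your_droplet_ip is a placeholder):

# copy the local .env next to the compose file on the Droplet
scp .env root@your_droplet_ip:/root/.env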

Time out Error on build in Gitlab

I am using the following .gitlab-ci.yml file:
stages:
  - build

build:
  image: php:7.2-apache
  stage: build
  tags:
    - runner01
  script:
    - echo -e "\n Build da aplicação. \n"
  before_script:
    - apt-get update -yqq
    - apt-get install git -yqq
    - docker-php-ext-install mysqli
    - curl -sS https://getcomposer.org/installer | php
    - php composer.phar install
I installed GitLab using Docker, but when the job clones the repository it fails with a timeout error. I could not find anything to help me solve this problem.
My docker-compose file:
version: '3'
services:
  Gitlab_CI:
    container_name: Gitlab_CI
    image: 'gitlab/gitlab-ce:latest'
    networks:
      - 'DockerLAN'
    restart: always
    hostname: 'gitlab.docker'
    ports:
      - '81:80'
    volumes:
      - '/srv/gitlab/config:/etc/gitlab'
      - '/srv/gitlab/logs:/var/log/gitlab'
      - '/srv/gitlab/data:/var/opt/gitlab'
  Gitlab_Runner:
    container_name: Gitlab_Runner
    image: 'gitlab/gitlab-runner:latest'
    networks:
      - 'DockerLAN'
    restart: always
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock'
      - '/srv/gitlab-runner/config:/etc/gitlab-runner'
networks:
  DockerLAN:
    driver: bridge
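The actual error output isn't included, so this is only a hedged observation: jobs run by a Docker-based runner start in fresh containers, and those containers must be able to reach the GitLab instance at the URL it advertises for cloning. With GitLab published on host port 81, one common adjustment, assuming the Docker executor and the mounted config path above, is to set an explicit clone URL the job containers can actually reach (here <host-ip> is a placeholder for the Docker host's address):

# /srv/gitlab-runner/config/config.toml
[[runners]]
  clone_url = "http://<host-ip>:81"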

docker-compose up didn't finish npm install

I'm new to docker-compose and I'd like to use it for my current development.
After I ran docker-compose up -d, everything started up and looked good. But my Node.js application wasn't installed correctly: it seems npm install didn't complete, and I had to docker exec -it api bash into the container to run npm i manually.
Here's my docker-compose file:
version: '2'
services:
  app:
    build: .
    container_name: sparrow-api-1
    volumes:
      - .:/usr/src/app
      - $HOME/.aws:/root/.aws
    working_dir: /usr/src/app
    environment:
      - SPARROW_EVENT_QUEUE_URL=amqp://guest:guest@rabbitmq:5672
      - REDIS_URL=redis
      - NSOLID_APPNAME=sparrow-api
      - NSOLID_HUB=registry:4001
      - NODE_ENV=local
      - REDIS_PORT=6379
      - NODE_PORT=8081
      - SOCKET_PORT=8002
      - ELASTICSEARCH_URL=elasticsearch
      - STDIN_OPEN=${STDIN_OPEN}
    networks:
      - default
    depends_on:
      - redis
      - rabbitmq
      - elasticsearch
    expose:
      - "8081"
    ports:
      - "8081:8081"
    command: bash docker-command.sh
  redis:
    container_name: redis
    image: redis:3.0.7-alpine
    networks:
      - default
    ports:
      - "6379:6379"
  rabbitmq:
    container_name: rabbitmq
    image: rabbitmq:3.6.2-management
    networks:
      - default
    ports:
      - "15672:15672"
  elasticsearch:
    container_name: elasticsearch
    image: elasticsearch:1.5.2
    networks:
      - default
    ports:
      - "9200:9200"
      - "9300:9300"
  registry:
    image: nodesource/nsolid-registry
    container_name: registry
    networks:
      - default
    ports:
      - 4001:4001
  proxy:
    image: nodesource/nsolid-hub
    container_name: hub
    networks:
      - default
    environment:
      - REGISTRY=registry:4001
      - NODE_DEBUG=nsolid
  console:
    image: nodesource/nsolid-console
    container_name: console
    networks:
      - default
    environment:
      - NODE_DEBUG=nsolid
      - NSOLID_APPNAME=console
      - NSOLID_HUB=registry:4001
    command: --hub hub:9000
    ports:
      - 3000:3000
# don't forget to create network as well
networks:
  default:
    driver: bridge
Here's my docker-command.sh
#!/usr/bin/env bash
# link the node modules to the root directory of our app, if not exists
modules_link="/usr/src/app/node_modules"
if [ ! -d "${modules_link}" ]; then
  ln -s /usr/lib/app/node_modules ${modules_link}
fi
if [ -n "$STDIN_OPEN" ]; then
  # if we want to be interactive with our app container, it needs to run in
  # the background
  tail -f /dev/null
else
  nodemon
fi
Here's my Dockerfile
FROM nodesource/nsolid:latest
RUN mkdir /usr/lib/app
WORKDIR /usr/lib/app
COPY [".npmrc", "package.json", "/usr/lib/app/"]
RUN npm install \
&& npm install -g mocha \
&& npm install -g nodemon \
&& rm -rf package.json .npmrc
In your Dockerfile you are running npm install without any arguments first:
RUN npm install \
&& npm install -g mocha \
This will cause a non-zero exit code and, due to the &&, the following commands are not executed. It should also fail the build, though, so I'm guessing you already had a working image and added the npm instructions later. To rebuild the image, use docker-compose build or simply docker-compose up --build. By default, docker-compose up only builds the image if it does not exist yet.
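For reference, the rebuild commands mentioned above, using the service name app from the compose file:

# rebuild only the app image
docker-compose build app
# or rebuild and (re)create the containers in one step
docker-compose up -d --build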
