How to run docker-compose in Azure Container Service and deploy to agent rather than master? - azure

I followed this article (https://blogs.msdn.microsoft.com/jcorioland/2016/04/25/create-a-docker-swarm-cluster-using-azure-container-service/#comment-1015) to set up a swarm docker host cluster. There is 1 master and 2 agents. The good point of this article is that it uses "-H 172.16.0.5:2375", which creates new containers on an "agent" rather than the "master".
My question is: if I want to make docker-compose.yml work with that, how do I do it? I have tried a command like:
docker-compose -H 172.16.0.5:2375 up
But it doesn't work. If I just use:
docker-compose up
Then the containers are created on the master host, and I can't even use the public DNS to visit the website.
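One thing that may be worth trying before touching the yml: docker-compose reads the same DOCKER_HOST variable the docker client does, and both it and the -H flag expect a tcp:// scheme in front of the address. A minimal sketch, assuming the swarm endpoint from the article:
export DOCKER_HOST=tcp://172.16.0.5:2375
docker-compose up -d
With the client pointed at the swarm manager this way, the scheduler decides which agent each container lands on.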
Here is the yml file I use for one magento and one mariadb container:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '3306:3306'
    volumes:
      - 'mariadb_data:/bitnami/mariadb'
  magento:
    image: 'bitnami/magento:latest'
    environment:
      - MAGENTO_HOST=172.16.0.5
      - MARIADB_HOST=172.16.0.5
    ports:
      - '80:80'
    volumes:
      - 'magento_data:/bitnami/magento'
      - 'apache_data:/bitnami/apache'
      - 'php_data:/bitnami/php'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  magento_data:
    driver: local
  apache_data:
    driver: local
  php_data:
    driver: local
This section was my guess based on that article:
environment:
  - MAGENTO_HOST=172.16.0.5
  - MARIADB_HOST=172.16.0.5
but the yml doesn't accept a port appended, e.g.:
environment:
  - MAGENTO_HOST=172.16.0.5:2375
  - MARIADB_HOST=172.16.0.5:2375
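Note that MAGENTO_HOST and MARIADB_HOST are application-level settings of the bitnami images, not Docker daemon endpoints, so the daemon port 2375 does not belong there. A hedged sketch of more plausible values (the agent DNS name is a placeholder): MARIADB_HOST can simply be the service name, since compose puts both containers on the same network:
environment:
  - MAGENTO_HOST=<agents-public-dns>
  - MARIADB_HOST=mariadb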
Thanks a lot!

Related

How to set configuration files using docker compose with Azure Container Instance

I have a docker compose file that I use for local development, but I need to deploy this on Azure Containers. The docker compose file I use locally is this:
version: "3.4"
services:
zipkin-all-in-one:
image: openzipkin/zipkin:latest
ports:
- "9411:9411"
otel-collector:
image: otel/opentelemetry-collector:latest
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
ports:
- "8888:8888"
- "8889:8889"
- "4317:4317"
depends_on:
- zipkin-all-in-one
seq:
image: datalust/seq:latest
environment:
- ACCEPT_EULA=Y
ports:
- "80:80"
- "5341:5341"
And this one is working fine. I could actually make Zipkin and Seq work on Azure; the problem is OpenTelemetry. It needs a configuration file to work, so I did the following:
Created an Azure file storage
Added the OpenTelemetry yaml file into this storage
Changed the Docker compose file as follows to point to this volume
version: "3.4"
services:
#zipkin here
otel-collector:
image: otel/opentelemetry-collector:latest
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- mydata:/mounts/testvolumes
ports:
- "8888:8888"
- "8889:8889"
- "4317:4317"
depends_on:
- zipkin-all-in-one
# seq here
volumes:
mydata:
driver: azure_file
driver_opts:
share_name: testvolume
storage_account_name: storageqwricc
You can see in this image that everything is running except otel.
I'm almost sure the problem is that it can't find the otel configuration file. The error that appears in the logs:
Error: Failed to start container otel-collector, Error response: to create containerd task: failed to create shim task: failed to create container ddf9fc55eee4e72cc78f2b7857ff735f7bc506763b8a7ce62bd9415580d86d07: guest RPC failure: failed to create container: failed to run runc create/exec call for container ddf9fc55eee4e72cc78f2b7857ff735f7bc506763b8a7ce62bd9415580d86d07 with exit status 1: container_linux.go:380: starting container process caused: exec: stat no such file or directory: unknown
And this is my azure file storage.
I've tried different paths and checked the file permissions. Running without OTEL works as expected.
Also tried this configuration from another thread:
volumes:
  - name: mydata
    azureFile:
      share_name: testvolume
      readOnly: false
      storageAccountName: storageqwricc
      storageAccountKey: mysecretkey
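For what it's worth, one mismatch stands out in the compose file above: the volume now mounts the share at /mounts/testvolumes, but the command still points at /etc/otel-collector-config.yaml, which no longer exists in the container; that would produce exactly the stat ... no such file or directory error in the log. A minimal sketch of aligning the two, assuming otel-collector-config.yaml sits at the root of the testvolume share:
otel-collector:
  image: otel/opentelemetry-collector:latest
  command: ["--config=/mounts/testvolumes/otel-collector-config.yaml"]
  volumes:
    - mydata:/mounts/testvolumes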

Azure WebApps + Azure File share with multicontainers

I have been trying to set up my Postgresql db with my API. My API is on Azure WebApps and the db is on an Azure file share.
The following is my docker compose file
version: '3.3'
services:
  db:
    image: postgres
    volumes:
      - db_path:/var/lib/postgresql/data
    restart: always
    environment:
      POSTGRES_USER: dbuser
      POSTGRES_PASSWORD: dbuser's_password
      POSTGRES_DB: db_1
  api:
    depends_on:
      - db
    image: <my registry>.azurecr.io/api:latest
    ports:
      - "80:80"
    restart: always
My WebApp -> Configuration -> Path mappings is below
It did get deployed, but I couldn't find my database files in my Azure fileshare's location. Now, as I redeploy, I can see the following in the Log Stream.
Can somebody show me where I went wrong? Why is my DB not in my Azure fileshare?
Thanks in advance
Edit:
Along with what Charles suggested below:
I have also updated my docker-compose.yml file as below. The changes I made: I kept my volume name the same as the mapping name and added driver_opts:
version: '3.8'
services:
  db:
    image: mysql
    volumes:
      - mysql:/var/lib/mysql
    environment:
      - MYSQL_DATABASE=dbName
      - MYSQL_ROOT_PASSWORD=MyRootPassword!
      - MYSQL_USER=dbUser_1
      - MYSQL_PASSWORD=dbUser_1'sPassword
    restart: always
  api:
    depends_on:
      - db
    entrypoint: ["./wait_for.sh", "db:3306", "-t", "3600", "--", "execute", "api"] # waiting long enough for the db server to be up and running
    image: <my registry's url>/api:latest
    ports:
      - "80:80"
    restart: always
volumes:
  mysql:
    driver: azure_file
    driver_opts:
      share_name: Azure_Share_Name
      storage_account_name: Azure_Storage_Account_Name
      storageaccountkey: Azure_Key
This is a known issue. When you mount a persistent volume with an Azure File Share, the mount path gets root as owner and group, and you can't change that. So if the application needs the path to be owned by a specific user, as in this issue, where Postgresql needs the mount path /var/lib/postgresql/data owned by postgres, it can't be achieved.
Note that the screenshot you provided of the path mapping shows a mount path that doesn't match the configuration in your YAML file. It may be a mistake.
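As a hedged sketch of how the pieces usually line up in a Web App path mapping (names here are placeholders, not taken from the screenshot): the volume name in the compose file should match the mount name configured under Path mappings, and the container path in the compose file should match that mapping's mount path:
services:
  db:
    image: mysql
    volumes:
      - mysql:/var/lib/mysql  # 'mysql' = Path mappings name, /var/lib/mysql = its mount path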

Connecting to a neo4j driver (in a Docker container) from another Docker container

Normally I'd have an instance of neo4j running in Docker; then in a script I access the driver like so:
self.driver = GraphDatabase.driver(uri="bolt://localhost:7687", auth=("username", "password"))
I'm now putting this script itself into a Docker container, but I now get the error message:
neo4j.exceptions.ServiceUnavailable: Failed to establish connection to IPv6Address(('::1', 7687, 0, 0)) (reason [Errno 99] Cannot assign requested address)
What uri (or other parameter) needs changing to access a neo4j Docker instance from another Docker container?
Within my docker-compose.yml, I have:
version: '3'
services:
neo4j:
container_name: neo4j
image: neo4j:3.5
restart: always
environment:
- NEO4J_dbms_memory_pagecache_size=2G
- dbms_connector_bolt_tls__level=OPTIONAL
- NEO4J_dbms_memory_heap_max__size=3500M
- NEO4J_AUTH=neo4j/start
volumes:
- $HOME/neo4j/data:/data
- $HOME/neo4j/logs:/logs
- $HOME/neo4j/import:/import
- $HOME/neo4j/plugins:/plugins
ports:
- 7474:7474
- 7687:7687
appgui:
container_name: appgui
image: python:3.7.3-slim
build:
context: ./APPGUI/
volumes:
- ./APPGUI/:/usr/src/app/
restart: always
environment:
PORT: 5000
FLASK_DEBUG: 1
ports:
- 80:80
depends_on:
- neo4j
I also can't access my web app (http://localhost:5000)
Your service can't connect to the localhost Neo4j because it is inside a Docker container, and there localhost points to the container itself instead of your local machine.
In this case, it is best to run both containers with docker-compose and set the depends_on option on the other container. Here is an example docker-compose.yml file from my project.
version: '3.7'
services:
  neo4j:
    image: neo4j:4.1.2
    restart: always
    hostname: neo4jngs
    container_name: neo4jngs
    ports:
      - 7474:7474
      - 7687:7687
  api:
    build:
      context: ./API
    hostname: api
    restart: always
    container_name: api
    ports:
      - 3000:3000
    depends_on:
      - neo4j
As you can see, the api container is a service that will connect to Neo4j. Now you can change the driver settings to:
self.driver = GraphDatabase.driver(uri="bolt://neo4j:7687", auth=("username", "password"))
And you are good to go.
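One more aside on the unreachable http://localhost:5000: the compose file in the question publishes ports: - 80:80 while the app's PORT is set to 5000, so port 5000 is never published to the host. A hedged guess at the fix, assuming the Flask app really listens on 5000:
appgui:
  ports:
    - 5000:5000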
I solved it, and it was actually a dumb mistake, but one that could happen to others, I guess...
In the docker-compose.yml:
build: ./APP1/
needs to be in quotes, so:
build: './APP1/'
However, Tomaž Bratanič provided me with some helpful tips toward a resolution.

Using Docker compose and volumes to persist uploaded pictures directory

I'm working on an ecommerce site. I want the ability to upload product photos from the client and save them in a directory on the server.
I implemented this feature, but then I understood that since we use Docker for our deployment, the directory in which I save the pictures won't persist. As I searched, I realized that I should use volumes and map that directory in docker compose. I'm a complete novice backend developer (I work on frontend), so I'm not really sure what I should do.
Here is the compose file:
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
If I want to store my photos in ../static/images (relative to the root of my project), what should I do, and how should I refer to this path in my backend code?
Backend is in nodejs (Nestjs).
You have to create a volume and tell docker-compose/docker stack to mount it within the container at the path you want. See the volumes section at the very end of the file and the volumes option on the nodejs service.
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
      - static-files:/home/node/static/images
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
volumes:
  static-files: {}
Doing this, an empty volume will be created to persist your data, and every time a new container mounts this path it gets the data stored in it. I would suggest using the same approach with mysql instead of saving the data on the host.
https://docs.docker.com/compose/compose-file/#volume-configuration-reference
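On the second half of the question, how to refer to this path in the backend: the Node code should write to the container-side path of the mount, /home/node/static/images above. A minimal sketch using a hypothetical UPLOAD_DIR environment variable so the path is not hard-coded in the Nest code:
nodejs:
  environment:
    - UPLOAD_DIR=/home/node/static/images
The upload handler then saves files under process.env.UPLOAD_DIR, and anything written there survives container restarts because it lives in the static-files volume.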

Connect nodejs App in one docker container to mongodb in another on Docker Swarm

I have set up a docker stack (see the docker-compose.yml below).
It has 2 Services:
- A nodejs (loopback) service running in one (or more) containers.
- A mongodb running in another container.
What HOST do I use in my node application to allow it to connect to the mongodb in another container (currently on the same node, but it could be on a different node if I set up a swarm)?
I have tried bridge, webnet, the host's IP, etc., but no luck.
Note: I am passing the host into the nodejs app with the environment variable "MONGODB_SERVICE_SERVICE_HOST".
Thanks in advance!!
version: "3"
services:
web:
image: myaccount/loopback-app:latest
deploy:
replicas: 2
restart_policy:
condition: on-failure
ports:
- "8060:3000"
environment:
MONGODB_SERVICE_SERVICE_HOST: "webnet"
depends_on:
- mongo
networks:
- webnet
mongo-database:
image: mongo
ports:
- "27017:27017"
volumes:
- "/Users/jason/workspace/mongodb/db:/data/db"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
networks:
webnet:
webnet is not the host; it is the network. The host is the service name, mongo-database.
So change webnet to mongo-database.
ENV MONGO_URL "mongodb://containerName:27017/dbName"
To check communication with mongo-database, enter the nodejs container and try to ping mongo-database:
ping mongo-database
If it works, you know that your server can communicate with your mongo instance.
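Putting that together, a minimal sketch of the relevant lines in the compose file above (note that depends_on also references mongo, while the service is actually named mongo-database):
web:
  environment:
    MONGODB_SERVICE_SERVICE_HOST: "mongo-database"
  depends_on:
    - mongo-database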
