I am trying to bring up the ChirpStack Docker Compose stack in the Azure cloud:
docker login azure
docker context create aci myacicontext
docker context use myacicontext
docker compose --file .\docker-compose.yml up
I got this error:
cannot use ACI volume, required driver is "azure_file", found ""
What am I doing wrong?
Update:
Content of docker-compose.yml:
version: "3"
services:
chirpstack-network-server:
image: chirpstack/chirpstack-network-server:3
volumes:
- ./configuration/chirpstack-network-server:/etc/chirpstack-network-server
chirpstack-application-server:
image: chirpstack/chirpstack-application-server:3
ports:
- 8080:8080
volumes:
- ./configuration/chirpstack-application-server:/etc/chirpstack-application-server
chirpstack-gateway-bridge:
image: chirpstack/chirpstack-gateway-bridge:3
ports:
- 1700:1700/udp
volumes:
- ./configuration/chirpstack-gateway-bridge:/etc/chirpstack-gateway-bridge
chirpstack-geolocation-server:
image: chirpstack/chirpstack-geolocation-server:3
volumes:
- ./configuration/chirpstack-geolocation-server:/etc/chirpstack-geolocation-server
postgresql:
image: postgres:9.6-alpine
environment:
- POSTGRES_PASSWORD=root
volumes:
- ./configuration/postgresql/initdb:/docker-entrypoint-initdb.d
- postgresqldata:/var/lib/postgresql/data
redis:
image: redis:5-alpine
volumes:
- redisdata:/data
mosquitto:
image: eclipse-mosquitto:2
ports:
- 1883:1883
volumes:
- ./configuration/eclipse-mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf
volumes:
postgresqldata:
redisdata:
The error already tells you what is wrong: when you use an Azure file share as the persistent volume, the volume must use the azure_file driver. The volumes section should look like this:
volumes:
  postgresqldata:
    driver: azure_file
    driver_opts:
      share_name: myfileshare
      storage_account_name: mystorageaccount
  redisdata:
    driver: azure_file
    driver_opts:
      share_name: myfileshare
      storage_account_name: mystorageaccount
See more details about using an Azure file share with ACI through Docker Compose here.
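Before deploying again, the storage account and file share named in driver_opts have to exist. A minimal sketch with the Azure CLI, assuming the placeholder names mystorageaccount, myfileshare, and a resource group called myresourcegroup:

# create the storage account (name must be globally unique)
az storage account create \
  --name mystorageaccount \
  --resource-group myresourcegroup \
  --location westeurope \
  --sku Standard_LRS

# create the file share that the azure_file driver will mount
az storage share create \
  --name myfileshare \
  --account-name mystorageaccount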
Related
I have a docker-compose file:
version: "3.5"
services:
service1:
image: serv1image
container_name: service1
networks:
- local
depends_on:
- service2
- service3
service2:
image: serv2image
container_name: service2
networks:
- local
ports:
- "27017:27017"
service3:
image: serv3image
container_name: service3
networks:
- local
ports:
- "2000:2000"
service4:
image: serv4image
container_name: service4
networks:
- local
ports:
- "8090:8090"
depends_on:
- service2
- service3
networks:
local-network:
driver: bridge
name: local
I initially started service1 using
docker-compose -f docker-compose.yaml up --no-recreate -d service1
Obviously it started service1 and also booted up service2 and service3, as they are its dependencies.
Later I needed to start service4, which I did with the following command.
docker-compose -f docker-compose.yaml up --no-recreate -d service4
But unfortunately this also tried to recreate service2 and service3, since they are its dependencies, despite the --no-recreate flag, and I got a conflict error stating:
Cannot create container for service service4: Conflict. The container name "service2" is already in use by container
Why is this happening, and is there any way to avoid this behaviour?
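One option that may avoid this (a sketch, assuming service2 and service3 are already running, since --no-deps skips starting the depends_on services entirely):

# start only service4, without touching its dependencies
docker-compose -f docker-compose.yaml up --no-deps --no-recreate -d service4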
I'm attempting to deploy a PostgreSQL Docker container in Azure. To that end, I created a storage account and a file share in Azure to store a Docker volume.
Also, I created the Docker Azure context and set it as default.
To create the volume, I run:
docker volume create volpostgres --storage-account mystorageaccount
I can verify that the volume was created with docker volume ls.
ID                            DESCRIPTION
mystorageaccount/volpostgres  Fileshare volpostgres in mystorageaccount storage account
But when I try to deploy with docker compose up, I get
could not find volume source "volpostgres"
This is the YAML file that does not work. How can I fix it so that it points to the volume correctly?
version: '3.7'

services:
  postgres:
    image: postgres:13.1
    container_name: cont_postgres
    networks:
      db:
        ipv4_address: 22.225.124.121
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: xxxxx
    volumes:
      - volpostgres:/var/lib/postgresql/data

networks:
  db:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 22.225.124.121/24

volumes:
  volpostgres:
    name: mystorageaccount/volpostgres
You can follow the steps here. The volumes section of the docker-compose file needs to be changed to this:
volumes:
  volpostgres:
    driver: azure_file
    driver_opts:
      share_name: myfileshare
      storage_account_name: mystorageaccount
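With that change in place, a quick way to verify everything lines up (a sketch, reusing the placeholder names above) is to confirm the share shows up as a volume in the ACI context and then deploy:

# make sure the ACI context is active
docker context use myacicontext

# the file share should be listed as <storageaccount>/<share>
docker volume ls

# deploy; the azure_file driver mounts the share into the container
docker compose up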
My docker-compose file is as below:
cassandra-db:
  container_name: cassandra-db
  image: cassandra:4.0-beta1
  ports:
    - "9042:9042"
  restart: on-failure
  volumes:
    - ./out/cassandra_data:/var/lib/cassandra
  environment:
    - CASSANDRA_CLUSTER_NAME='cassandra-cluster'
    - CASSANDRA_NUM_TOKENS=256
    - CASSANDRA_RPC_ADDRESS=0.0.0.0
  networks:
    - my-network

client-service:
  container_name: client-service
  image: client-service
  environment:
    - SPRING_PROFILES_ACTIVE=dev
  ports:
    - 8087:8087
  links:
    - cassandra-db
  networks:
    - my-network

networks:
  my-network:
I use the DataStax Java driver to connect to Cassandra from the client service, which also runs inside Docker.
CqlSession.builder()
    .addContactEndPoint(new DefaultEndPoint(
        InetSocketAddress.createUnresolved("cassandra-db", 9042)))
    .withKeyspace(CassandraConstant.KEY_SPACE_NAME.getValue())
    .build()
I use the DNS name to connect, but it does not connect. I also tried the Docker IP of the Cassandra container, and adding depends_on as well.
Is there any issue with my docker-compose file?
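A couple of checks that may help narrow this down (a sketch; tool availability depends on the images, though cqlsh does ship with the official cassandra image):

# does the service name resolve from the client container?
docker-compose exec client-service nslookup cassandra-db

# is Cassandra actually accepting CQL connections yet?
# (it can take a minute or more after the container starts)
docker-compose exec cassandra-db cqlsh -e "DESCRIBE KEYSPACES"

# watch the Cassandra logs while the client retries
docker-compose logs -f cassandra-db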
I am running an application with multiple containers, as below:
feeder - a simple Node.js container from the node:alpine image
api - a Node.js container with Express.js from the node:alpine image
ui-app - a React app container from the node:alpine image
I am trying to call the api service from ui-app, and I am getting the error below:
[screenshot: console log showing the error]
I am not sure what is causing the problem.
If I access the service as http://192.168.99.100/ping it works (that is my docker-machine default IP),
but if I use the container name, like http://api:3200/ping, it does not work. Please help.
Below is my docker-compose file:
version: '3'
services:
  feeder:
    build: ./feeder
    container_name: feeder
    tty: true
    depends_on:
      - redis
    links:
      - redis
    environment:
      - IS_FROM_DOCKER=true
    ports:
      - "3100:3100"
    networks:
      - hmdanet
  api:
    build: ./api
    container_name: api
    tty: true
    depends_on:
      - feeder
    links:
      - redis
    environment:
      - IS_FROM_DOCKER=true
    ports:
      - "3200:3200"
    networks:
      hmdanet:
        aliases:
          - "hmda-api"
  ui-app:
    build: ./ui-app
    container_name: ui-app
    tty: true
    depends_on:
      - api
    links:
      - api
    environment:
      - IS_FROM_DOCKER=true
    ports:
      - "3000:3000"
    networks:
      - hmdanet
  redis:
    image: redis:latest
    ports:
      - '6379:6379'
    networks:
      - hmdanet
networks:
  hmdanet:
    driver: bridge
You can only use a service name as a hostname when you are inside a container. In your case it is your browser making the call, and the browser does not know what api is. In your web app, you should have an environment variable such as a base URL set to the IP of your Docker machine or to localhost.
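To illustrate the difference (a sketch; curl/wget availability depends on the host and image, and 192.168.99.100 is the docker-machine IP from the question):

# from the host or the browser the service name does not resolve,
# so use the docker-machine IP and the published port
curl http://192.168.99.100:3200/ping

# from inside another container on the same compose network,
# the service name resolves via Docker's embedded DNS
docker-compose exec ui-app wget -qO- http://api:3200/ping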
How can I configure my DigitalOcean boxes to have the correct firewall settings?
I've followed the official guide for getting DigitalOcean and Docker containers working together.
I have 3 Docker nodes that I can see when I run docker-machine ls. I have created a manager node and joined the other nodes as workers. However, if I attempt to visit the URL of a node, the connection hangs. This setup works locally.
Here is the docker-compose file that I am using for production.
version: "3"
services:
api:
image: "api"
command: rails server -b "0.0.0.0" -e production
depends_on:
- db
- redis
deploy:
replicas: 3
resources:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
env_file:
- .env-prod
networks:
- apinet
ports:
- "3000:3000"
client:
image: "client"
depends_on:
- api
deploy:
restart_policy:
condition: on-failure
env_file:
- .env-prod
networks:
- apinet
- clientnet
ports:
- "4200:4200"
- "35730:35730"
db:
deploy:
placement:
constaints: [node.role == manager]
restart_policy:
condition: on-failure
env_file: .env-prod
image: mysql
ports:
- "3306:3306"
volumes:
- ~/.docker-volumes/app/mysql/data:/var/lib/mysql/data
redis:
deploy:
placement:
constaints: [node.role == manager]
restart_policy:
condition: on-failure
image: redis:alpine
ports:
- "6379:6379"
volumes:
- ~/.docker-volumes/app/redis/data:/var/lib/redis/data
nginx:
image: app_nginx
deploy:
restart_policy:
condition: on-failure
env_file: .env-prod
depends_on:
- client
- api
networks:
- apinet
- clientnet
ports:
- "80:80"
networks:
apinet:
driver: overlay
clientnet:
driver: overlay
I'm pretty confident that the problem is with the firewall settings. I'm not sure, however, which ports need to be open. I've consulted this guide.
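For reference, swarm mode needs a well-known set of ports open between the nodes, in addition to the ports published by the stack. A hedged sketch of the rules, assuming ufw is the firewall on each droplet:

# Docker Swarm control and data plane (apply on every node)
sudo ufw allow 2377/tcp   # cluster management traffic (manager nodes)
sudo ufw allow 7946/tcp   # node-to-node communication
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # overlay network (VXLAN) traffic

# ports published in the compose file above
sudo ufw allow 80/tcp
sudo ufw allow 3000/tcp
sudo ufw allow 4200/tcp

sudo ufw reload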