Docker volume binds file as directory on remote Linux server

I have Docker installed on Linux on a remote server. I use SSH to connect to Docker from my host machine running Windows 10. To deploy my app to Docker I use docker-compose v3, but I have a serious problem: when I try to mount a volume with a file from my host machine, the file turns into a directory in my container.
Here's my docker-compose.yml:
version: "3.8"
services:
zeebe:
container_name: zeebe
image: camunda/zeebe
environment:
- ZEEBE_LOG_LEVEL=debug
ports:
- "26500:26500"
- "9600:9600"
- "5701:5701"
volumes:
- zeebe_data:/usr/local/zeebe/data
- type: bind
source: ./ZeebeServer/lib/application.yml
target: /usr/local/zeebe/config/application.yaml
read_only: true
depends_on:
- elasticsearch
networks:
- backend
operate:
container_name: operate
image: camunda/operate
ports:
- "8080:8080"
depends_on:
- zeebe
- elasticsearch
volumes:
- operate_data:/usr/local/operate
- type: bind
source: C:/Users/Aset/IdeaProjects/ABC-Store/ZeebeServer/operate/application.yml
target: /usr/local/operate/config/application.yml
read_only: true
networks:
- backend
elasticsearch:
container_name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.7.1
ports:
- "9200:9200"
environment:
- discovery.type=single-node
- cluster.name=elasticsearch
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
volumes:
- zeebe_elasticsearch_data:/usr/share/elasticsearch/data
networks:
- backend
And this is my error after running docker-compose up -d:
ERROR: for zeebe Cannot start service zeebe: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/c/Users/User/IdeaProjects/ABC-Store/ZeebeServer/lib/application.yml\\\" to rootfs \\\"/var/lib/docker/overlay2/fec3c1f3ad8748e2bf3aa5fdf30558434c2474ecac8b7fdbad2fdbb27df24415/merged\\\" at \\\"/var/lib/docker/overlay2/fec3c1f3ad8748e2bf3aa5fdf30558434c2474ecac8b7fdbad2fdbb27df24415/merged/usr/local/zeebe/config/application.yaml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type

I understand that you are trying to mount a local folder from your Windows host (C:/Users/Aset/IdeaProjects/...) into a Docker container.
But this is not possible (that easily), since Docker can only mount folders local to the Docker host, i.e. the remote server you're connecting to via SSH.
So in order to mount a folder from your local Windows host into the container, you'd first have to make that folder available on the Docker host/remote server somehow (e.g. with a Samba share), or copy its contents to the remote server and change the volume source to the path of the folder on the Docker host, as sketched below.
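For example, a minimal sketch of the copy approach (the remote path /home/user/abc-store and the user@remote-server address are illustrative assumptions, not values from the question):

# on the Windows machine: copy the project's config files to the remote docker host
scp -r ./ZeebeServer user@remote-server:/home/user/abc-store/ZeebeServer

# then, in docker-compose.yml on the remote host, bind-mount the file from its path on that host
volumes:
  - type: bind
    source: /home/user/abc-store/ZeebeServer/lib/application.yml
    target: /usr/local/zeebe/config/application.yaml
    read_only: true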
P.S. In case you're wondering why this doesn't work while docker build with files from your Windows host does: docker build uploads the build context, i.e. the current directory, to the Docker host, making its contents available on the server.
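That also suggests an alternative, shown as a rough sketch below: bake the config file into a custom image, so the file reaches the remote host as part of the build context (the Dockerfile is hypothetical and assumes it sits next to docker-compose.yml):

# Dockerfile (hypothetical)
FROM camunda/zeebe
# the build context is uploaded to the docker host, so this local file is available at build time
COPY ZeebeServer/lib/application.yml /usr/local/zeebe/config/application.yaml

The zeebe service would then use build: . instead of image: camunda/zeebe, and the bind mount for application.yml could be dropped.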

Related

How to set configuration files using docker compose with Azure Container Instance

I have a docker compose file that I use for local development, but I need to deploy this on Azure Container Instances. The docker compose file I use locally is this:
version: "3.4"
services:
zipkin-all-in-one:
image: openzipkin/zipkin:latest
ports:
- "9411:9411"
otel-collector:
image: otel/opentelemetry-collector:latest
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
ports:
- "8888:8888"
- "8889:8889"
- "4317:4317"
depends_on:
- zipkin-all-in-one
seq:
image: datalust/seq:latest
environment:
- ACCEPT_EULA=Y
ports:
- "80:80"
- "5341:5341"
And this one is working fine. I could actually make Zipkin and Seq work on Azure; the problem is OpenTelemetry. It needs a configuration file to work, so I did the following:
Created an Azure file storage
Added the OpenTelemetry yaml file to this storage
Changed the Docker compose file as follows to point to this volume
version: "3.4"
services:
#zipkin here
otel-collector:
image: otel/opentelemetry-collector:latest
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- mydata:/mounts/testvolumes
ports:
- "8888:8888"
- "8889:8889"
- "4317:4317"
depends_on:
- zipkin-all-in-one
# seq here
volumes:
mydata:
driver: azure_file
driver_opts:
share_name: testvolume
storage_account_name: storageqwricc
You can see in this image that everything is running except otel.
I'm almost sure the problem is that it can't find the otel configuration file. The error that appears in the logs:
Error: Failed to start container otel-collector, Error response: to create containerd task: failed to create shim task: failed to create container ddf9fc55eee4e72cc78f2b7857ff735f7bc506763b8a7ce62bd9415580d86d07: guest RPC failure: failed to create container: failed to run runc create/exec call for container ddf9fc55eee4e72cc78f2b7857ff735f7bc506763b8a7ce62bd9415580d86d07 with exit status 1: container_linux.go:380: starting container process caused: exec: stat no such file or directory: unknown
And this is my Azure file storage.
I've tried different paths and checked the file permissions. Running without OTEL works as expected.
I also tried this configuration from another thread:
volumes:
  - name: mydata
    azureFile:
      share_name: testvolume
      readOnly: false
      storageAccountName: storageqwricc
      storageAccountKey: mysecretkey
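One mismatch worth noting (an observation based on the paths above, not a confirmed fix): the collector is still started with --config=/etc/otel-collector-config.yaml, while the Azure file share is mounted at /mounts/testvolumes, so the config file would only be visible under that mount. A minimal sketch of the adjusted service, assuming the yaml file sits at the root of the share:

otel-collector:
  image: otel/opentelemetry-collector:latest
  # point the collector at the config file inside the mounted share
  command: ["--config=/mounts/testvolumes/otel-collector-config.yaml"]
  volumes:
    - mydata:/mounts/testvolumes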

Can't connect to Postgis running in docker from Geoserver running in another docker container

I used kartoza's docker images for Geoserver and Postgis and started them in two docker containers using the provided docker-compose.yml:
version: '2.1'
volumes:
  geoserver-data:
  geo-db-data:
services:
  db:
    image: kartoza/postgis:12.0
    volumes:
      - geo-db-data:/var/lib/postgresql
    ports:
      - "25434:5432"
    env_file:
      - docker-env/db.env
    restart: on-failure
    healthcheck:
      test: "exit 0"
  geoserver:
    image: kartoza/geoserver:2.17.0
    volumes:
      - geoserver-data:/opt/geoserver/data_dir
    ports:
      - "8600:8080"
    restart: on-failure
    env_file:
      - docker-env/geoserver.env
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: curl --fail -s http://localhost:8080/ || exit 1
      interval: 1m30s
      timeout: 10s
      retries: 3
The referenced .env files are:
db.env
POSTGRES_DB=gis,gwc
POSTGRES_USER=docker
POSTGRES_PASS=docker
ALLOW_IP_RANGE=0.0.0.0/0
geoserver.env
GEOSERVER_DATA_DIR=/opt/geoserver/data_dir
ENABLE_JSONP=true
MAX_FILTER_RULES=20
OPTIMIZE_LINE_WIDTH=false
FOOTPRINTS_DATA_DIR=/opt/footprints_dir
GEOWEBCACHE_CACHE_DIR=/opt/geoserver/data_dir/gwc
GEOSERVER_ADMIN_PASSWORD=myawesomegeoserver
INITIAL_MEMORY=2G
MAXIMUM_MEMORY=4G
XFRAME_OPTIONS='false'
STABLE_EXTENSIONS=''
SAMPLE_DATA=false
GEOSERVER_CSRF_DISABLED=true
docker-compose up brings both containers up and running with no errors, giving them the names backend_db_1 (Postgis) and backend_geoserver_1 (Geoserver). I can access Geoserver running in backend_geoserver_1 at http://localhost:8600/geoserver/ as expected. I can connect an external, AWS-based Postgis as a data store to my docker-based Geoserver instance without any problems. I can also access the Postgis running in the docker container backend_db_1 from PgAdmin, with psql from the command line, and from the Webstorm IDE.
However, if I try to use my Postgis running in backend_db_1 as a data store for my Geoserver running in backend_geoserver_1, I get the following error:
> Error creating data store, check the parameters. Error message: Unable
> to obtain connection: Cannot create PoolableConnectionFactory
> (Connection to localhost:25434 refused. Check that the hostname and
> port are correct and that the postmaster is accepting TCP/IP
> connections.)
So, my Geoserver in backend_geoserver_1 can connect to Postgis on AWS, but not to the one running in another docker container on the same localhost. The Postgis in backend_db_1, in turn, can be accessed from many other local apps and tools, but not from Geoserver running in a docker container.
Any ideas what I am missing? Thanks!
Just add network_mode in the YAML for both db and geoserver and set it to host (see the sketch below):
network_mode: host
Note that this will ignore the expose option and will use the host network as the containers' network.
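A rough sketch of what that looks like in the compose file above (only the relevant keys are shown; published ports are ignored in this mode):

services:
  db:
    image: kartoza/postgis:12.0
    network_mode: host
  geoserver:
    image: kartoza/geoserver:2.17.0
    network_mode: host

With both services on the host network, Geoserver would connect to Postgis on the port Postgres itself listens on (5432 by default) rather than the mapped 25434.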

Connect Linux Containers in Windows Docker Host to external network

I have successfully set up Docker Desktop for Windows and installed my first Linux containers from Docker Hub. Network-wise, the containers can communicate with each other on the Docker internal network. I am even able to communicate with the host network via host.docker.internal.
Now I am coming to the point where I want to access the outside network (just some other server on the network of the Docker host) from within a Docker container.
I have read on multiple websites that network_mode: host does not seem to work with Docker Desktop for Windows.
I have not configured any switches within Hyper-V Manager and have not added any routes in Docker, as I am confused by the overall networking concept of Docker Desktop for Windows in combination with Hyper-V and Linux containers.
Below you can see my current docker-compose.yaml with NiFi and Zookeeper installed. NiFi is able to see Zookeeper, and NiFi is able to query data from a database installed on the Docker host. However, I need to query data from a different server, not the host.
version: "3.4"
services:
zookeeper:
restart: always
container_name: zookeeper
ports:
- 2181:2181
hostname: zookeeper
image: 'bitnami/zookeeper:latest'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
nifi:
restart: always
container_name: nifi
image: 'apache/nifi:latest'
volumes:
- D:\Docker\nifi:/data # Data directory
ports:
- 8080:8080 # Unsecured HTTP Web Port
environment:
- NIFI_WEB_HTTP_PORT=8080
- NIFI_CLUSTER_IS_NODE=false
- NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
- NIFI_ZK_CONNECT_STRING=zookeeper:2181
- NIFI_ELECTION_MAX_WAIT=1 min
depends_on:
- zookeeper
Check if the connection type in DockerNAT is set to the appropriate external network, and set the IPv4 config to auto.

How to mount docker volume in Azure Web App for containers?

I'm trying to run the KrakenD image in Azure App Service.
KrakenD requires a JSON config file, krakend.json, to be put into /etc/krakend/ (the KrakenD image is based on Alpine Linux).
I created a Web App for Containers with the following docker-compose file:
version: "3"
services:
krakend:
image: devopsfaith/krakend:latest
volumes:
- ${WEBAPP_STORAGE_HOME}/site/krakend:/etc/krakend
ports:
- "8080:8080"
restart: always
Added a storage account with a blob container, where I uploaded a sample kraken.json file
In the app configuration I added a path mapping like this:
But it looks like the volume was not mounted correctly:
2019-11-15 12:46:29.368 ERROR - Container create failed for
krakend_krakend_0_3032a936 with System.AggregateException, One or
more errors occurred. (Docker API responded with status
code=InternalServerError, response={"message":"invalid volume
specification: ':/etc/krakend'"} ) (Docker API responded with status
code=InternalServerError, response={"message":"invalid volume
specification: ':/etc/krakend'"} ) InnerException:
Docker.DotNet.DockerApiException, Docker API responded with status
code=InternalServerError, response={"message":"invalid volume
specification: ':/etc/krakend'"}
2019-11-15 12:46:29.369 ERROR - multi-container unit was not started
successfully
Additional questions
What does Mount path mean in Storage mounting? I put the value /krankend there.
The volume definition starts with ${WEBAPP_STORAGE_HOME}; in the docs they specify it as
volumes:
  - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
so I did it by analogy and tried all 3 possible paths:
${WEBAPP_STORAGE_HOME}/site/wwwroot/krakend
${WEBAPP_STORAGE_HOME}/site/krakend
${WEBAPP_STORAGE_HOME}/krakend
but no luck, I'm still getting the error:
ERROR parsing the configuration file: '/etc/krakend/krakend.json'
(open): no such file or directory
I finally resolved it with the following docker-compose file:
version: "3"
services:
krakend:
image: devopsfaith/krakend:latest
volumes:
- volume1:/etc/krakend
environment:
WEBSITES_ENABLE_APP_SERVICE_STORAGE: TRUE
ports:
- "8080:8080"
restart: always
where volume1 is blob storage mounted as follows:
This did not work for me. I was getting "Bind mount must start with ${WEBAPP_STORAGE_HOME}".
This worked. docker-compose.yml:
version: "3"
services:
backend:
image: xxxx
ports:
- 80:80
volumes:
- vol1:/var/www/html/public
volumes:
vol1:
driver: local
Volume definition:
Name: vol1
Config: basic
Storage accounts: ...
Storage container: ...
Mount path: /var/www/html/public

Docker-compose access container with service name in python file?

I have two containers. How can I access one container from another in a Python file with a Django web server?
Docker-compose.yml file
version: '2'
services:
  web:
    build: ./web/
    command: python3 manage.py runserver 0.0.0.0:8001
    volumes:
      - ./web:/code
    ports:
      - "8001:80"
    networks:
      - dock_net
    container_name: con_web
    depends_on:
      - "api"
    links:
      - api
  api:
    build: ./api/
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - ./api:/code
    ports:
      - "8000:80"
    networks:
      - dock_net
    container_name: con_api
networks:
  dock_net:
    driver: bridge
Python File:
I can get mail_string from the form:
mail = request.POST.get('mail_string', '')
url = 'http://con_api:8000/api/'+mail+'/?format=json'
resp = requests.get(url=url)
return HttpResponse(resp.text)
I request the api container and get the value, but I don't know its IP address.
Updated Answer
In your Python file you can use url = 'http://api:8000/'+mail+'/?format=json'. The compose service name api resolves on the shared network, and 8000 is the port the Django server listens on inside that container, so this lets you reach the URL you are sending the GET request to.
Original Answer
If the two containers are independent, you can create a network, and once both containers are part of the same network you can reach each one using its hostname, which you can specify with --hostname=HOSTNAME.
Another easy way is to use a docker-compose file, which creates a network by default; all the services declared in the file are part of that network. That way you can access other services by their service name, e.g. http://container1/ when your docker-compose file is like this:
version: '3'
services:
  container1:
    image: some_image
  container2:
    image: another_or_same_image
Now enter container2 with:
docker-compose exec container2 /bin/bash
and run ping container1
You will receive packets and therefore be able to reach the other container.
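For the setup in the question, a minimal sketch of that request from the web service (assuming, as in the compose file above, that the api service's Django server listens on port 8000 inside its container):

import requests

def lookup(mail):
    # "api" is the compose service name; it resolves to the api container on the shared dock_net network
    url = 'http://api:8000/api/' + mail + '/?format=json'
    resp = requests.get(url=url)
    return resp.text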
