I haven't found the answer elsewhere, so I'm asking here. I hope it's not a silly question; I'm new both to Docker Compose and Azure.
I'm using Docker Compose to deploy a Web App (BookStack, specifically) to Azure. The stack uses a few volumes. Everything works just fine and the volumes are persistent; I'm just not able to locate those volumes in the Azure portal. I really need to be able to access them in order to back them up or possibly migrate them.
version: '2'
services:
  mysql:
    image: mysql:5.7.21
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_DATABASE=
      - MYSQL_USER=
      - MYSQL_PASSWORD=
    volumes:
      - mysql-data:/var/lib/mysql
  bookstack:
    image: solidnerd/bookstack:latest
    depends_on:
      - mysql
    environment:
      - DB_HOST=
      - DB_DATABASE=
      - DB_USERNAME=
      - DB_PASSWORD=
    volumes:
      - uploads:/var/www/bookstack/public/uploads
      - storage-uploads:/var/www/bookstack/public/storage
    ports:
      - "8088:80"
volumes:
  mysql-data:
  uploads:
  storage-uploads:
Thanks in advance!
Jakub
You just need to change these lines to use the placeholder variable:
- uploads:/var/www/bookstack/public/uploads
- storage-uploads:/var/www/bookstack/public/storage
as follows:
- ${WEBAPP_STORAGE_HOME}/uploads:/var/www/bookstack/public/uploads
- ${WEBAPP_STORAGE_HOME}/storage-uploads:/var/www/bookstack/public/storage
You may want to take a look at the FTP file structure to determine where you want to map the files to :)
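In context, the bookstack service would then look roughly like this (${WEBAPP_STORAGE_HOME} resolves to the Web App's persistent storage, which is the same tree you see when you browse the site over FTP):
bookstack:
  image: solidnerd/bookstack:latest
  volumes:
    # the host side now lives under the persistent share, so it survives
    # restarts and can be reached (and backed up) through the FTP endpoint
    - ${WEBAPP_STORAGE_HOME}/uploads:/var/www/bookstack/public/uploads
    - ${WEBAPP_STORAGE_HOME}/storage-uploads:/var/www/bookstack/public/storage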
I have a Web App that runs in a Linux Service Plan as a docker-compose deployment. My config is:
version: '3'
networks:
  my-network:
    driver: bridge
services:
  web-site:
    image: server.azurecr.io/site/site-production:latest
    container_name: web-site
    networks:
      - my-network
  nginx:
    image: server.azurecr.io/nginx/nginx-production:latest
    container_name: nginx
    ports:
      - "8080:8080"
    networks:
      - my-network
And I've realized that my app sometimes freezes for a while (usually less than 1 minute). When I check Diagnose (Linux - Number of Running Containers per Host) I can see this:
How could it be possible to have 20+ containers running?
Thanks.
I've created a new service plan (P2v2) for my app (and nothing else), and my app (which has just two containers, .NET 3.1 and nginx) shows 4 containers... but this is not a problem for me... at all...
The problem I found in Application Insights was a method that retrieves a blob to serve an image. Blobs are really fast for uploads and downloads, but they are terrible for search: my method was checking whether the blob exists before sending it to the API, and this (async) process was blocking my API responses. I just removed the check and my app is running as desired (all responses under 1 second, almost all under 250 ms).
Thanks for your help.
I am new to AWS and new to Traefik too. I am having a lot of trouble trying to remove the need to specify the port for an application I am developing.
When I hit http://api-landingpage.cayama.com.br/ it gives me a 404 error page, but when I try http://api-landingpage.cayama.com.br:8001/ it reaches my API correctly.
I host my domain in AWS Route 53 and I am using Docker as the Traefik provider.
Here are my configurations:
docker-compose.yml:
version: "3"
services:
app:
build: .
ports:
- "8001:8001"
command: "npm start"
docker-production.yml:
version: "3"
services:
traefik:
image: traefik
command:
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
ports:
- "80:80"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
app:
labels:
- "traefik.enable=true"
- "traefik.http.routers.app.rule=Host(`http://api-landingpage.cayama.com.br/`)"
- "traefik.http.routers.app.entrypoints=web"
I am sure there is something basic that I am missing here; can anyone help me, please?
I just want to not have to specify, in the URL, the port on which my application is running.
Thanks guys!
Theoretically, as you said, you shouldn't have to specify the port manually.
I'm not totally sure it's the cause, but you are using a full URL instead of a host.
Basically you should replace this:
- "traefik.http.routers.app.rule=Host(`http://api-landingpage.cayama.com.br/`)"
With this:
- "traefik.http.routers.app.rule=Host(`api-landingpage.cayama.com.br`)"
If that does not solve your problem, you could try using the loadbalancer directive, even though it is theoretically intended for Docker Swarm rather than plain Docker (put this in your app service):
- "traefik.http.services.app.loadbalancer.server.port=8001"
Then, if it's still not working, enable debugging and look for errors in the logs.
In order to enable debugging, add this to your Traefik service in the command section:
- --log.level=DEBUG
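Putting the pieces together, a sketch of how the two services in docker-production.yml could look with these changes (the loadbalancer label and the debug flag are the optional parts):
version: "3"
services:
  traefik:
    image: traefik
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--log.level=DEBUG"   # only while troubleshooting
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  app:
    labels:
      - "traefik.enable=true"
      # host only: no scheme, no trailing slash
      - "traefik.http.routers.app.rule=Host(`api-landingpage.cayama.com.br`)"
      - "traefik.http.routers.app.entrypoints=web"
      # only needed if Traefik does not detect the container port on its own
      - "traefik.http.services.app.loadbalancer.server.port=8001"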
Have you tried:
- "traefik.http.services.app.loadbalancer.server.port=8001"
I cloned a Django+Node.js open-source project, the goal of which is to upload and annotate text documents, and save the annotations in a Postgres db. This project has stack files for docker-compose, both for Django dev and production setups. Both these stack files work completely fine out of the box, with a Postgres database.
Now I would like to deploy this project to Google Cloud, as my first ever containerized application. As a first step, I simply want to move the persistent storage to Cloud SQL instead of the Postgres image included in the stack file. My stack file (Django dev) looks as follows:
version: "3.7"
services:
backend:
image: python:3.6
volumes:
- .:/src
- venv:/src/venv
command: ["/src/app/tools/dev-django.sh", "0.0.0.0:8000"]
environment:
ADMIN_USERNAME: "admin"
ADMIN_PASSWORD: "${DJANGO_ADMIN_PASSWORD}"
ADMIN_EMAIL: "admin#example.com"
# DATABASE_URL: "postgres://doccano:doccano#postgres:5432/doccano?sslmode=disable"
DATABASE_URL: "postgres://${CLOUDSQL_USER}:${CLOUDSQL_PASSWORD}#sql_proxy:5432/postgres?sslmode=disable"
ALLOW_SIGNUP: "False"
DEBUG: "True"
ports:
- 8000:8000
depends_on:
- sql_proxy
networks:
- network-overall
frontend:
image: node:13.7.0
command: ["/src/frontend/dev-nuxt.sh"]
volumes:
- .:/src
- node_modules:/src/frontend/node_modules
ports:
- 3000:3000
depends_on:
- backend
networks:
- network-overall
sql_proxy:
image: gcr.io/cloudsql-docker/gce-proxy:1.16
command:
- "/cloud_sql_proxy"
- "-dir=/cloudsql"
- "-instances=${CLOUDSQL_CONNECTION_NAME}=tcp:0.0.0.0:5432"
- "-credential_file=/root/keys/keyfile.json"
volumes:
- ${GCP_KEY_PATH}:/root/keys/keyfile.json:ro
- cloudsql:/cloudsql
networks:
- network-overall
volumes:
node_modules:
venv:
cloudsql:
networks:
network-overall:
I have a bunch of models (e.g. project) in the Django backend, which I can view, modify, add, and delete using the Django admin interface, but when trying to access them through the Node.js views I get a 403 Forbidden error. This is the case for all my Django models.
For reference, the only difference from the originally cloned docker-compose stack file is the DATABASE_URL above, which used to point to a local Postgres Docker image, as follows:
postgres:
  image: postgres:12.0-alpine
  volumes:
    - postgres_data:/var/lib/postgresql/data/
  environment:
    POSTGRES_USER: "doccano"
    POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
    POSTGRES_DB: "doccano"
  networks:
    - network-backend
To check whether my GCP keys are correct, I tried deploying the Cloud SQL Proxy container alone and interacting with it (adding, removing and updating rows in the included tables), and that was possible. However, the fact that I can use the Django admin interface successfully in the deployed docker-compose stack should already prove that things are OK with the Cloud SQL proxy.
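For reference, that standalone test looked roughly like this (same image, flags and key file as in the stack file; the psql call is just how I poked at the tables, with the same environment variables exported in my shell):
# run the proxy on its own, listening on localhost:5432
docker run --rm -p 127.0.0.1:5432:5432 \
  -v ${GCP_KEY_PATH}:/root/keys/keyfile.json:ro \
  gcr.io/cloudsql-docker/gce-proxy:1.16 \
  /cloud_sql_proxy \
  -instances=${CLOUDSQL_CONNECTION_NAME}=tcp:0.0.0.0:5432 \
  -credential_file=/root/keys/keyfile.json

# in another shell, connect through the proxy and touch a few rows
psql "host=127.0.0.1 port=5432 user=${CLOUDSQL_USER} dbname=postgres"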
I'm not an experienced Node.js developer by any means, and I have only a little experience with Django and the Django admin. My intention behind using a docker-compose setup was that I would not have to bother with the intricacies of the JS views and could deal only with the Python business logic.
I'm working on an e-commerce site. I want to be able to upload product photos from the client and save them in a directory on the server.
I implemented this feature, but then I understood that since we use Docker for our deployment, the directory in which I save the pictures won't persist. From my searching, I realized that I should use volumes and map that directory in docker-compose. I'm a complete novice backend developer (I work on the frontend), so I'm not really sure what I should do.
Here is the compose file:
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
If I want to store my photos in ../static/images (relative to the root of my project), what should I do, and how should I refer to this path in my backend code?
The backend is in Node.js (NestJS).
You have to create a volume and tell docker-compose/docker stack to mount it inside the container at the path you want. See the volumes section at the very end of the file and the volumes option on the nodejs service.
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
      - static-files:/home/node/static/images
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
volumes:
  static-files: {}
Doing this, an empty volume will be created to persist your data, and every time a new container mounts this path it will see the data stored in it. I would suggest using the same approach with mysql instead of saving the data on the host.
https://docs.docker.com/compose/compose-file/#volume-configuration-reference
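If you want to do the same for mysql, a minimal sketch would be to swap the ../data bind mount for another named volume:
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - mysql-data:/var/lib/mysql   # named volume instead of the ../data bind mount

volumes:
  static-files: {}
  mysql-data: {}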
I followed this article (https://blogs.msdn.microsoft.com/jcorioland/2016/04/25/create-a-docker-swarm-cluster-using-azure-container-service/#comment-1015) to set up a Docker Swarm host cluster. There is 1 master and there are 2 agents. The good point of this article is that it uses "-H 172.16.0.5:2375", which creates new containers on an "agent" rather than on the "master".
My question is: if I want to make docker-compose.yml work with that, how can I do it? I have tried a command like:
docker-compose -H 172.16.0.5:2375 up
But it doesn't work. If I just use:
docker-compose up
Then the containers are created on the master host and I can't even use the public DNS to visit the website.
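For completeness, a variant I have not tried yet would be to hand docker-compose the remote endpoint through the DOCKER_HOST environment variable (the tcp:// scheme here is my guess):
# untested guess: point Compose at the agents' endpoint explicitly
export DOCKER_HOST=tcp://172.16.0.5:2375
docker-compose up -d

# or equivalently, with the scheme on the -H flag
docker-compose -H tcp://172.16.0.5:2375 up -d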
Here is the yml file I use for 1 magento & 1 mariadb containers:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '3306:3306'
    volumes:
      - 'mariadb_data:/bitnami/mariadb'
  magento:
    image: 'bitnami/magento:latest'
    environment:
      - MAGENTO_HOST=172.16.0.5
      - MARIADB_HOST=172.16.0.5
    ports:
      - '80:80'
    volumes:
      - 'magento_data:/bitnami/magento'
      - 'apache_data:/bitnami/apache'
      - 'php_data:/bitnami/php'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  magento_data:
    driver: local
  apache_data:
    driver: local
  php_data:
    driver: local
This section is my guess based on that article:
environment:
  - MAGENTO_HOST=172.16.0.5
  - MARIADB_HOST=172.16.0.5
but the YAML doesn't work with the port appended, e.g.:
environment:
  - MAGENTO_HOST=172.16.0.5:2375
  - MARIADB_HOST=172.16.0.5:2375
Thanks a lot!