On a Linux server, I have several Docker containers running, for example some Compose stacks for WordPress hosting, but also internal applications like Confluence. After a reboot, it seems that the internal containers are started first, so the hosting containers (like WordPress) are down for several minutes.
That's not good, since the internal apps are only used by a few people, whereas the external ones get much more traffic. So I want to define some kind of priority, like starting the WordPress containers before Confluence, to name a concrete example.
How can this be done? All containers have the restart policy always, but it doesn't seem possible to define the order in which the containers should start...
Version 3+: version 3 of the Compose file format no longer supports the condition form of depends_on (a version 2.1 sketch of that form follows the docs links below).
Version 2: depends_on will help your case if you do docker-compose up, but it is ignored when you deploy in swarm mode.
docker-compose.yml (works after version 1.6.0 and before 2.1):
version: '2'
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres
Docs:
depends_on
Controlling startup order in Compose
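For reference, a minimal sketch of the version 2.1 condition form mentioned above. The healthcheck here is an assumption based on the standard postgres image (pg_isready); adapt it to however your database reports readiness:
version: '2.1'
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
With this form, web is only started once the db container's health check passes, instead of merely after it has been created.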
I use Docker with COMPOSE_PROJECT_NAME to dynamically create container names. Somewhere in the last two months something changed, and my local machine now generates container names with hyphens,
e.g. project-name-traefik-1 instead of project-name_traefik_1.
How can I change this behavior? It breaks functionality on my Linux Docker server, which for some reason keeps the old container naming structure.
I have the latest Docker Desktop and the latest Docker on the server. I can't find anything in the documentation that points to this change.
Resorted to adding container_name (which wasn't previously needed).
I'm also using the COMPOSE_PROJECT_NAME environment variable to get predictable container names.
E.g.
...
  nginx:
    image: graffino/nginx:v1
    restart: unless-stopped
    container_name: ${COMPOSE_PROJECT_NAME}-nginx
...
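A quick usage sketch (the project name myproject is just an example):
export COMPOSE_PROJECT_NAME=myproject
docker compose up -d
# the nginx container is now predictably named "myproject-nginx",
# regardless of which separator your Compose version uses for generated names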
So I have static files (a web app) running in container1, and a Node.js app running in container2. I want the Node app to have write access to the static files in container1. How can I achieve this?
What I tried so far:
Docker Compose, but it only allows communication between containers (network access), not sharing the same filesystem. Therefore, Node can't access the files on container1.
A way to do this is with a docker-compose volume.
An example configuration file for docker-compose v3 is shown below.
/share on the host filesystem will be shared across these two containers:
version: "3"
services:
webapp:
image: webapp:1.0
volumes:
- /share:/share
nodeapp:
image: nodeapp:1.0
volumes:
- /share:/share
Using a simple HTTP server (a simple node one can be found here) on one of the containers will allow you to host the static files. Then, this can be accessed from the other containers using the network all your containers are on.
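As a rough sketch of that HTTP-server option, reusing the webapp/nodeapp service names from the earlier example (the http-server package is just one choice, not necessarily the one linked above, and it assumes Node/npx is available in that container):
# inside the static-files container (webapp), serve the files over HTTP
npx http-server /share -p 8080
# from the node container (nodeapp), on the same Compose network,
# fetch a file via the other service's name
wget -qO- http://webapp:8080/index.html
Note that this only gives the Node app read access over HTTP; for write access you still need a shared volume.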
Another option would be to mount a volume to both your containers. Any changes made via one container would reflect in the other if the same volume is mounted. More info can be found here.
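A minimal sketch of that shared-volume option, using a named volume instead of a host path (the service names and mount paths are placeholders):
version: "3"
services:
  webapp:
    image: webapp:1.0
    volumes:
      - shared-static:/usr/share/nginx/html   # static files served from here
  nodeapp:
    image: nodeapp:1.0
    volumes:
      - shared-static:/data/static            # the node app can read and write here
volumes:
  shared-static:
Because both services mount the same named volume, writes from nodeapp are immediately visible to webapp.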
I have this docker-compose.yml:
version: "3.1"
services:
memcached:
image: memcached:alpine
container_name: universal-memcached2
redis:
image: redis:alpine
container_name: universal-redis2
mariadb:
image: mariadb:10.4
container_name: universal-mariadb2
working_dir: /application
volumes:
- .:/application
- "../data/db:/var/lib/mysql" # skasowac
environment:
- MYSQL_ROOT_PASSWORD=Haslo
- MYSQL_DATABASE=sample
- MYSQL_USER=user
- MYSQL_PASSWORD=Haslo
ports:
- "8083:3306"
webserver:
image: nginx:alpine
container_name: universal-webserver2
working_dir: /application
volumes:
- .:/application
- ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "8080:80"
php-fpm:
build: phpdocker/php-fpm
container_name: universal-php-fpm2
working_dir: /application
volumes:
- .:/application
- ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.3/fpm/conf.d/99-overrides.ini
volumes:
generated:
var:
static:
phpmyadmin:
image: phpmyadmin/phpmyadmin
links:
- mariadb
environment:
PMA_HOST: mariadb
PMA_PORT: 3306
ports:
- '8081:80'
When I run my newest Symfony 4 project on it, it works very slowly... :(
I have a new macOS machine and Docker Desktop.
I'm currently learning the Symfony and Laravel frameworks, but Docker makes them very slow; it's barely workable.
How can I fix it?
Docker needs a Linux kernel to run, and macOS and Windows cannot provide one natively. Docker Desktop therefore runs your containers through a client on macOS, which adds an abstraction layer between the macOS kernel and your applications (the Docker containers), and the filesystems on either side are not the same. Because of that, Docker runs slowly on macOS; you simply cannot run Docker on macOS the way you do on Linux.
To give some real-world numbers: I have the same kind of machine, and I run Symfony 4 on Docker v18 on macOS Mojave. These are my total Symfony execution times on Docker. (Obviously they depend on your frontend and database queries, but they illustrate the main point.)
first-time rendering: 12000 ms
with the Symfony cache: 344 ms
with the Docker cache (the :cached property for volumes): 195 ms
When I use Symfony without Docker, my total execution time is the following:
without Docker, with the Symfony cache: 82 ms
That said, there are some improvements that make for a better workspace. For example, you can declare your volumes like this:
volumes:
  - .:/application:cached
As mentioned in other answers, the issue comes down to the way Docker interacts with the file system on OSX.
I found a great article on how to make things a lot faster on OSX:
https://vivait.co.uk/labs/docker-for-mac-performance-using-nfs
From the article, the real solution is to switch the file sharing to NFS, but as I know nothing about Docker (it was set up for me at work), I opted for an option available in newer versions of Docker.
I added :delegated to all of the options under volumes in docker-compose.yml. After restarting my containers, everything was nearly twice as fast as before. Not as fast as native, but fast enough.
Example from article:
app:
  build:
    context: .
    dockerfile: ./docker/app/Dockerfile
  working_dir: /app
  user: "www-data"
  volumes:
    - ./:/app:delegated
    - ./:/another_folder:delegated
The NFS example is quite long and would require copying half of the article, so I won't include it here, but this :delegated solution is a good place to start.
Using VirtioFS significantly improves Docker performance on macOS.
On macOS 12.5 and Docker Desktop 4.15:
Docker Desktop Preferences > General
[x] Use Virtualization framework (select this option)
Choose file sharing implementation for your containers: pick VirtioFS
On macOS 12.3 (Intel) or 12.2 (Apple M1) with Docker Desktop 4.6, you can enable the experimental feature:
To enable virtiofs in Docker Desktop:
Ensure that you are using Docker Desktop version 4.6, available here
Navigate to ‘Preferences’ (the gear icon) > ‘Experimental Features’
Select the ‘Use the new Virtualization framework’ and ‘Enable VirtioFS accelerated directory sharing’ toggles
Click ‘Apply & Restart’
Speed boost achievement unlocked on Docker Desktop 4.6 for Mac - March 16 2022
On a MacBook Pro 2019, 2.6 GHz 6-Core Intel Core i7, even simple commands such as docker ps execute in less than a second. It used to take 3 seconds before.
One option would be to use http://docker-sync.io/. But you will not get the same performance as you would on a Linux-based machine.
Use Mutagen to sync volumes
It's a known issue that Docker for Mac is slow when using shared volumes containing a big amount of files, see https://github.com/docker/for-mac/issues/77.
So some workarounds appeared, like :cached and docker-sync. However, they are not totally satisfactory: it's still slow compared to native (Linux), and docker-sync consumes a lot of resources while syncing.
That being said, there is a good alternative, not to say a perfect one: Mutagen. It's almost as fast as native shared volumes on Linux!
A benchmark is available (source: an article on Medium).
I created a full example for a Symfony project to show how to use it, but it can be used for any type of project in any language.
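As a rough idea of what a Mutagen-based setup looks like, here is a sketch from memory; the container name is taken from the compose file above, the paths are placeholders, and you should check the Mutagen docs for the current syntax:
# create a two-way sync session between the local project and a path inside the container
mutagen sync create --name=symfony-code ./ docker://universal-php-fpm2/application
The container then works against a synced local copy of the code instead of a slow shared mount.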
I had the same issue, where requests could take upwards of 20 seconds!
The fix was to disable "Use the new Virtualization framework"
I also tried the "VirtioFS" experimental option which made it faster but it would constantly lock up. Disabling them both fixed all issues.
This is a known issue with Docker and the Mac... they are not best buddies, but they can play with each other.
Add :cached to the volumes and the slowness will be manageable.
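For example (the php-fpm service name and the /application path are taken from the compose file above):
services:
  php-fpm:
    volumes:
      - .:/application:cached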
To anyone still having this issue: I read a post on GitHub by StefanPrintezis here.
Basically, when you're not debugging, turn off Xdebug (see the sketch below).
It worked like a charm for me.
Configuration : Macbook Pro 2018, PHPStorm 2021.3.1, Docker Desktop 4.5.0 (74594) with 2 CPU / 4GB RAM. Using default volume config.
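A minimal sketch of doing that from docker-compose.yml, assuming Xdebug 3 (which honors the XDEBUG_MODE environment variable) and the php-fpm service from the compose file above:
services:
  php-fpm:
    build: phpdocker/php-fpm
    environment:
      XDEBUG_MODE: "off"   # switch to "debug" only while actually step-debugging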
I recommend using Parallels (the best virtualisation for macOS). Then:
Install Linux
Install ssh server
Install docker
If you use VS Code, you can install the SSH extension, and it is fast enough for editing files. You can also access your website from macOS via the virtual machine's IP address.
I have a 3rd-party Tomcat server image/container running in a Linux VM on Azure. The Linux VM actually started with 2 images (an NGINX load balancer plus Tomcat) running via a docker-compose script, but to test this as a Web App I've whittled it down to just the single Tomcat image. Now, the compose script uses the key:
ports:
  - 80:8090
  - 8445:8443
In the VM I can run the docker-compose script and hit http://mypage:80 and it works just fine. I can also run:
docker run -d -p <somePort>:8090 --name tomcat_1 <myrepo/myimage>
I can then access my site with http://mypage:<somePort> regardless of which port I want to map to the container. This all works great.
Now, with the Azure Web App, I'm using an Azure Web App for Containers --> Docker Compose (Preview). My compose script looks something like:
version: "3.0"
services:
pdfd-tomcat:
image: <myrepo/myimage>
build:
context: .
args:
INCLUDE_DEMO: 'true'
LIBRE_SUPPORT: 'false'
HTML_SUPPORT: 'false'
container_name: Blackbox
environment:
TRN_PDFNET_KEY:
TRN_DB_TYPE: SQLITE
TRN_DB_LINK:
TRN_BALANCER_COOKIE_NAME: HAPROXID
TRN_DEBUG_MODE: 'false'
ports:
- 80:8090
- 8445:8443
I've exposed 80:8090 because I've read that Azure Web Apps only expose ports 80 and 443 by default. However, I cannot access this site on any port once the Web App is spun up. I've verified that this same compose script works in a VM. Now, when I check the Web App logs, I see this:
Status: Image is up to date for <myrepo/myimage>
2018-06-17 05:38:41.298 INFO - Starting container for site
2018-06-17 05:38:41.298 INFO - docker run -d -p 18455:8090 --name tomcat_1 -e WEBSITE_SITE_NAME=<mywebsite> -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_INSTANCE_ID=<stuff goes here>
2018-06-17 05:38:41.298 INFO - Logging is not enabled for this container.
Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
2018-06-17 05:38:56.994 INFO - Started multi-container app
2018-06-17 05:38:57.008 INFO - Container tomcat_1 for site <mywebsite> initialized successfully.
So it seems that it's trying to map the external port 18455 to my internal port 8090. Why? Also, if I try to hit my site via that port, I can't. Each time the app deploys/restarts, it maps a different external port.
Also, if I retroactively go to 'Application Settings' and set the key/value WEBSITES_PORT:<current externally mapped port>, it has literally no effect. Then, if the app gets restarted/redeployed, I can see that WEBSITES_PORT:<port> holds the previous deployment's port, but since that changes on every deployment, the current external port and the WEBSITES_PORT value never match. I don't even know if it works to begin with.
How the heck do I get this working in a deterministic manner? I can supply other material if needed.
This boiled down to a permissions issue when using Tomcat 9.0+ with Docker.
Permission problem while running tomcat as a non-root user
The Dockerfile would create a new user group and add a user, then give that user permissions on the folders where Tomcat lives. If you jumped into the container via docker exec with /bin/bash and checked the permissions, they all seemed perfectly fine. However, the logs showed that Tomcat couldn't get access to those folders.
Once I implemented the fix as described by Empinator in the link, everything worked (using root also worked).
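For illustration, the kind of permission setup involved looks roughly like this. It is a sketch based on the official tomcat:9 image layout under /usr/local/tomcat, not necessarily the exact fix from the linked answer:
FROM tomcat:9-jdk11
# create an unprivileged user/group and hand the Tomcat directories over to it
RUN groupadd -r tomcat && useradd -r -g tomcat tomcat \
    && chown -R tomcat:tomcat /usr/local/tomcat \
    && chmod -R g+rwX /usr/local/tomcat
USER tomcat
CMD ["catalina.sh", "run"]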
Is there a way I can reach my docker containers using names instead of ip addresses?
I've heard of pipework and I've seen some dns and hostname type options for docker, but I still am unable to piece everything together.
Thank you for your time.
I'm not sure if this is helpful, but this is what I've done so far:
installed docker container host using docker-machine and the vmwarevsphere driver
started up all the services with docker-compose
I can reach all of the services from any other machine on the network using IP and port
I've added a DNS alias entry to my private network DNS server and it matches the machine name that's used by docker-machine. But the machine always picks up a different IP address when it boots and connects to the network.
I'm just lost as to where to tackle this:
network DNS server
docker-machine hostname
docker container hostname
probably some combination of all of them
I'm probably looking for something similar to this question:
How can let docker use my network router to assign dhcp ip to containers easily instead of pipework?
Any general direction will be awesome...thanks again!
Docker 1.10 has a built-in DNS server. If your containers are connected to the same user-defined network (create a network with docker network create my-network and run your containers with --net my-network), they can reference each other by container name. (Docs)
Cool!
One caveat: if you are using Docker Compose, you know that it adds a prefix to your container names, i.e. <project name>_<service name>_#. This makes your container names somewhat harder to control, but it might be OK for your use case. You can override Docker Compose's naming by manually setting the container name in your compose template (see the sketch below), but then you won't be able to scale that service with Compose.
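A sketch of that override (the trade-off being that docker-compose scale no longer works for the service):
services:
  web:
    image: nginx
    container_name: web   # containers on the same user-defined network can resolve exactly "web"
Note that within the same Compose project the service name already resolves on its own; a fixed container_name mainly matters for containers outside the project that share the network.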
Create a new bridge network other than docker0, run your containers inside it and you can reference the containers inside that network by their names.
The Docker daemon runs an embedded DNS server to provide automatic service discovery for containers connected to user-defined networks. Name resolution requests from the containers are handled first by the embedded DNS server.
Try this:
docker network create <network name>
docker run --net <network name> --name test busybox nc -l -p 7000
docker run --net <network name> busybox ping test
First, we create a new network. Then, we run a busybox container named test, listening on port 7000 (just to keep it running). Finally, from another container on the same network, we ping the test container by its name, and it should work.
EDIT 2018-02-17: Docker may eventually remove the links key from docker-compose, so they suggest using user-defined networks instead, as stated here => https://docs.docker.com/compose/compose-file/#links
Assuming you want to reach the mysql container from the web container of your docker-compose.yml file, such as:
web:
  build: .
  links:
    - mysqlservice
mysqlservice:
  image: mysql
You'll be pleased to know that Docker Compose already adds a mysqlservice domain name (in the web container's /etc/hosts) which points to the mysql container.
Instead of looking up the mysql container's IP address, you can just use the mysqlservice domain name.
If you want to add custom domain names, it's also possible with the extra_hosts parameter.
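For example, extra_hosts entries are plain hostname:IP pairs (the names and addresses below are placeholders):
web:
  build: .
  extra_hosts:
    - "legacy-db.local:192.168.1.50"
    - "api.internal:10.0.0.12"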
You might want to try out dnsdock. It looks straightforward and easy(!) to set up. Have a look at http://blog.brunopaz.net/easy-discover-your-docker-containers-with-dnsdock/ and https://github.com/tonistiigi/dnsdock.
If you want an out-of-the-box solution, you might want to check out, for example, Kontena. It comes with network overlay technology from Weave, and this technology is used to create virtual private LAN networks between services. Thanks to that, every service/container can be reached by service_name.kontena.local.
I replaced the --net parameter with the --network parameter and it runs as expected:
docker network create <network name>
docker run --network <network name> --name <container name> <other container options>
docker run --network <network name> --name <container name> <other container options>
If you are using Docker Compose, and your docker-compose.yml file has a top-level services: block (you are not using the obsolete "version 1" file format), then Compose does all of the required setup automatically. The names underneath services: can be directly used as host names.
version: '3.8'
services:
  database:          # <-- "database" is a usable hostname
    image: postgres
  application:       # <-- "application" is a usable hostname
    build: .
    environment:
      PGHOST: database   # <-- use the "database" hostname
Networking in Compose in the Docker documentation describes this setup further.
These host names only work for connections between containers started from the same Compose file. If you manually declare networks:, then the two containers must have some network in common, but the easiest setup is to not declare networks: at all. These connections always use the "standard" port (for PostgreSQL, for example, always connect to port 5432); a ports: declaration is not required and is ignored if present.
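As a quick usage sketch (assuming the psql client is installed in the application image):
# from the application container, the Postgres service is reachable by the hostname "database"
docker compose exec application psql -h database -p 5432 -U postgres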