I use Docker with COMPOSE_PROJECT_NAME to dynamically create container names. Somewhere in the last two months something changed, and my local machine now generates container names with hyphens,
e.g. project-name-traefik-1 instead of project-name_traefik_1.
How can I change this behavior? It breaks functionality on my Linux Docker server, which for some reason keeps the old container naming structure.
I have the latest Docker Desktop and the latest Docker on the server, and I can't find anything in the documentation that points to this change.
Resorted to adding container_name (which wasn't previously needed).
I'm also using the COMPOSE_PROJECT_NAME environment variable to get predictable container names.
E.g.
...
nginx:
  image: graffino/nginx:v1
  restart: unless-stopped
  container_name: ${COMPOSE_PROJECT_NAME}-nginx
...
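As an alternative to hard-coding container_name on every service: the separator change comes from Compose V2, which switched the name separator from `_` to `-`, and it ships a compatibility mode that restores the old underscore naming. A minimal sketch, assuming a .env file in the project directory:

```ini
# .env - read automatically by docker compose from the project directory
COMPOSE_PROJECT_NAME=project-name
# Restore the Compose V1 "_" separator in generated container names:
COMPOSE_COMPATIBILITY=true
```

The same effect is available per invocation via docker compose --compatibility up.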
Related
I am using docker-compose along with the docker-compose.yml file as the final step of my ci/cd pipeline to create/re-create containers on a server.
Code example:
sudo docker-compose up --force-recreate --detach
sudo docker image prune -f --all --filter="label=is_image_disposable=true"
My goal is to deploy and keep several containers from the same repo, but with different tags, on a single server.
The problem is that docker-compose removes the existing containers for my repo before it creates the new ones, even though the existing container has the tag :dev and the new one has the tag :v3.
As an example: before the docker-compose command was executed, I had a running container named
my_app_dev, from the repo hub/my.app:dev,
and after the docker-compose command ran, I have
my_app_v3, from the repo hub/my.app:v3.
What I do want to see in the end is both containers are up and running:
my_app_dev container of the repo hub/my.app:dev
my_app_v3 container of the repo hub/my.app:v3
Can someone give me an idea how can I do that?
That is expected behaviour. Compose works based on the concept of projects.
As long as the two compose operations are using the same project name, the configurations will override each other.
You can do what you want to some degree by using a unique project name for each compose up operation.
docker compose --project-name dev up
docker compose --project-name v3 up
This leads to the containers being prefixed with that specific project name. i.e. dev-app-1 and v3-app-1.
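In a CI/CD pipeline this maps naturally onto the image tag; a minimal sketch, where the TAG variable and the my_app_ naming scheme are illustrative assumptions about your pipeline:

```shell
# Derive a per-tag Compose project name so deploys of different tags
# do not replace each other's containers
TAG="${TAG:-dev}"
PROJECT_NAME="my_app_${TAG}"
echo "Deploying Compose project: ${PROJECT_NAME}"
# sudo docker-compose --project-name "${PROJECT_NAME}" up --force-recreate --detach
```

With TAG=dev and TAG=v3 this yields the projects my_app_dev and my_app_v3, so both stacks keep running side by side.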
If they need to be all on the same network, you could create a network upfront and reference it as an external network under the default network key.
networks:
  default:
    name: shared-network
    external: true
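Because the network is marked external, Compose expects it to already exist, so it has to be created once before the first up (the shared-network name matches the snippet above):

```shell
# One-time setup: create the shared network, then start both projects on it
docker network create shared-network
docker compose --project-name dev up --detach
docker compose --project-name v3 up --detach
```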
So I have static files (a web app) running in container1, and a Node.js app running in container2. I want the Node app to have write access to the static files in container1. How can I achieve this?
What I tried so far:
Docker Compose, but it only allows communication between containers (network access), not sharing the same filesystem. Therefore, Node can't access the files in C1.
One way to do it is with a docker-compose volume.
An example configuration YAML file for docker-compose v3 is below.
/share in the host OS file system will be shared across these two containers.
version: "3"
services:
  webapp:
    image: webapp:1.0
    volumes:
      - /share:/share
  nodeapp:
    image: nodeapp:1.0
    volumes:
      - /share:/share
Using a simple HTTP server (a simple node one can be found here) on one of the containers will allow you to host the static files. Then, this can be accessed from the other containers using the network all your containers are on.
Another option would be to mount a volume to both your containers. Any changes made via one container would reflect in the other if the same volume is mounted. More info can be found here.
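A sketch of that second option with a named volume instead of a host path (the volume name static-files is illustrative); both services mount the same volume, so writes from nodeapp are visible to webapp:

```yaml
version: "3"
services:
  webapp:
    image: webapp:1.0
    volumes:
      - static-files:/share
  nodeapp:
    image: nodeapp:1.0
    volumes:
      - static-files:/share
volumes:
  static-files:
```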
I have a docker compose file that looks like this
version: '3'
services:
  webapp:
    build: '.'
    ports:
      - "8000:8000"
    networks:
      - db
  postgres:
    image: "postgres:alpine"
    environment:
      POSTGRES_PASSWORD: "password"
    volumes:
      - "./scripts:/docker-entrypoint-initdb.d"
    networks:
      - db
networks:
  db:
The scripts folder looks like this:
|- scripts
|-- init.sh
|-- init.sql
The Problem
My workflow for this project is iterative, so I add some SQL initialization data on my host OS, run sudo docker-compose down -v and then sudo docker-compose up. (I have not set up my user to run Docker without sudo in this scenario.)
When I update the init.sh file, the updates are reflected each time I run docker-compose up. The init.sql file, however, only remembers the first "version" of the file; any subsequent updates are ignored when running docker-compose up.
Things I tried
Tried sudo docker-compose up --renew-anon-volumes --force-recreate which also does not seem to help.
Tried pruning all the volumes with sudo docker volume prune. Does not help
Tried pruning the docker system with sudo docker system prune
What does work is copying the file and its content to a new file name. Renaming the file does not work.
So the question is simply: how do I get content updates of init.sql to be recognized by my docker-compose setup? I don't understand why changes to init.sh are picked up but changes to init.sql are ignored.
UPDATE
One important piece of information is that the project is sitting on a virtualbox shared folder, so the underlying file system is vboxsf while all of this is happening.
So it turns out that the underlying file system plays a role here when using Docker volumes. I have been using a VirtualBox VM, and the project was sitting on a vboxsf file system, so the bind mount in my docker-compose setup has been attached to a vboxsf path this whole time.
When I moved the project from the vboxsf filesystem to something else (whatever my home directory filesystem has, ext4 I think) then updates to the files worked as expected.
-----------I speak under correction here, link is important to track--------------
My understanding is that vboxsf broadcasts changes between the host and guest filesystems, and these changes are picked up by the host and guest OS. There is also an aspect of how shared memory is accessed, but I really don't have the knowledge to elaborate on it further.
To understand the issue, this link seems to be the best resource for now:
https://www.virtualbox.org/ticket/819?cversion=0&cnum_hist=70
-------------------End----------------------------------
I don't think that this will be a problem in production, but it will definitely make you question your sanity for local development.
So please, when you are using a Linux VM for development, check which filesystem your Docker Volumes are using before you even start working on a project.
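A quick way to run that check before debugging bind-mount weirdness: print the filesystem type backing the project directory (GNU coreutils stat; on a VirtualBox shared folder this prints vboxsf):

```shell
# Show the filesystem type of the current directory;
# network/shared types like vboxsf are a red flag for bind mounts
stat -f -c %T .
```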
There are no error messages at all, which is one of the worst circumstances to be in when debugging a problem like this!
I also wasted about two days of my life trying to figure out what was going on and how to fix it. Hopefully those two wasted days can result in many days saved instead :D
On a Linux server, I have several Docker containers running: for example, some Compose stacks for WordPress hosting, but also internal applications like Confluence. After a reboot, the internal containers seem to start first, so the hosting containers (like WordPress) are down for several minutes.
That's not good, since the internal apps are used by only a few people, while the external ones get much more traffic. So I want to define some kind of priority, like starting the WordPress containers before Confluence, to name a concrete example.
How can this be done? All containers have the restart policy always, but there seems to be no way to define the order in which the containers should start...
Version 3+: the v3 file format no longer supports the condition form of depends_on.
Version 2: depends_on will help in your case if you run docker-compose up, but it is ignored in swarm mode.
docker-compose.yml (works after file format version 1.6.0 and before 2.1):
version: '2'
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres
Docs:
depends_on
Controlling startup order in Compose
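Note that recent releases of docker compose (following the Compose Specification, where the version: key is optional) support the long depends_on form again, including health-based ordering. A sketch; the pg_isready healthcheck is an assumption for a stock postgres image:

```yaml
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 10
```

With this, web is only started once the db healthcheck passes, not merely once the db container exists.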
Is there a way I can reach my docker containers using names instead of ip addresses?
I've heard of pipework and I've seen some dns and hostname type options for docker, but I still am unable to piece everything together.
Thank you for your time.
I'm not sure if this is helpful, but this is what I've done so far:
installed docker container host using docker-machine and the vmwarevsphere driver
started up all the services with docker-compose
I can reach all of the services from any other machine on the network using IP and port
I've added a DNS alias entry to my private network DNS server and it matches the machine name that's used by docker-machine. But the machine always picks up a different IP address when it boots and connects to the network.
I'm just lost as to where to tackle this:
network DNS server
docker-machine hostname
docker container hostname
probably some combination of all of them
I'm probably looking for something similar to this question:
How can let docker use my network router to assign dhcp ip to containers easily instead of pipework?
Any general direction will be awesome...thanks again!
Docker 1.10 has a built-in DNS server. If your containers are connected to the same user-defined network (create a network with docker network create my-network and run your containers with --net my-network), they can reference each other using the container name. (Docs)
Cool!
One caveat: if you are using Docker Compose, you know that it adds a prefix to your container names, i.e. <project name>_<service name>_<index>. This makes your container names somewhat more difficult to control, but it might be OK for your use case. You can override the Docker Compose naming by manually setting container_name in your compose file, but then you won't be able to scale with Compose.
Create a new bridge network other than docker0, run your containers inside it and you can reference the containers inside that network by their names.
Docker daemon runs an embedded DNS server to provide automatic service discovery for containers connected to user-defined networks. Name resolution requests from the containers are handled first by the embedded DNS server.
Try this:
docker network create <network name>
docker run --net <network name> --name test busybox nc -l -p 7000
docker run --net <network name> busybox ping test
First, we create a new network. Then, we run a busybox container named test listening on port 7000 (just to keep it running). Finally, we ping the test container by its name and it should work.
EDIT 2018-02-17: Docker may eventually remove the links key from docker-compose, therefore they suggest to use user-defined networks as stated here => https://docs.docker.com/compose/compose-file/#links
Assuming you want to reach the mysqlservice container from the web container of your docker-compose.yml file, such as:
web:
  build: .
  links:
    - mysqlservice
mysqlservice:
  image: mysql
You'll be pleased to know that Docker Compose already adds a mysqlservice domain name (in the web container's /etc/hosts) which points to the mysql container.
Instead of looking for the mysql container IP address, you can just use the mysqlservice domain name.
If you want to add custom domain names, it's also possible with the extra_hosts parameter.
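A minimal sketch of extra_hosts (the hostname and address are illustrative); each entry is appended to the container's /etc/hosts:

```yaml
web:
  build: .
  extra_hosts:
    - "somehost:162.242.195.82"
```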
You might want to try out dnsdock. It looks straightforward and easy(!) to set up. Have a look at http://blog.brunopaz.net/easy-discover-your-docker-containers-with-dnsdock/ and https://github.com/tonistiigi/dnsdock .
If you want an out-of-the-box solution, you might want to check out Kontena, for example. It comes with network overlay technology from Weave, and this technology is used to create virtual private LAN networks between services. Thanks to that, every service/container can be reached via service_name.kontena.local.
I changed the --net parameter to the --network parameter and it runs as expected:
docker network create <network name>
docker run --network <network name> --name <container name> <other container options>
docker run --network <network name> --name <container name> <other container options>
If you are using Docker Compose, and your docker-compose.yml file has a top-level services: block (you are not using the obsolete "version 1" file format), then Compose does all of the required setup automatically. The names underneath services: can be directly used as host names.
version: '3.8'
services:
  database:      # <-- "database" is a usable hostname
    image: postgres
  application:   # <-- "application" is a usable hostname
    build: .
    environment:
      PGHOST: database   # <-- use the "database" hostname
Networking in Compose in the Docker documentation describes this setup further.
These host names only work for connections between containers, in the same Compose file. If you manually declare networks: then the two containers must have some network in common, but the easiest setup is to just not declare networks: at all. These connections will only use the "standard" port (for PostgreSQL, for example, always connect to port 5432); a ports: declaration is not required and is ignored if present.