I am using docker-compose along with the docker-compose.yml file as the final step of my CI/CD pipeline to create/re-create containers on a server.
Code example:
sudo docker-compose up --force-recreate --detach
sudo docker image prune -f --all --filter="label=is_image_disposable=true"
My goal is to deploy and keep several containers from the same repo, but with different tags, on a single server.
The problem is that docker-compose seems to remove the existing containers of my repo before it creates new ones, even though the existing container has the tag :dev and the new one has the tag :v3.
As an example: before the docker-compose command was executed I had a running container named
my_app_dev, of the repo hub/my.app:dev,
and after the docker-compose command ran I have only
my_app_v3, of the repo hub/my.app:v3.
What I want to see in the end is both containers up and running:
my_app_dev container of the repo hub/my.app:dev
my_app_v3 container of the repo hub/my.app:v3
Can someone give me an idea of how I can do that?
That is expected behaviour. Compose works based on the concept of projects.
As long as the two compose operations are using the same project name, the configurations will override each other.
You can do what you want to some degree by using a unique project name for each compose up operation.
docker compose --project-name dev up
docker compose --project-name v3 up
This leads to the containers being prefixed with that specific project name, e.g. dev-app-1 and v3-app-1.
If they all need to be on the same network, you could create a network upfront and reference it as an external network under the default network key.
networks:
default:
name: shared-network
external: true
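Putting it together with the commands from the question, one possible deploy sequence looks like this (the project names, the IMAGE_TAG variable, and the network name are illustrative assumptions, and the compose file is assumed to reference the tag via ${IMAGE_TAG} substitution):
# create the shared network once, before the first deploy
docker network create shared-network
# bring each tag up under its own project name
sudo IMAGE_TAG=dev docker compose --project-name my_app_dev up --force-recreate --detach
sudo IMAGE_TAG=v3 docker compose --project-name my_app_v3 up --force-recreate --detach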
My middleware team has a pipeline on Azure DevOps that regularly pulls a bunch of images from Docker Hub and republishes them to our private repository.
I would like to alter the pipeline to not only copy/paste the images, but also install our CA root certificates.
The release pipeline consists of 3 steps:
a bunch of docker pull RemoteRepository.com/image:latest
docker tag RemoteRepository.com/image:latest InternalACR.io/image:latest
docker push InternalACR.io/image:latest
Because there's no Dockerfile involved, I was wondering if it's possible to keep it that way.
What is the recommended approach here?
The simplest and recommended method is to use a Dockerfile. It is a simple task to take an existing image and modify it to create a new image.
Example:
FROM mcr.microsoft.com/dotnet/#{baseImage}# AS base
COPY RootCA-1.crt /usr/local/share/ca-certificates/
COPY RootCA-SubCA-1.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates
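With a Dockerfile like that, the middle tag step of the pipeline effectively becomes a build step; a hedged sketch, reusing the image names from the question:
# the FROM line takes the place of the plain docker pull/tag,
# and the build bakes the certificates into the image before the push
docker build -t InternalACR.io/image:latest .
docker push InternalACR.io/image:latest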
You can also run an existing container and modify it while it is running. You can then commit the changes to a new image.
Example commands:
docker exec ...
docker cp ...
docker commit ...
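A rough sketch of that exec/cp/commit flow, with a made-up container name and reusing the certificate and image names from above:
# copy the root certificates into the running container and register them
docker cp RootCA-1.crt mycontainer:/usr/local/share/ca-certificates/
docker cp RootCA-SubCA-1.crt mycontainer:/usr/local/share/ca-certificates/
docker exec mycontainer update-ca-certificates
# snapshot the modified container as a new image and push it
docker commit mycontainer InternalACR.io/image:latest
docker push InternalACR.io/image:latest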
Refer to this answer for an additional technique with Azure Pipeline:
https://stackoverflow.com/a/70088802/8016720
Well... I actually didn't do it without a Dockerfile, but I created a template file and looped over it in a pipeline, like this: Azure Pipeline to build and push docker images in batch?
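For reference, a minimal sketch of that loop, assuming an Azure Pipelines template that takes the image list as a parameter and a Dockerfile whose FROM line is driven by a BASE_IMAGE build argument (all names here are placeholders):
parameters:
- name: images
  type: object
  default: [image1, image2]

steps:
- ${{ each image in parameters.images }}:
  - script: |
      docker pull RemoteRepository.com/${{ image }}:latest
      docker build -t InternalACR.io/${{ image }}:latest --build-arg BASE_IMAGE=RemoteRepository.com/${{ image }}:latest .
      docker push InternalACR.io/${{ image }}:latest
    displayName: Build and push ${{ image }}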
This is for a CI server setup. The CI server doesn't have tools like node installed, only Docker. So I have to run my tests inside a container.
This container will, in turn, create a second container to run the integration tests against.
The first container has /var/run/docker.sock mounted so that it can create a second container. Both containers live side by side.
My build steps are the following:
Clone source code
Build docker image and tag it my-app
Run unit tests: docker run ..... my-app yarn test
Run integration tests, which fire up a second container: docker run -v /var/run/docker.sock:/var/run/docker.sock ..... my-app yarn test:integration
The problem is in the integration tests:
In summary, the first container calls yarn test:integration, which fires up the 2nd container running the app on port 3001, runs the tests against it, and finally stops the second container.
The problem is that my integration tests in the first container attempt to hit the 2nd container through localhost:3001, but localhost is not the right host for the second container.
How can I access the second container from the first one, considering they are side by side (and not one within the other)?
localhost within the container doesn't point to the host machine, it points to the container itself. If you want to reach another container you need to use that container's actual IP which can be discovered by docker inspect <CONTAINER ID> and the internal port (i.e. not the one mapped to your host).
Alternatively, you can create a user-defined network and connect your containers to it. Then you will be able to use container names as hostnames, e.g. my-app:3001. Note that container name is the one specified by --name parameter of docker run. Also you need to use the container's internal port and not the one published with -p parameter.
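A minimal sketch of the user-defined network approach, with made-up network and container names; however the second container is started (from the host or through the mounted docker.sock), the important part is that both containers end up attached to the same network:
# create a user-defined bridge network once
docker network create ci-net
# start the app container on that network with a known name
docker run -d --name app-under-test --network ci-net my-app
# run the integration tests on the same network; inside this container
# the app is reachable at http://app-under-test:3001 instead of localhost:3001
docker run --rm --network ci-net \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-app yarn test:integration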
I had to perform these steps to deploy my Nodejs/Angular site to AWS via DockerCloud
Write Dockerfile
Build Docker images based on my Dockerfiles
Push those images to Docker Hub
Create Node Cluster on DockerCloud Account
Write Docker stack file on DockerCloud
Run the stack on DockerCloud
See the instance running in AWS, where I can see my site
Now suppose we require a small change that requires a pull from my project repo.
BUT we have already deployed our containers, as you may know.
What is the best way to pull those changes into the Docker containers that are already deployed?
I hope we don't have to:
Rebuild our Docker Images
Re-push those images to Docker Hub
Re-create our Node Cluster on DockerCloud
Re-write our docker stack file on DockerCloud
Re-run the stack on DockerCloud
I was thinking:
SSH into a VM that has Docker running
git pull
npm start
Am I on the right track?
You can use docker service update --image; see https://docs.docker.com/engine/reference/commandline/service_update/#options
I have no experience with AWS, but I think you can build and update automatically.
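For example, assuming the app runs as a Swarm-style service named my_site built from a hypothetical image myuser/my-site, rolling out a change after pushing a new image would look like:
# point the running service at the newly pushed image;
# the containers are re-created with the new version
docker service update --image myuser/my-site:v2 my_site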
If you want to treat a Docker container as a VM, you totally can, however, I would strongly caution against this. Anything in a container is ephemeral...if you make changes to files in it and the container goes down, it will not come back up with the changes.
That said, if you have access to the server you can exec into the container and execute whatever commands you want. Usually helpful for dev, but applicable to any container.
This command will start an interactive bash session inside your desired container. See the docs for more info.
docker exec -it <container_name> bash
Best practice would probably be to update the docker image and redeploy it.
I'm pretty new to Docker containers. I understand there are ADD and COPY instructions so a container can see files. How does one give the container access to a given directory where I can put my datasets?
Let's say I have a /home/username/dataset directory; how do I make it appear at /dataset or something in the Docker container so I can reference it?
Is there a way for the container to reference a directory on the host system so you don't have to duplicate files? Some of these datasets will be quite large, and while I can delete the original after copying it over, that's just annoying if I want to do something outside the Docker container with the files.
You cannot do that at build time. If you want the files available at build time, you need to copy them into the build context.
Otherwise, when you run the container, you need to do a volume bind mount:
docker run -it -v /home/username/dataset:/dataset <image>
Directories on the host can be mapped to directories inside the container.
If you are using docker run to start your container, you can include the -v flag to mount volumes.
docker run --rm -v "/home/username/dataset:/dataset" <image_name>
If you are using a compose file, you may include volumes using:
volumes:
- /home/<username>/dataset:/dataset
For a detailed description of how to use volumes, you may visit Use volumes in the Docker documentation.
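For context, a minimal compose file using that mapping could look like this (the service and image names are placeholders):
services:
  training:
    image: my-training-image
    volumes:
      - /home/<username>/dataset:/dataset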
Suppose I have the following configuration file on my Docker host, and I want multiple Docker containers to be able to access this file.
/opt/shared/config_file.yml
In a typical non-Docker environment I could use symbolic links, such that:
/opt/app1/config_file.yml -> /opt/shared/config_file.yml
/opt/app2/config_file.yml -> /opt/shared/config_file.yml
Now suppose app1 and app2 are dockerized. I want to be able to update config_file.yml in one place and have all consumers (docker containers) pick up this change without requiring the container to be rebuilt.
I understand that symlinks cannot be used to access files on the host machine that are outside of the docker container.
The first two options that come to mind are:
Set up an NFS share from docker host to docker containers
Put the config file in a shared Docker volume, and use docker-compose to connect app1 and app2 to that shared volume
I am trying to identify other options and then ultimately decide upon the best course of action.
What about host-mounted volumes? If each application only reads the configuration and the requirement is that it lives in a different location within each container, you could do something like:
docker run --name app1 --volume /opt/shared/config_file.yml:/opt/app1/config_file.yml:ro app1image
docker run --name app2 --volume /opt/shared/config_file.yml:/opt/app2/config_file.yml:ro app2image
The file on the host can be mounted at a separate location per container. In Docker 1.9 you can actually have arbitrary volumes from specific plugins to hold the data (such as Flocker). However, both of these solutions are still per host and the data isn't available on multiple hosts at the same time.
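If app1 and app2 are run with docker-compose instead of plain docker run, the same bind mounts could be expressed like this (a sketch reusing the image names from the commands above):
services:
  app1:
    image: app1image
    volumes:
      - /opt/shared/config_file.yml:/opt/app1/config_file.yml:ro
  app2:
    image: app2image
    volumes:
      - /opt/shared/config_file.yml:/opt/app2/config_file.yml:ro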