My middleware team has a pipeline on Azure DevOps that regularly pulls a bunch of images from Docker Hub and republishes them to our private repository.
I would like to alter the pipeline so that it not only copies the images across, but also installs our CA root certificates into them.
The release pipeline consists of 3 steps:
a bunch of docker pull RemoteRepository.com/image:latest
docker tag RemoteRepository.com/image:latest InternalACR.io/image:latest
docker push InternalACR.io/image:latest
Because there's no dockerfile involved, I was wondering if it's possible to keep it that way.
What is the recommended approach here?
The simplest and recommended method is to use a Dockerfile. It is a simple task to take an existing image and modify it to create a new one.
Example:
FROM mcr.microsoft.com/dotnet/#{baseImage}# AS base
COPY RootCA-1.crt /usr/local/share/ca-certificates/
COPY RootCA-SubCA-1.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates
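If you go that route, the tag/push steps in your release become a build/push. A minimal sketch, assuming the #{baseImage}# token is substituted earlier in the release and the two .crt files sit next to the Dockerfile:
# Rebuild the upstream image with the CA certificates baked in, then push it to the internal registry.
docker build -t InternalACR.io/image:latest .
docker push InternalACR.io/image:latest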
You can also run an existing container and modify it while it is running, then commit the changes to a new image.
Example commands:
docker exec ...
docker cp ...
docker commit ...
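A rough sketch of that flow, using the same certificate names as above (the container name certfix is just a placeholder, and this assumes the image's default entrypoint keeps the container running long enough to modify it):
# Start the pulled image, copy the certificates in, refresh the CA bundle,
# then snapshot the result as a new image and push it.
docker run -d --name certfix RemoteRepository.com/image:latest
docker cp RootCA-1.crt certfix:/usr/local/share/ca-certificates/
docker cp RootCA-SubCA-1.crt certfix:/usr/local/share/ca-certificates/
docker exec certfix update-ca-certificates
docker commit certfix InternalACR.io/image:latest
docker rm -f certfix
docker push InternalACR.io/image:latest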
Refer to this answer for an additional technique that works with Azure Pipelines:
https://stackoverflow.com/a/70088802/8016720
Well... I didn't end up doing it without a Dockerfile, but I created a template file and looped over it in the pipeline. Like this: Azure Pipeline to build and push docker images in batch?
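A stripped-down sketch of that loop, with hypothetical image names and a hypothetical build-image.yml template that performs the docker build and push:
# azure-pipelines.yml (sketch): loop a build/push template over each image
parameters:
  - name: images
    type: object
    default: ['image-a', 'image-b']

steps:
  - ${{ each image in parameters.images }}:
      - template: build-image.yml
        parameters:
          baseImage: ${{ image }}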
I am using docker-compose along with a docker-compose.yml file as the final step of my CI/CD pipeline to create/re-create containers on a server.
Code example:
sudo docker-compose up --force-recreate --detach
sudo docker image prune -f --all --filter="label=is_image_disposable=true"
My goal is to deploy and keep several containers from the same repo, but with different tags, on a single server.
The problem is that docker-compose seems to remove the existing containers of my repo before it creates new ones, even though the existing container has the tag :dev and the new one has the tag :v3.
As an example: before the docker-compose command was executed I had a running
my_app_dev container of the repo hub/my.app:dev,
and after the docker-compose command ran I have a
my_app_v3 container of the repo hub/my.app:v3.
What I want to see in the end is both containers up and running:
my_app_dev container of the repo hub/my.app:dev
my_app_v3 container of the repo hub/my.app:v3
Can someone give me an idea of how I can do that?
That is expected behaviour. Compose works based on the concept of projects.
As long as the two compose operations are using the same project name, the configurations will override each other.
You can achieve what you want, to some degree, by using a unique project name for each compose up operation.
docker compose --project-name dev up
docker compose --project-name v3 up
This leads to the containers being prefixed with that specific project name, e.g. dev-app-1 and v3-app-1.
If they need to be all on the same network, you could create a network upfront and reference it as an external network under the default network key.
networks:
  default:
    name: shared-network
    external: true
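Putting it together, a sketch of the two deployments could look like this, assuming the compose file references the image as hub/my.app:${TAG} so the tag can be injected per project:
# Create the shared network once, then bring up one project per tag.
docker network create shared-network
TAG=dev docker compose --project-name dev up --detach
TAG=v3 docker compose --project-name v3 up --detach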
I've pushed an image (which is a version of R plus some libraries) to my private Azure Container Registry. How can I build a new image starting from this image?
In other words, I want to do "FROM registry/env:version" but I'm pretty sure that I need to use other settings to access my repository.
Thanks for the help!
You should log your Docker daemon in to your Azure Container Registry using the following command:
docker login myregistry.azurecr.io --username $SP_APP_ID --password $SP_PASSWD
Then, using the fully qualified path for your image in the Dockerfile should work automatically, as long as the identity provided in the first step (login) has the rights to pull this image.
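For example, something along these lines (registry, image, and tag names are placeholders):
# Log the local Docker daemon in to the private registry, then build an image
# whose Dockerfile starts with: FROM myregistry.azurecr.io/env:version
docker login myregistry.azurecr.io --username $SP_APP_ID --password $SP_PASSWD
docker build -t myregistry.azurecr.io/my-new-image:1.0 .
docker push myregistry.azurecr.io/my-new-image:1.0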
Sorry, I'm trying to figure out your answer. I'm trying to pull a docker image from my Azure Container Registry, build on top of it, and push it back to a new repository. I'm starting my Dockerfile with
FROM xxxxxx.azurecr.io/php-7.4:latest AS compiled
How do I configure the docker daemon for this in the Azure Pipelines world?
The application was using the docker CLI to build an image and then push it to Azure Container Registry. This used to work fine on Kubernetes using a python module and docker.sock, but since the cluster was upgraded the docker daemon is gone; I'm guessing the Kubernetes backend no longer uses docker, or no longer has it installed. Also, since docker support is going away in Kubernetes (I think it was 1.24), I want to get away from relying on docker for the build.
When it was working, the application was a python application running in a docker container. It would take the Dockerfile, build it, and push the image to Azure Container Registry. The files that get copied into the image via the Dockerfile all live in the same directory as the Dockerfile.
Anyone know of different methods to achieve this?
I've been looking at Azure ACR Tasks, but I'm not really sure how all the files get copied over to a task, and I have not been able to find any examples.
I can confirm that running an Azure ACR Task (Multi-Task or Quick Task) will copy the files over when the command is executed. We're using Azure ACR Quick Tasks to achieve something similar. If you're just trying to do the equivalent of docker build and docker push, Quick Tasks should work fine for you too.
For simplicity I'm going to use a Quick Task as the example, because that's what I've used most. Try the following steps from your local machine to see how it works; the same steps should also work from any other environment, provided the machine is authenticated properly.
First make sure you are in the Dockerfile directory and then:
Authenticate to the Azure CLI using az login
Authenticate to your ACR using az acr login --name myacr.
Replace the values accordingly and run az acr build --registry myacr -g myacr_rg --image myacr.azurecr.io/myimage:v1.0 .
Your terminal should already show all of the steps that the Dockerfile is executing. Alternatively, you can head over to your ACR and look under Services > Tasks > Runs. You should see every line of the Docker build task appear there.
Note: If you're running this task in an automated fashion and also require access to internal/private resources during the image build, you should consider creating a Dedicated Agent Pool and deploying it in your VNET/SNET, instead of using the shared/public Agent Pools.
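A rough sketch of that setup (registry, pool, and subnet names are placeholders; double-check the exact flags with az acr agentpool create --help and az acr build --help, since the dedicated agent pool feature has been in preview):
# Create a dedicated agent pool inside your own subnet, then target it for the build.
az acr agentpool create --registry myacr --name myagentpool --tier S1 --subnet-id <subnet-resource-id>
az acr build --registry myacr --agent-pool myagentpool --image myacr.azurecr.io/myimage:v1.0 .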
In my case, I'm using terraform to run the az acr build command and you can see the Dockerfile executes the COPY commands without any issues.
I always push a local docker image to Azure Container Registry using the following commands for testing:
docker build . -t [registry location].azurecr.io/[project]:SNAPSHOT-1
docker tag [registry location].azurecr.io/[project]:SNAPSHOT-1 [registry location].azurecr.io/[project]:SNAPSHOT-1
docker push [registry location].azurecr.io/[project]:SNAPSHOT-1
When I do this, it creates the docker image on my local machine, and also in the Azure Container Registry under [registry location] -> [project] -> SNAPSHOT-1.
I read the documentation on the docker website, but I just need some clarification. The docker docs say:
When the URL parameter points to the location of a Git repository, the repository acts as the build context. The system recursively fetches the repository and its submodules. The commit history is not preserved. A repository is first pulled into a temporary directory on your local host. After that succeeds, the directory is sent to the Docker daemon as the context. Local copy gives you the ability to access private repositories using local user credentials, VPN’s, and so forth.
So it builds an image in the Docker daemon(?) on my local machine, using the Git repository as the build context(?); that is how I understand it.
docker tag is the one I am not sure about. It says:
docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
Since I have the same SOURCE_IMAGE and TARGET_IMAGE, what does it mean? Is SOURCE_IMAGE the image from the local(?) Docker daemon that I just built? Why is this necessary?
Lastly, I have docker push ....... Does it create the image in Azure Container Registry (ACR) because I defined it in the ......, for example, dev.azurecr.io/[project]:tag? I thought it is the docker image I built. Does it just know from the docker image tag(?) to push the image to ACR -> dev under that tag name?
I am sorry if these are stupid questions... I have worked with docker and ACR for some time, but I only know that it works, not what each step actually means. Can someone please explain each command to me in a simple way? I really appreciate it!
Well, the first thing you need to know is the difference between the image name and the image tag. Usually, we create an image without the container registry URL, something like image_name:tag. That means you can only manage the image on your local machine. When you want to pull the image on another machine, the image name has to include the registry, like registry_url/image_name:tag. So, if you want to push the image to a remote registry, you need to tag the image as registry_url/image_name:tag.
And when you build a docker image, its layers are created and stored on your local machine; when you push the image to a remote registry, those layers are copied into the remote registry.
Finally, to be honest, you don't need the second command. The first and the third are enough: the first creates the docker image on your local machine, already under its fully qualified name, and the third pushes it to the ACR.
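In other words, a minimal version of your flow with placeholder names is just:
# Build the image directly under its fully qualified ACR name, then push it.
docker build . -t myregistry.azurecr.io/myproject:SNAPSHOT-1
docker push myregistry.azurecr.io/myproject:SNAPSHOT-1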
I had to perform these steps to deploy my Node.js/Angular site to AWS via DockerCloud:
Write Dockerfile
Build Docker images based on my Dockerfiles
Push those images to Docker Hub
Create Node Cluster on DockerCloud Account
Write Docker stack file on DockerCloud
Run the stack on DockerCloud
See the instance running in AWS, and see my site
Now suppose we need a small change that requires a pull from my project repo.
BUT we have already deployed our containers, as you may know.
What is the best way to pull those changes into the Docker containers that are already deployed?
I hope we don't have to:
Rebuild our Docker Images
Re-push those images to Docker Hub
Re-create our Node Cluster on DockerCloud
Re-write our docker stack file on DockerCloud
Re-run the stack on DockerCloud
I was thinking
SSH into a VM that has the Docker running
git pull
npm start
Am I on the right track?
You can use docker service update --image; see https://docs.docker.com/engine/reference/commandline/service_update/#options
I have no experience with AWS, but I think you can automate the build and update.
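For example, something like this (service and image names are placeholders):
# Point the running service at the newly pushed image; the swarm recreates its containers.
docker service update --image myhubuser/mysite:v2 my-stack_web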
If you want to treat a Docker container as a VM, you totally can; however, I would strongly caution against this. Anything in a container is ephemeral... if you make changes to files in it and the container goes down, it will not come back up with those changes.
That said, if you have access to the server you can exec into the container and execute whatever commands you want. Usually helpful for dev, but applicable to any container.
The following command will start an interactive bash session inside your desired container. See the docs for more info.
docker exec -it <container_name> bash
Best practice would probably be to update the docker image and redeploy it.