How to run or use multiple registry mirrors? - linux

I know how to run a registry mirror:
docker run -p 5000:5000 \
-e STANDALONE=false \
-e MIRROR_SOURCE=https://registry-1.docker.io \
-e MIRROR_SOURCE_INDEX=https://index.docker.io \
registry
and how to use it:
docker --registry-mirror=http://10.0.0.2:5000 -d
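Note: the docker -d flag form above is the old daemon syntax; on current Docker versions the mirror is configured in /etc/docker/daemon.json instead, and the array can hold more than one Docker Hub mirror (an http mirror may additionally need to be listed under insecure-registries). A minimal sketch, assuming mirrors at 10.0.0.2:5000 and 10.0.0.3:5000:
{
"registry-mirrors": ["http://10.0.0.2:5000", "http://10.0.0.3:5000"]
}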
But how can I use multiple registry mirrors?
This is what I need:
Docker hub mirror
Google container registry mirror for k8s
Private registry
So I have to run two registry mirrors and a private registry. I want one docker run for the 1st mirror, another for the 2nd mirror, and one more docker run for the registry holding my private images. The client will use all three of these registries.
I have no clue how to do this. I think this is a common use case, please help, thanks.

You can use an imagePullSecret to tell Kubernetes which registry to pull your images from. Please see:
http://releases.k8s.io/release-1.0/docs/user-guide/images.md#specifying-imagepullsecrets-on-a-pod
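For example, on current kubectl versions you can create such a secret directly and then reference it from the pod spec via imagePullSecrets; the registry address and credentials below are placeholders:
kubectl create secret docker-registry my-registry-key \
--docker-server=10.0.0.2:5000 \
--docker-username=myuser \
--docker-password=mypassword \
--docker-email=me@example.com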

Related

Update nodejs docker container env variables in container [duplicate]

If I have a docker container that I started a while back, what is the best way to set an environment variable in that running container? I set an environment variable initially when I ran the run command.
$ docker run --name my-wordpress -e VIRTUAL_HOST=domain.example --link my-mysql:mysql -d spencercooley/wordpress
but now that it has been running for a while I want to add another VIRTUAL_HOST to the environment variable. I do not want to delete the container and then just re-run it with the environment variable that I want, because then I would have to migrate the old volumes to the new container; it has theme files and uploads in it that I don't want to lose.
I would just like to change the value of VIRTUAL_HOST environment variable.
There are generally two options, because docker doesn't support this feature right now:
Create your own script, which will act as a runner for your command. For example:
#!/bin/bash
export VAR1=VAL1
export VAR2=VAL2
your_cmd
Or run your command the following way:
docker exec -i CONTAINER_ID /bin/bash -c "export VAR1=VAL1 && export VAR2=VAL2 && your_cmd"
Docker doesn't offer this feature.
There is an issue: "How to set an environment variable on an existing container? #8838"
Also from "Allow docker start to take environment variables #7561":
Right now Docker can't change the configuration of the container once it's created, and generally this is OK because it's trivial to create a new container.
For a somewhat narrow use case, docker issue 8838 mentions this sort-of-hack:
You just stop docker daemon and change container config in /var/lib/docker/containers/[container-id]/config.json (sic)
This solution updates the environment variables without the need to delete and re-run the container, migrate volumes, or remember the parameters it was originally run with.
However, this requires a restart of the docker daemon. And, until issue 2658 is addressed, this includes a restart of all containers.
To:
set up many env vars in one step, and
avoid exposing them in 'sh' history, as the '-e' option does (think passing credentials/API tokens!),
you can use the
--env-file key_value_file.txt
option:
docker run --env-file key_value_file.txt $INSTANCE_ID
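For reference, the file holds one KEY=value pair per line (no quoting, no export), and lines starting with # are treated as comments; the names below are only placeholders:
# key_value_file.txt
VAR1=VAL1
API_TOKEN=s3cr3t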
Here's how you can modify a running container to update its environment variables. This assumes you're running on Linux. I tested it with Docker 19.03.8.
Live Restore
First, ensure that your Docker daemon is set to leave containers running when it's shut down. Edit your /etc/docker/daemon.json, and add "live-restore": true as a top-level key.
sudo vim /etc/docker/daemon.json
My file looks like this:
{
"default-runtime": "nvidia",
"runtimes": {
"nvidia": {
"path": "nvidia-container-runtime",
"runtimeArgs": []
}
},
"live-restore": true
}
Taken from here.
Get the Container ID
Save the ID of the container you want to edit for easier access to the files.
export CONTAINER_ID=`docker inspect --format="{{.Id}}" <YOUR CONTAINER NAME>`
Edit Container Configuration
Edit the configuration file, go to the "Env" section, and add your key.
sudo vim /var/lib/docker/containers/$CONTAINER_ID/config.v2.json
My file looks like this:
...,"Env":["TEST=1",...
Stop and Start Docker
I found that restarting Docker didn't work; I had to stop and then start Docker with two separate commands.
sudo systemctl stop docker
sudo systemctl start docker
Because of live-restore, your containers should stay up.
Verify That It Worked
docker exec <YOUR CONTAINER NAME> bash -c 'echo $TEST'
Single quotes are important here.
You can also verify that the uptime of your container hasn't changed:
docker ps
You wrote that you do not want to migrate the old volumes. So I assume either the Dockerfile that you used to build the spencercooley/wordpress image has VOLUMEs defined, or you specified them on the command line with the -v switch.
You could simply start a new container which imports the volumes from the old one with the --volumes-from switch like:
$ docker run --name my-new-wordpress --volumes-from my-wordpress -e VIRTUAL_HOST=domain.com --link my-mysql:mysql -d spencercooley/wordpress
So you will have a fresh container, but you do not lose the old data. You do not even need to touch or migrate it.
A well-done container is always stateless. That means its process is supposed to add or modify only files on defined volumes. That can be verified with a simple docker diff <containerId> after the container has run for a while (see the sample output below).
In that case it is not dangerous to re-create the container with the same parameters (in your case slightly modified ones), assuming you create it from exactly the same image from which the old one was created and re-use the same volumes with the above-mentioned switch.
After the new container has started successfully and you have verified that everything runs correctly, you can delete the old wordpress container. The old volumes are then referenced by the new container and will not be deleted.
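For illustration, docker diff marks each path as A (added), C (changed) or D (deleted); for a stateless container you would expect to see only scratch files and volume data, something like this made-up output:
$ docker diff my-wordpress
C /tmp
A /tmp/sess_0nndnhq3gotsd7vvx0pcfgs265
C /run/apache2
A /run/apache2/apache2.pid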
If you are running the container as a service using docker swarm, you can do:
docker service update --env-add <your environment variable> <service_name>
You can also remove a variable using --env-rm.
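For example (service and variable names here are placeholders):
docker service update --env-add VIRTUAL_HOST=new.domain.example my_wordpress_service
docker service update --env-rm VIRTUAL_HOST my_wordpress_service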
To make sure it's added as you wanted, just run:
docker exec -it <container id> env
1. Enter your running container:
sudo docker exec -it <container_name> /bin/bash
2. Copy all the environment variables available to your session (except no_proxy) into /etc/environment, so they are available to any user session that needs to run the commands:
printenv | grep -v "no_proxy" >> /etc/environment
3. Stop and Start the container
sudo docker stop <container_name>
sudo docker start <container_name>
Firstly, you can set env vars inside the container the same way as you do on a Linux box.
Secondly, you can do it by modifying the config file of your docker container (/var/lib/docker/containers/xxxx/config.v2.json). Note you need to restart the docker service for it to take effect. This way you can change some other things like port mappings etc.
Here is how to update a docker container config permanently:
stop the container: docker stop <container name>
edit the container config: docker run -it -v /var/lib/docker:/var/lib/docker alpine vi $(docker inspect --format='/var/lib/docker/containers/{{.Id}}/config.v2.json' <container name>)
restart docker
I solved this problem with docker commit: after some modifications in the base container, we only need to tag the new image and start that one.
docs.docker.com/engine/reference/commandline/commit
docker commit [container-id] [tag]
docker commit b0e71de98cb9 stack-overflow:0.0.1
then you can pass environment vars or a file
docker run --env AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY --env AWS_SESSION_TOKEN --env-file env.local -p 8093:8093 stack-overflow:0.0.1
The quick working hack would be:
Get into the running container:
docker exec -it <container_name> bash
Set the env variable:
install vim if it is not installed in the container
apt-get install vim
vi ~/.profile and at the end of the file add export MAPPING_FILENAME=p_07302021
source ~/.profile
Check whether it has been set: echo $MAPPING_FILENAME (run this from inside the container.)
Now you can run whatever you were running outside of the container from inside the container.
Note: in case you're worried that you might lose your work if the session you logged into gets logged off, you can always use screen even before starting step 1. That way, if your session inside the running container gets logged off by chance, you can log back in.
Once you understand that docker runs an image constructed with a dockerfile, and that the only way to change it properly is to build another image, stop everything and run everything again, there is still an easier route.
So the easy way to "set an environment variable in a running docker container" is to read the dockerfile [1] (with docker inspect) and understand how docker starts [1].
In the example [1] we can see that docker starts with /usr/local/bin/docker-php-entrypoint, and we could edit it with vi and add one line with export myvar=myvalue, since /usr/local/bin/docker-php-entrypoint is a POSIX script.
If you can change the dockerfile, you can add a call to a script [2], for example /usr/local/bin/mystart.sh, and in that file set your environment var.
Of course, after changing the scripts you need to restart the container [3].
[1]
$ docker inspect 011aa33ba92b
[{
. . .
"ContainerConfig": {
"Cmd": [
"php-fpm"
],
"WorkingDir": "/app",
"Entrypoint": [
"docker-php-entrypoint"
],
. . .
}]
[2]
/usr/local/bin/mystart.sh
#!/bin/bash
export VAR1=VAL1
export VAR2=VAL2
your_cmd
[3]
docker restart dev-php (container name)
The hack of editing docker's internal configs and then restarting the docker daemon was unsuitable for my case.
There is a way to recreate a container with new environment settings and use it for some time.
1. Create a new image from the running container:
docker commit my-service
a1b2c3d4e5f6032165497
Docker created a new image and answered with its id. Note that the image doesn't include mounts and networks.
2. Stop and rename the original container:
docker stop my-service
docker rename my-service my-service-original
3. Create and start a new container with the modified environment:
docker run \
-it --rm \
--name my-service \
--network=required-network \
--mount type=bind,source=/host/path,target=/inside/path,readonly \
--env MY_NEW_ENV_VAR=blablabla \
--env OLD_ENV=zzz \
a1b2c3d4e5f6032165497
Here, I did the following:
created a new temporary container from the image built in step 1, which will show its output on the terminal, exit on Ctrl+C, and be deleted after that
configured its mounts and networks
added my custom environment configuration
4. After you have finished working with the temporary container, press Ctrl+C to stop and remove it, and then bring the old container back:
docker rename my-service-original my-service
docker start my-service
How to set an environment variable in a running docker container as a development environment
Basically you can do it like in normal Linux, adding export MY_VAR="value" to the ~/.bashrc file.
Instructions
Using VSCode, attach to your running container
Then with VSCode open the ~/.bashrc file
Export your variable by adding the code at the end of the file
export MY_VAR="value"
Finally execute .bashrc using the source command:
source ~/.bashrc
You can set an environment variable in a running Docker container with
docker exec -it -e "your environment Key"="your new value" <container> /bin/bash
Verify it using the command below:
printenv
This will update your key with the new value provided.
Note: This will revert back to the old value if docker gets restarted.
Use export VAR=Value in the container's shell.
Then type printenv in the terminal to validate it is set correctly.

Azure pipelines connect to private docker registry

I have a dedicated server with a private docker registry set up so I can push and pull images. I can connect to this server via docker login <my_domain>. I need to build an image using Azure Pipelines and push it to my registry, but when I try to make a docker connection, there is no way to access the private registry. Only Docker Hub, Azure registry and "other", which still require a Docker ID and password. Is there a way to connect Azure to my registry?
OK, I figured it out myself. It's not possible in this task, but you can change the task from "Docker build" to "Bash script", do docker login <your_domain>:<port> -u <your_username> -p <your_password>, and then run anything you want, here specifically docker build.
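The Bash script task body then amounts to something like the sketch below; the domain, port, image name and credentials are placeholders, and ideally the password comes from a secret pipeline variable rather than plain text:
docker login <your_domain>:<port> -u <your_username> -p <your_password>
docker build -t <your_domain>:<port>/myimage:latest .
docker push <your_domain>:<port>/myimage:latest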

GitLab generates Not Found URLs for user and repository

I installed GitLab on my own server (CentOS 7) with docker and portainer, and I created one user.
Everything seems okay, but when I click on the user URL I get a "Not Found" page.
I don't know why, but it generates http://0de09c2e3bc1/Parisa_hr, with the random-looking 0de09c2e3bc1 part.
On the other hand, when I want to clone my repo I have problems too. The URLs it generates are git@0de09c2e3bc1:groupname/projectname.git and http://0de09c2e3bc1/groupname/projectname.git
I got this error as I want to clone it :
ssh: Could not resolve hostname 0de09c2e3bc1: Temporary failure in
name resolution fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository
exists.
I don't know what makes it create 0de09c2e3bc1; I think I should have seen my IP address instead.
I noticed that 0de09c2e3bc1 is the container name shown in portainer, because when I checked its console I saw it:
root@0de09c2e3bc1:/#
now, how can I fix it?
I also changed external_url to the https://IP:port of my server, but it didn't work.
Double-check your external_url, which is part of the generated URL on each query.
This gist about installing portainer and gitlab shows a docker run like:
docker run --detach \
--name gitlab \
--publish 8001:80 \
--publish 44301:443 \
--publish 2201:22 \
--hostname gitlab.c2a-system.dev \
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://gitlab.c2a-system.dev/'; gitlab_rails['gitlab_shell_ssh_port'] = 2201;" \
--volume /srv/gitlab/config:/etc/gitlab \
--volume /srv/gitlab/logs:/var/log/gitlab \
--volume /srv/gitlab/data:/var/opt/gitlab \
--restart unless-stopped \
gitlab/gitlab-ce:latest
See Pre-configure Docker container, using the environment variable GITLAB_OMNIBUS_CONFIG.
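Alternatively, since /srv/gitlab/config is mounted as /etc/gitlab, you can fix an already-running container by setting external_url in the omnibus config on the host and reconfiguring; the hostname here is a placeholder:
sudo vi /srv/gitlab/config/gitlab.rb    # set: external_url 'http://gitlab.example.com/'
docker exec gitlab gitlab-ctl reconfigure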
Then, for accessing a private repository like http://gitlab.c2a-system.dev/groupname/projectname.git, you will need to define a credential helper and store your PAT:
git config --global credential.helper cache
printf "host=gitlab.c2a-system.dev\nprotocol=http\nusername=YourGitLabAccount\npassword=YourGitLabToken"|\
git credential-cache store

Are Azure container instances made for running a simple command with simple output?

I am trying to make use of Azure Container Instances and I need some explanation about the service itself.
I want to use ACI to launch a docker container that runs a command, prints the output of the command, and stops.
Is ACI the right service for that kind of thing?
The Dockerfile looks like this:
FROM alpine
RUN apk add ffmpeg
CMD ffprobe -show_streams -show_format -loglevel warning -v quiet -print_format json /input.video
The docker run command to make it work looks like this:
docker run --name ffprobe-docker -i -v /path/test.ts:/input.video --rm 72e84b2825af
The issue?
I am not able to launch my script on Azure the way I can make it work on my machine.
What have I done?
I created a private registry where I uploaded my image.
I ran the az container create command, which created the resource.
Now I don't know what to do next in order to make it work as expected, because the container is terminated and az container exec --exec-command shows nothing on the terminal once the command has ended.
For ACI, you can create an instance from your own Docker image in ACR or other registries, and you can also run commands in it. But pay attention: you cannot run the docker command inside it, because you cannot nest containers. It cannot be a Docker server; it can only be a container.
You can use the CLI command az container exec --exec-command, and the command passed as the --exec-command parameter should be a bash command that can run in your Docker image.
I think the biggest advantage of ACI is that it is the fastest and simplest option, without having to manage any virtual machines and without having to adopt a higher-level service.
Hope this helps you. If you have any more questions, please send me a message.
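As an aside, the usual pattern for a run-once command like this ffprobe job is to create the container group with a Never restart policy and read the output from the logs afterwards. A sketch, with the resource group, container name and registry as placeholders:
az container create \
--resource-group my-rg \
--name ffprobe-job \
--image myregistry.azurecr.io/ffprobe-docker:latest \
--registry-username <user> \
--registry-password <password> \
--restart-policy Never
az container logs --resource-group my-rg --name ffprobe-job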

How to make an Azure VM & configure containers to use Azure File Storage via docker CLI / quickstart terminal?

I'm using the latest Docker Toolbox and I would like to launch docker containers on Azure that connect to an Azure File Store. What should one run to achieve this from the docker quick start terminal?
The easiest way to do this is to create an Ubuntu VM with Docker preinstalled on Azure:
https://azure.microsoft.com/en-us/blog/introducing-docker-in-microsoft-azure-marketplace/
Then follow the Azure File System Docker Volume Driver install instructions here:
https://github.com/Azure/azurefile-dockervolumedriver/blob/master/contrib/init/systemd/README.md
Once you can successfully create volumes on that VM, you can make them shared volumes or Data Volume Containers to share them between your Docker containers:
https://docs.docker.com/engine/tutorials/dockervolumes/
For more generic instructions, please use @rbj325's answer.
Create docker-machine
First things first, we need an Azure VM which we can use. We can use the docker-machine CLI to create this. This set of instructions will create it with Ubuntu 16.04 LTS to simplify(ish) the installation steps.
docker-machine create --driver azure --azure-subscription-id XXXX \
--azure-location westeurope --azure-resource-group XXX \
--azure-image canonical:UbuntuServer:16.04.0-LTS:latest XXXXXX
This sets up everything we need on Azure.
Install azure file storage docker plugin
(Based on my knowledge of SSH) We then need to SSH into the docker-machine to be able to install the plugin.
docker-machine ssh XXXXXX
Once in, the following steps can be taken to install the plugin:
sudo -s
wget -qO /usr/bin/azurefile-dockervolumedriver https://github.com/Azure/azurefile-dockervolumedriver/releases/download/[VERSION]/azurefile-dockervolumedriver
chmod +x /usr/bin/azurefile-dockervolumedriver
wget -qO /etc/systemd/system/azurefile-dockervolumedriver.service https://raw.githubusercontent.com/Azure/azurefile-dockervolumedriver/master/contrib/init/systemd/azurefile-dockervolumedriver.service
cp [myconfigfile] /etc/default/
systemctl daemon-reload
systemctl enable azurefile-dockervolumedriver
systemctl start azurefile-dockervolumedriver
systemctl status azurefile-dockervolumedriver
Note that there are two things required here:
the latest version number for the driver from github
a file containing some azure storage credentials
For my installation process, I made a script that I could use and put my config file in a secure store that could be retrieved at install time. Please note it gets driver version 0.2.1.
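For reference, [myconfigfile] ends up as /etc/default/azurefile-dockervolumedriver, an environment file for the systemd unit. If I read the driver's README correctly it expects the storage credentials as the two variables below; treat the names as an assumption and check the README linked above:
# values are placeholders
AF_ACCOUNT_NAME=mystorageaccount
AF_ACCOUNT_KEY=mystorageaccountkey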
Once this has completed, exit the ssh connection.
Create volumes
You should now be able to create docker volumes:
docker volume create --name filestore -d azurefile -o share=filestore
Create docker containers
You can now use this volume with docker containers
docker run -it --name=example -v filestore:/filestore ubuntu /bin/bash
