`docker service update` from remote server with cert - security

I want to update my docker stack automatically from my CI server, but I can't figure out how to configure credentials so that I can drive docker from an external host.
I have enabled experimental mode on my server and it works fine locally with docker-machine.
My deploy script looks like this:
echo $DOCKER_CERT > cert.pem # which other file ?
OPTS=" --tlsverify --host $DOCKER_DEPLOY_HOST --tlscert cert.pem" # which other args ???
docker $OPTS pull $REPO_IMAGE
docker $OPTS service update multiverse-prod_api
Is there a way (or is one planned in a future version) to achieve this with just an SSH key?
Thanks!

You need to configure the Docker server with a self-signed cert, and then configure the client with a client cert signed by the same CA. The steps to create the certificates and configure the server and client are described by Docker in their documentation, and sketched below.
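Roughly, following Docker's "Protect the Docker daemon socket" guide (the hostname, validity periods, and file names here are illustrative, not from the question):

# Create a CA
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
# Server key and cert signed by the CA
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=deploy.example.com" -sha256 -new -key server-key.pem -out server.csr
echo subjectAltName = DNS:deploy.example.com,IP:127.0.0.1 > extfile.cnf
echo extendedKeyUsage = serverAuth >> extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf
# Client key and cert signed by the same CA
openssl genrsa -out key.pem 4096
openssl req -subj '/CN=client' -new -key key.pem -out client.csr
echo extendedKeyUsage = clientAuth > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile-client.cnf

The daemon must be started with --tlsverify and the server cert, listening on tcp port 2376. The CI script then needs all three client-side files, not just the cert:

docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=$DOCKER_DEPLOY_HOST service update multiverse-prod_api

As for the SSH question: Docker 18.09 and later can also talk to a remote daemon over SSH directly (docker -H ssh://user@host), which avoids the TLS setup entirely.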

Related

Fetch secrets and certificates from AzureKeyVault inside Docker container

I have a .NET Framework console application. Inside this application, I'm fetching secrets and certificates from Key Vault using a tenant ID, client ID, and client secret.
The application fetches secrets and certificates properly.
Now I have containerized the application using Docker. After running the image I'm unable to fetch secrets and certificates, and I'm getting the error below:
" Retry failed after 4 tries. Retry settings can be adjusted in ClientOptions.Retry. (No such host is known.) (No such host is known.) (No such
host is known.) (No such host is known.)"
To resolve the error, please try the following workarounds:
Check whether your container is set up behind an nginx reverse proxy. If so, try removing the upstream section from the nginx reverse proxy and set proxy_pass to use the docker-compose service's hostname.
After any change, make sure to restart WSL and Docker.
Check whether DNS is resolving the host names successfully; if not, try adding the below to your docker-compose.yml file (a fuller sketch follows this list).
dns:
- 8.8.8.8
If the above doesn't work, stop WSL from auto-generating /etc/resolv.conf by adding the following to /etc/wsl.conf:
[network]
generateResolvConf = false
then add the DNS server to /etc/resolv.conf:
nameserver 8.8.8.8
Try restarting WSL by running the below command as an Admin:
Restart-NetAdapter -Name "vEthernet (WSL)"
Try installing a Docker Desktop update as a workaround.
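For context, here is a minimal docker-compose.yml showing where the dns key sits (the service name and image are illustrative, not from the question):

version: "3.8"
services:
  app:
    image: myapp:latest
    dns:
      - 8.8.8.8
      - 8.8.4.4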
For more details, please refer to the links below:
Getting "Name or service not known (login.microsoftonline.com:443)" regularly, but occasionally it succeeds? · Discussion #3102 · dotnet/dotnet-docker · GitHub
ssl - How to fetch Certificate from Azure Key vault to be used in docker image - Stack Overflow

Docker no basic auth credentials after successful login

I've moved to Linux (Pop!_OS 21.04) on my desktop and I'm having some issues with Docker.
When I try to run docker-compose to pull an image from a private registry, I get:
ERROR: Head "https://my.registry/my-image/manifests/latest": no basic auth credentials
Of course, before running this command I had run:
docker login https://my.registry.com -u user -p pass
which returns
WARNING! Your password will be stored unencrypted in /home/user/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
And my config.json in my .docker folder shows my credentials:
{
  "auths": {
    "my.registry.com": {
      "auth": "XXXXX"
    }
  }
}
To install Docker, I followed the instructions on their page https://docs.docker.com/engine/install/ubuntu/
And my version is:
Docker version 20.10.8, build 3967b7d
The same command run on a macOS system with Docker version 20.10.8 works without any issues, so my password and all the URLs are definitely correct.
Thanks for any help!
The login command is
docker login my.registry.com
without the https:// in front of the host. If you still have auth issues after doing that:
if the registry uses an unknown TLS certificate, load that certificate on the host and restart the Docker engine
if the registry is http instead of https, configure it as an insecure registry in /etc/docker/daemon.json (see the sketch after this list)
if the login is successful but the pull fails, verify your user has access to the specific repo on the registry
double-check your password was entered correctly
check for a network proxy intercepting the request (the http_proxy variable)
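For the insecure-registry case, a minimal /etc/docker/daemon.json would look like this (the hostname is illustrative; restart the engine afterwards, e.g. with sudo systemctl restart docker):

{
  "insecure-registries": ["my.registry.com"]
}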
I reinstalled the whole thing following the Docker page; that didn't work, so I uninstalled it and installed the snap version, which didn't work either. Finally I removed that and went with a simple apt-get install docker.io, and it works like a charm! I don't know why it didn't work previously, but I won't lose more sleep over it.
On Ubuntu 20.x with the snap install, I observed that the credentials are stored in /home/<username>/snap/docker/1125/.docker/config.json.
If older credentials are stored in $HOME/.docker/config.json, they are not used by docker pull. Verify that docker is indeed picking up the credentials from the right config.json location.

Is it possible to configure gitlab builtin container registry with self-signed certs?

I'm using the docker gitlab/gitlab-ce:12.7.2-ce.0 image to run GitLab, and I'm trying to use the built-in container registry feature. The documentation says: "If you are using the Omnibus GitLab built in Let's Encrypt integration, as of GitLab 12.5, the Container Registry will be automatically enabled on port 5050 of the default domain." Is it possible to configure the GitLab built-in container registry with self-signed certs?
After a few tests, the configuration described at https://docs.gitlab.com/ee/administration/packages/container_registry.html turned out to be correct.
In addition, I placed the entire CA certificate chain in /etc/gitlab/trusted-certs (in PEM format) so that when the GitLab container starts, the appropriate symlinks appear in the /opt/gitlab/embedded/ssl/certs directory.
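For reference, the relevant /etc/gitlab/gitlab.rb settings from that documentation look roughly like this (the hostname and file names are illustrative; run gitlab-ctl reconfigure afterwards):

registry_external_url 'https://gitlab.example.com:5050'
registry_nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab.example.com.crt"
registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab.example.com.key"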

Connecting docker-machine to Azure using the generic driver

I have a Docker-based deployment on Azure. I know that docker-machine has an Azure driver, which can create VMs and generate the certs, etc. But I'd rather use the Azure tools (CLI and portal).
So I created a VM and installed my public SSH key on it. Now I'd like to connect to it using docker-machine. I added the server, so that I can see it when I do docker-machine ls:
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Stopped Unknown
serv - generic Running tcp://XX.XX.XX.XX:2376 Unknown Unable to query docker version: Unable to read TLS config: open /Users/user/.docker/machine/machines/serv/server.pem: no such file or directory
When I try to set the environment variables, I see this:
$ docker-machine env serv
Error checking TLS connection: Error checking and/or regenerating the certs:
There was an error validating certificates for host "XX.XX.XX.XX:2376":
open /Users/user/.docker/machine/machines/serv/server.pem: no such file or directory
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which will stop running containers.
When I try to regenerate-certs, I get:
$ docker-machine regenerate-certs serv
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Something went wrong running an SSH command!
command : sudo hostname serv && echo "serv" | sudo tee /etc/hostname
err : exit status 1
output : sudo: no tty present and no askpass program specified
I can SSH to the server fine.
What's the issue here? How can I make it work?
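A likely culprit is the "sudo: no tty present and no askpass program specified" line: docker-machine's generic driver runs its provisioning commands through non-interactive sudo, so the SSH user on the VM needs password-less sudo. A minimal sudoers sketch, assuming the SSH user is azureuser (the name is illustrative; edit with visudo):

# /etc/sudoers.d/docker-machine
azureuser ALL=(ALL) NOPASSWD:ALL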

Obtaining Docker public key.json file

I see an /etc/docker/key.json file on a Fedora 23 machine. This file looks like a private key for authentication:
https://github.com/docker/docker/issues/7667
At what time is it generated (it's not present in the output of rpmls docker), and how do I obtain a corresponding public key?
My use case is to enable a non-root user to run the docker ps command without sudo, i.e. by the use of public/private keys.
What should I do?
You don't need to care about the key.json file, at least as far as I understand your question.
If you want to enable unprivileged users to connect to your Docker daemon using certificates for authentication, you will first need to enable a listening TCP socket (either binding to localhost, or to a public address if you want to provide access to the daemon from somewhere other than the Docker host), and then you will need to configure appropriate SSL certificates as described in the documentation.
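A sketch of the daemon side, following Docker's TLS documentation (the cert paths assume the files from that guide already exist):

dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H tcp://0.0.0.0:2376

An unprivileged user with a client cert signed by the same CA could then run something like docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H tcp://127.0.0.1:2376 ps.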
You can also provide access to Docker by managing the permissions on the Docker socket (typically /var/run/docker.sock).
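Concretely, that usually means adding the user to the group that owns /var/run/docker.sock (named docker on most distros; the username here is illustrative):

sudo usermod -aG docker someuser
# someuser must log out and back in for the new group membership to apply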
Note that giving someone access to Docker is equivalent to giving them root access, because they can always run docker run -v /etc:/hostetc ... and then edit your sudoers configuration or passwd and shadow files, etc.
