Authenticate private registry (ACR) with Minikube on developer machine - azure

At my company, we're setting up an on-prem k8s cluster in combination with a private image repository hosted on Azure (Azure Container Registry).
For development purposes, I'd like our developers to be able to run the apps they create locally using minikube. ACR offers several ways to authenticate, including:
Individual login
Access Token
Service Principal
When developing locally with the Docker CLI, individual authentication can be set up by running az acr login -n my-repository.azurecr.io. We manage all SSO authentication through Azure Active Directory, and Docker Desktop ships with the docker-credential-wincred.exe helper, which delegates credential handling to the Windows Credential Store. This is specified in ~\.docker\config.json. Pretty neat and seamless, love it.
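For reference, the individual login flow boils down to something like this (registry and image names are placeholders):

    az login                          # interactive SSO via Azure Active Directory
    az acr login -n my-repository     # obtains an ACR token via the configured credential helper
    docker pull my-repository.azurecr.io/my-app:latest

    # ~\.docker\config.json delegates credential storage, e.g.:
    # { "credsStore": "wincred" }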
However, when authenticating k8s to work with an ACR, most documentation steers you towards setting up a Service Principal and storing its credentials in a k8s secret. For a production environment this makes perfect sense, but for a developer machine it feels a bit weird to authenticate using headless credentials.
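For comparison, the service-principal route usually boils down to something like this (all names are placeholders), with the secret then referenced via imagePullSecrets in the pod spec:

    kubectl create secret docker-registry acr-secret \
      --docker-server=my-repository.azurecr.io \
      --docker-username=<service-principal-app-id> \
      --docker-password=<service-principal-password>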
An alternative is to pull images manually and set imagePullPolicy to IfNotPresent in the k8s manifest. The tricky part is that Minikube runs in a separate VM with its own Docker daemon, and by default the Docker CLI is not configured to connect to that daemon. Minikube exposes some control over the daemon through the minikube CLI (e.g. minikube image pull), but there we run into authorization issues. Alternatively, we can configure the Docker CLI - which is already set up to use the Windows credential store - to connect to the minikube daemon. This is pretty straightforward using minikube docker-env, as sketched below.
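A minimal sketch of that workaround in PowerShell (registry name and tag are placeholders):

    # point the local Docker CLI at minikube's Docker daemon
    & minikube docker-env --shell powershell | Invoke-Expression

    # authenticate with the usual SSO flow, then pull straight into the minikube daemon
    az acr login -n my-repository
    docker pull my-repository.azurecr.io/my-app:dev

    # the manifest's imagePullPolicy: IfNotPresent then makes k8s reuse the local image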
So, this works... but my question is: isn't there a more sensible way to do this? Can't minikube image somehow be configured to work with the Windows credential store?

Related

Run Docker in production with environment variables that are secret and cannot be seen on the server

I need to send environment variables to my application running in a container, but I understand it is bad practice for the ".env" file to be on the server, since the "root" user could read it. What would be the best option to use these variables in my application, leaving no trace on the server and without using Kubernetes?
There are several solutions, depending on your actual production stack:
(1) Running on a k8s cluster
Kubernetes supports uploading binary data as a Secret. You can mount the Secret into your production pod, decoupling your Docker image from the secret.
https://kubernetes.io/docs/concepts/configuration/secret/
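A rough sketch of that approach (the secret name is illustrative):

    # upload the .env file as a Secret instead of baking it into the image
    kubectl create secret generic app-env --from-file=.env

    # in the pod spec, reference it with a volume of type "secret" and mount
    # it read-only; the file then only exists inside the running pod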
(2) Docker on a standalone server
This is an analogous solution to (1), but without native support from k8s: mount the secret file into the container from a volume instead of baking it into the image.
https://docs.docker.com/storage/volumes/
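Sketched with a bind mount (paths and image name are illustrative); note the file still lives on the host here, so lock it down with file permissions:

    # keep the secret file in a tightly-permissioned host directory
    # and mount it read-only into the container
    docker run -d -v /opt/secrets:/run/secrets:ro my-image:latest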
(3) External Key management service
If you are hosting your application in the cloud, there are many more options to consider. Taking Azure as an example, if you are hosting your application on a virtual machine, you could use a service like Azure Key Vault:
https://learn.microsoft.com/en-us/azure/key-vault/general/basic-concepts
The concept is that all your keys are stored in the service and obtained by connecting your server to it. Your application can fetch secrets from Key Vault on the fly, which prevents leaving a secret footprint on your service instance. The connection between the key management service and your virtual machine can be configured in a passwordless way (IAM in AWS / managed identity in Azure), so no secret has to live on your server.
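With the Azure CLI on a VM that has a managed identity, the fetch can look roughly like this (vault and secret names are placeholders):

    # authenticate as the VM's managed identity - no stored credentials involved
    az login --identity

    # fetch the secret value on the fly and hand it to the application
    az keyvault secret show --vault-name my-vault --name db-password --query value -o tsv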

Keycloak docker

How can I deploy the Keycloak Docker image in an Azure container instance?
The Keycloak Docker image provided by jboss/keycloak keeps restarting in the Azure container instance after deployment. Need help.
You don't need to deploy Keycloak to the Azure container registry; you can use the jboss/keycloak Docker image directly. In my experience, restarts can happen in case of a lack of resources or wrong configuration. Try assigning more CPU and memory to the container instance.
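A minimal sketch with the Azure CLI (resource names and sizes are placeholders; the admin variables are the ones documented for the jboss/keycloak image):

    az container create \
      --resource-group my-rg \
      --name keycloak \
      --image jboss/keycloak \
      --cpu 2 --memory 4 \
      --ports 8080 \
      --environment-variables KEYCLOAK_USER=admin \
      --secure-environment-variables KEYCLOAK_PASSWORD=<admin-password>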

Finding the container id of a customer docker image running under Azure app service

We just deployed our custom Docker image on Azure Web App services. I think the container is running fine, but we need to know the container ID of the specific web app service.
We checked the logs on the Azure portal for the App Service instance; had no luck.
We tried to SSH into the container to grab the container ID; got a "connection refused" error.
Does anyone know how to easily grab the container ID of a web app that runs a Docker image on Azure?
Thanks
I'm not sure what you mean by container ID, but what you can get from the Web App are the instance ID and the container name. You can find both in the logs under the container settings.
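For example, the Docker log stream (which includes the container name and instance ID) can also be pulled with the CLI (names are placeholders):

    # stream the Docker logs for the web app
    az webapp log tail --name my-webapp --resource-group my-rg

    # or download the log files for offline inspection
    az webapp log download --name my-webapp --resource-group my-rg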
As for the failed connection: the reason is that you didn't enable SSH in your custom image. You can follow the steps here to enable SSH in the custom image.
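In outline, the documented approach is to run an SSH server inside the image on port 2222 with the fixed password App Service expects; a sketch, assuming a Debian-based image:

    # Dockerfile additions for App Service's SSH support
    RUN apt-get update && apt-get install -y openssh-server \
     && echo "root:Docker!" | chpasswd       # fixed password required by App Service
    COPY sshd_config /etc/ssh/               # sshd must listen on port 2222
    EXPOSE 80 2222
    # start sshd alongside the app in your entrypoint, e.g. "service ssh start"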

Configuration for application in docker container on cluster

How can I deploy an application in a Docker container on a cluster of machines, and configure that application with settings like database username and password and other application-specific settings, without putting the settings in the container as a config file and without placing the settings on the machine, since the machine is recyclable? Environment variables are also not an option, because they are visible in logs and not really suited for passwords and private keys imo.
The application is a Node.js application; when developing, I run it with a JSON config file. The production environment will consist of multiple machines in an AWS ECS environment. The machines all run Docker in a cluster, the application itself is a Docker image, and multiple instances of the application run behind a load balancer dividing the load between the instances.
What you are looking for is Docker swarm, which is responsible for running and managing containers on a cluster of machines. Docker swarm has a very nice feature for securing configuration such as passwords, called Docker secrets.
You can create docker secrets for usernames and passwords, and those secrets will be shared among the containers in the cluster in an encrypted and secure way.
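A minimal sketch in swarm mode (secret, service, and image names are placeholders):

    # create a secret from stdin
    printf 's3cr3t' | docker secret create db_password -

    # grant a service access to it; the value shows up inside the container
    # as the file /run/secrets/db_password rather than an environment variable
    docker service create --name app --secret db_password my-image:latest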

Upgrade Azure docker extension

I am trying to use docker-machine to manage the Docker instance running in our VM. I had started the VM a while back, and I believe I had also installed Docker via the "Azure Docker extension".
When I tried to set things up with docker-machine, I noticed that I didn't have the certs on my laptop. Logging in to the VM, I found that there are no certs in /etc/docker. I also noticed that the Docker version on the server is pretty old (1.8.1).
How can I upgrade Docker to the latest version on this VM? Would I lose my containers if I did so? I'm not sure how to deal with this "Azure Docker extension".
Would this also re-generate the certs in /etc/docker, so that I can set up docker-machine?
One way to update the certs from the Azure portal: you can add the Docker extension to your VM again, and in the extension options specify the new certificates you want to use. You will probably also have to reboot the VM after re-applying the extension for it to work properly.
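With the current Azure CLI, re-applying the extension looks roughly like this (resource names are placeholders; the certs go base64-encoded into the protected settings, per the extension's documented schema):

    az vm extension set \
      --resource-group my-rg --vm-name my-vm \
      --publisher Microsoft.Azure.Extensions --name DockerExtension \
      --settings '{"docker": {"port": "2376"}}' \
      --protected-settings '{"certs": {"ca": "<base64-ca>", "cert": "<base64-cert>", "key": "<base64-key>"}}'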