Upgrade Azure Docker extension

I am trying to use docker-machine to manage the Docker instance running in our VM. I started the VM a while back, and I believe I also installed Docker at the time via the "Azure Docker extension".
When I try to set things up with docker-machine, I noticed that I didn't have the certs on my laptop. Logging in to the VM, I found that there are no certs in /etc/docker either. I also noticed that the Docker version on the server is pretty old (1.8.1).
How can I upgrade docker to the latest version on this VM? Would I lose my VMs if I did so? I'm not sure how to deal with this "Azure Docker extension".
Would this also re-generate the certs in /etc/docker, so that I can set up docker-machine?

One way to update the certs is from the Azure portal: add the Docker extension to your VM again, and in the extension options specify the new certificates you want to use. You will probably also have to reboot the VM after re-applying the extension for it to take effect.
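If you manage the VM with the Azure CLI rather than the portal, re-applying the extension can be sketched roughly as follows. The resource names are placeholders, and the exact settings schema is defined by the extension itself, so treat this as an outline rather than a verified recipe:

```shell
# Re-apply the Docker VM extension with new TLS material.
# "my-rg" and "my-vm" are hypothetical names; the certs are passed
# base64-encoded in the protected settings.
az vm extension set \
  --resource-group my-rg \
  --vm-name my-vm \
  --name DockerExtension \
  --publisher Microsoft.Azure.Extensions \
  --protected-settings '{
    "certs": {
      "ca":   "'"$(base64 -w0 ca.pem)"'",
      "cert": "'"$(base64 -w0 server-cert.pem)"'",
      "key":  "'"$(base64 -w0 server-key.pem)"'"
    }
  }'
```

After the extension finishes (and after a reboot, if needed), the daemon should be serving TLS with the new certs, which is what docker-machine needs to connect.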

Related

Authenticate private registry (ACR) with Minikube on developer machine

At my company, we're setting up an on-prem k8s cluster in combination with a private image repository hosted on Azure (Azure Container Registry).
For development purposes, I'd like our developers to be able to run the apps they create locally using minikube. ACR offers several ways to authenticate, including:
Individual login
Access Token
Service Principal
When developing locally and using the Docker CLI, individual authentication can be set up by running az acr login -n my-repository.azurecr.io. We manage all SSO authentication through Azure Active Directory, and Docker Desktop ships with the docker-credential-wincred.exe helper, which delegates handling of authentication to the Windows Credential Store. This is specified in ~\.docker\config.json. Pretty neat and seamless, love it.
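For reference, the wiring described above boils down to a couple of entries in ~\.docker\config.json, along these lines (using the example registry name from above):

```json
{
  "auths": {
    "my-repository.azurecr.io": {}
  },
  "credHelpers": {
    "my-repository.azurecr.io": "wincred"
  }
}
```

With credHelpers (or a global credsStore) set to wincred, docker pull asks the Windows Credential Store for the token that az acr login stored there.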
However, when authenticating k8s to work with ACR, most documentation steers you towards setting up a Service Principal and storing its credentials in a k8s secret. For a production environment this makes perfect sense, but for a developer machine it feels a bit weird to authenticate using headless credentials.
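For completeness, the Service Principal route the documentation describes usually comes down to something like this (all names are hypothetical):

```shell
# Create an image-pull secret from service-principal credentials.
# $SP_APP_ID / $SP_PASSWORD are the SP's app ID and password.
kubectl create secret docker-registry acr-pull-secret \
  --docker-server=my-repository.azurecr.io \
  --docker-username="$SP_APP_ID" \
  --docker-password="$SP_PASSWORD"
```

The secret is then referenced from the pod spec via imagePullSecrets, which is exactly the headless-credentials setup that feels out of place on a developer machine.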
An alternative is to pull images manually and set imagePullPolicy to IfNotPresent in the k8s manifest. The tricky part here is that Minikube runs in a separate VM, with its own Docker daemon. By default, the Docker CLI is not configured to connect to this daemon. Minikube exposes some control over the daemon through the minikube CLI (e.g. minikube image pull), but here we run into authorization issues. Alternatively, we can configure the Docker CLI - which is already configured to use the Windows Credential Store - to connect to the minikube daemon. This is pretty straightforward using minikube docker-env.
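The docker-env flow described above, sketched end to end (the image name is hypothetical; the bash form is shown, while PowerShell users would use the invocation minikube docker-env itself prints):

```shell
# Point the local Docker CLI at minikube's daemon for this shell session.
eval $(minikube docker-env)

# docker pull now talks to the minikube daemon, but credential lookup still
# goes through the local ~/.docker/config.json and the wincred helper,
# so the existing "az acr login" session is reused.
docker pull my-repository.azurecr.io/my-app:dev

# Undo the environment changes when done.
eval $(minikube docker-env --unset)
```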
So, this works... but my question is: isn't there a more sensible way to do this? Can't minikube image somehow be configured to work with the Windows Credential Store?

Azure packages disappearing after installing it via ssh

I installed supervisord and MySQL via Azure SSH.
But after a few days, all my installed packages disappeared, and all my tables in MySQL were gone as well.
It seems that the server is updated regularly, but my files are fine.
Do I need to set something up?
You are using a PaaS service, not a virtual machine, and you can't install applications like MySQL on it. Well, Azure lets you do it, but as you experienced, when Azure does its maintenance, your app will/can be moved to another host. This is normal behavior.
One solution on App Service is to use containers and a Docker Compose file.
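A minimal sketch of such a Compose file (names are placeholders). Note that anything MySQL writes must live on persisted storage - e.g. the ${WEBAPP_STORAGE_HOME} mapping App Service provides when persistent storage is enabled - or it will be lost on the next host move, which is exactly the problem described above:

```yaml
version: '3.3'
services:
  app:
    image: myregistry.azurecr.io/my-app:latest
    ports:
      - "8000:8000"
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example  # set via App Service settings in practice
    volumes:
      - ${WEBAPP_STORAGE_HOME}/mysql:/var/lib/mysql
```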

Is it possible to deploy a devcontainer as a private ACI?

I've been attempting to deploy and connect remotely to an Azure Container Instance running in a private network in Azure (with a VPN set up).
I have no problem accessing the container using the ACI Docker context or directly through the exposed services (I have HTTP and VNC set up on the container).
However, the end goal is to use the container as a remote visual-studio-code development container - with a git repo mounted on the container.
I'm having trouble figuring out how to do this... From reading the docs, it almost seems as if setting up SSH would be the only way, but then it looks like I'd have to set up my own Docker host instead of creating the container as an ACI.
Has anyone done this before? Is it possible?

How to pull docker image in Ubuntu Server from Azure Container Registry (Site to site connection)?

I want to pull a container image from Azure Container Registry. I'm using Ubuntu Server 18.04, which doesn't have an internet connection (we can't enable internet due to some requirements), but we have site-to-site connectivity from the Ubuntu Server to our Azure resources.
If I use
docker pull myregistry.azurecr.io/container_name:latest
It makes a call to https://myregistry.azurecr.io, which goes out over the internet, and we don't want that.
Is there any other way of doing this privately?
Let me know if you need more details.
You can use a private endpoint to access the Azure Container Registry from your VNet, and you can add network rules to allow or deny access over the site-to-site connection.
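Roughly, the setup can be sketched with the Azure CLI like this. All resource names are placeholders; private endpoints require the registry's Premium tier, and you also need a private DNS zone (or equivalent DNS forwarding) so that myregistry.azurecr.io resolves to the private IP from the on-prem side of the site-to-site link:

```shell
# Create a private endpoint for the registry inside the VNet.
az network private-endpoint create \
  --resource-group my-rg \
  --name myregistry-pe \
  --vnet-name my-vnet \
  --subnet my-subnet \
  --private-connection-resource-id "$(az acr show -n myregistry --query id -o tsv)" \
  --group-id registry \
  --connection-name myregistry-pe-conn

# Optionally refuse all public traffic so only the private endpoint works.
az acr update --name myregistry --default-action Deny
```

Once DNS resolves the registry name privately, the same docker pull myregistry.azurecr.io/container_name:latest command works without traversing the public internet.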

Keycloak docker

How can I deploy the Keycloak Docker image in an Azure Container Instance?
The Keycloak Docker image provided by jboss/keycloak keeps restarting in the Azure Container Instance after deployment. Need help.
You don't need to deploy Keycloak to the Azure Container Registry; you can use the jboss/keycloak Docker image directly. In my experience, restarts can happen because of a lack of resources or a wrong configuration. Try allocating more CPU and memory to the container.
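As a sketch, deploying with more resources via the Azure CLI might look like this (names and credentials are placeholders; KEYCLOAK_USER/KEYCLOAK_PASSWORD are the bootstrap admin variables the jboss/keycloak image reads):

```shell
# Give the container more headroom than ACI's 1 CPU / 1.5 GB default,
# which is often what stops a restart loop for a JVM-based app like Keycloak.
az container create \
  --resource-group my-rg \
  --name keycloak \
  --image jboss/keycloak \
  --cpu 2 \
  --memory 4 \
  --ports 8080 \
  --environment-variables KEYCLOAK_USER=admin \
  --secure-environment-variables KEYCLOAK_PASSWORD=change-me
```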
