Windows container from private Azure registry does not start in AKS

I have a private Azure Container Registry that contains two images: a Windows-based one (mcr.microsoft.com/dotnet/core/samples:aspnetapp) and a Linux-based one (a custom test image). I created a secret etc., which seems OK. When I try to deploy these with Kubernetes, the following happens:
The Linux-based container from the private registry starts normally.
The Windows-based container from Docker Hub starts normally.
The SAME Windows-based container from the private registry throws an error: Back-off pulling image "spintheblackcircleshop.azurecr.io/aspnetapp"
Anyone?
test.yaml:
apiVersion: v1
items:
# basplus deployment
- apiVersion: apps/v1beta1
  kind: Deployment
  metadata:
    name: aspnetapp-private
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: private
      spec:
        terminationGracePeriodSeconds: 100
        containers:
        - name: xxx
          image: spintheblackcircleshop.azurecr.io/aspnetapp
        imagePullSecrets:
        - name: mysecret
- apiVersion: apps/v1beta1
  kind: Deployment
  metadata:
    name: aspnetapp-public
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: public
      spec:
        terminationGracePeriodSeconds: 100
        containers:
        - name: xxx
          image: mcr.microsoft.com/dotnet/core/samples:aspnetapp
        imagePullSecrets:
        - name: mysecret
- apiVersion: apps/v1beta1
  kind: Deployment
  metadata:
    name: aspnetapp-private-sleep
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: private-sleep
      spec:
        terminationGracePeriodSeconds: 100
        containers:
        - name: xxx
          image: spintheblackcircleshop.azurecr.io/danielm-test-sleep
        imagePullSecrets:
        - name: mysecret
# end
kind: List
metadata: {}

AKS doesn't support Windows nodes yet. There is no way to run Windows containers in AKS at the time of writing (05/05/2019).
Edit: fair point raised by the other answer. You actually can run Windows containers in ACI from AKS, but that's not exactly in AKS :)
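To confirm that this (and not the pull secret) is the cause, describing the failing pod surfaces the underlying pull error; a minimal check, where the pod name is a placeholder taken from the first command:

kubectl get pods
kubectl describe pod <aspnetapp-private-pod-name>
# the Events section at the bottom shows why the image pull keeps backing off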

Well, AKS does not support Windows nodes currently, but you can still run Windows containers in it if you install the Virtual Kubelet in the AKS cluster. It takes advantage of ACI.
See the steps to install the Virtual Kubelet and run Windows containers in the document Use Virtual Kubelet with Azure Kubernetes Service (AKS).
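Once the Virtual Kubelet is installed, pods are steered to it with a nodeSelector and a toleration. A minimal sketch of a Windows deployment doing that; the exact labels and toleration keys come from the linked document and may differ between provider versions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetapp-aci
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnetapp-aci
  template:
    metadata:
      labels:
        app: aspnetapp-aci
    spec:
      containers:
      - name: aspnetapp
        image: spintheblackcircleshop.azurecr.io/aspnetapp
      imagePullSecrets:
      - name: mysecret
      # schedule onto the virtual node backed by ACI instead of a VM node
      nodeSelector:
        kubernetes.io/role: agent
        beta.kubernetes.io/os: windows
        type: virtual-kubelet
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists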

Related

Mount a shared Azure disk in Azure Kubernetes to multiple Windows pods

I want to attach a shared disk to multiple Windows containers on AKS.
From this post I learned that it can be done for Linux containers.
I am trying to do the same with a Windows container, but it fails to mount the shared disk with the error below:
MapVolume.MapPodDevice failed for volume "pvc-6e07bdca-2126-4a5b-806a-026016c3798d" : rpc error: code = Internal desc = Could not mount "2" at "\var\lib\kubelet\plugins\kubernetes.io\csi\volumeDevices\publish\pvc-6e07bdca-2126-4a5b-806a-026016c3798d\4e44da87-ea33-4d85-a7db-076db0883bcf": rpc error: code = Unknown desc = not an absolute Windows path: 2
I used the below to dynamically provision the shared Azure disk:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi-custom
provisioner: disk.csi.azure.com
parameters:
  skuname: Premium_LRS
  maxShares: "2"
  cachingMode: None
reclaimPolicy: Delete
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-azuredisk-dynamic
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
  volumeMode: Block
  storageClassName: managed-csi-custom
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-shared-disk
  name: deployment-azuredisk
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-shared-disk
  template:
    metadata:
      labels:
        app: test-shared-disk
      name: deployment-azuredisk
    spec:
      nodeSelector:
        role: windowsgeneral
      containers:
      - name: deployment-azuredisk
        image: mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019
        volumeDevices:
        - name: azuredisk
          devicePath: "D:\test"
      volumes:
      - name: azuredisk
        persistentVolumeClaim:
          claimName: pvc-azuredisk-dynamic
Is it possible to mount a shared disk for Windows containers on AKS? Thanks for the help.
Azure shared disks is an Azure managed disks feature that enables attaching an Azure disk to multiple agent nodes simultaneously, but it does not apply to Windows node pools out of the box.
To overcome this and mount a disk through the Azure Disk CSI driver on a Windows node, you need to provision the Windows node pool first.
Please refer to this MS tutorial to add a Windows node pool.
After you have a Windows node pool, you can use the same built-in managed-csi storage class to mount the disk.
For more information and for validating the volume mapping, you can refer to this MS document.
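A minimal sketch of adding such a Windows node pool with the Azure CLI; the resource group, cluster, and pool names are placeholders (Windows pool names are limited to 6 characters):

az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --os-type Windows \
    --name npwin \
    --node-count 1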

How to deploy .NET Core web and worker projects to Kubernetes in a single deployment?

I am relatively new to Docker and Kubernetes. My requirement is to deploy one web project and one worker (.NET background service) project in a single deployment.
This is what my deployment.yml file looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: xxxxx.azurecr.io/worker:#{Build.BuildId}#
        #image: xxxxx.azurecr.io/web
        imagePullPolicy: Always
        #ports:
        #- containerPort: 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: xxxxx.azurecr.io/web:#{Build.BuildId}#
        #image: xxxxx.azurecr.io/web
        imagePullPolicy: Always
        ports:
        - containerPort: 80
This is what my service.yml file looks like:
apiVersion: v1
kind: Service
metadata:
  name: worker
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: worker
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: web
What I have found is that if I keep both in the deployment.yml file, only one is deployed to Kubernetes, while if I comment one out and apply them one by one, both deploy fine.
Is there any rule that we can't have both in a single file? Any reason why it doesn't work together but works individually?
One more question: is there any way to look inside the worker service pod, something like remoting into it, to see what exactly is going on there? Even if it's a console application, is there any way to read what it prints to the console after deployment?
This issue was resolved in the comments section and I decided to provide a Community Wiki answer just for better visibility to other community members.
It is possible to group multiple Kubernetes resources in the same file, but it is important to separate them using three dashes ("---").
It's also worth mentioning that resources will be created in the order they appear in the file.
For more information, see the Organizing resource configurations documentation.
I've created an example to demonstrate how we can create a simple app-1 application (Deployment + Service) using a single manifest file:
$ cat app-1.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - image: nginx
        name: nginx
NOTE: Resources are created in the order they appear in the file:
$ kubectl apply -f app-1.yml
service/app-1 created
deployment.apps/app-1 created
$ kubectl get deploy,svc
NAME                    READY   UP-TO-DATE
deployment.apps/app-1   1/1     1

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)
service/app-1   ClusterIP   10.8.14.179   <none>        80/TCP
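Regarding the second question (inspecting the worker pod): standard kubectl commands cover this; a quick sketch, assuming the worker writes to stdout and its image ships a shell:

# stream everything the app writes to stdout/stderr
kubectl logs deployment/worker -f
# open an interactive shell inside the pod
kubectl exec -it deployment/worker -- /bin/sh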

k8s deployment with image from private registry [duplicate]

This question already has answers here:
Pull image Azure Container Registry - Kubernetes
(2 answers)
Kubernetes pull from multiple private docker registries
(1 answer)
Can anyone please guide how to pull private images from Kubernetes?
(2 answers)
Closed 2 years ago.
I've a k8s deployment YAML which needs to pull an image from a private registry.
Where should I put the host, user, and password?
deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tra
  namespace: ba
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tra
  template:
    metadata:
      labels:
        app: tra
    spec:
      containers:
      - name: tra
        image: de/sec:0.0.10
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
I found this, but it doesn't really help:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
The doc contains the details about pulling images from a private registry.
In summary:
Create a secret using the following command:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
Then specify the secret name in your deployment file by adding the following lines:
imagePullSecrets:
- name: regcred
So, create the secret and modify your deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tra
  namespace: ba
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tra
  template:
    metadata:
      labels:
        app: tra
    spec:
      containers:
      - name: tra
        image: de/sec:0.0.10
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
      imagePullSecrets:
      - name: regcred
If you want to create the secret from a file, then put the following into secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: regcred
  namespace: <namespace>
data:
  .dockerconfigjson: <add here the output of: cat ~/.docker/config.json | base64 -w 0>
type: kubernetes.io/dockerconfigjson

Then run kubectl apply -f secret.yaml
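To sanity-check the secret afterwards, you can decode it back (regcred is the name used above):

kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode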

Azure kubernetes service loadbalancer external IP not accessible

I am new to the world of Kubernetes and was testing a sample Django "Hello world" app deployment. Using docker-compose I was able to access the hello world page in a browser, but I need to use Kubernetes. So I tested two options and neither of them worked.
1) I created an Azure CI/CD pipeline to build and push the image to ACR using the following Dockerfile:
FROM python:3.8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /hello_world
WORKDIR /hello_world
COPY . /hello_world/
RUN pip install -r requirements.txt
CMD [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
The pipeline completes successfully and uploads the image to the repository.
Now I use kubectl to deploy using the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-helloworld
  template:
    metadata:
      labels:
        app: django-helloworld
    spec:
      containers:
      - name: django-helloworld
        image: acrshgpdev1.azurecr.io/django-helloworld:194
        #imagePullPolicy: Always
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: django-helloworld
The deployment and service are created, but when I try to access the external IP of the LB service through a browser the page is inaccessible. I used the external ip:port and it didn't work.
Any thoughts why this would be happening?
2) I used the same Dockerfile but a different deployment file (changed the image to the locally created image and removed the LB service) to deploy the app to my local Kubernetes. The deployment file was as follows:
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  selector:
    app: django-helloworld
  ports:
  - protocol: TCP
    port: 80
    targetPort: 30800
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-helloworld
  template:
    metadata:
      labels:
        app: django-helloworld
    spec:
      containers:
      - name: django-helloworld
        image: django-helloworld:1.0
        #imagePullPolicy: Always
        ports:
        - containerPort: 8000
It creates the deployment and service, but doesn't assign an external IP to the NodePort service, so I am not able to figure out which service type I should choose to test that the app works. I know I can't choose a LB locally and would need to deploy via a cloud service.
Just configure your service to be of type LoadBalancer and do a proper port mapping:
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8000
  selector:
    app: django-helloworld
https://kubernetes.io/docs/concepts/services-networking/service/
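For the local cluster in option 2, there is no cloud provider to hand out an external IP; a quick way to reach the app anyway is port-forwarding. A sketch, using the corrected service definition above:

kubectl port-forward svc/django-helloworld-service 8080:80
# then browse to http://localhost:8080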
Make sure the deployment has associated healthy pods too (they show as Running and with 1/1 next to their name). If there aren't any, make sure your cluster can successfully pull from the acrshgpdev1.azurecr.io registry; you can integrate an AKS cluster directly with an ACR registry following this article:
az aks update -n myAKSCluster -g myResourceGroup --attach-acr acrshgpdev1.azurecr.io
or by adding the SP of the AKS cluster manually to the Reader role on the ACR.
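A quick sketch of those checks; the pod name is a placeholder from the first command:

kubectl get pods -l app=django-helloworld       # pods should be Running and 1/1
kubectl describe pod <pod-name>                 # Events reveal ImagePullBackOff or crash loops
kubectl get svc django-helloworld-service       # EXTERNAL-IP must not stay <pending>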

Pull image Azure Container Registry - Kubernetes

Does anyone have any advice on how to pull from Azure Container Registry whilst running within Azure Container Service (Kubernetes)?
I've tried a sample deployment like the following, but the image pull is failing:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      name: jenkins-master
      labels:
        name: jenkins-master
    spec:
      containers:
      - name: jenkins-master
        image: myregistry.azurecr.io/infrastructure/jenkins-master:1.0.0
        imagePullPolicy: Always
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 20
          timeoutSeconds: 5
        ports:
        - name: jenkins-web
          containerPort: 8080
        - name: jenkins-agent
          containerPort: 50000
I got this working after reading this info:
http://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod
So firstly create the registry access key:
kubectl create secret docker-registry myregistrykey --docker-server=https://myregistry.azurecr.io --docker-username=ACR_USERNAME --docker-password=ACR_PASSWORD --docker-email=ANY_EMAIL_ADDRESS
Replace the server address with the address of your ACR and the USERNAME, PASSWORD and EMAIL address with the values from the admin user of your ACR. Note: the email address can be any value.
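If you don't have those values at hand, they can be fetched with the Azure CLI, assuming the admin user is enabled on the registry:

# enable the admin user if it isn't already
az acr update --name myregistry --admin-enabled true
# print the admin username and its two passwords
az acr credential show --name myregistry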
Then in the deployment you simply tell Kubernetes to use that key when pulling the image, like so:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      name: jenkins-master
      labels:
        name: jenkins-master
    spec:
      containers:
      - name: jenkins-master
        image: myregistry.azurecr.io/infrastructure/jenkins-master:1.0.0
        imagePullPolicy: Always
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 20
          timeoutSeconds: 5
        ports:
        - name: jenkins-web
          containerPort: 8080
        - name: jenkins-agent
          containerPort: 50000
      imagePullSecrets:
      - name: myregistrykey
This is something we've actually made easier. When you provision a Kubernetes cluster through the Azure CLI, a service principal is created with contributor privileges. This enables image pulls from any Azure Container Registry in the subscription.
There was a PR: https://github.com/kubernetes/kubernetes/pull/40142 that was merged into new deployments of Kubernetes. It won't work on existing Kubernetes instances.
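On clusters where that integration isn't in place, a similar effect can be achieved by granting the cluster's service principal pull rights on the registry; a sketch with placeholder IDs:

az role assignment create \
    --assignee <aks-service-principal-id> \
    --scope <acr-resource-id> \
    --role AcrPull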
