Pull image from Azure Container Registry - Kubernetes - Azure

Does anyone have any advice on how to pull from Azure Container Registry while running within Azure Container Service (Kubernetes)?
I've tried a sample deployment like the following, but the image pull is failing:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      name: jenkins-master
      labels:
        name: jenkins-master
    spec:
      containers:
      - name: jenkins-master
        image: myregistry.azurecr.io/infrastructure/jenkins-master:1.0.0
        imagePullPolicy: Always
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 20
          timeoutSeconds: 5
        ports:
        - name: jenkins-web
          containerPort: 8080
        - name: jenkins-agent
          containerPort: 50000

I got this working after reading this info.
http://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod
So first, create the registry access key:
kubectl create secret docker-registry myregistrykey --docker-server=https://myregistry.azurecr.io --docker-username=ACR_USERNAME --docker-password=ACR_PASSWORD --docker-email=ANY_EMAIL_ADDRESS
Replace the server address with the address of your ACR, and the USERNAME, PASSWORD, and EMAIL address with the values from the admin user for your ACR. Note: the email address can be any value.
Then in the deployment you simply tell Kubernetes to use that key for pulling the image, like so:
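If you don't have the admin credentials to hand, they can be retrieved with the Azure CLI. A quick sketch, assuming your registry is called myregistry (placeholder name):
# Enable the admin user on the registry if it isn't already
az acr update --name myregistry --admin-enabled true
# Print the admin username and passwords to plug into the kubectl command above
az acr credential show --name myregistry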
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      name: jenkins-master
      labels:
        name: jenkins-master
    spec:
      containers:
      - name: jenkins-master
        image: myregistry.azurecr.io/infrastructure/jenkins-master:1.0.0
        imagePullPolicy: Always
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 20
          timeoutSeconds: 5
        ports:
        - name: jenkins-web
          containerPort: 8080
        - name: jenkins-agent
          containerPort: 50000
      imagePullSecrets:
      - name: myregistrykey

This is something we've actually made easier. When you provision a Kubernetes cluster through the Azure CLI, a service principal is created with contributor privileges. This enables pulling from any Azure Container Registry in the subscription.
There was a PR, https://github.com/kubernetes/kubernetes/pull/40142, that was merged into new deployments of Kubernetes. It won't work on existing Kubernetes instances.
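If the cluster's service principal does not already have rights on the registry (for example, the cluster predates this change or the registry lives elsewhere), the pull permission can also be granted manually. A rough Azure CLI sketch, where the resource group, cluster, and registry names are placeholders:
# Look up the service principal used by the cluster and the registry's resource ID
CLIENT_ID=$(az aks show -g myResourceGroup -n myAKSCluster --query servicePrincipalProfile.clientId -o tsv)
ACR_ID=$(az acr show -g myResourceGroup -n myregistry --query id -o tsv)
# Grant the service principal permission to pull images from the registry
az role assignment create --assignee $CLIENT_ID --scope $ACR_ID --role acrpull
With this in place, no imagePullSecrets are needed in the deployment.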

Related

Memory Sev3 alerts in Azure Kubernetes cluster with default resource allocations for 2 pods - MySQL db and Spring app

A few days ago I deployed 2 services into an Azure Kubernetes cluster. I set up the cluster with 1 node, with these virtual machine parameters: B2s: 2 cores, 4 GB RAM, 8 GB temporary storage. Then I placed 2 pods on the same node:
MySQL database with a 4 GiB persistent volume, 5 tables at the moment
Spring Boot Java application
There are no replicas.
Take a look at the kubectl output for the deployed pods:
The purpose is to create an internal application for the company where I work, which will be used by the company team. There won't be a lot of data in the DB.
When we started testing the connection with the API from the front-end, I received a memory alert like the one below:
The MySQL deployment YAML file looks like:
apiVersion: v1
kind: Service
metadata:
  name: mysql-db-testing-service
  namespace: testing
spec:
  type: LoadBalancer
  ports:
  - port: 3307
    targetPort: 3306
  selector:
    app: mysql-db-testing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-db-testing
  namespace: testing
spec:
  selector:
    matchLabels:
      app: mysql-db-testing
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql-db-testing
    spec:
      containers:
      - name: mysql-db-container-testing
        image: mysql:8.0.31
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysqldb-secret-testing
              key: password
        ports:
        - containerPort: 3306
          name: mysql-port
        volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: mysql-persistent-storage
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: azure-managed-disk-pvc-mysql-testing
      nodeSelector:
        env: preprod
Spring app deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-app-api-testing
  namespace: testing
  labels:
    app: spring-app-api-testing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-app-api-testing
  template:
    metadata:
      labels:
        app: spring-app-api-testing
    spec:
      containers:
      - name: spring-app-api-testing
        image: techradaracr.azurecr.io/technology-radar-be:$(Build.BuildId)
        env:
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysqldb-secret-testing
              key: password
        - name: MYSQL_PORT
          valueFrom:
            configMapKeyRef:
              name: spring-app-testing-config-map
              key: mysql_port
        - name: MYSQL_HOST
          valueFrom:
            configMapKeyRef:
              name: spring-app-testing-config-map
              key: mysql_host
      nodeSelector:
        env: preprod
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: spring-app-api-testing
    k8s-app: spring-app-api-testing
  name: spring-app-api-testing-service
  namespace: testing
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  type: LoadBalancer
  selector:
    app: spring-app-api-testing
First I deployed the MySQL database, then the Java Spring API.
I guess the problem is with the default resource allocation: the MySQL DB is using 90% of the overall RAM, and that's why I'm receiving the memory alert.
I know that there are sections for resource allocation in the YAML config:
resources:
  requests:
    cpu: 250m
    memory: 64Mi
  limits:
    cpu: 500m
    memory: 256Mi
for the minimum and maximum CPU and memory resources. The question is how much of each should I allocate to the Spring app and how much to the MySQL database in order to avoid memory problems?
I would be grateful for help.
First of all, running the whole cluster on only one VM defeats the purpose of using Kubernetes, especially since you are using a small SKU for the VMSS. Have you considered running the application outside of k8s?
To answer your question, there is no fixed formula or set of values for requests/limits. The values you choose for resource requests and limits will depend on the specific requirements of your application and the resources available in your cluster.
In detail, you should consider the workload characteristics, cluster capacity, performance (if the values are too small, the application will struggle), and cost.
Please refer to the best practices here: https://learn.microsoft.com/en-us/azure/aks/developer-best-practices-resource-management
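As a starting point only (these numbers are illustrative assumptions, not tested values for your workload), you could give MySQL the larger share and cap the Spring app lower, then adjust based on what kubectl top pods reports:
# In the MySQL container spec (illustrative values only)
resources:
  requests:
    cpu: 250m
    memory: 1Gi
  limits:
    cpu: "1"
    memory: 2Gi
# In the Spring app container spec (illustrative values only)
resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    cpu: 500m
    memory: 1Gi
Keep in mind that the node itself and the system pods also consume part of the 4 GB on a B2s, so the limits of all your pods should sum to noticeably less than that.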

Kubernetes CrashLoopBackOff with Minikube

I am learning about Kubernetes with a guide, and I am trying to deploy a MongoDB Pod with 1 replica. This is the deployment config file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
I also tried to deploy a Mongo-Express Pod with almost the same config file, but I keep getting CrashLoopBackOff for both Pods. From the little understanding I have, this is caused by the container failing and restarting in a cycle. I tried going through the events with kubectl get events and I see that a warning with the message Back-off restarting failed container keeps occurring. I also did a little digging around and came across a solution that says to add
command: ['sleep']
args: ['infinity']
That fixed the CrashLoopBackOff issue, but when I try to get the logs for the Pod, nothing is displayed in the terminal. I need some help and, if possible, an explanation of how the command and args seem to fix it. Also, how do I stop this crash from happening to my Pods? Thank you very much.
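As for the missing logs: when a container keeps restarting, the output of the previous (crashed) instance can usually still be retrieved. A quick check, assuming the pod is named mongodb-deployment-xxxx (placeholder):
# Show the log of the last crashed container instance
kubectl logs mongodb-deployment-xxxx --previous
# Show events and the reason for the last termination
kubectl describe pod mongodb-deployment-xxxx
Note that the sleep/infinity workaround only "fixes" the crash because the container no longer runs mongod at all, which is also why nothing appears in the logs.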
My advice is to deploy MongoDB as StatefulSet on Kubernetes.
In a stateful application, the N replicas of master nodes manage several worker nodes under a cluster. So, if any master node goes down, the other ordinal instances will be active to execute the workflow. Each master node instance is identified by a unique ordinal number, which is what a StatefulSet provides.
See more: mongodb-sts, mongodb-on-kubernetes.
Also use a Headless Service to manage the domain of a Pod. With a Headless Service there is no need for a LoadBalancer or a kube-proxy to interact with Pods through a Service IP, so the cluster IP is set to None.
In your case:
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
  - port: 27017
The error:
Also uncaught exception: Error: couldn't add user: Error preflighting normalization: U_STRINGPREP_PROHIBITED_ERROR _getErrorWithCode#src/mongo/shell/utils.js:25:13
indicates that the secret may be missing. Take a look: mongodb-initializating.
In your case the secret should look similar to this:
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: YWRtaW4=
  mongo-root-password: MWYyZDFlMmU2N2Rm
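The values under data are base64-encoded; for example, the username admin and the password 1f2d1e2e67df (the values the strings above decode to) can be produced like this:
# Base64-encode the secret values (note -n to avoid a trailing newline)
echo -n 'admin' | base64          # YWRtaW4=
echo -n '1f2d1e2e67df' | base64   # MWYyZDFlMmU2N2Rm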
Remember to also configure a volume for your pods - follow the tutorials I have linked above.
Deploy MongoDB with a StatefulSet, not as a Deployment.
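A minimal sketch of a PersistentVolumeClaim you could mount at /data/db (the name and size are placeholders; on AKS the default StorageClass backs it with an Azure disk):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi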
Example:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongod
spec:
  serviceName: mongodb-service
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
        replicaset: MainRepSet
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: replicaset
                  operator: In
                  values:
                  - MainRepSet
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      volumes:
      - name: secrets-volume
        secret:
          secretName: shared-bootstrap-data
          defaultMode: 256
      containers:
      - name: mongod-container
        #image: pkdone/mongo-ent:3.4
        image: mongo
        command:
        - "numactl"
        - "--interleave=all"
        - "mongod"
        - "--wiredTigerCacheSizeGB"
        - "0.1"
        - "--bind_ip"
        - "0.0.0.0"
        - "--replSet"
        - "MainRepSet"
        - "--auth"
        - "--clusterAuthMode"
        - "keyFile"
        - "--keyFile"
        - "/etc/secrets-volume/internal-auth-mongodb-keyfile"
        - "--setParameter"
        - "authenticationMechanisms=SCRAM-SHA-1"
        resources:
          requests:
            cpu: 0.2
            memory: 200Mi
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: secrets-volume
          readOnly: true
          mountPath: /etc/secrets-volume
        - name: mongodb-persistent-storage-claim
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongodb-persistent-storage-claim
      annotations:
        volume.beta.kubernetes.io/storage-class: "standard"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

Application running on Azure AKS Kubernetes cannot be accessed from App Service

I have configured two different applications (SEQ and MockServer) on the Azure AKS service. They both work correctly from the internet but cannot be accessed from an Azure Web Service. They also cannot be accessed from the Azure CLI.
Below is my configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mockserver-deployment
  labels:
    app: mockserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mockserver
  template:
    metadata:
      labels:
        app: mockserver
    spec:
      containers:
      - name: mockserver
        image: jamesdbloom/mockserver
        env:
        - name: LOG_LEVEL
          value: "INFO"
        ports:
        - containerPort: 1080
      imagePullSecrets:
      - name: my-secret
---
kind: Service
apiVersion: v1
metadata:
  name: mockserver-service
spec:
  selector:
    app: mockserver
  loadBalancerIP: 51.136.53.26
  type: LoadBalancer
  loadBalancerSourceRanges:
  # from Poland
  - 62.87.152.154/32
  - 83.30.150.205/32
  - 80.193.73.114/32
  - 195.191.163.0/24
  # from AppCenter test
  - 195.249.159.0/24
  - 195.0.0.0/8
  # from Marcin K home
  - 95.160.157.0/24
  - 93.105.0.0/16
  ports:
  - port: 1080
    targetPort: 1080
    name: mockserver
The best approach is to use VNet integration for your App Service (https://learn.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet) combined with an internal LoadBalancer-type Service (https://learn.microsoft.com/en-us/azure/aks/internal-lb). This way the communication between the App Service and AKS flows only over the internal VNet. Note that you can also keep an external LB Service like the one you already have; multiple Services can serve traffic to the same set of pods.
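A minimal sketch of what the internal Service could look like, assuming the same mockserver selector and port as above (the Service name is a placeholder); the annotation tells AKS to create the load balancer on the VNet instead of with a public IP:
apiVersion: v1
kind: Service
metadata:
  name: mockserver-internal
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: mockserver
  ports:
  - port: 1080
    targetPort: 1080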

Azure Kubernetes Service LoadBalancer external IP not accessible

I am new to the world of Kubernetes and was testing a sample Django "Hello world" app deployment. Using docker-compose I was able to access the hello world page in a browser, but I need to use Kubernetes. So I tested two options and neither of them worked.
1) I created an Azure CI/CD pipeline to build and push the image to ACR using the following Dockerfile:
FROM python:3.8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /hello_world
WORKDIR /hello_world
COPY . /hello_world/
RUN pip install -r requirements.txt
CMD [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
The pipeline completes successfully and uploads the image to the repository.
Now I use kubectl to deploy using the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-helloworld
  template:
    metadata:
      labels:
        app: django-helloworld
    spec:
      containers:
      - name: django-helloworld
        image: acrshgpdev1.azurecr.io/django-helloworld:194
        #imagePullPolicy: Always
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: django-helloworld
The deployment and service are created, but when I try to access the external IP of the LB service through a browser, the page is inaccessible. I used external-IP:port and it didn't work.
Any thoughts why would this be happening?
2) I used the same Dockerfile but a different deployment file (I changed the image to the locally created image and removed the LB service) to deploy the app to my local Kubernetes. The deployment file was as follows:
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  selector:
    app: django-helloworld
  ports:
  - protocol: TCP
    port: 80
    targetPort: 30800
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-helloworld
  template:
    metadata:
      labels:
        app: django-helloworld
    spec:
      containers:
      - name: django-helloworld
        image: django-helloworld:1.0
        #imagePullPolicy: Always
        ports:
        - containerPort: 8000
It creates the deployment and service but doesn't assign an external IP to the NodePort service, so I am not able to figure out which service type I should choose to verify that the app works. I know I can't choose an LB, as it doesn't work locally and I would need to deploy using a cloud service.
Just configure your service to be of type LoadBalancer and do a proper port mapping:
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8000
  selector:
    app: django-helloworld
https://kubernetes.io/docs/concepts/services-networking/service/
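Once the Service is applied, you can watch for the public IP Azure assigns (it shows as pending for a minute or two while the load balancer is provisioned):
kubectl get service django-helloworld-service --watch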
Make sure the deployment also has associated healthy pods (they show as Running and with 1/1 next to their name). If there aren't any, make sure your cluster can successfully pull from the acrshgpdev1.azurecr.io registry; you can integrate an AKS cluster directly with an ACR registry following this article:
az aks update -n myAKSCluster -g myResourceGroup --attach-acr acrshgpdev1.azurecr.io
or by adding the SP of the AKS cluster manually to the Reader role on the ACR.

How to get an external IP for a VM running Kubernetes services

I have hosted Docker images in a VM on Azure and I'm trying to access the Service from outside the VM. This is not working because no external IP is generated for the Service.
After building the Docker image, I applied a YAML file creating the Deployment and Service. My YAML file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: planservice-deployment
  labels:
    app: planservice-deploy
spec:
  selector:
    matchLabels:
      run: planservice-deploy
  replicas: 2
  template:
    metadata:
      labels:
        run: planservice-deploy
    spec:
      containers:
      - name: planservice-deploy
        image: planserviceimage
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8086
---
apiVersion: v1
kind: Service
metadata:
  name: planservice-service
  labels:
    app: planservice-deploy
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8086
  selector:
    run: planservice-deploy
Then I ran the following command to look at the running services:
kubectl get pods --output=wide
This command returned all the running services and their external IP information. But when I looked at the list, all the services had blank external IPs.
How do I set an external IP for all the services so that I can access my web services from outside the VM?
You need to change the type to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: planservice-service
  labels:
    app: planservice-deploy
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8086
  selector:
    run: planservice-deploy
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
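If you would rather keep a NodePort Service (for example on a single VM without a cloud load balancer), a sketch that pins the node port explicitly is shown below; 30080 is an arbitrary value in the allowed 30000-32767 range, and the app would then be reachable on node-IP:30080, provided the VM's network security group allows that port:
apiVersion: v1
kind: Service
metadata:
  name: planservice-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8086
    nodePort: 30080
    protocol: TCP
  selector:
    run: planservice-deploy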
