So I am learning about Kubernetes with a guide, and I am trying to deploy a MongoDB Pod with 1 replica. This is the deployment config file.
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-deployment
labels:
app: mongodb
spec:
replicas: 1
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
name: mongodb-service
spec:
selector:
app: mongodb
ports:
- protocol: TCP
port: 27017
targetPort: 27017
I am also trying to deploy a Mongo-Express Pod with an almost identical config file, but I keep getting CrashLoopBackOff for both Pods. From the little understanding I have, this happens when a container keeps failing and restarting in a cycle. I went through the events with kubectl get events and a warning with the message Back-off restarting failed container keeps occurring. After some digging around I came across a suggestion to add
command: ['sleep']
args: ['infinity']
That fixed the CrashLoopBackOff issue, but when I try to get the logs for the Pod, nothing is displayed in the terminal. I would appreciate some help and an explanation of why the command and args seem to fix it, and how I can stop this crash from happening to my Pods. Thank you very much.
My advice is to deploy MongoDB as a StatefulSet on Kubernetes.
In a stateful application like MongoDB, the replicas form a cluster in which the members must be individually identifiable: if one instance goes down, the remaining instances stay active and keep the workload running. A StatefulSet provides exactly this, giving each Pod a stable, unique ordinal index and a persistent identity.
See more: mongodb-sts, mongodb-on-kubernetes.
Also use a headless Service to manage the Pods' domains. With a headless Service there is no LoadBalancer and no single Service IP in front of the Pods; clients resolve and talk to the Pods directly, which is why clusterIP is set to None.
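For illustration, once the workload runs as a StatefulSet governed by a headless Service, every Pod gets a stable ordinal name and its own DNS record. Assuming a StatefulSet named mongodb with 3 replicas, the headless Service named mongodb shown below, and the default namespace (names chosen for this sketch), the members would be individually addressable as:
mongodb-0.mongodb.default.svc.cluster.local:27017
mongodb-1.mongodb.default.svc.cluster.local:27017
mongodb-2.mongodb.default.svc.cluster.local:27017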
In your case:
apiVersion: v1
kind: Service
metadata:
name: mongodb
spec:
clusterIP: None
selector:
app: mongodb
ports:
- port: 27017
Also, the error:
uncaught exception: Error: couldn't add user: Error preflighting normalization: U_STRINGPREP_PROHIBITED_ERROR _getErrorWithCode#src/mongo/shell/utils.js:25:13
indicates that the secret may be missing. Take a look: mongodb-initializating.
In your case the secret should look similar to this:
apiVersion: v1
kind: Secret
metadata:
name: mongodb-secret
type: Opaque
data:
mongo-root-username: YWRtaW4=
mongo-root-password: MWYyZDFlMmU2N2Rm
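The data values are base64-encoded (YWRtaW4= decodes to admin). A quick way to produce them, or to create the Secret in one step, is (a sketch; substitute your own credentials):
echo -n 'admin' | base64
echo -n 'your-password' | base64
kubectl create secret generic mongodb-secret \
  --from-literal=mongo-root-username=admin \
  --from-literal=mongo-root-password='your-password'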
Remember to also configure a volume for your Pods - follow the tutorials I have linked above.
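For example, a minimal PersistentVolumeClaim mounted at /data/db could look like this (a sketch; mongodb-pvc is a made-up name and it assumes your cluster has a default StorageClass):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
In the Pod spec you would then reference it under volumes with persistentVolumeClaim.claimName: mongodb-pvc and add a volumeMount for /data/db.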
Deploy MongoDB as a StatefulSet, not as a Deployment.
Example:
apiVersion: v1
kind: Service
metadata:
name: mongodb-service
labels:
name: mongo
spec:
ports:
- port: 27017
targetPort: 27017
clusterIP: None
selector:
role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongod
spec:
  selector:
    matchLabels:
      role: mongo
  serviceName: mongodb-service
  replicas: 3
template:
metadata:
labels:
role: mongo
environment: test
replicaset: MainRepSet
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: replicaset
operator: In
values:
- MainRepSet
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 10
volumes:
- name: secrets-volume
secret:
secretName: shared-bootstrap-data
defaultMode: 256
containers:
- name: mongod-container
#image: pkdone/mongo-ent:3.4
image: mongo
command:
- "numactl"
- "--interleave=all"
- "mongod"
- "--wiredTigerCacheSizeGB"
- "0.1"
- "--bind_ip"
- "0.0.0.0"
- "--replSet"
- "MainRepSet"
- "--auth"
- "--clusterAuthMode"
- "keyFile"
- "--keyFile"
- "/etc/secrets-volume/internal-auth-mongodb-keyfile"
- "--setParameter"
- "authenticationMechanisms=SCRAM-SHA-1"
resources:
requests:
cpu: 0.2
memory: 200Mi
ports:
- containerPort: 27017
volumeMounts:
- name: secrets-volume
readOnly: true
mountPath: /etc/secrets-volume
- name: mongodb-persistent-storage-claim
mountPath: /data/db
volumeClaimTemplates:
- metadata:
name: mongodb-persistent-storage-claim
annotations:
volume.beta.kubernetes.io/storage-class: "standard"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
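After the Pods are running, the MongoDB replica set still has to be initiated once from inside one of the members, roughly like this (a sketch based on the linked tutorial; the host names assume the StatefulSet and headless Service names above and the default namespace):
kubectl exec -it mongod-0 -- mongo --eval '
  rs.initiate({
    _id: "MainRepSet",
    members: [
      { _id: 0, host: "mongod-0.mongodb-service.default.svc.cluster.local:27017" },
      { _id: 1, host: "mongod-1.mongodb-service.default.svc.cluster.local:27017" },
      { _id: 2, host: "mongod-2.mongodb-service.default.svc.cluster.local:27017" }
    ]
  })'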
Related
A few days ago I deployed 2 services into an Azure Kubernetes cluster. I set up the cluster with 1 node, with virtual machine parameters B2s: 2 cores, 4 GB RAM, 8 GB temporary storage. Then I placed 2 pods on the same node:
a MySQL database with a 4 GiB persistent volume (5 tables at the moment)
a Spring Boot Java application
There are no replicas.
Take a look at the kubectl output for the deployed pods:
The purpose is to create an internal application for the company where I work, which will be used by the company team. There won't be a lot of data in the DB.
When we started to test the connection to the API from the front end, I received a memory alert like the one below:
The MySQL deployment YAML file looks like this:
apiVersion: v1
kind: Service
metadata:
name: mysql-db-testing-service
namespace: testing
spec:
type: LoadBalancer
ports:
- port: 3307
targetPort: 3306
selector:
app: mysql-db-testing
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-db-testing
namespace: testing
spec:
selector:
matchLabels:
app: mysql-db-testing
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql-db-testing
spec:
containers:
- name: mysql-db-container-testing
image: mysql:8.0.31
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysqldb-secret-testing
key: password
ports:
- containerPort: 3306
name: mysql-port
volumeMounts:
- mountPath: "/var/lib/mysql"
name: mysql-persistent-storage
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: azure-managed-disk-pvc-mysql-testing
nodeSelector:
env: preprod
The Spring app deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
name: spring-app-api-testing
namespace: testing
labels:
app: spring-app-api-testing
spec:
replicas: 1
selector:
matchLabels:
app: spring-app-api-testing
template:
metadata:
labels:
app: spring-app-api-testing
spec:
containers:
- name: spring-app-api-testing
image: techradaracr.azurecr.io/technology-radar-be:$(Build.BuildId)
env:
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysqldb-secret-testing
key: password
- name: MYSQL_PORT
valueFrom:
configMapKeyRef:
name: spring-app-testing-config-map
key: mysql_port
- name: MYSQL_HOST
valueFrom:
configMapKeyRef:
name: spring-app-testing-config-map
key: mysql_host
nodeSelector:
env: preprod
---
apiVersion: v1
kind: Service
metadata:
labels:
app: spring-app-api-testing
k8s-app: spring-app-api-testing
name: spring-app-api-testing-service
namespace: testing
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
type: LoadBalancer
selector:
app: spring-app-api-testing
First I deployed the MySQL database, then the Java Spring API.
I guess the problem is with the default resource allocation, and the MySQL DB is using 90% of the overall RAM. That's why I'm receiving the memory alert.
I know that there are sections for resource allocation in the YAML config:
resources:
requests:
cpu: 250m
memory: 64Mi
limits:
cpu: 500m
memory: 256Mi
for minimum and maximum CPU and memory resources. The question is how much should I allocate to the Spring app and how much to the MySQL database in order to avoid memory problems?
I would be grateful for help.
First of all, running the whole cluster on only one VM defeats the purpose of using Kubernetes, especially since you are using a small SKU for the VMSS. Have you considered running the application outside of k8s?
To answer your question, there is no fixed formula or set of values for the requests/limits. The values you choose for resource requests and limits will depend on the specific requirements of your application and the resources available in your cluster.
In detail, you should consider the workload characteristics, cluster capacity, performance (if the values are too small, the application will struggle) and cost.
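Purely as an illustration of the shape this can take (the numbers below are assumptions to start from and then tune with your own monitoring, not recommended values), on a 4 GB B2s node you could cap each container so that neither can take most of the node's memory:
# MySQL container (illustrative starting values)
resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi
# Spring Boot container (illustrative starting values)
resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi
With limits in place a single container can no longer consume ~90% of the node's RAM, which is what the alert suggests is happening now.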
Please refer to the best practices here: https://learn.microsoft.com/en-us/azure/aks/developer-best-practices-resource-management
I have an AKS cluster deployed with an Application Gateway. These are all Docker images running on the AKS cluster with a simple ingress. They all run in the same default namespace. One is a Vue frontend, the second is a Java Spring backend, and the last is a full-stack Tomcat image. All three services ran completely fine without any issues; however, today all 3 services gave back a 404 error (without any changes to the cluster, to my knowledge).
When testing the health of these services, I was still able to reach all the services with kubectl. In addition, the health probe on the Azure Application Gateway returned a healthy 200 status code for all three services.
I have tried removing the workloads and adding them back again. I have tried removing the services and then adding them back again. I have done the same for the ingress as well.
I have also tried setting up the entire cluster from scratch on a new AKS cluster, and all three images perform without any issues there.
The YAML for an application on the AKS cluster looks like:
- apiVersion: apps/v1
kind: Deployment
metadata:
name: testvuefe
spec:
replicas: 2
selector:
matchLabels:
app: testvuefe
template:
metadata:
labels:
app: testvuefe
spec:
nodeSelector:
kubernetes.io/os: linux
containers:
- name: testvuefe
image: testdocker.azurecr.io/testvuefe:latest
ports:
- containerPort: 80
resources:
requests:
cpu: '0'
memory: '0'
limits:
cpu: '128'
memory: 512G
volumeMounts:
- mountPath: "/fileShare"
name: volume
volumes:
- name: volume
persistentVolumeClaim:
claimName: aks-azurefile
- apiVersion: v1
kind: Service
metadata:
name: testvuefe-service
spec:
type: ClusterIP
ports:
- targetPort: 80
name: port80
port: 80
protocol: TCP
selector:
app: testvuefe
- apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: testvuefe-ingress
annotations:
kubernetes.io/ingress.class: azure/application-gateway
appgw.ingress.kubernetes.io/health-probe-path: /user
appgw.ingress.kubernetes.io/cookie-based-affinity: "true"
spec:
rules:
- host: test.wbsoft.co.kr
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: testvuefe-service
port:
number: 80
As this issue has occurred once on a cluster, I need to understand what is causing it so that it does not happen in production. It would also be great if a solution could be found for the issue.
I'm following along with this tutorial. I'm at the stage where I deploy using the command:
kubectl apply -f azure-vote-all-in-one-redis.yaml
The YAML file looks like this:
version: '3'
services:
azure-vote-back:
image: redis
container_name: azure-vote-back
ports:
- "6379:6379"
azure-vote-front:
build: ./azure-vote
image: azure-vote-front
container_name: azure-vote-front
environment:
REDIS: azure-vote-back
ports:
- "8080:80"
However, I'm getting the error:
error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
If I add an apiVersion and a Kind, like this:
apiVersion: v1
kind: Pod
Then I get the error:
error validating data: ValidationError(Pod): unknown field "services" in io.k8s.api.core.v1.Pod
Am I missing something here?
It looks like you're trying to apply a Docker Swarm/Compose YAML file to your Kubernetes cluster. This will not work directly without a conversion.
Using a tool like Kompose to convert your Docker YAML into k8s YAML is a useful step into migrating from one to the other.
For more information see https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
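For example, a rough conversion and deployment could look like this (a sketch; the exact manifests kompose generates depend on your Compose file):
kompose convert -f docker-compose.yaml -o k8s/
kubectl apply -f k8s/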
So first of all, every YAML definition should follow the AKMS structure: apiVersion, kind, metadata, spec. Also, you should avoid bare Pods and use Deployments instead, because Deployments manage Pods on their own.
Here's a sample vote-back/front definition:
apiVersion: apps/v1
kind: Deployment
metadata:
name: azure-vote-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
metadata:
labels:
app: azure-vote-back
spec:
containers:
- name: azure-vote-back
image: redis
ports:
- containerPort: 6379
name: redis
---
apiVersion: v1
kind: Service
metadata:
name: azure-vote-back
spec:
ports:
- port: 6379
selector:
app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: azure-vote-front
spec:
  replicas: 3
  selector:
    matchLabels:
      app: azure-vote-front
strategy:
rollingUpdate:
maxSurge: 60%
maxUnavailable: 60%
template:
metadata:
labels:
app: azure-vote-front
spec:
containers:
- name: azure-vote-front
image: aksrg.azurecr.io/azure-vote-front:voting-dev
ports:
- containerPort: 80
env:
- name: REDIS
value: "azure-vote-back"
- name: MY_POD_NAMESPACE
valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
imagePullSecrets:
- name: k8s
---
apiVersion: v1
kind: Service
metadata:
name: azure-vote-front
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: azure-vote-front
In my case, I am deploying my project on GKE via Travis. In my Travis file, I am calling a shell script (deploy.sh).
In the deploy.sh file, I have written all the steps to create kubernetes resources:
### Deploy
# Apply k8s config
kubectl apply -f .
So here, I replaced kubectl apply -f . with the individual file names as follows:
### Deploy
# Apply k8s config
kubectl apply -f namespace.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
And then, the error is fixed!
I am following this document.
https://github.com/Azure/DevOps-For-AI-Apps/blob/master/Tutorial.md
The CI/CD pipeline works fine, but I want to validate the application using the external IP of the service deployed to the Kubernetes cluster.
Deploy.yaml
apiVersion: v1
kind: Pod
metadata:
name: imageclassificationapp
spec:
containers:
- name: model-api
image: crrq51278013.azurecr.io/model-api:156
ports:
- containerPort: 88
imagePullSecrets:
- name: imageclassificationappdemosecret
Pod details
C:\Users\nareshkumar_h>kubectl describe pod imageclassificationapp
Name: imageclassificationapp
Namespace: default
Node: aks-nodepool1-97378755-2/10.240.0.5
Start Time: Mon, 05 Nov 2018 17:10:34 +0530
Labels: new-label=imageclassification-label
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"imageclassificationapp","namespace":"default"},"spec":{"containers":[{"image":"crr...
Status: Running
IP: 10.244.1.87
Containers:
model-api:
Container ID: docker://db8687866d25eb4311175c5ccb5a7205379168c64cdfe716b09557fc98e2bd6a
Image: crrq51278013.azurecr.io/model-api:156
Image ID: docker-pullable://crrq51278013.azurecr.io/model-api#sha256:766673989a59fe0b1e849469f38acda96853a1d84e4b4d64ffe07810dd5d04e9
Port: 88/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 05 Nov 2018 17:12:49 +0530
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qhdjr (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-qhdjr:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qhdjr
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Service details:
C:\Users\nareshkumar_h>kubectl describe service imageclassification-service
Name: imageclassification-service
Namespace: default
Labels: run=load-balancer-example
Annotations: <none>
Selector: run=load-balancer-example
Type: LoadBalancer
IP: 10.0.24.9
LoadBalancer Ingress: 52.163.191.28
Port: <unset> 88/TCP
TargetPort: 88/TCP
NodePort: <unset> 32672/TCP
Endpoints: 10.244.1.65:88,10.244.1.88:88,10.244.2.119:88
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I am hitting the URL below, but the request times out.
http://52.163.191.28:88/
Can someone please help? Please let me know if you need any further details.
For your issue, I did a test and it worked on my side. The YAML file is here:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
type: LoadBalancer
ports:
- port: 88
targetPort: 80
selector:
app: nginx
And there are some points you should pay attention to.
You should make sure which port the service listens on inside the container. For example, in my test, the nginx service listens on port 80 by default.
The port that you want to expose on the node should be free. In other words, the port must not already be used by another service.
When all the steps are done, you can access the public IP with the port you have exposed on the node (see the commands below).
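A quick way to wait for the public IP and confirm the exposed port (assuming the Service name nginx from the example above):
kubectl get service nginx --watch
kubectl get service nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'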
The screenshots show the result of my test:
Hope this will help you!
We were able to solve this issue after reconfiguring the Kubernetes Service with the right configuration and changing the deploy.yaml file as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
name: imageclassificationapp
spec:
selector:
matchLabels:
app: imageclassificationapp
replicas: 1 # tells the deployment to run 1 pod matching the template
template:
metadata:
labels:
app: imageclassificationapp
spec:
containers:
- name: model-api
image: crrq51278013.azurecr.io/model-api:205
ports:
- containerPort: 88
---
apiVersion: v1
kind: Service
metadata:
name: imageclassificationapp
spec:
type: LoadBalancer
ports:
- port: 85
targetPort: 88
selector:
app: imageclassificationapp
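With this configuration the Service listens on port 85 externally and forwards to container port 88. Once the EXTERNAL-IP is assigned, you can verify it, for example (a sketch):
kubectl get service imageclassificationapp
curl http://<EXTERNAL-IP>:85/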
We can close this thread now.
I'm trying to mount a persistent volume into my Windows container, but I always get this error:
Unable to mount volumes for pod "mssql-with-pv-deployment-3263067711-xw3mx_default(....)": timeout expired waiting for volumes to attach/mount for pod "default"/"mssql-with-pv-deployment-3263067711-xw3mx". list of unattached/unmounted volumes=[blobdisk01]
I've created a GitHub gist with the console output of "get events" and "describe sc | pvc | po"; maybe someone will find the solution with it.
Below are my scripts that I'm using for deployment.
my storageclass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: azure-disk-sc
provisioner: kubernetes.io/azure-disk
parameters:
skuname: Standard_LRS
my PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: azure-disk-pvc
spec:
storageClassName: azure-disk-sc
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
and the deployment of my container:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mssql-with-pv-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql-with-pv
  template:
metadata:
labels:
app: mssql-with-pv
spec:
nodeSelector:
beta.kubernetes.io/os: windows
terminationGracePeriodSeconds: 10
containers:
- name: mssql-with-pv
image: testacr.azurecr.io/sql/mssql-server-windows-developer
ports:
- containerPort: 1433
env:
- name: ACCEPT_EULA
value: "Y"
- name: SA_PASSWORD
valueFrom:
secretKeyRef:
name: mssql
key: SA_PASSWORD
volumeMounts:
- mountPath: "c:/volume"
name: blobdisk01
volumes:
- name: blobdisk01
persistentVolumeClaim:
claimName: azure-disk-pvc
---
apiVersion: v1
kind: Service
metadata:
name: mssql-with-pv-deployment
spec:
selector:
app: mssql-with-pv
ports:
- protocol: TCP
port: 1433
targetPort: 1433
type: LoadBalancer
What am I doing wrong? Is there another way to mount a volume?
Thanks for any help :)
I would try:
Changing the API version to v1: https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-disk
kubectl get events to see if you have a more detailed error (I could figure out the reason this way when I was using NFS, by watching events) - see the commands below
Maybe it is this bug, which I read about in this post?
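For example, the commands I would start with here (a sketch, using the names from your manifests):
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl describe pvc azure-disk-pvc
kubectl describe pod -l app=mssql-with-pv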
You will need a new volume on the D: drive; it looks like folders on C: are not supported for Windows containers, see here:
https://github.com/kubernetes/kubernetes/issues/65060
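If that issue applies to your setup, one possible change (a sketch, not verified against your cluster) is to mount the volume as a path on the D: drive instead of under C:, and configure SQL Server to keep its data under that path:
volumeMounts:
- mountPath: "d:"
  name: blobdisk01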
Demos:
https://github.com/andyzhangx/demo/tree/master/windows/azuredisk