I have four microservices; they all have different names, different images, and different container ports and service ports. I took this piece of code from one of the answers on Stack Overflow. It creates my four deployments with their names and images, but I am unable to set each container's ports and resources.
My main goal is to create a master template where I can just plug in a few values and it handles the manifest for a new microservice, instead of juggling a bunch of separate manifests.
deployment.yaml
{{ if .Values.componentTests }}
{{- range .Values.componentTests }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ . }}
  labels:
    environment: {{ $.Values.environment }}
    app: {{ . }}
    aadpodidbinding: podid-{{ . }}
    chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
    release: {{ $.Release.Name }}
    heritage: {{ $.Release.Service }}
spec:
  replicas: {{ $.Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ . }}
  template:
    metadata:
      labels:
        app: {{ . }}
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: {{ . }}
          image: mycr.azurecr.io/master/{{ . }}:{{ $.Values.image.tag }}
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          resources:
{{- range $.Values.high.resources }}
---
{{- end }}
{{- end }}
{{ end }}
values.yaml
replicaCount: 1
image:
  # repository: nginx
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "latest"
componentTests:
  - service01
  - service02
  - service03
  - service04
environment: QA
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
services:
  - service01
  - service02
  - service03
# serviceAccount:
#   # Specifies whether a service account should be created
#   create: true
#   # Annotations to add to the service account
#   annotations: {}
#   # The name of the service account to use.
#   # If not set and create is true, a name is generated using the fullname template
#   name: ""
podAnnotations: {}
podSecurityContext: {}
  # fsGroup: 2000
securityContext: {}
  # capabilities:
  #   drop:
  #     - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000
# service:
#   type: ClusterIP
#   port: 80
# ingress:
#   enabled: false
#   annotations: {}
#     # kubernetes.io/ingress.class: nginx
#     # kubernetes.io/tls-acme: "true"
#   hosts:
#     - host: chart-example.local
#       paths: []
#   tls: []
#   #  - secretName: chart-example-tls
#   #    hosts:
#   #      - chart-example.local
high:
  resources:
    requests:
      cpu: 350m
      memory: 800Mi
    limits:
      cpu: 400m
      memory: 850Mi
medium:
  resources:
    requests:
      cpu: 200m
      memory: 650Mi
    limits:
      cpu: 250m
      memory: 700Mi
low:
  resources:
    requests:
      cpu: 100m
      memory: 500Mi
    limits:
      cpu: 150m
      memory: 550Mi
autoscaling:
  enabled: false
  minReplicas: 2
  maxReplicas: 4
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80
tolerations: []
affinity: {}
output
MANIFEST:
---
# Source: test/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service01
  labels:
    environment: QA
    app: service01
    aadpodidbinding: podid-service01
    chart: test-0.1.1
    release: api
    heritage: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service01
  template:
    metadata:
      labels:
        app: service01
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: service01
          image: mycr.azurecr.io/master/service01:latest
          imagePullPolicy: IfNotPresent
          resources:
---
# Source: test/templates/deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service02
  labels:
    environment: QA
    app: service02
    aadpodidbinding: podid-service02
    chart: test-0.1.1
    release: api
    heritage: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service02
  template:
    metadata:
      labels:
        app: service02
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: service02
          image: mycr.azurecr.io/master/service02:latest
          imagePullPolicy: IfNotPresent
          resources:
---
# Source: test/templates/deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service02-ui
  labels:
    environment: QA
    app: service02-ui
    aadpodidbinding: podid-service02-ui
    chart: test-0.1.1
    release: api
    heritage: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service02-ui
  template:
    metadata:
      labels:
        app: service02-ui
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: service02-ui
          image: mycr.azurecr.io/master/service02-ui:latest
          imagePullPolicy: IfNotPresent
          resources:
---
# Source: test/templates/deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service03
  labels:
    environment: QA
    app: service03
    aadpodidbinding: podid-service03
    chart: test-0.1.1
    release: api
    heritage: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service03
  template:
    metadata:
      labels:
        app: service03
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: service03
          image: mycr.azurecr.io/master/service03:latest
          imagePullPolicy: IfNotPresent
          resources:
---
# Source: test/templates/deployment.yaml
service01.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service01
  labels:
    aadpodidbinding: podid-service01
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service01
  template:
    metadata:
      labels:
        app: service01
        aadpodidbinding: podid-service01
      annotations:
        build: "2020102901"
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: service01
          image: mycr.azurecr.io/master/service01:latest
          resources:
            requests:
              cpu: 250m
              memory: "700Mi"
            limits:
              memory: "700Mi"
          ports:
            - containerPort: 7474
          env:
            - name: KEY_VAULT_ID
              value: "key-vault"
            - name: AZURE_ACCOUNT_NAME
              value: "storage"
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: 7474
              scheme: HTTP
              httpHeaders:
                - name: service-id
                  value: root
                - name: request-id
                  value: healthcheck
            initialDelaySeconds: 60
            periodSeconds: 30
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 7474
              scheme: HTTP
              httpHeaders:
                - name: service-id
                  value: root
                - name: request-id
                  value: healthcheck
            initialDelaySeconds: 60
            periodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: service01
spec:
  ports:
    - port: 7474
      name: main
    # - port: 9999
    #   name: health
  selector:
    app: service01
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: service01
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: service01
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
I don't know in what context that Helm chart was created, but from what I understand it was used for integration testing or something like that. It is certainly not what you want for your use case. You are better off with a Helm chart that generates the manifests for only one service, and then you can reuse that chart for all your services. That means you will run multiple helm install commands with different values instead of one helm install that creates all your services. With one big chart like that, you would have to update the chart every time you add a new service.
You'll have:
helm install -f service01-values.yaml ./mychart
helm install -f service02-values.yaml ./mychart
helm install -f service03-values.yaml ./mychart
helm install -f service04-values.yaml ./mychart
instead of:
helm install -f values.yaml ./mychart
To be able to do this, you'll need to change your chart a little and remove the loop {{- range .Values.componentTests }}; a rough sketch of a per-service chart follows below. Learning how to build a chart is easier than you think: Create Your First Helm Chart
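For illustration only, here is a minimal sketch of what such a single-service chart could look like. The file name service01-values.yaml and the value keys (name, image.repository, containerPort, resources) are assumptions made for this sketch rather than keys from your existing chart, so rename them to whatever fits your conventions:

# service01-values.yaml (hypothetical): one small values file per microservice
name: service01
replicaCount: 1
image:
  repository: mycr.azurecr.io/master/service01
  tag: "latest"
  pullPolicy: IfNotPresent
containerPort: 7474
resources:
  requests:
    cpu: 350m
    memory: 800Mi
  limits:
    cpu: 400m
    memory: 850Mi

# templates/deployment.yaml (hypothetical): no range loop, one release per service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
  labels:
    app: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - name: {{ .Values.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.containerPort }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}

Because each service gets its own values file, the container port and the resource block can differ per service without touching the template.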
I am trying to deploy an image in k3s, but I am getting the error below. I have made sure that there is no syntax error, and I have also added matchLabels in my spec, but I don't know what is causing the issue.
spec.selector: Required value
spec.template.metadata.labels: Invalid value: map[string]string{...} selector does not match template labels
This is my yaml file
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: darwin_tritron_server
  name: darwin_tritron_server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: darwin_tritron_server
  template:
    metadata:
      labels:
        app: darwin_tritron_server
    spec:
      containers:
        -
          args:
            - "cd /models /opt/tritonserver/bin/tritonserver --model-repository=/models --allow-gpu-metrics=false --strict-model-config=false"
          command:
            - /bin/sh
            - "-c"
          image: "nvcr.io/nvidia/tritonserver:20.12-py3"
          name: flower
          ports:
            -
              containerPort: 8000
              name: http-triton
            -
              containerPort: 8001
              name: grpc-triton
            -
              containerPort: 8002
              name: metrics-triton
          resources:
            limits:
              nvidia.com/mig-1g.5gb: 1
          volumeMounts:
            -
              mountPath: /models
              name: models
      volumes:
        -
          name: models
          nfs:
            path: <path/to/flowerdemo/model/files>
            readOnly: false
            server: "<IP address of the server>"
Any help would be appreciated
Wrong YAML indentation in your spec; try:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: darwin_tritron_server
  name: darwin_tritron_server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: darwin_tritron_server
  template:
    metadata:
      labels:
        app: darwin_tritron_server
    spec:
      containers:
        - args:
            - "cd /models /opt/tritonserver/bin/tritonserver --model-repository=/models --allow-gpu-metrics=false --strict-model-config=false"
          command:
            - /bin/sh
            - "-c"
          image: "nvcr.io/nvidia/tritonserver:20.12-py3"
          name: flower
          ports:
            - containerPort: 8000
              name: http-triton
            - containerPort: 8001
              name: grpc-triton
            - containerPort: 8002
              name: metrics-triton
          resources:
            limits:
              nvidia.com/mig-1g.5gb: 1
          volumeMounts:
            - mountPath: /models
              name: models
      volumes:
        - name: models
          nfs:
            path: <path/to/flowerdemo/model/files>
            readOnly: false
            server: "<IP address of the server>"
I am new to Azure Kubernetes Service. I have created an Azure Kubernetes cluster and tried to deploy some workload in it. The .yaml file is as follows:
- apiVersion: v1
  kind: Namespace
  metadata:
    name: azure-vote
  spec:
    finalizers:
      - kubernetes
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: azure-vote-back
    namespace: azure-vote
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: azure-vote-back
    template:
      metadata:
        labels:
          app: azure-vote-back
      spec:
        nodeSelector:
          beta.kubernetes.io/os: linux
        containers:
          - name: azure-vote-back
            image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
            env:
              - name: ALLOW_EMPTY_PASSWORD
                value: 'yes'
            resources:
              requests:
                cpu: 100m
                memory: 128Mi
              limits:
                cpu: 250m
                memory: 256Mi
            ports:
              - containerPort: 6379
                name: redis
- apiVersion: v1
  kind: Service
  metadata:
    name: azure-vote-back
    namespace: azure-vote
  spec:
    ports:
      - port: 6379
    selector:
      app: azure-vote-back
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: azure-vote-front
    namespace: azure-vote
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: azure-vote-front
    template:
      metadata:
        labels:
          app: azure-vote-front
      spec:
        nodeSelector:
          beta.kubernetes.io/os: linux
        containers:
          - name: azure-vote-front
            image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
            resources:
              requests:
                cpu: 100m
                memory: 128Mi
              limits:
                cpu: 250m
                memory: 256Mi
            ports:
              - containerPort: 80
            env:
              - name: REDIS
                value: azure-vote-back
- apiVersion: v1
  kind: Service
  metadata:
    name: azure-vote-front
    namespace: azure-vote
  spec:
    type: LoadBalancer
    ports:
      - port: 80
    selector:
      app: azure-vote-front
When I deploy this .yaml via the Azure CLI it gives me a validation error but doesn't indicate where it is. When I run kubectl apply -f ./filename.yaml --validate=false it gives a "cannot unmarshal array into Go value of type unstructured.detector" error. However, when I apply the same YAML through the Azure portal UI it runs without any error. I'd appreciate it if someone could explain the reason for this and how to fix it.
I tried to run the code you provided in the Portal as well as through the Azure CLI. It was created successfully through the Portal UI, but with the Azure CLI I received the same error as you.
The culprit is the top-level structure of the file: it is written as a single YAML list (every resource starts with -), which kubectl/az cannot unmarshal, hence "cannot unmarshal array". After rewriting it as separate documents joined with --- (plus a couple of small touch-ups such as the nodeSelector key), the same command deployed successfully through the Azure CLI:
YAML File:
apiVersion: v1
kind: Namespace
metadata:
  name: azure-vote
spec:
  finalizers:
    - kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
  namespace: azure-vote
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: azure-vote-back
          image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
          env:
            - name: ALLOW_EMPTY_PASSWORD
              value: "yes"
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 6379
              name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
    - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
  namespace: azure-vote
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: azure-vote-front
          image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
          env:
            - name: REDIS
              value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: azure-vote-front
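To make the structural difference concrete, here is a minimal sketch using two of the objects from the file above, first in the list form that fails, then in the multi-document form that kubectl apply expects:

# YAML list of objects: rejected with "cannot unmarshal array ..."
- apiVersion: v1
  kind: Namespace
  metadata:
    name: azure-vote
- apiVersion: v1
  kind: Service
  metadata:
    name: azure-vote-back
    namespace: azure-vote
  spec:
    ports:
      - port: 6379
    selector:
      app: azure-vote-back

# Separate documents joined with ---: accepted by kubectl apply
apiVersion: v1
kind: Namespace
metadata:
  name: azure-vote
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
  namespace: azure-vote
spec:
  ports:
    - port: 6379
  selector:
    app: azure-vote-back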
Per the title, I have the integration working following the documentation.
I can deploy the nginx.yaml and after about 70 seconds I can print out secrets with:
kubectl exec -it nginx -- cat /mnt/secrets-store/secret1
Now I'm trying to apply it to a PostgreSQL deployment for testing and I get the following from the Pod description:
Warning FailedMount 3s kubelet MountVolume.SetUp failed for volume "secrets-store01-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod staging/postgres-deployment-staging-69965ff767-8hmww, err: rpc error: code = Unknown desc = failed to mount objects, error: failed to get keyvault client: failed to get key vault token: nmi response failed with status code: 404, err: <nil>
And from the nmi logs:
E0221 22:54:32.037357 1 server.go:234] failed to get identities, error: getting assigned identities for pod staging/postgres-deployment-staging-69965ff767-8hmww in CREATED state failed after 16 attempts, retry duration [5]s, error: <nil>. Check MIC pod logs for identity assignment errors
I0221 22:54:32.037409 1 server.go:192] status (404) took 80003389208 ns for req.method=GET reg.path=/host/token/ req.remote=127.0.0.1
Not sure why, since I basically copied the settings from the nginx.yaml into the postgres.yaml. Here they are:
# nginx.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx
  namespace: staging
  labels:
    aadpodidbinding: aks-akv-identity-binding-selector
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: secrets-store01-inline
          mountPath: /mnt/secrets-store
          readOnly: true
  volumes:
    - name: secrets-store01-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: aks-akv-secret-provider
# postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment-staging
  namespace: staging
  labels:
    aadpodidbinding: aks-akv-identity-binding-selector
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13-alpine
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: secrets-store01-inline
              mountPath: /mnt/secrets-store
              readOnly: true
            - name: postgres-storage-staging
              mountPath: /var/postgresql
      volumes:
        - name: secrets-store01-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: aks-akv-secret-provider
        - name: postgres-storage-staging
          persistentVolumeClaim:
            claimName: postgres-storage-staging
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service-staging
  namespace: staging
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
    - port: 5432
      targetPort: 5432
Suggestions for what the issue is here?
Oversight on my part... the aadpodidbinding label should go under template:, per:
https://azure.github.io/aad-pod-identity/docs/best-practices/#deploymenthttpskubernetesiodocsconceptsworkloadscontrollersdeployment
The resulting YAML should be:
# postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment-production
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
        aadpodidbinding: aks-akv-identity-binding-selector
    spec:
      containers:
        - name: postgres
          image: postgres:13-alpine
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB_FILE
              value: /mnt/secrets-store/DEV-PGDATABASE
            - name: POSTGRES_USER_FILE
              value: /mnt/secrets-store/DEV-PGUSER
            - name: POSTGRES_PASSWORD_FILE
              value: /mnt/secrets-store/DEV-PGPASSWORD
            - name: POSTGRES_INITDB_ARGS
              value: "-A md5"
            - name: PGDATA
              value: /var/postgresql/data
          volumeMounts:
            - name: secrets-store01-inline
              mountPath: /mnt/secrets-store
              readOnly: true
            - name: postgres-storage-production
              mountPath: /var/postgresql
      volumes:
        - name: secrets-store01-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: aks-akv-secret-provider
        - name: postgres-storage-production
          persistentVolumeClaim:
            claimName: postgres-storage-production
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service-production
  namespace: production
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
    - port: 5432
      targetPort: 5432
Adding the label to the pod template in the spec will resolve the issue: use the label aadpodidbinding: <your azure pod identity selector> in the template's labels section of the deployment.yaml file.
Sample deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        aadpodidbinding: azure-pod-identity-binding-selector
    spec:
      containers:
        - name: nginx
          image: nginx
          env:
            - name: SECRET
              valueFrom:
                secretKeyRef:
                  name: test-secret
                  key: key
          volumeMounts:
            - name: secrets-store-inline
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: dev-1spc
I'm trying to mount a directory into my pods, but it always shows me the error "no file or directory found".
This is the YAML file used for the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp1-deployment
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      volumes:
        - name: test-mount-1
          persistentVolumeClaim:
            claimName: task-pv-claim-1
      containers:
        - name: myapp
          image: 192.168.11.168:5002/dev:0.0.1-SNAPSHOT-6f4b1db
          command: ["java -jar /jar/myapp1-0.0.1-SNAPSHOT.jar --spring.config.location=file:/etc/application.properties"]
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: "/etc/application.properties"
              #subPath: application.properties
              name: test-mount-1
      # hostNetwork: true
      imagePullSecrets:
        - name: regcred
      #volumes:
      #  - name: test-mount
and this is the persistent volume config:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: test-mount-1
  labels:
    type: local
    app: myapp
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/share"
and this is the persistent volume claim config:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim-1
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
and this is the service config used for the deployment:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  externalIPs:
    - 192.168.11.145
  ports:
    - protocol: TCP
      port: 8080
      nodePort: 31000
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
If anyone can help, I will be grateful. Thanks.
You haven't included your storage class in your question, but I'm assuming you're attempting local storage on a node. It might be a simple thing to check, but does the directory exist on the node where your pod is running? And is it writable? Depending on how many worker nodes you have, your pod could be running on any node, and the PV isn't tied to any particular node. You could use node affinity to ensure that your pod runs on the same node that contains the directory referenced in your PV, if that's the issue (see the sketch below).
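For illustration, here is a minimal sketch of that node affinity idea as an addition to the pod template of the Deployment above; worker-node-1 is a placeholder for the real node name (see kubectl get nodes):

# Hypothetical addition to the Deployment's pod template spec:
# pin the pods to the node that actually has /mnt/share on its filesystem.
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - worker-node-1   # placeholder: the node that has the directory

If the directory only exists on one worker node, this keeps the pods off the other nodes.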
Edit: if it's NFS, you need to change your PV to include:
nfs:
  path: /mnt/share
  server: <nfs server node ip/fqdn>
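For reference, a minimal sketch of the full PV from the question with hostPath swapped out for nfs; the server value is a placeholder:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-mount-1
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/share
    server: <nfs server node ip/fqdn>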
I am working on deploying one of my applications to Azure Kubernetes Service.
I have ACR and AKS configured, and I am trying the deployment through the Azure CLI.
Here is the Kubernetes deployment file content:
kind: Deployment
metadata:
  name: pocaksimage1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: pocaksimage1
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
        - name: pocaksimage1
          image: pocaksimage1
          ports:
            - containerPort: 6379
              name: pocaksimage1
---
apiVersion: v1
kind: Service
metadata:
  name: pocaksimage1
spec:
  ports:
    - port: 6379
  selector:
    app: pocaksimage1
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: pocaksimage1
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: pocaksimage1
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
        - name: pocaksimage1
          image: repo
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 250m
            limits:
              cpu: 500m
          env:
            - name: PRE_PROD
              value: "pocaksimage1"
      imagePullSecrets:
        - name: pocsecret
---
apiVersion: v1
kind: Service
metadata:
  name: pocaksimage1-front
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: pocaksimage1-front
The error I am getting is "0/1 nodes are available: 1 node(s) didn't match node selector."
Please help me to get this resolved.
Thanks
I suppose the issue is that AKS doesn't yet support Windows nodes, so you don't really have any Windows nodes. You can create an AKS cluster with Windows nodes, but it's in preview at this point in time.
https://github.com/Azure/AKS/blob/master/previews.md#windows
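If the pocaksimage1 image is actually a Linux image (an assumption on my part), the quickest way to get the pods scheduled on the nodes you already have is to target Linux instead of Windows in both pod templates, for example:

# Assumes the image can run on Linux; otherwise you need a Windows node pool.
spec:
  template:
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux

You can check which OS labels your nodes carry with kubectl get nodes --show-labels before settling on the selector.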