SMB/CIFS mountOptions fail to apply in Kubernetes - Azure

What happened:
Unable to use mountOptions for an on-prem SMB mount.
What you expected to happen:
The manifest with mountOptions should apply successfully.
How to reproduce it (as minimally and precisely as possible):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "test"
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "smbmount.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "smbmount.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          resources:
            limits:
              memory: "16048Mi"
              cpu: "16000m"
          volumeMounts:
            - name: smb01
              mountPath: /smb/01
      volumes:
        - name: smb01
          csi:
            driver: file.csi.azure.com
            volumeAttributes:
              server: 10.10.10.100
              shareName: share01
              secretName: smbcreds
              mountOptions:
                - dir_mode=0777
The error I am getting:
Error: unable to build kubernetes objects from release manifest: error
validating "": error validating data:
ValidationError(Deployment.spec.template.spec.volumes[0].csi.volumeAttributes.mountOptions):
invalid type for io.k8s.api.core.v1.CSIVolumeSource.volumeAttributes:
got "array", expected "string"
Am I using the right place for mountOptions, or did I make a mistake somewhere else in the deployment file?
values.yaml
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 2
image:
repository: prxzzzzjjkkk.azurecr.io/smbtest
pullPolicy: Always
# Overrides the image tag whose default is the chart appVersion.
tag: latest
imagePullSecrets:
- name: acr-pull-secrets
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 80
ingress:
enabled: false
className: ""
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths:
- path: /
pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []

Please change deployment.spec.template.spec.volumes.csi.volumeAttributes.mountOptions to a string of comma-separated key=value pairs instead of an array.
So your modified Deployment manifest should have:
...
volumes:
  - name: smb01
    csi:
      driver: file.csi.azure.com
      volumeAttributes:
        server: 10.10.10.100
        shareName: share01
        secretName: smbcreds
        mountOptions: "dir_mode=0777"  # correct format
instead of:
...
volumes:
  - name: smb01
    csi:
      driver: file.csi.azure.com
      volumeAttributes:
        server: 10.10.10.100
        shareName: share01
        secretName: smbcreds
        mountOptions:  # incorrect format
          - dir_mode=0777
Reference: https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/nginx-pod-azurefile-inline-volume.yaml
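If you need more than one option, the answer above implies they all go into the same string, separated by commas. A minimal sketch, assuming these extra options (file_mode, uid, gid) are ones you actually want - the values are purely illustrative:
volumes:
  - name: smb01
    csi:
      driver: file.csi.azure.com
      volumeAttributes:
        server: 10.10.10.100
        shareName: share01
        secretName: smbcreds
        # hypothetical extra options, joined into one comma-separated string
        mountOptions: "dir_mode=0777,file_mode=0777,uid=1000,gid=1000"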

The error clearly states the problem: for CSIVolumeSource.volumeAttributes it got "array" but expected "string", i.e. it received an array instead of a string.
Your YAML should look like this:
volumes:
  - name: smb01
    csi:
      driver: file.csi.azure.com
      volumeAttributes:
        server: 10.10.10.100
        shareName: share01
        secretName: smbcreds
        mountOptions: dir_mode=0777

Related

Azure Kubernetes - JMeter No X11 DISPLAY variable was set

I am developing a JMeter dynamic master-slave performance environment on top of Azure Kubernetes Service. In my JMeter slave Deployment, the pod gets into the CrashLoopBackOff state, another pod is created, and this continues in a loop. While looking at the JMeter slave logs, I found this error:
An error occurred: No X11 DISPLAY variable was set, but this program performed an operation which requires it.
Currently, I am using Helm to deploy the pods and below are my jmeter-slave-deployment.yaml and values.yaml files.
jmeter-slave-deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.dep.name }}
namespace: perf-platform
labels:
app.kubernetes.io/name: {{ .Values.dep.name }}
spec:
replicas: {{ .Values.slave.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ .Values.dep.name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ .Values.dep.name }}
spec:
containers:
- name: distributed-jmeter-slave
image: "{{ .Values.slave.image }}:{{ .Values.slave.tag }}"
imagePullPolicy: {{ .Values.slave.pullPolicy }}
env:
- name: HEAP
value: "-Xms{{ .Values.slave.heap.xms1.memory }} -Xmx{{ .Values.slave.heap.xms2.memory }}"
ports:
- containerPort: 50000
- containerPort: 1099
resources:
requests:
memory: "{{ .Values.slave.res.req.mem }}"
cpu: "{{ .Values.slave.res.req.cpu }}"
limits:
memory: "{{ .Values.slave.res.lim.mem }}"
cpu: "{{ .Values.slave.res.lim.cpu }}"
values.yaml file
#JMeter Slave Configuration
dep:
name: distributed-jmeter
slave:
replicaCount: 1
image: gsengun/jmeter
tag: 5.4.1
pullPolicy: IfNotPresent
res:
req:
mem: "1024Mi"
cpu: "100m"
lim:
mem: "1024Mi"
cpu: "100m"
heap:
xms1:
memory: "512m"
xms2:
memory: "512m"
The No X11 DISPLAY variable was set error means that you're trying to run JMeter in GUI mode and the image you're using doesn't have an X server installed/running.
I don't see a command to start the JMeter slave process, so my expectation is that you need to amend your jmeter-slave-deployment.yaml to specify a command directive like:
command: ["jmeter-server"]
If you'd like to copy and paste:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.dep.name }}
namespace: perf-platform
labels:
app.kubernetes.io/name: {{ .Values.dep.name }}
spec:
replicas: {{ .Values.slave.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ .Values.dep.name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ .Values.dep.name }}
spec:
containers:
- name: distributed-jmeter-slave
image: "{{ .Values.slave.image }}:{{ .Values.slave.tag }}"
imagePullPolicy: {{ .Values.slave.pullPolicy }}
command: ["jmeter-server"]
env:
- name: HEAP
value: "-Xms{{ .Values.slave.heap.xms1.memory }} -Xmx{{ .Values.slave.heap.xms2.memory }}"
ports:
- containerPort: 50000
- containerPort: 1099
resources:
requests:
memory: "{{ .Values.slave.res.req.mem }}"
cpu: "{{ .Values.slave.res.req.cpu }}"
limits:
memory: "{{ .Values.slave.res.lim.mem }}"
cpu: "{{ .Values.slave.res.lim.cpu }}"
More information:
Define a Command and Arguments for a Container
JMeter Distributed Testing with Docker

How to read variables from configmaps in kubernetes yml file in Nodejs

We were asked to shift the variables from export key=value statements to ConfigMaps in the deployment.yml file.
deployment.yml
{% if configmap is defined %}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: "{{ prefix }}-{{ project_name }}"
namespace: "{{ namespace }}"
data:
{% for key, value in configmap.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
{% endif %}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ prefix }}-{{ project_name }}-deployment
namespace: {{ namespace }}
labels:
k8s-app: {{ prefix }}-{{ project_name }}
spec:
progressDeadlineSeconds: 60
revisionHistoryLimit: 1
replicas: 1
selector:
matchLabels:
k8s-app: {{ prefix }}-{{ project_name }}
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
maxSurge: 1
template:
metadata:
labels:
k8s-app: "{{ prefix }}-{{ project_name }}"
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values:
- {{ prefix }}-{{ project_name }}
topologyKey: "kubernetes.io/hostname"
containers:
- name: {{ project_name }}
image: "67567464.dkr.tfr.ap-north-1.amazonaws.com/{{ project_name }}:{{ tag }}"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 4200
livenessProbe:
httpGet:
path: /status
port: 4200
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 2
periodSeconds: 8
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /status
port: 4200
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 2
periodSeconds: 5
successThreshold: 1
failureThreshold: 20
envFrom:
{% if configmap is defined %}
- configMapRef:
name: "{{ prefix }}-{{ project_name }}"
{% endif %}
- secretRef:
name: "{{ prefix }}-{{ project_name }}"
resources:
limits:
cpu: '200m'
memory: 300Mi
requests:
cpu: '100m'
memory: 150Mi
nodeSelector:
workloadType: {{ workload_type }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ prefix }}-{{ project_name }}-service
namespace: {{ namespace }}
labels:
k8s-svc: {{ prefix }}-{{ project_name }}-service
spec:
ports:
- port: 4200
targetPort: 4200
protocol: TCP
selector:
k8s-app: {{ prefix }}-{{ project_name }}
type: ClusterIP
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
name: {{ prefix }}-{{ project_name }}-service-mapping
namespace: {{ namespace }}
spec:
bypass_auth: true
host: {{ fqdn }}
prefix: /
service: {{ prefix }}-{{ project_name }}-service.{{ namespace }}:4200
timeout_ms: 200000
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: {{ prefix }}-{{ project_name }}-hpa
namespace: {{ namespace }}
spec:
scaleTargetRef:
kind: Deployment
name: {{ prefix }}-{{ project_name }}-deployment
apiVersion: apps/v1
minReplicas: {{ min_replicas }}
maxReplicas: {{ max_replicas }}
targetCPUUtilizationPercentage: 95
vars.yml - where we have all the secrets as below
env: staging
project_name: oracle
prefix: staging
namespace: "{{ prefix }}-nexus"
fqdn: "{{ prefix }}-{{ project_name }}.dummy.in"
tag: "{{ prefix }}-{{ build_number }}"
context: development
profile: default
workload_type: general
env_during_build: True
nocache: "no"
min_replicas: 1
max_replicas: 1
configmap:
BASE_URL: ""
IDENTITY_ENDPOINT: ""
NODE_ENV: "production"
But I am unable to access these variables in code with
process.env.IDENTITY_ENDPOINT
However, when I log into the pod and run env in the terminal, the values are present.
Is there another way, or different code, to read the env variables in this case?
P.S.: In Elixir I had to switch to System.get_env from
Application.get_env(:app_name, :env_vars_name)[:key_name]
Thanks.

Azure Key Vault integration with AKS works for nginx tutorial Pod, but not actual project deployment

Per the title, I have the integration working following the documentation.
I can deploy the nginx.yaml and after about 70 seconds I can print out secrets with:
kubectl exec -it nginx -- cat /mnt/secrets-store/secret1
Now I'm trying to apply it to a PostgreSQL deployment for testing and I get the following from the Pod description:
Warning FailedMount 3s kubelet MountVolume.SetUp failed for volume "secrets-store01-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod staging/postgres-deployment-staging-69965ff767-8hmww, err: rpc error: code = Unknown desc = failed to mount objects, error: failed to get keyvault client: failed to get key vault token: nmi response failed with status code: 404, err: <nil>
And from the nmi logs:
E0221 22:54:32.037357 1 server.go:234] failed to get identities, error: getting assigned identities for pod staging/postgres-deployment-staging-69965ff767-8hmww in CREATED state failed after 16 attempts, retry duration [5]s, error: <nil>. Check MIC pod logs for identity assignment errors
I0221 22:54:32.037409 1 server.go:192] status (404) took 80003389208 ns for req.method=GET reg.path=/host/token/ req.remote=127.0.0.1
Not sure why, since I basically copied the settings from the nginx.yaml into the postgres.yaml. Here they are:
# nginx.yaml
kind: Pod
apiVersion: v1
metadata:
name: nginx
namespace: staging
labels:
aadpodidbinding: aks-akv-identity-binding-selector
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: secrets-store01-inline
mountPath: /mnt/secrets-store
readOnly: true
volumes:
- name: secrets-store01-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: aks-akv-secret-provider
# postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-deployment-staging
namespace: staging
labels:
aadpodidbinding: aks-akv-identity-binding-selector
spec:
replicas: 1
selector:
matchLabels:
component: postgres
template:
metadata:
labels:
component: postgres
spec:
containers:
- name: postgres
image: postgres:13-alpine
ports:
- containerPort: 5432
volumeMounts:
- name: secrets-store01-inline
mountPath: /mnt/secrets-store
readOnly: true
- name: postgres-storage-staging
mountPath: /var/postgresql
volumes:
- name: secrets-store01-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: aks-akv-secret-provider
- name: postgres-storage-staging
persistentVolumeClaim:
claimName: postgres-storage-staging
---
apiVersion: v1
kind: Service
metadata:
name: postgres-cluster-ip-service-staging
namespace: staging
spec:
type: ClusterIP
selector:
component: postgres
ports:
- port: 5432
targetPort: 5432
Suggestions for what the issue is here?
Oversight on my part... the aadpodidbinding label should be in the template:, per:
https://azure.github.io/aad-pod-identity/docs/best-practices/#deploymenthttpskubernetesiodocsconceptsworkloadscontrollersdeployment
The resulting YAML should be:
# postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-deployment-production
namespace: production
spec:
replicas: 1
selector:
matchLabels:
component: postgres
template:
metadata:
labels:
component: postgres
aadpodidbinding: aks-akv-identity-binding-selector
spec:
containers:
- name: postgres
image: postgres:13-alpine
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB_FILE
value: /mnt/secrets-store/DEV-PGDATABASE
- name: POSTGRES_USER_FILE
value: /mnt/secrets-store/DEV-PGUSER
- name: POSTGRES_PASSWORD_FILE
value: /mnt/secrets-store/DEV-PGPASSWORD
- name: POSTGRES_INITDB_ARGS
value: "-A md5"
- name: PGDATA
value: /var/postgresql/data
volumeMounts:
- name: secrets-store01-inline
mountPath: /mnt/secrets-store
readOnly: true
- name: postgres-storage-production
mountPath: /var/postgresql
volumes:
- name: secrets-store01-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: aks-akv-secret-provider
- name: postgres-storage-production
persistentVolumeClaim:
claimName: postgres-storage-production
---
apiVersion: v1
kind: Service
metadata:
name: postgres-cluster-ip-service-production
namespace: production
spec:
type: ClusterIP
selector:
component: postgres
ports:
- port: 5432
targetPort: 5432
Adding the label to the Pod template in spec will resolve the issue: use the label aadpodidbinding: <your Azure pod identity selector> in the template labels section of the deployment.yaml file.
Sample deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
aadpodidbinding: azure-pod-identity-binding-selector
spec:
containers:
- name: nginx
image: nginx
env:
- name: SECRET
valueFrom:
secretKeyRef:
name: test-secret
key: key
volumeMounts:
- name: secrets-store-inline
mountPath: "/mnt/secrets-store"
readOnly: true
volumes:
- name: secrets-store-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: dev-1spc
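With the label in the template, the identity binding applies to the pods themselves, so you can verify the mount the same way as with the standalone nginx pod earlier. A hedged example, assuming the names from the corrected postgres manifest above:
kubectl -n production exec -it deploy/postgres-deployment-production -- ls /mnt/secrets-store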

Helm Chart iteratively create pods , Containers , ports , service

I have 4 microservices. They all have different names, different images, and different container ports and service ports. I took this piece of code from one of the answers on Stack Overflow; it creates my 4 Deployments with their names and images, but I am unable to create the 4 containers with their own ports and resources.
My main goal is to create a master template where I can just put in a few values and it can handle the manifest of a new microservice, instead of juggling a bunch of manifests separately.
deployment.yaml
{{ if .Values.componentTests }}
{{- range .Values.componentTests }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ . }}
labels:
environment: {{ $.Values.environment }}
app: {{ . }}
aadpodidbinding: podid-{{ . }}
chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
release: {{ $.Release.Name }}
heritage: {{ $.Release.Service }}
spec:
replicas: {{ $.Values.replicaCount }}
selector:
matchLabels:
app: {{ . }}
template:
metadata:
labels:
app: {{ . }}
spec:
nodeSelector:
"beta.kubernetes.io/os": linux
containers:
- name: {{ . }}
image: mycr.azurecr.io/master/{{ . }}:{{ $.Values.image.tag }}
imagePullPolicy: {{ $.Values.image.pullPolicy }}
resources:
{{- range $.Values.high.resources }}
---
{{- end }}
{{- end }}
{{ end }}
values.yaml
replicaCount: 1
image:
# repository: nginx
pullPolicy: IfNotPresent
# # Overrides the image tag whose default is the chart appVersion.
tag: "latest"
componentTests:
- service01
- service02
- service03
- service04
environment: QA
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
services:
- service01
- service02
- service03
# serviceAccount:
# # Specifies whether a service account should be created
# create: true
# # Annotations to add to the service account
# annotations: {}
# # The name of the service account to use.
# # If not set and create is true, a name is generated using the fullname template
# name: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
# service:
# type: ClusterIP
# port: 80
# ingress:
# enabled: false
# annotations: {}
# # kubernetes.io/ingress.class: nginx
# # kubernetes.io/tls-acme: "true"
# hosts:
# - host: chart-example.local
# paths: []
# tls: []
# # - secretName: chart-example-tls
# # hosts:
# # - chart-example.local
high:
resources:
requests:
cpu: 350m
memory: 800Mi
limits:
cpu: 400m
memory: 850Mi
medium:
resources:
requests:
cpu: 200m
memory: 650Mi
limits:
cpu: 250m
memory: 700Mi
low:
resources:
requests:
cpu: 100m
memory: 500Mi
limits:
cpu: 150m
memory: 550Mi
autoscaling:
enabled: false
minReplicas: 2
maxReplicas: 4
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
tolerations: []
affinity: {}
output
MANIFEST:
---
# Source: test/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: service01
labels:
environment: QA
app: service01
aadpodidbinding: podid-service01
chart: test-0.1.1
release: api
heritage: Helm
spec:
replicas: 1
selector:
matchLabels:
app: service01
template:
metadata:
labels:
app: service01
spec:
nodeSelector:
"beta.kubernetes.io/os": linux
containers:
- name: service01
image: mycr.azurecr.io/master/service01:latest
imagePullPolicy: IfNotPresent
resources:
---
# Source: test/templates/deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: service02
labels:
environment: QA
app: service02
aadpodidbinding: podid-service02
chart: test-0.1.1
release: api
heritage: Helm
spec:
replicas: 1
selector:
matchLabels:
app: service02
template:
metadata:
labels:
app: service02
spec:
nodeSelector:
"beta.kubernetes.io/os": linux
containers:
- name: service02
image: mycr.azurecr.io/master/service02:latest
imagePullPolicy: IfNotPresent
resources:
---
# Source: test/templates/deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: service02-ui
labels:
environment: QA
app: service02-ui
aadpodidbinding: podid-service02-ui
chart: test-0.1.1
release: api
heritage: Helm
spec:
replicas: 1
selector:
matchLabels:
app: service02-ui
template:
metadata:
labels:
app: service02-ui
spec:
nodeSelector:
"beta.kubernetes.io/os": linux
containers:
- name: service02-ui
image: mycr.azurecr.io/master/service02-ui:latest
imagePullPolicy: IfNotPresent
resources:
---
# Source: test/templates/deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: service03
labels:
environment: QA
app: service03
aadpodidbinding: podid-service03
chart: test-0.1.1
release: api
heritage: Helm
spec:
replicas: 1
selector:
matchLabels:
app: service03
template:
metadata:
labels:
app: service03
spec:
nodeSelector:
"beta.kubernetes.io/os": linux
containers:
- name: service03
image: mycr.azurecr.io/master/service03:latest
imagePullPolicy: IfNotPresent
resources:
---
# Source: test/templates/deployment.yaml
service01.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: service01
labels:
aadpodidbinding: podid-service01
spec:
replicas: 1
selector:
matchLabels:
app: service01
template:
metadata:
labels:
app: service01
aadpodidbinding: podid-service01
annotations:
build: "2020102901"
spec:
nodeSelector:
"beta.kubernetes.io/os": linux
containers:
- name: service01
image: mycr.azurecr.io/master/service01:latest
resources:
requests:
cpu: 250m
memory: "700Mi"
limits:
memory: "700Mi"
ports:
- containerPort: 7474
env:
- name: KEY_VAULT_ID
value: "key-vault"
- name: AZURE_ACCOUNT_NAME
value: "storage"
readinessProbe:
httpGet:
path: /actuator/health
port: 7474
scheme: HTTP
httpHeaders:
- name: service-id
value: root
- name: request-id
value: healthcheck
initialDelaySeconds: 60
periodSeconds: 30
livenessProbe:
httpGet:
path: /actuator/health
port: 7474
scheme: HTTP
httpHeaders:
- name: service-id
value: root
- name: request-id
value: healthcheck
initialDelaySeconds: 60
periodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
name: service01
spec:
ports:
- port: 7474
name: main
# - port: 9999
# name: health
selector:
app: service01
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: service01
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: service01
minReplicas: 1
maxReplicas: 4
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
I don't know in what context that Helm chart was created, but from what I understand it was used for integration testing or something like that. It is certainly not what you want for your use case. You would be better off with a Helm chart that generates the manifests for only one service; then you can reuse that chart for all your services. It means that you will do multiple helm installs with different values instead of one helm install that creates all your services. With a big chart like that, you would need to update the chart every time you add a new service.
You'll have:
helm install -f service01-values.yaml ./mychart
helm install -f service02-values.yaml ./mychart
helm install -f service03-values.yaml ./mychart
helm install -f service04-values.yaml ./mychart
instead of:
helm install -f values.yaml ./mychart
To be able to do this, you'll need to change your chart a little bit and remove the loop {{- range .Values.componentTests }}. Learn how to build a chart, it is easier than you think: Create Your First Helm Chart
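As a rough illustration only - the keys are hypothetical and must match whatever your simplified single-service template expects - a per-service values file such as service01-values.yaml could carry just the bits that differ between services:
# service01-values.yaml (hypothetical keys - adjust to your chart's template)
name: service01
image:
  repository: mycr.azurecr.io/master/service01
  tag: latest
containerPort: 7474
resources:
  requests:
    cpu: 350m
    memory: 800Mi
  limits:
    cpu: 400m
    memory: 850Mi
Each microservice then gets its own small values file, while the chart's templates stay identical for all of them.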

Kubernetes LoadBalancer service on AKS via helm is not accessible

I'm working on a project in which I need to deploy a simple Node.js application using Kubernetes, Helm and Azure Kubernetes Service.
Here's what I have tried:
My Dockerfile:
FROM node:8
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 32000
CMD [ "npm", "start" ]
Here's my mychart/values.yaml:
replicaCount: 1
image:
# registry: docker.io
repository: registry-1.docker.io/arycloud/docker-web-app
tag: 0.3
pullPolicy: IfNotPresent
nameOverride: ""
fullnameOverride: ""
service:
name: http
type: LoadBalancer
port: 32000
internalPort: 32000
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
paths: []
hosts:
- name: mychart.local
path: /
tls: []
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
And my node server.js:
'use strict';
const express = require('express');
// Constants
const PORT = 32000;
const HOST = '0.0.0.0';
// App
const app = express();
app.get('/', (req, res) => {
res.send('Hello world from container.\n');
});
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
Update: Template files:
From templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "mychart.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "mychart.name" . }}
helm.sh/chart: {{ include "mychart.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "mychart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "mychart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 32000
protocol: TCP
livenessProbe:
httpGet:
path: /
port: 32000
readinessProbe:
httpGet:
path: /
port: 32000
initialDelaySeconds: 3
periodSeconds: 3
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
From templates/service.yaml:
apiVersion: v1
kind: Service
metadata:
name: {{ include "mychart.fullname" . }}
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
labels:
app.kubernetes.io/name: {{ include "mychart.name" . }}
helm.sh/chart: {{ include "mychart.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
app.kubernetes.io/name: {{ include "mychart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
Update: a screenshot of the external IP:
Here's the output of kubectl get svc node-release-mychart -o yaml:
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
creationTimestamp: "2019-01-26T11:28:27Z"
labels:
app.kubernetes.io/instance: node-release
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: mychart
helm.sh/chart: mychart-0.1.0
name: node-release-mychart
namespace: default
resourceVersion: "127367"
selfLink: /api/v1/namespaces/default/services/node-release-mychart
uid: 8031f3b6-215d-11e9-bb89-462a1bcec690
spec:
clusterIP: 10.0.223.27
externalTrafficPolicy: Cluster
ports:
- name: http
nodePort: 32402
port: 32000
protocol: TCP
targetPort: 32000
selector:
app.kubernetes.io/instance: node-release
app.kubernetes.io/name: mychart
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 10.240.0.7
I created a cluster on AKS, then ran the get-credentials command from my macOS terminal and it works fine. I then tagged and pushed my Docker image to Docker Hub, and the Docker container also works fine. After that I created a Helm chart, updated the values.yaml accordingly and ran the helm install command; it installs my application to AKS and the service provides an external IP, and in the Kubernetes dashboard the pods are in the Running state. But when I try to access my application via External_IP:80 it doesn't load.
Your problem comes from the fact that you've added the annotation to use an internal load balancer (so it is not exposed publicly, only available inside the VNet). To fix that, remove this part from the service definition:
annotations:
  service.beta.kubernetes.io/azure-load-balancer-internal: "true"
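After removing the annotation, upgrade the release and watch the service until Azure assigns a public address. A sketch, assuming the release and chart names shown above; note that the service port in values.yaml is 32000, not 80:
helm upgrade node-release ./mychart
kubectl get svc node-release-mychart --watch
# once EXTERNAL-IP shows a public address, browse to http://<EXTERNAL-IP>:32000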
