How to apply a YAML manifest in a Helm pre-delete hook

I want to apply a YAML manifest in a Helm pre-delete hook, similar to the example here (https://helm.sh/docs/topics/charts_hooks/).
This is the YAML I want to apply in the hook:
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationBackup
metadata:
  name: my-backup-name # unique name of backup
  namespace: dev
spec:
  backupLocation: s3backup-default
  namespaces:
    - stage-database
  reclaimPolicy: Delete
  selectors:
    cluster-name: mycluster
  preExecRule:
  postExecRule:
Is it possible to just apply a YAML manifest in a hook? The following is what I have:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}"
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}"
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
        - name: database-backup-job
          image: "alpine:3.3"
          command: ["/bin/sleep","{{ default "10" .Values.sleepyTime }}"]
So instead of running an image I want to apply that YAML. Is that possible?
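One pattern that could work here (a sketch added for illustration, not from the original post) is to keep the pre-delete hook as a Job, but run a kubectl-capable image that applies the ApplicationBackup manifest. This assumes a bitnami/kubectl image, a hypothetical ConfigMap holding the manifest, and a hypothetical ServiceAccount with permission to create ApplicationBackup resources:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-apply-backup"
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: backup-hook   # hypothetical ServiceAccount with RBAC to create ApplicationBackup
      restartPolicy: Never
      containers:
        - name: apply-backup
          image: bitnami/kubectl:1.21
          command: ["kubectl", "apply", "-f", "/manifests/applicationbackup.yaml"]
          volumeMounts:
            - name: manifests
              mountPath: /manifests
      volumes:
        - name: manifests
          configMap:
            name: "{{ .Release.Name }}-backup-manifest"   # hypothetical ConfigMap containing the ApplicationBackup YAML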
Update:
I added that YAML with the hook annotations, but it just doesn't seem to do anything when I run helm delete on the app:
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationBackup
metadata:
  name: my-bkup-june-3-2021 # unique name of backup
  namespace: {{ $dbNamespace }}
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backupLocation: s3backup-default
  namespaces:
    - {{ $dbNamespace }}
  reclaimPolicy: Delete
  selectors:
    cluster-name: {{ $dbClusterName }}
  preExecRule:
  postExecRule:

Related

How to add a domain into an Ingress controller in helm for a kubernetes deployment?

I'm looking into a new update to my Kubernetes cluster in Azure, but I'm not sure how to do this. I have been able to build an ingress controller like this one:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "test.fullname" . -}}
{{- if and .Values.ingress.className (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) }}
  {{- if not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class") }}
  {{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className}}
  {{- end }}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "test.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            {{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
            pathType: {{ .pathType }}
            {{- end }}
            backend:
              {{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
              service:
                name: {{ $fullName }}
                port:
                  number: {{ .port }}
              {{- else }}
              serviceName: {{ $fullName }}
              servicePort: {{ .port }}
              {{- end }}
          {{- end }}
    {{- end }}
{{- end }}
My values file is the following:
replicaCount: 1
image:
  repository: test01.azurecr.io/test
  tag: update1
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 2000
  targetPort: http
  protocol: TCP
ingress:
  enabled: true
  className: ""
  annotations:
    appgw.ingress.kubernetes.io/use-private-ip: 'true'
    kubernetes.io/ingress.class: azure/application-gateway
  hosts:
    - host: test.com
      paths:
        - path: /test
          pathType: Prefix
          port: 80
  tls: []
serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
podAnnotations: {}
podSecurityContext: {}
  # fsGroup: 2000
My pod is ready and it seems that the service is ready too. However, the test.com domain is not working. I added a DNS record for my domain and used my cluster's IP to make sure the domain would be available. However, I still can't reach the domain; the error message is the following:
Connection timed out && This site can’t be reached
Does anyone know a better workaround for this?
In Kubernetes you have Ingress Controllers and Ingress resources. What you have is the definition of an Ingress, not an Ingress Controller. An Ingress will not work unless there is an Ingress Controller installed in your cluster.
However, in AKS (Azure Kubernetes Service), it is possible to bind your Ingress resources to an Azure Application Gateway, which is an Azure resource outside of your cluster.
To achieve this you need AGIC (Application Gateway Ingress Controller) which will be in charge of forwarding your Ingress configuration to the Application Gateway. You have already achieved this partially by adding these annotations on the Ingress resources you want to have configured there:
annotations:
  appgw.ingress.kubernetes.io/use-private-ip: 'true'
  kubernetes.io/ingress.class: azure/application-gateway
Summary:
You have two options:
Install an Ingress Controller such as nginx or traefik and adapt the annotations on your Ingress resources accordingly.
Make sure you have an Application Gateway deployed in your subscription, AGIC installed in your cluster, and all the configuration needed to allow AGIC to modify the Application Gateway.
If it is the first time you are working with Ingresses and Azure, I strongly recommend you follow the first option.
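For the first option, the values for the chart above would look roughly like this (a sketch assuming the community ingress-nginx controller is installed in the cluster; the class name and annotations depend on the controller you choose):
ingress:
  enabled: true
  className: nginx          # becomes ingressClassName on Kubernetes >= 1.18
  annotations: {}           # nginx-specific annotations would go here if needed
  hosts:
    - host: test.com
      paths:
        - path: /test
          pathType: Prefix
          port: 80
  tls: []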

Azure Kubernetes - JMeter No X11 DISPLAY variable was set

I am developing a JMeter dynamic master-slave performance environment on top of Azure Kubernetes Service. In my JMeter slave Deployment, the pod gets into the CrashLoopBackOff state, another pod is created, and this continues in a loop. While looking at the JMeter slave logs, I found this error:
An error occurred: No X11 DISPLAY variable was set, but this program performed an operation which requires it.
Currently, I am using Helm to deploy the pods and below are my jmeter-slave-deployment.yaml and values.yaml files.
jmeter-slave-deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.dep.name }}
  namespace: perf-platform
  labels:
    app.kubernetes.io/name: {{ .Values.dep.name }}
spec:
  replicas: {{ .Values.slave.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ .Values.dep.name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ .Values.dep.name }}
    spec:
      containers:
        - name: distributed-jmeter-slave
          image: "{{ .Values.slave.image }}:{{ .Values.slave.tag }}"
          imagePullPolicy: {{ .Values.slave.pullPolicy }}
          env:
            - name: HEAP
              value: "-Xms{{ .Values.slave.heap.xms1.memory }} -Xmx{{ .Values.slave.heap.xms2.memory }}"
          ports:
            - containerPort: 50000
            - containerPort: 1099
          resources:
            requests:
              memory: "{{ .Values.slave.res.req.mem }}"
              cpu: "{{ .Values.slave.res.req.cpu }}"
            limits:
              memory: "{{ .Values.slave.res.lim.mem }}"
              cpu: "{{ .Values.slave.res.lim.cpu }}"
values.yaml file
# JMeter Slave Configuration
dep:
  name: distributed-jmeter
slave:
  replicaCount: 1
  image: gsengun/jmeter
  tag: 5.4.1
  pullPolicy: IfNotPresent
  res:
    req:
      mem: "1024Mi"
      cpu: "100m"
    lim:
      mem: "1024Mi"
      cpu: "100m"
  heap:
    xms1:
      memory: "512m"
    xms2:
      memory: "512m"
The No X11 DISPLAY variable was set error means that you're trying to run JMeter in GUI mode and the image you're using doesn't have an X server installed/running.
I don't see the appropriate command to run the JMeter slave process, so my expectation is that you need to amend your jmeter-slave-deployment.yaml to have a command directive specified, like:
command: ["jmeter-server"]
If you'd like to copy and paste:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.dep.name }}
  namespace: perf-platform
  labels:
    app.kubernetes.io/name: {{ .Values.dep.name }}
spec:
  replicas: {{ .Values.slave.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ .Values.dep.name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ .Values.dep.name }}
    spec:
      containers:
        - name: distributed-jmeter-slave
          image: "{{ .Values.slave.image }}:{{ .Values.slave.tag }}"
          imagePullPolicy: {{ .Values.slave.pullPolicy }}
          command: ["jmeter-server"]
          env:
            - name: HEAP
              value: "-Xms{{ .Values.slave.heap.xms1.memory }} -Xmx{{ .Values.slave.heap.xms2.memory }}"
          ports:
            - containerPort: 50000
            - containerPort: 1099
          resources:
            requests:
              memory: "{{ .Values.slave.res.req.mem }}"
              cpu: "{{ .Values.slave.res.req.cpu }}"
            limits:
              memory: "{{ .Values.slave.res.lim.mem }}"
              cpu: "{{ .Values.slave.res.lim.cpu }}"
More information:
Define a Command and Arguments for a Container
JMeter Distributed Testing with Docker

How to read variables from configmaps in kubernetes yml file in Nodejs

We were asked to shift the variables from export key=value to configmaps in the deployment.yml file.
deployment.yml
{% if configmap is defined %}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ prefix }}-{{ project_name }}"
  namespace: "{{ namespace }}"
data:
{% for key, value in configmap.items() %}
  {{ key }}: "{{ value }}"
{% endfor %}
{% endif %}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ prefix }}-{{ project_name }}-deployment
  namespace: {{ namespace }}
  labels:
    k8s-app: {{ prefix }}-{{ project_name }}
spec:
  progressDeadlineSeconds: 60
  revisionHistoryLimit: 1
  replicas: 1
  selector:
    matchLabels:
      k8s-app: {{ prefix }}-{{ project_name }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        k8s-app: "{{ prefix }}-{{ project_name }}"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values:
                      - {{ prefix }}-{{ project_name }}
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: {{ project_name }}
          image: "67567464.dkr.tfr.ap-north-1.amazonaws.com/{{ project_name }}:{{ tag }}"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 4200
          livenessProbe:
            httpGet:
              path: /status
              port: 4200
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 2
            periodSeconds: 8
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /status
              port: 4200
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 2
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 20
          envFrom:
{% if configmap is defined %}
            - configMapRef:
                name: "{{ prefix }}-{{ project_name }}"
{% endif %}
            - secretRef:
                name: "{{ prefix }}-{{ project_name }}"
          resources:
            limits:
              cpu: '200m'
              memory: 300Mi
            requests:
              cpu: '100m'
              memory: 150Mi
      nodeSelector:
        workloadType: {{ workload_type }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ prefix }}-{{ project_name }}-service
  namespace: {{ namespace }}
  labels:
    k8s-svc: {{ prefix }}-{{ project_name }}-service
spec:
  ports:
    - port: 4200
      targetPort: 4200
      protocol: TCP
  selector:
    k8s-app: {{ prefix }}-{{ project_name }}
  type: ClusterIP
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: {{ prefix }}-{{ project_name }}-service-mapping
  namespace: {{ namespace }}
spec:
  bypass_auth: true
  host: {{ fqdn }}
  prefix: /
  service: {{ prefix }}-{{ project_name }}-service.{{ namespace }}:4200
  timeout_ms: 200000
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ prefix }}-{{ project_name }}-hpa
  namespace: {{ namespace }}
spec:
  scaleTargetRef:
    kind: Deployment
    name: {{ prefix }}-{{ project_name }}-deployment
    apiVersion: apps/v1
  minReplicas: {{ min_replicas }}
  maxReplicas: {{ max_replicas }}
  targetCPUUtilizationPercentage: 95
vars.yml - where we have all the secrets as below
env: staging
project_name: oracle
prefix: staging
namespace: "{{ prefix }}-nexus"
fqdn: "{{ prefix }}-{{ project_name }}.dummy.in"
tag: "{{ prefix }}-{{ build_number }}"
context: development
profile: default
workload_type: general
env_during_build: True
nocache: "no"
min_replicas: 1
max_replicas: 1
configmap:
  BASE_URL: ""
  IDENTITY_ENDPOINT: ""
  NODE_ENV: "production"
But I am unable to access these variables in code with
process.env.IDENTITY_ENDPOINT
However, when I log into the pods and run env in the terminal, the values are present.
Is there another way to read the env variables in this case?
P.S.: In Elixir I had to switch to System.get_env from
Application.get_env(:app_name, :env_vars_name)[:key_name]
Thanks.
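For reference, with the vars.yml above the Jinja template should render to roughly the following (trimmed to the relevant parts). Once the keys are in the ConfigMap and injected through envFrom, they are ordinary environment variables of the container and become visible to process.env when the Node process starts:
apiVersion: v1
kind: ConfigMap
metadata:
  name: staging-oracle
  namespace: staging-nexus
data:
  BASE_URL: ""
  IDENTITY_ENDPOINT: ""
  NODE_ENV: "production"
---
# Deployment (heavily trimmed): envFrom is what turns the ConfigMap keys into env vars
apiVersion: apps/v1
kind: Deployment
metadata:
  name: staging-oracle-deployment
  namespace: staging-nexus
spec:
  template:
    spec:
      containers:
        - name: oracle
          envFrom:
            - configMapRef:
                name: staging-oracle     # keys surface as e.g. process.env.IDENTITY_ENDPOINT
            - secretRef:
                name: staging-oracle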

Gitlab CICD sets wrong service url in the production environment

After the production deployment the application does not get the endpoint from environment.url in the .gitlab-ci.yml, but a combination of the group name, project name and base domain:
<groupname>-<projectname>.basedomain.
The GitLab project belongs to a GitLab group, which has a Kubernetes cluster. The group has a base domain which is used in the .gitlab-ci.yml:
# part of .gitlab-ci.yml
...
apply production secret configuration:
  stage: prepare-deploy
  extends: .auto-deploy
  needs: ["build", "generate production configuration"]
  dependencies:
    - generate production configuration
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - kubectl create secret generic tasker-secrets-development --from-file=config.tar --dry-run -o yaml | kubectl apply -f -
  environment:
    name: production
    url: http://app.$KUBE_INGRESS_BASE_DOMAIN
    action: prepare
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
...
I expected http://app.$KUBE_INGRESS_BASE_DOMAIN as the endpoint for the application.
The Ingress (I removed the minio part):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "appname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version| replace "+" "_" }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    cert-manager.io/cluster-issuer: {{ .Values.leIssuer }}
    acme.cert-manager.io/http01-edit-in-place: "true"
{{- if .Values.ingress.annotations }}
{{ toYaml .Values.ingress.annotations | indent 4 }}
{{- end }}
{{- with .Values.ingress.modSecurity }}
{{- if .enabled }}
    nginx.ingress.kubernetes.io/modsecurity-transaction-id: "$server_name-$request_id"
    nginx.ingress.kubernetes.io/modsecurity-snippet: |
      SecRuleEngine {{ .secRuleEngine | default "DetectionOnly" | title }}
{{- range $rule := .secRules }}
{{ (include "secrule" $rule) | indent 6 }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.prometheus.metrics }}
    nginx.ingress.kubernetes.io/server-snippet: |-
      location /metrics {
        deny all;
      }
{{- end }}
spec:
{{- if .Values.ingress.tls.enabled }}
  tls:
  - hosts:
{{- if .Values.service.commonName }}
    - {{ template "hostname" .Values.service.commonName }}
{{- end }}
    - {{ template "hostname" .Values.service.url }} <<<<<<<<<<<<<<<<<<<
{{- if .Values.service.additionalHosts }}
{{- range $host := .Values.service.additionalHosts }}
    - {{ $host }}
{{- end -}}
{{- end }}
    secretName: {{ .Values.ingress.tls.secretName | default (printf "%s-cert" (include "fullname" .)) }}
{{- end }}
  rules:
  - host: {{ template "hostname" .Values.service.url }} <<<<<<<<<<<<<<<<<
    http:
      &httpRule
      paths:
      - path: /
        backend:
          serviceName: {{ template "fullname" . }}
          servicePort: {{ .Values.service.externalPort }}
{{- if .Values.service.commonName }}
  - host: {{ template "hostname" .Values.service.commonName }}
    http:
      <<: *httpRule
{{- end -}}
{{- if .Values.service.additionalHosts }}
{{- range $host := .Values.service.additionalHosts }}
  - host: {{ $host }}
    http:
      <<: *httpRule
{{- end -}}
{{- end -}}
What I have done so far:
Removed the deployment from the cluster, cleared the GitLab runner's cache, cleared the GitLab cluster cache. Deleted the environment (stop and delete). Created a new environment 'production' with the right URL via Operations > Environments > production > Edit. After a push, the URL was replaced with the wrong one.
Hard-coded the URL in the Ingress (at the arrows in the snippet); that worked.
Changed the value in gitlab-ci.yml without http://. No result.
Checked that 'apply production secret configuration' in the gitlab-ci.yml is actually used, by adding echo 'message!'. Conclusion: this part of the file is used for production.
Added a CI/CD variable under Settings > CI/CD: GITLAB_ENVIRONMENT_URL. No effect.
UPDATE:
Maybe the .Values.gitlab.app is used for the URL.
The file .gitlab-ci.yml includes a template which overrides the value.
# .gitlab-ci.yml
include:
  - template: Jobs/Deploy.gitlab-ci.yml # https://gitlab.com/gitlab-org/gitlab-foss/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml
The override in the template:
.production: &production_template
  extends: .auto-deploy
  stage: production
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
    - auto-deploy delete canary
    - auto-deploy delete rollout
    - auto-deploy persist_environment_url
  environment:
    name: production
    url: http://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN <<<<<<<<<<<<<<
  artifacts:
    paths: [environment_url.txt, tiller.log]
    when: always
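Since job definitions in your own .gitlab-ci.yml take precedence over jobs with the same name coming from an included template, one possible fix (a sketch, assuming the included template's job is named production and uses the anchor shown above) is to override just the environment block:
# .gitlab-ci.yml (sketch): override the environment URL of the included production job
production:
  environment:
    name: production
    url: http://app.$KUBE_INGRESS_BASE_DOMAIN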

Kubernetes LoadBalancer service on AKS via helm is not accessible

I'm working on a project in which I need to deploy a simple NodeJs application using Kubernetes, Helm and Azure Kubernetes Service.
Here's what I have tried:
My Dockerfile:
FROM node:8
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 32000
CMD [ "npm", "start" ]
Here's my mychart/values.yaml:
replicaCount: 1
image:
  # registry: docker.io
  repository: registry-1.docker.io/arycloud/docker-web-app
  tag: 0.3
  pullPolicy: IfNotPresent
nameOverride: ""
fullnameOverride: ""
service:
  name: http
  type: LoadBalancer
  port: 32000
  internalPort: 32000
ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  paths: []
  hosts:
    - name: mychart.local
      path: /
  tls: []
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
And my node server.js:
'use strict';
const express = require('express');
// Constants
const PORT = 32000;
const HOST = '0.0.0.0';
// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello world from container.\n');
});
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
Update: Template files:
From templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "mychart.name" . }}
    helm.sh/chart: {{ include "mychart.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "mychart.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "mychart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 32000
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: 32000
          readinessProbe:
            httpGet:
              path: /
              port: 32000
            initialDelaySeconds: 3
            periodSeconds: 3
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
From templates/service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "mychart.fullname" . }}
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  labels:
    app.kubernetes.io/name: {{ include "mychart.name" . }}
    helm.sh/chart: {{ include "mychart.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: {{ include "mychart.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
Update: a screenshot of external IP:
Here's the output of kubectl get svc node-release-mychart -o yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  creationTimestamp: "2019-01-26T11:28:27Z"
  labels:
    app.kubernetes.io/instance: node-release
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: mychart
    helm.sh/chart: mychart-0.1.0
  name: node-release-mychart
  namespace: default
  resourceVersion: "127367"
  selfLink: /api/v1/namespaces/default/services/node-release-mychart
  uid: 8031f3b6-215d-11e9-bb89-462a1bcec690
spec:
  clusterIP: 10.0.223.27
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 32402
    port: 32000
    protocol: TCP
    targetPort: 32000
  selector:
    app.kubernetes.io/instance: node-release
    app.kubernetes.io/name: mychart
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.240.0.7
I have created a cluster on AKS and run the get-credentials command from my macOS terminal, which works fine. I then tagged and pushed my Docker image to Docker Hub, and the container also works fine. After that I created a Helm chart, updated the values.yaml accordingly, and ran the helm install command. It installs my application to AKS and the service provides an external IP; in the Kubernetes dashboard the pods are in the Running state, but when I try to access my application via External_IP:80 it doesn't load.
Your problem comes from the fact that you've added the annotation to use an internal load balancer (so it is not exposed publicly, only available inside the vnet). To fix that, remove this part from the service definition:
annotations:
  service.beta.kubernetes.io/azure-load-balancer-internal: "true"
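If you still want to be able to deploy an internal load balancer from the same chart, one common Helm pattern (a sketch using a hypothetical service.internal value) is to render the annotation conditionally:
# templates/service.yaml (sketch): annotation only rendered when .Values.service.internal is true
apiVersion: v1
kind: Service
metadata:
  name: {{ include "mychart.fullname" . }}
  {{- if .Values.service.internal }}
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  {{- end }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: {{ include "mychart.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}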
