After the production deployment, the application does not get the endpoint defined in environment.url in .gitlab-ci.yml, but a combination of the group name, project name and base domain:
<groupname>-<projectname>.basedomain
The GitLab project belongs to a GitLab group, which has a Kubernetes cluster. The group has a base domain that is used in the .gitlab-ci.yml:
# part of .gitlab-ci.yml
...
apply production secret configuration:
  stage: prepare-deploy
  extends: .auto-deploy
  needs: ["build", "generate production configuration"]
  dependencies:
    - generate production configuration
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - kubectl create secret generic tasker-secrets-development --from-file=config.tar --dry-run -o yaml | kubectl apply -f -
  environment:
    name: production
    url: http://app.$KUBE_INGRESS_BASE_DOMAIN
    action: prepare
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
...
I expected http://app.$KUBE_INGRESS_BASE_DOMAIN as the endpoint for the application.
The Ingress (I removed the minio part):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "appname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version| replace "+" "_" }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    cert-manager.io/cluster-issuer: {{ .Values.leIssuer }}
    acme.cert-manager.io/http01-edit-in-place: "true"
{{- if .Values.ingress.annotations }}
{{ toYaml .Values.ingress.annotations | indent 4 }}
{{- end }}
{{- with .Values.ingress.modSecurity }}
{{- if .enabled }}
    nginx.ingress.kubernetes.io/modsecurity-transaction-id: "$server_name-$request_id"
    nginx.ingress.kubernetes.io/modsecurity-snippet: |
      SecRuleEngine {{ .secRuleEngine | default "DetectionOnly" | title }}
{{- range $rule := .secRules }}
{{ (include "secrule" $rule) | indent 6 }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.prometheus.metrics }}
    nginx.ingress.kubernetes.io/server-snippet: |-
      location /metrics {
        deny all;
      }
{{- end }}
spec:
{{- if .Values.ingress.tls.enabled }}
  tls:
  - hosts:
{{- if .Values.service.commonName }}
    - {{ template "hostname" .Values.service.commonName }}
{{- end }}
    - {{ template "hostname" .Values.service.url }} <<<<<<<<<<<<<<<<<<<
{{- if .Values.service.additionalHosts }}
{{- range $host := .Values.service.additionalHosts }}
    - {{ $host }}
{{- end -}}
{{- end }}
    secretName: {{ .Values.ingress.tls.secretName | default (printf "%s-cert" (include "fullname" .)) }}
{{- end }}
  rules:
  - host: {{ template "hostname" .Values.service.url }} <<<<<<<<<<<<<<<<<
    http:
      &httpRule
      paths:
      - path: /
        backend:
          serviceName: {{ template "fullname" . }}
          servicePort: {{ .Values.service.externalPort }}
{{- if .Values.service.commonName }}
  - host: {{ template "hostname" .Values.service.commonName }}
    http:
      <<: *httpRule
{{- end -}}
{{- if .Values.service.additionalHosts }}
{{- range $host := .Values.service.additionalHosts }}
  - host: {{ $host }}
    http:
      <<: *httpRule
{{- end -}}
{{- end -}}
What I have done so far:
Removed the deployment from the cluster, cleared the GitLab runner's cache, and cleared the GitLab cluster cache. Deleted the environment (stop and delete). Created a new environment 'production' with the right URL under Operations > Environments > production > Edit. After a push, the URL was replaced with the wrong one again.
Hard-coded the URL in the Ingress (at the arrows in the snippet above); that worked.
Changed the value in .gitlab-ci.yml without http://. No result.
Checked that 'apply production secret configuration' in .gitlab-ci.yml is actually used, by adding echo 'message!'. Conclusion: this part of the file is used for production.
Added a CI/CD variable GITLAB_ENVIRONMENT_URL under Settings > CI/CD. No effect.
UPDATE:
Maybe the .Values.gitlab.app is used for the URL.
The file .gitlab-ci.yml includes a template which overrides the value.
# .gitlab-ci.yml
include:
  - template: Jobs/Deploy.gitlab-ci.yml  # https://gitlab.com/gitlab-org/gitlab-foss/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml
The override in the template:
.production: &production_template
  extends: .auto-deploy
  stage: production
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
    - auto-deploy delete canary
    - auto-deploy delete rollout
    - auto-deploy persist_environment_url
  environment:
    name: production
    url: http://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN <<<<<<<<<<<<<<
  artifacts:
    paths: [environment_url.txt, tiller.log]
    when: always
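Based on this, the URL appears to come from the production job defined in the included template, not from the 'apply production secret configuration' job. A minimal sketch of one possible fix, assuming the template's job is named production (as in Jobs/Deploy.gitlab-ci.yml) and that the deploy step derives the Ingress host from CI_ENVIRONMENT_URL, is to redefine that job's environment.url in the project's own .gitlab-ci.yml, since locally defined keys take precedence over keys from included templates:

# .gitlab-ci.yml (sketch, not a verified fix)
include:
  - template: Jobs/Deploy.gitlab-ci.yml

production:                 # same job name as in the included template
  environment:
    name: production
    url: http://app.$KUBE_INGRESS_BASE_DOMAIN   # overrides the template's $CI_PROJECT_PATH_SLUG-based URL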
I am trying to install akv2k8s secrets using a Helm template, but it fails. I am unable to diagnose the issue in Helm; I have tried online YAML validators, but they did not help.
Using the --debug flag renders the expected manifest.
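For reference, the rendering can be checked with something like the following (a sketch; the release name and chart path are placeholders):

helm template my-release ./my-chart --debug
# or, to simulate an install against the cluster as well:
helm install my-release ./my-chart --dry-run --debug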
values.yaml
akv2k8s:
  enabled: true
  vaults:
    vaultcmms:
      secretkey: secretvalue
      secretkey1: secretvalue1
    vaulttenant:
      secretkey: secretvalue
      secretkey1: secretvalue2
akv28s.yaml
{{- if .Values.akv2k8s.enabled -}}
{{- range $vault, $content := .Values.akv2k8s.vaults }}
{{- range $key, $value := $content }}
apiVersion: spv.no/v2beta1
kind: AzureKeyVaultSecret
spec:
  vault: {{ $vault }}
  name: {{ $key }}
  object:
    name: {{ $value }}
    type: secret
{{- end }}
{{- end }}
{{- end }}
I was making a mistake by specifying the vault value in the wrong hierarchy. It should be like this:
spec:
  vault:
    name: {{ $vault }}
    object:
      name: {{ $value }}
      type: secret
This solved my issue.
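For completeness, a sketch of the full template with the corrected hierarchy; the metadata block, its naming scheme, and the --- document separator (needed when one template file renders several objects) are assumptions not shown in the original excerpt:

{{- if .Values.akv2k8s.enabled -}}
{{- range $vault, $content := .Values.akv2k8s.vaults }}
{{- range $key, $value := $content }}
---
apiVersion: spv.no/v2beta1
kind: AzureKeyVaultSecret
metadata:
  name: {{ $vault }}-{{ $key }}   # hypothetical name, not from the original post
spec:
  vault:
    name: {{ $vault }}
    object:
      name: {{ $value }}
      type: secret
{{- end }}
{{- end }}
{{- end }}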
I'm working on a new update to my Kubernetes cluster in Azure, but I'm not sure how to do this. I have been able to build an ingress controller like this one:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "test.fullname" . -}}
{{- if and .Values.ingress.className (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) }}
  {{- if not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class") }}
  {{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className}}
  {{- end }}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "test.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            {{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
            pathType: {{ .pathType }}
            {{- end }}
            backend:
              {{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
              service:
                name: {{ $fullName }}
                port:
                  number: {{ .port }}
              {{- else }}
              serviceName: {{ $fullName }}
              servicePort: {{ .port }}
              {{- end }}
          {{- end }}
    {{- end }}
{{- end }}
My values file is the following:
replicaCount: 1
image:
  repository: test01.azurecr.io/test
  tag: update1
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 2000
  targetPort: http
  protocol: TCP
ingress:
  enabled: true
  className: ""
  annotations:
    appgw.ingress.kubernetes.io/use-private-ip: 'true'
    kubernetes.io/ingress.class: azure/application-gateway
  hosts:
    - host: test.com
      paths:
        - path: /test
          pathType: Prefix
          port: 80
  tls: []
serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
podAnnotations: {}
podSecurityContext: {}
  # fsGroup: 2000
My pod is ready and the service seems to be ready as well. However, the test.com domain is not working. I added a DNS record for my domain pointing at my cluster's IP to make sure the domain would be reachable. However, I still cannot reach the domain; the error message is the following:
Connection timed out && This site can’t be reached
Does anyone know a better workaround for this?
In Kubernetes you have Ingress Controllers and Ingress resources. What you have is the definition of an Ingress, not an Ingress Controller. An Ingress will not work unless there is an Ingress Controller installed in your cluster.
However, in AKS (Azure Kubernetes Service), it is possible to bind your Ingress resources to an Azure Application Gateway, which is an Azure resource outside of your cluster.
To achieve this you need AGIC (Application Gateway Ingress Controller) which will be in charge of forwarding your Ingress configuration to the Application Gateway. You have already achieved this partially by adding these annotations on the Ingress resources you want to have configured there:
annotations:
  appgw.ingress.kubernetes.io/use-private-ip: 'true'
  kubernetes.io/ingress.class: azure/application-gateway
Summary:
You have two options:
Install an Ingress Controller such as nginx or traefik and adapt the annotations on your Ingress resources accordingly (see the sketch below).
Make sure you have an Application Gateway deployed in your subscription, AGIC installed in your cluster, and all the configuration needed to allow AGIC to modify the Application Gateway.
If it is the first time you are working with Ingresses and Azure, I strongly recommend you follow the first option.
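If you go with the first option, a rough sketch of what that could look like with the community NGINX controller (the release name and namespace are placeholders, and the exact chart values depend on your cluster):

# Install the NGINX Ingress Controller (sketch)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

Afterwards you would replace the Application Gateway annotation on your Ingress with the NGINX one (for example kubernetes.io/ingress.class: nginx, or ingress.className: nginx in your values) and point the test.com DNS record at the external IP of the ingress-nginx controller service.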
I want to apply YAML in a Helm pre-delete hook, kind of like here (https://helm.sh/docs/topics/charts_hooks/).
I have this YAML that I want to apply in a Helm hook:
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationBackup
metadata:
  name: my-backup-name # unique name of backup
  namespace: dev
spec:
  backupLocation: s3backup-default
  namespaces:
    - stage-database
  reclaimPolicy: Delete
  selectors:
    cluster-name: mycluster
  preExecRule:
  postExecRule:
Is it possible to just apply YAML in a hook? The following is what I have:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}"
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}"
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
      - name: database-backup-job
        image: "alpine:3.3"
        command: ["/bin/sleep","{{ default "10" .Values.sleepyTime }}"]
So instead of an image, I want to execute the YAML. Is that possible?
Update:
I added that YAML with the hook annotations, but it just doesn't seem to do anything when I run helm delete on the app:
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationBackup
metadata:
  name: my-bkup-june-3-2021 # unique name of backup
  namespace: {{ $dbNamespace }}
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backupLocation: s3backup-default
  namespaces:
    - {{ $dbNamespace }}
  reclaimPolicy: Delete
  selectors:
    cluster-name: {{ $dbClusterName }}
  preExecRule:
  postExecRule:
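One way to narrow this down (a sketch; the release name, namespace, and the CRD's resource name are placeholders and may differ in your cluster):

# Check whether Helm has registered the manifest as a hook at all
helm get hooks my-release

# After running helm delete, check whether the backup object was ever created
kubectl get applicationbackups -n dev

Also worth checking, as an assumption to verify: with "helm.sh/hook-delete-policy": hook-succeeded, Helm deletes the hook resource once it considers it complete, which for a plain custom resource (unlike a Job) can be almost immediately, so the ApplicationBackup may be removed again before Stork has acted on it.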
How do I print a new line character when sending emails? I'm sending them to Gmail. The character \n prints literally. I even tried the </br> tag and YAML multiline, and none of them work.
- alert: KubernetesPodImagePullBackOff
  expr: kube_pod_container_status_waiting_reason{reason=~"ContainerCreating|CrashLoopBackOff|ErrImagePull|ImagePullBackOff"} > 0
  for: 1s
  labels:
    severity: warning
  annotations:
    summary: "Kubernetes pod crash looping (instance {{ $labels.instance }})"
    description: "Pod {{ $labels.pod }} is crash looping\n VALUE = {{ $value }}\n LABELS: {{ $labels }}"
You need to rewrite the default e-mail template in Alertmanager.
You need to replace something like
{{ .Annotations.description }}
in the template with
{{ .Annotations.description | safeHtml }}
I wrote my own email template; if you do not have your own, you can create one from
https://github.com/prometheus/alertmanager/blob/master/template/default.tmpl
and edit
{{ range .Annotations.SortedPairs }} - {{ .Name }} = {{ .Value }}
in the same manner, using
{{ .Value | safeHtml }}
Also read this answer
prometheus using html content in alerts annotations and using it in email template
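As an illustration of where such a template plugs in, here is a sketch of the Alertmanager configuration (the file path, template name, and receiver details are placeholders, not taken from the answer above):

# alertmanager.yml (sketch)
templates:
  - '/etc/alertmanager/templates/*.tmpl'

receivers:
  - name: 'email-notifications'
    email_configs:
      - to: 'team@example.com'
        send_resolved: true
        # render the body with the customised template instead of the built-in default
        html: '{{ template "email.custom.html" . }}'

The template file itself would then wrap the default template's HTML in {{ define "email.custom.html" }} ... {{ end }} and use {{ .Annotations.description | safeHtml }} where needed.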
I am trying to configure Slack notifications from Prometheus Alertmanager with the yml below.
global:
  resolve_timeout: 1m
  slack_api_url: 'https://hooks.slack.com/services/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'
route:
  receiver: 'slack-notifications'
receivers:
  - name: 'slack-notifications'
    slack_configs:
      - channel: '#monitoring-instances'
        send_resolved: true
        icon_url: https://avatars3.githubusercontent.com/u/3380462
        title: |-
          [{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ .CommonLabels.alertname }} for {{ .CommonLabels.job }}
          {{- if gt (len .CommonLabels) (len .GroupLabels) -}}
            {{" "}}(
            {{- with .CommonLabels.Remove .GroupLabels.Names }}
              {{- range $index, $label := .SortedPairs -}}
                {{ if $index }}, {{ end }}
                {{- $label.Name }}="{{ $label.Value -}}"
              {{- end }}
            {{- end -}}
            )
          {{- end }}
        text: >-
          {{ range .Alerts -}}
          *Alert:* {{ .Annotations.title }}{{ if .Labels.severity }} - `{{ .Labels.severity }}`{{ end }}
          *Description:* {{ .Annotations.description }}
          *Details:*
          {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
          {{ end }}
          {{ end }}
When I start my Alertmanager container, it keeps restarting and shows the error below.
alertmanager | level=error ts=2021-01-12T04:08:19.040Z caller=coordinator.go:124 component=configuration msg="Loading configuration file failed" file=/etc/alertmanager/alertmanager.yml err="yaml: invalid leading UTF-8 octet"
I have validated it here and it is shown as valid YAML.
I also checked with Notepad++; the encoding is already shown as UTF-8. Is there any other way to fix this?
Even this code shows the same error.
slack_configs:
  - channel: '#monitoring-instances'
    send_resolved: false
    title: '[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] Monitoring Event Notification'
    text: >-
      {{ range .Alerts }}
      *Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
      *Description:* {{ .Annotations.description }}
      *Graph:* <{{ .GeneratorURL }}|:chart_with_upwards_trend:> *Runbook:* <{{ .Annotations.runbook }}|:spiral_note_pad:>
      *Details:*
      {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
      {{ end }}
      {{ end }}
I am using a CentOS 8.2 system; is there something wrong with my system? Can anyone help me out here?
In my case there was an issue with the config/application.yml.
It was all gibberish, so I had to delete it and recreate it.
After that the issue was resolved.
Resolved by removing the bullet • from the config and relaunching the container. It works now.
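If the file looks fine in an editor but Alertmanager still rejects it, a quick way to hunt down the offending bytes on the host (a sketch; the file path is a placeholder):

# Show the encoding the tools detect for the file
file alertmanager.yml

# Print every line that contains non-ASCII characters (the bullet • shows up here)
grep -n -P '[^\x00-\x7F]' alertmanager.yml

# Exit non-zero if the file contains byte sequences that are not valid UTF-8
iconv -f UTF-8 -t UTF-8 alertmanager.yml > /dev/null && echo "valid UTF-8"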