When I set the schedule parameter to * * * * *, the oc export fails with an invalid character error:
Error: YAML parse error on middleware/templates/cyes-rest/timed-out-streams-cron-job.yaml: error converting YAML to JSON: yaml: line 12: did not find expected alphabetic or numeric character
The relevant values:
timedOutStreamsCronJob:
enabled: true
schedule: '* * * * *'
timed-out-streams-cron-job.yaml
{{- if and .Values.cyes.enabled .Values.cyes.timedOutStreamsCronJob.enabled -}}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ include "middleware.fullname" . }}-timed-out-streams-cron-job
labels:
{{- include "middleware.labels" . | nindent 4 }}
spec:
schedule: {{.Values.cyes.timedOutStreamsCronJob.schedule}}
jobTemplate:
spec:
template:
spec:
containers:
- name: sync-cron
image: {{.Values.cyes.timedOutStreamsCronJob.image.repository}}:{{.Values.cyes.timedOutStreamsCronJob.image.tag}}
args:
- wget
- -O-
- http://{{ .Release.Name }}-{{.Values.cyes.name}}-service:{{.Values.cyes.port}}/checkStreams
restartPolicy: OnFailure
startingDeadlineSeconds: 3600
{{- end -}}
I believe we are looking at a quoting issue: without quotes the rendered line starts with a bare *, which YAML cannot parse (hence the "did not find expected alphabetic or numeric character" error). CronJobs should look like this:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: pi
spec:
schedule: "*/1 * * * *"
[..]
So your spec should likely look like this (note the quotes):
spec:
schedule: "{{.Values.cyes.timedOutStreamsCronJob.schedule}}"
I have been able to successfully create a cron job for my OpenShift 3 project. The project is a lift and shift from an existing Linux web server, and part of the existing application requires several cron tasks to run. The one I am looking at right now is a daily update to the application's database. As part of the execution of the cron job I want to write to a log file. There is already a PV/PVC defined for the main application, and I was intending to use that to hold the logs for my cron job, but it seems the cron job is not being given access to the PV.
I am using the following inProgress.yml for the definition of the cron job
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: in-progress
spec:
schedule: "*/5 * * * *"
concurrencyPolicy: "Replace"
startingDeadlineSeconds: 200
suspend: false
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
metadata:
labels:
parent: "cronjobInProgress"
spec:
containers:
- name: in-progress
image: <image name>
command: ["php", "inProgress.php"]
restartPolicy: OnFailure
volumeMounts:
- mountPath: /data-pv
name: log-vol
volumes:
- name: log-vol
persistentVolumeClaim:
claimName: data-pv
I am using the following command to create the cron job
oc create -f inProgress.yml
The cron job is created, but when it runs the application logs the following warnings:
PHP Warning: fopen(/data-pv/logs/2022-04-27-app.log): failed to open stream: No such file or directory in /opt/app-root/src/errorHandler.php on line 75
WARNING: [2] mkdir(): Permission denied, line 80 in file /opt/app-root/src/errorLogger.php
WARNING: [2] fopen(/data-pv/logs/2022-04-27-inprogress.log): failed to open stream: No such file or directory, line 60 in file /opt/app-root/src/errorLogger.php
Looking at the YAML for the pod that is executed, there is no mention of data-pv; it appears as though the secret volumeMount, which has been added by OpenShift, is removing any further volumeMounts.
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: restricted
  creationTimestamp: '2022-04-27T13:25:04Z'
  generateName: in-progress-1651065900-
...
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-n9jsw
      readOnly: true
...
  volumes:
  - name: default-token-n9jsw
    secret:
      defaultMode: 420
      secretName: default-token-n9jsw
How can I access the PV from within the cron job?
Your manifest is incorrect. The volumes block needs to be part of the spec.jobTemplate.spec.template.spec, that is, it needs to be indented at the same level as spec.jobTemplate.spec.template.spec.containers. In its current position it is invisible to OpenShift. See e.g. this pod example.
Similarly, volumeMounts and restartPolicy are arguments to the container block, and need to be indented accordingly.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: in-progress
spec:
schedule: '*/5 * * * *'
concurrencyPolicy: Replace
startingDeadlineSeconds: 200
suspend: false
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
metadata:
labels:
parent: cronjobInProgress
spec:
containers:
- name: in-progress
image: <image name>
command:
- php
- inProgress.php
restartPolicy: OnFailure
volumeMounts:
- mountPath: /data-pv
name: log-vol
volumes:
- name: log-vol
persistentVolumeClaim:
claimName: data-pv
Thanks for the informative response, larsks.
OpenShift displayed the following when I copied your manifest suggestions:
$ oc create -f InProgress.yml
The CronJob "in-progress" is invalid: spec.jobTemplate.spec.template.spec.restartPolicy: Unsupported value: "Always": supported values: "OnFailure", "Never"
As your answer was very helpful, I was able to resolve this problem by moving restartPolicy: OnFailure up to the pod spec; the final manifest is below.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: in-progress
spec:
schedule: "*/5 * * * *"
concurrencyPolicy: "Replace"
startingDeadlineSeconds: 200
suspend: false
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
metadata:
labels:
parent: "cronjobInProgress"
spec:
restartPolicy: OnFailure
containers:
- name: in-progress
image: <image name>
command: ["php", "updateToInProgress.php"]
volumeMounts:
- mountPath: /data-pv
name: log-vol
volumes:
- name: log-vol
persistentVolumeClaim:
claimName: data-pv
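For what it's worth, once the volumes block sits at the pod-spec level, the pod generated by the cron job should list both mounts. A rough sketch of the relevant excerpt (the secret/token name will differ in your cluster) looks like:
    volumeMounts:
    - mountPath: /data-pv
      name: log-vol
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-n9jsw   # illustrative; OpenShift generates this name
      readOnly: true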
I have a CronWorkflow that sends the following metric:
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
name: my-cron-wf
spec:
schedule: "0 * * * *"
suspend: false
workflowSpec:
metrics:
prometheus:
- name: wf_exec_duration_gauge
help: "Duration gauge by workflow name and status"
labels:
- key: name
value: my-cron-wf
- key: status
value: "{{workflow.status}}"
gauge:
value: "{{workflow.duration}}"
I would like to populate the metric's name label with the CronWorkflow name using a variable, in order to avoid hard-coding it, but I didn't find a variable for it.
I tried {{workflow.name}}, but that equals the generated Workflow name, not the desired CronWorkflow name.
I use Kustomize to manage Argo Workflows resources, so if there is a Kustomize way to achieve this, that would be great as well.
Argo Workflows automatically adds the name of the CronWorkflow as a label on every Workflow it creates. That label is accessible as a variable:
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
name: my-cron-wf
spec:
schedule: "0 * * * *"
suspend: false
workflowSpec:
metrics:
prometheus:
- name: wf_exec_duration_gauge
help: "Duration gauge by workflow name and status"
labels:
- key: name
value: "{{workflow.labels.workflows.argoproj.io/cron-workflow}}"
- key: status
value: "{{workflow.status}}"
gauge:
value: "{{workflow.duration}}"
I want to apply YAML in a Helm pre-delete hook, along the lines of the example here (https://helm.sh/docs/topics/charts_hooks/).
This is the YAML that I want to apply in the hook:
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationBackup
metadata:
name: my-backup-name # unique name of backup
namespace: dev
spec:
backupLocation: s3backup-default
namespaces:
- stage-database
reclaimPolicy: Delete
selectors:
cluster-name: mycluster
preExecRule:
postExecRule:
Is it possible to just apply a YAML manifest in a hook? The following is what I have:
apiVersion: batch/v1
kind: Job
metadata:
name: "{{ .Release.Name }}"
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-delete
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
metadata:
name: "{{ .Release.Name }}"
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
restartPolicy: Never
containers:
- name: database-backup-job
image: "alpine:3.3"
command: ["/bin/sleep","{{ default "10" .Values.sleepyTime }}"]
So instead of running an image I want to apply the YAML above. Is that possible?
Update:
I added the hook annotations to that YAML, but it just doesn't seem to do anything when I run helm delete on the app:
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationBackup
metadata:
name: my-bkup-june-3-2021 # unique name of backup
namespace: {{ $dbNamespace }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-delete
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
backupLocation: s3backup-default
namespaces:
- {{ $dbNamespace }}
reclaimPolicy: Delete
selectors:
cluster-name: {{ $dbClusterName }}
preExecRule:
postExecRule:
I have 4 microservices; they all have different names, different images, and different container ports and service ports. I took this piece of code from one of the answers on Stack Overflow. What this piece of code is doing is creating my 4 deployments with their names and images, but I am unable to create the 4 containers with their own ports and resources.
My main goal is to create a master template where I can just put in a few values and it can handle the manifest of a new microservice, instead of playing with a bunch of manifests separately.
deployment.yaml
{{ if .Values.componentTests }}
{{- range .Values.componentTests }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ . }}
labels:
environment: {{ $.Values.environment }}
app: {{ . }}
aadpodidbinding: podid-{{ . }}
chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
release: {{ $.Release.Name }}
heritage: {{ $.Release.Service }}
spec:
replicas: {{ $.Values.replicaCount }}
selector:
matchLabels:
app: {{ . }}
template:
metadata:
labels:
app: {{ . }}
spec:
nodeSelector:
"beta.kubernetes.io/os": linux
containers:
- name: {{ . }}
image: mycr.azurecr.io/master/{{ . }}:{{ $.Values.image.tag }}
imagePullPolicy: {{ $.Values.image.pullPolicy }}
resources:
{{- range $.Values.high.resources }}
---
{{- end }}
{{- end }}
{{ end }}
values.yaml
replicaCount: 1
image:
# repository: nginx
pullPolicy: IfNotPresent
# # Overrides the image tag whose default is the chart appVersion.
tag: "latest"
componentTests:
- service01
- service02
- service03
- service04
environment: QA
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
services:
- service01
- service02
- service03
# serviceAccount:
# # Specifies whether a service account should be created
# create: true
# # Annotations to add to the service account
# annotations: {}
# # The name of the service account to use.
# # If not set and create is true, a name is generated using the fullname template
# name: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
# service:
# type: ClusterIP
# port: 80
# ingress:
# enabled: false
# annotations: {}
# # kubernetes.io/ingress.class: nginx
# # kubernetes.io/tls-acme: "true"
# hosts:
# - host: chart-example.local
# paths: []
# tls: []
# # - secretName: chart-example-tls
# # hosts:
# # - chart-example.local
high:
resources:
requests:
cpu: 350m
memory: 800Mi
limits:
cpu: 400m
memory: 850Mi
medium:
resources:
requests:
cpu: 200m
memory: 650Mi
limits:
cpu: 250m
memory: 700Mi
low:
resources:
requests:
cpu: 100m
memory: 500Mi
limits:
cpu: 150m
memory: 550Mi
autoscaling:
enabled: false
minReplicas: 2
maxReplicas: 4
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
tolerations: []
affinity: {}
output
MANIFEST:
---
# Source: test/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: service01
labels:
environment: QA
app: service01
aadpodidbinding: podid-service01
chart: test-0.1.1
release: api
heritage: Helm
spec:
replicas: 1
selector:
matchLabels:
app: service01
template:
metadata:
labels:
app: service01
spec:
nodeSelector:
"beta.kubernetes.io/os": linux
containers:
- name: service01
image: mycr.azurecr.io/master/service01:latest
imagePullPolicy: IfNotPresent
resources:
---
# Source: test/templates/deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: service02
labels:
environment: QA
app: service02
aadpodidbinding: podid-service02
chart: test-0.1.1
release: api
heritage: Helm
spec:
replicas: 1
selector:
matchLabels:
app: service02
template:
metadata:
labels:
app: service02
spec:
nodeSelector:
"beta.kubernetes.io/os": linux
containers:
- name: service02
image: mycr.azurecr.io/master/service02:latest
imagePullPolicy: IfNotPresent
resources:
---
# Source: test/templates/deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: service02-ui
labels:
environment: QA
app: service02-ui
aadpodidbinding: podid-service02-ui
chart: test-0.1.1
release: api
heritage: Helm
spec:
replicas: 1
selector:
matchLabels:
app: service02-ui
template:
metadata:
labels:
app: service02-ui
spec:
nodeSelector:
"beta.kubernetes.io/os": linux
containers:
- name: service02-ui
image: mycr.azurecr.io/master/service02-ui:latest
imagePullPolicy: IfNotPresent
resources:
---
# Source: test/templates/deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: service03
labels:
environment: QA
app: service03
aadpodidbinding: podid-service03
chart: test-0.1.1
release: api
heritage: Helm
spec:
replicas: 1
selector:
matchLabels:
app: service03
template:
metadata:
labels:
app: service03
spec:
nodeSelector:
"beta.kubernetes.io/os": linux
containers:
- name: service03
image: mycr.azurecr.io/master/service03:latest
imagePullPolicy: IfNotPresent
resources:
---
# Source: test/templates/deployment.yaml
service01.yaml (the manifest I currently maintain by hand for one of the services):
apiVersion: apps/v1
kind: Deployment
metadata:
name: service01
labels:
aadpodidbinding: podid-service01
spec:
replicas: 1
selector:
matchLabels:
app: service01
template:
metadata:
labels:
app: service01
aadpodidbinding: podid-service01
annotations:
build: "2020102901"
spec:
nodeSelector:
"beta.kubernetes.io/os": linux
containers:
- name: service01
image: mycr.azurecr.io/master/service01:latest
resources:
requests:
cpu: 250m
memory: "700Mi"
limits:
memory: "700Mi"
ports:
- containerPort: 7474
env:
- name: KEY_VAULT_ID
value: "key-vault"
- name: AZURE_ACCOUNT_NAME
value: "storage"
readinessProbe:
httpGet:
path: /actuator/health
port: 7474
scheme: HTTP
httpHeaders:
- name: service-id
value: root
- name: request-id
value: healthcheck
initialDelaySeconds: 60
periodSeconds: 30
livenessProbe:
httpGet:
path: /actuator/health
port: 7474
scheme: HTTP
httpHeaders:
- name: service-id
value: root
- name: request-id
value: healthcheck
initialDelaySeconds: 60
periodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
name: service01
spec:
ports:
- port: 7474
name: main
# - port: 9999
# name: health
selector:
app: service01
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: service01
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: service01
minReplicas: 1
maxReplicas: 4
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
I don't know in what context that Helm chart was created, but from what I understand it was used for integration testing or something like that. It is certainly not what you want for your use case. You would be better off with a Helm chart that generates the manifests for only one service; then you can reuse that chart for all your services. That means you will do multiple helm installs with different values instead of one helm install that creates all your services. With a big chart like the current one, you need to update the chart every time you add a new service.
You'll have:
helm install -f service01-values.yaml ./mychart
helm install -f service02-values.yaml ./mychart
helm install -f service03-values.yaml ./mychart
helm install -f service04-values.yaml ./mychart
instead of:
helm install -f values.yaml ./mychart
To be able to do this, you'll need to change your chart a little bit and remove the loop {{- range .Values.componentTests }}. Learn how to build a chart; it is easier than you think: Create Your First Helm Chart
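As a rough sketch of that layout (file names and keys here are illustrative, not taken from your chart), the per-service values file carries only what differs between services:
# service01-values.yaml (illustrative)
name: service01
replicaCount: 1
image:
  repository: mycr.azurecr.io/master/service01
  tag: "latest"
  pullPolicy: IfNotPresent
containerPort: 7474
resources:
  requests:
    cpu: 350m
    memory: 800Mi
  limits:
    cpu: 400m
    memory: 850Mi
and templates/deployment.yaml reads those values directly, without the loop:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
  labels:
    app: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
      - name: {{ .Values.name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - containerPort: {{ .Values.containerPort }}
        resources:
          # toYaml inserts the whole requests/limits block from the values file.
          {{- toYaml .Values.resources | nindent 10 }}
With that in place, helm install -f service01-values.yaml ./mychart (and the equivalent for the other services) gives each microservice its own release, and onboarding a new service is just another values file.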
I am trying to run a cronjob in Kubernetes, but I keep getting these two errors:
type: 'Warning' reason: 'FailedCreate' Error creating job: jobs.batch "dev-cron-1516702680" already exists
and
type: 'Warning' reason: 'FailedCreate' Error creating job: Timeout: request did not complete within allowed duration
Below is my cronjob YAML:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
creationTimestamp: 2018-01-23T09:45:10Z
name: dev-cron
namespace: dev
resourceVersion: "16768201"
selfLink: /apis/batch/v1beta1/namespaces/dev/cronjobs/dev-cron
uid: 1a32eb94-0022-11e8-9256-065eb556d6a2
spec:
concurrencyPolicy: Allow
failedJobsHistoryLimit: 1
jobTemplate:
metadata:
creationTimestamp: null
spec:
template:
metadata:
creationTimestamp: null
spec:
containers:
- args:
- for country in th;
- do
- 'curl -X POST -d "{'footprint':'xxxx-xxxx'}"-H "Content-Type: application/json" https://dev.xxx.com/xxx/xxx'
- done
image: appropriate/curl:latest
imagePullPolicy: Always
name: cron
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
schedule: '* * * * *'
startingDeadlineSeconds: 10
successfulJobsHistoryLimit: 3
suspend: false
status: {}
I am not sure why this keeps happening. I am running Kubernetes version 1.9.1 in an AWS cluster. Any idea why?
It turned out to be caused by the automatic injection done by the Istio initializer. Once I disabled Istio initializer injection for the cron jobs, it worked fine.
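For anyone who runs into the same thing: a common way to opt a workload out of Istio's automatic injection is the sidecar.istio.io/inject annotation on the pod template. A minimal sketch against the CronJob above, assuming your Istio version honours that annotation, would be:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: dev-cron
  namespace: dev
spec:
  schedule: '* * * * *'
  jobTemplate:
    spec:
      template:
        metadata:
          annotations:
            # Ask Istio not to inject a sidecar into the pods this CronJob creates.
            sidecar.istio.io/inject: "false"
        spec:
          restartPolicy: Never
          containers:
          - name: cron
            image: appropriate/curl:latest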